CN117131457A - AI model-based electric power big data acquisition and processing method and system - Google Patents
- Publication number
- CN117131457A (application CN202311394719.3A)
- Authority
- CN
- China
- Prior art keywords
- power
- data acquisition
- event
- power data
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J13/00—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
- H02J13/00002—Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuitbreaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network characterised by monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2123/00—Data types
- G06F2123/02—Data types in the time domain, e.g. time-series data
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Power Engineering (AREA)
- Supply And Distribution Of Alternating Current (AREA)
Abstract
The embodiment of the application provides an AI (artificial intelligence) model-based power big data acquisition and processing method and system. First, a sample power acquisition data sequence corresponding to each of a plurality of power data acquisition events for a target power service partition and an initial power system fault diagnosis network are acquired. Then, a second long-short-term memory network corresponding to each power data acquisition event is acquired. Next, according to the sample power acquisition data sequence of each power data acquisition event, knowledge learning is cyclically performed on the corresponding second long-short-term memory network until the network meets the convergence requirement, generating a second long-short-term memory network that has completed knowledge learning. Finally, a target power system fault diagnosis network of the target power service partition is generated based on the initial power system fault diagnosis network and the knowledge-learned second long-short-term memory network corresponding to each power data acquisition event. In this way, the accuracy and efficiency of power system fault diagnosis can be effectively improved.
Description
Technical Field
The application relates to the technical field of intelligent model algorithms, in particular to an AI model-based electric power big data acquisition and processing method and system.
Background
As critical infrastructure of modern life, the stable and reliable operation of the power system is of vital importance to overall economic activity. However, due to the complexity of the power system itself and the uncertainty of the external environment, fault events in the power system cannot be completely avoided. Therefore, accurate and efficient fault diagnosis of the power system is of great importance.
With the increasing complexity and scale of modern power systems, fault diagnosis has become an important link in ensuring their normal operation. However, conventional fault diagnosis methods are mainly based on empirical rules or statistical analysis; these methods often perform poorly when handling faults in complex, dynamically changing power systems and cannot meet the modern power system's requirements for highly accurate fault diagnosis.
Disclosure of Invention
In order to at least overcome the defects in the prior art, the application aims to provide an AI model-based power big data acquisition and processing method and system.
In a first aspect, the present application provides a method for acquiring and processing electric power big data based on an AI model, which is applied to an electric power service system, and the method includes:
Acquiring a sample power acquisition data sequence corresponding to each power data acquisition event in a plurality of power data acquisition events aiming at a target power service partition and an initial power system fault diagnosis network, wherein the sample power acquisition data sequence comprises a plurality of power characteristic training data carrying sample fault causal chain data, the sample fault causal chain data of one power characteristic training data is used for reflecting priori fault diagnosis data of the power characteristic training data, and the initial power system fault diagnosis network comprises a first long-term and short-term memory network and a classifier;
acquiring a second long-term and short-term memory network corresponding to each electric power data acquisition event;
for each electric power data acquisition event, performing cyclic network knowledge learning operation on a second long-short-term memory network corresponding to the electric power data acquisition event according to a sample electric power acquisition data sequence corresponding to the electric power data acquisition event until the second long-short-term memory network meets the network convergence requirement, and generating a second long-short-term memory network corresponding to the electric power data acquisition event for completing knowledge learning;
generating a target power system fault diagnosis network of the target power service partition according to the initial power system fault diagnosis network and a second long-short-term memory network which is corresponding to each power data acquisition event and completes knowledge learning;
Wherein, for each of the power data collection events, the network knowledge learning operation includes:
aiming at each electric power characteristic training data corresponding to the electric power data acquisition event, encoding the electric power characteristic training data corresponding to the electric power data acquisition event according to the first long-term and short-term memory network and the second long-term and short-term memory network corresponding to the electric power data acquisition event, and obtaining fault prediction data of the electric power characteristic training data corresponding to the electric power data acquisition event according to the classifier according to the electric power state sequence relation vector obtained by encoding;
generating network learning cost parameters corresponding to the power data acquisition events according to the feature distances between the fault prediction data and the priori fault diagnosis data corresponding to the power feature training data corresponding to the power data acquisition events;
and if the network convergence requirement is not met, updating the parameter information of a second long-short-period memory network corresponding to the power data acquisition event according to the network learning cost parameter.
In a possible implementation manner of the first aspect, the first long-short-term memory network includes a plurality of first coding units, and the second long-short-term memory network includes second coding units disposed in parallel with some of the first coding units;
The encoding of the power feature training data corresponding to the power data acquisition event according to the first long-term and short-term memory network and the second long-term and short-term memory network corresponding to the power data acquisition event includes:
respectively carrying out feature dependency relation coding on the electric power feature training data according to a plurality of coding branches, wherein each coding branch comprises a first coding unit, and some of the coding branches further comprise a second coding unit which is arranged in parallel with the first coding unit of that coding branch;
wherein the feature dependency encoding comprises:
for the coding branch which does not cover the second coding unit, coding the loading data of the coding branch according to the first coding unit of the coding branch, and taking the power state sequence relation vector obtained by coding as the generating data of the coding branch;
the method comprises the steps that the loading data of a first coding branch are electric power characteristic training data, the loading data of coding branches except the first coding branch are the generating data of the previous coding branch of the coding branch, and the generating data of a final coding unit are used as the loading data of the classifier;
for a coding branch covering a second coding unit, respectively coding loading data of the coding branch according to the first coding unit and the second coding unit, carrying out feature integration on the power state sequence relation vector obtained by coding of the first coding unit and the second coding unit, and taking the power state sequence relation vector after feature integration as generated data of the coding branch.
In a possible implementation manner of the first aspect, the acquiring the second long-term and short-term memory network corresponding to each of the power data acquisition events includes:
determining a first statistical value of power characteristic training data in a sample power acquisition data sequence corresponding to each power data acquisition event;
acquiring a preset mapping table, wherein the preset mapping table comprises a plurality of reference value intervals and target statistical values corresponding to all the reference value intervals in the plurality of reference value intervals;
for each electric power data acquisition event, determining a target interval related to the first statistical value corresponding to the electric power data acquisition event in the multiple reference value intervals, determining a target statistical value corresponding to the target interval as a second statistical value corresponding to the electric power data acquisition event, wherein the second statistical value is a statistical value of a second coding unit covered in a second long-short-term memory network, and the first statistical value and the second statistical value are positively related;
and aiming at each electric power data acquisition event, generating a second long-term and short-term memory network corresponding to the electric power data acquisition event according to a second coding unit of a second statistical value corresponding to the electric power data acquisition event.
In a possible implementation manner of the first aspect, the method further includes:
determining an event trigger tag for each of the plurality of power data collection events;
if the electric power data acquisition event matched with the event triggering tag exists in the plurality of electric power data acquisition events, taking the electric power data acquisition event matched with the event triggering tag as an electric power data acquisition event cluster, merging sample electric power acquisition data sequences corresponding to the electric power data acquisition event matched with the event triggering tag, and generating a sample electric power acquisition data sequence corresponding to the electric power data acquisition event cluster;
the obtaining the second long-term and short-term memory network corresponding to each electric power data acquisition event comprises the following steps:
and acquiring a second long-term and short-term memory network corresponding to each power data acquisition event cluster and a second long-term and short-term memory network corresponding to each power data acquisition event except the power data acquisition event cluster in the plurality of power data acquisition events.
In a possible implementation manner of the first aspect, the generating the target power system fault diagnosis network of the target power service partition according to the initial power system fault diagnosis network and the second long-short-term memory network that completes knowledge learning corresponding to each of the power data acquisition events includes any one of the following:
Performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to several of the plurality of power data acquisition events, and generating a feature-integrated second long-short-term memory network;
determining the initial power system fault diagnosis network and the feature-integrated second long-short-term memory network as a target power system fault diagnosis network corresponding to one of the plurality of power data acquisition events, wherein the model architectures of the second long-short-term memory networks corresponding to the plurality of power data acquisition events are the same;
and aiming at each power data acquisition event, determining the initial power system fault diagnosis network and a second long-term and short-term memory network which is corresponding to the power data acquisition event and completes knowledge learning as a target power system fault diagnosis network corresponding to the power data acquisition event.
In a possible implementation manner of the first aspect, the performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to a plurality of power data acquisition events includes:
Acquiring an acquisition path of each electric power data acquisition event;
determining a first dependency relationship value among the plurality of power data acquisition events according to the dependency relationship value among the acquisition paths of the power data acquisition events;
and carrying out feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to a plurality of power data acquisition events whose first dependency relationship values meet the target requirement.
In a possible implementation manner of the first aspect, the performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to a plurality of power data acquisition events includes:
acquiring event evaluation information of each power data acquisition event in the plurality of power data acquisition events, and determining the event participation degree of each power data acquisition event in the plurality of power data acquisition events based on the event evaluation information of each power data acquisition event in the plurality of power data acquisition events, wherein the event evaluation information of one power data acquisition event comprises at least one of a statistical value of power characteristic training data in the sample power acquisition data sequence of the power data acquisition event or an influence coefficient of the power data acquisition event; or, acquiring target adaptation scenario information corresponding to the target power system fault diagnosis network, determining a target power data acquisition event corresponding to the target adaptation scenario information, if the target power data acquisition event corresponding to the target adaptation scenario information is any one of the plurality of power data acquisition events, determining an event participation degree corresponding to the any one of the power data acquisition events as 1, determining event participation degrees corresponding to all other than the any one of the power data acquisition events as 0, and if the target power data acquisition event corresponding to the target adaptation scenario information does not belong to any one of the plurality of power data acquisition events, determining second dependency relation values of all the power data acquisition events and the target power data acquisition event respectively, and determining event participation degrees of all the power data acquisition events based on the second dependency relation values of all the power data acquisition events and the target power data acquisition event, wherein the second dependency relation values and the event participation degrees are positively correlated;
And fusing the second network function layer parameter information which is corresponding to the plurality of electric power data acquisition events and completes knowledge learning according to the event participation degree of each electric power data acquisition event in the plurality of electric power data acquisition events.
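Purely as an illustration of this fusion step (not the claimed computation), and under the assumption stated elsewhere in this application that the second networks share the same model architecture, the participation-weighted fusion of their function-layer parameters can be sketched as a weighted average of their state dictionaries; all names below are assumptions:

```python
import torch
import torch.nn as nn

def fuse_second_network_parameters(state_dicts, participations):
    """Fuse second-network function layer parameters across acquisition events,
    weighted by each event's participation degree (illustrative sketch)."""
    total = sum(participations)
    weights = [p / total for p in participations]
    return {key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
            for key in state_dicts[0]}

# Usage: fuse two knowledge-learned second LSTMs with participation degrees 0.7 and 0.3
net_a, net_b = nn.LSTM(3, 8), nn.LSTM(3, 8)
fused = fuse_second_network_parameters([net_a.state_dict(), net_b.state_dict()], [0.7, 0.3])
net_a.load_state_dict(fused)
```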
In a possible implementation manner of the first aspect, the event evaluation information of a power data acquisition event includes a statistical value of power feature training data corresponding to the power data acquisition event and an influence coefficient of the power data acquisition event;
the determining the event participation degree of each power data acquisition event in the plurality of power data acquisition events based on the event evaluation information of each power data acquisition event in the plurality of power data acquisition events includes:
determining a first statistical value of the power characteristic training data corresponding to each power data acquisition event in the plurality of power data acquisition events and a total statistical value of the power characteristic training data corresponding to the plurality of power data acquisition events, and determining a first event participation degree corresponding to each power data acquisition event based on the proportion of the first statistical value corresponding to each power data acquisition event in the total statistical value;
Determining a second event participation degree corresponding to each power data acquisition event based on the influence coefficient corresponding to each power data acquisition event in the plurality of power data acquisition events;
and for each power data acquisition event, carrying out weighted calculation on the first event participation degree and the second event participation degree corresponding to the power data acquisition event, and generating the event participation degree corresponding to the power data acquisition event.
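Written out as a formula (the normalization of the influence coefficients and the mixing weights $\alpha$ and $\beta$ are assumptions added for illustration; the text only requires a weighted calculation), the event participation degree $w_i$ of the $i$-th of $K$ power data acquisition events could take the form:

```latex
w_i^{(1)} = \frac{n_i}{\sum_{j=1}^{K} n_j}, \qquad
w_i^{(2)} = \frac{c_i}{\sum_{j=1}^{K} c_j}, \qquad
w_i = \alpha\, w_i^{(1)} + \beta\, w_i^{(2)},
```

where $n_i$ denotes the first statistical value (the amount of power feature training data) of event $i$, $c_i$ its influence coefficient, and $\alpha, \beta \ge 0$ the weighting factors of the weighted calculation.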
In a possible implementation manner of the first aspect, the method further includes:
acquiring target power acquisition data corresponding to a target power service partition;
performing fault diagnosis on the target power acquisition data according to a target power system fault diagnosis network of the target power service partition, and generating fault diagnosis data corresponding to the target power acquisition data;
the training step of the fault diagnosis network of the target power system comprises the following steps:
carrying out acquisition event demand analysis on the target power acquisition data to generate acquisition event demand data corresponding to the target power acquisition data, wherein the acquisition event demand data comprises demand probability values corresponding to each power data acquisition event in the plurality of power data acquisition events;
Determining the knowledge-learned power system fault diagnosis network corresponding to the power data acquisition event with the largest demand probability value as the target power system fault diagnosis network; or, carrying out feature integration on the second network function layer parameter information of the knowledge-learned second long-short-term memory networks corresponding to the several demand probability values ranked first in descending order, generating a feature-integrated second long-short-term memory network, and generating the target power system fault diagnosis network according to the initial power system fault diagnosis network and the feature-integrated second long-short-term memory network;
the target power acquisition data are first feedback power data of a power abnormality feedback user, and the fault diagnosis data are first feedback response data corresponding to the feedback power data;
after generating the fault diagnosis data corresponding to the target power acquisition data, the method further comprises:
extracting feedback problem nodes from the first feedback power data to generate feedback problem nodes corresponding to the first feedback power data;
generating target solution information corresponding to the feedback problem node according to the feedback problem node;
Issuing the first feedback response data and the target solution information to the power abnormality feedback user;
if the second feedback power data generated by the power abnormality feedback user and the confirmation request aiming at the target solution information are detected, the second feedback power data and the solution information selected by the power abnormality feedback user are used as new target power acquisition data, fault diagnosis is carried out on the new target power acquisition data according to the target power system fault diagnosis network, and second feedback response data is generated;
and transmitting the second feedback response data to the power abnormality feedback user.
In a second aspect, an embodiment of the present application further provides an electric power service system, where the electric power service system includes a processor and a machine-readable storage medium, where the machine-readable storage medium stores a computer program, and the computer program is loaded and executed in conjunction with the processor to implement the above AI model-based electric power big data collection processing method of the first aspect.
With the technical scheme of any aspect, first, a sample power acquisition data sequence of a plurality of power data acquisition events of a target power service partition and an initial power system fault diagnosis network are acquired. And then, acquiring a second long-term and short-term memory network corresponding to each power data acquisition event. And then, according to the sample power acquisition data sequence of each power data acquisition event, knowledge learning is circularly carried out on the corresponding second long-term and short-term memory network until the network meets the convergence requirement, and the second long-term and short-term memory network for completing the knowledge learning is generated. And finally, generating a target power system fault diagnosis network of the target power service partition based on the initial power system fault diagnosis network and a second long-short-term memory network which is corresponding to each power data acquisition event and completes knowledge learning. Therefore, the accuracy and the efficiency of fault diagnosis of the power system can be effectively improved.
That is, the application can capture and understand the complex time sequence mode by adopting the long-term memory network to learn knowledge, thereby improving the accuracy of fault diagnosis of the power system. By carrying out specific network knowledge learning operation on each power data acquisition event, different power data acquisition events can be better adapted, and the fault diagnosis process is finer and optimized. By generating the target power system fault diagnosis network of the target power service partition, the power service partition can be effectively monitored and managed, faults can be timely found and processed, and therefore stability of the power system is enhanced.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings needed in the embodiments will be briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting the scope; other related drawings can be obtained by those of ordinary skill in the art from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an AI model-based power big data acquisition and processing method according to an embodiment of the application;
Fig. 2 is a schematic functional block diagram of an electric power service system for implementing the above-mentioned AI-model-based electric power big data acquisition processing method according to an embodiment of the present application.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application and is provided in the context of a particular application and its requirements. It will be apparent to those having ordinary skill in the art that various changes can be made to the disclosed embodiments and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the application. Therefore, the present application is not limited to the described embodiments, but is to be accorded the widest scope consistent with the claims.
Referring to fig. 1, the application provides an AI model-based power big data acquisition and processing method, which comprises the following steps.
Step S110, acquiring a sample power acquisition data sequence and an initial power system fault diagnosis network corresponding to each power data acquisition event in a plurality of power data acquisition events for one target power service partition.
In this embodiment, the sample power acquisition data sequence includes a plurality of power feature training data carrying sample fault causal link data, the sample fault causal link data of one power feature training data is used for reflecting prior fault diagnosis data of the power feature training data, and the initial power system fault diagnosis network includes a first long-short-term memory network and a classifier.
For example, assume that the northern district of a city (the target power service partition) is being processed. In this area, multiple power data acquisition events have been performed. Each power data acquisition event includes a series of sample power acquisition data, such as voltage, current, and frequency, and also carries sample fault causal link data, such as an event record of a wire burned out due to overload. That is, these sample power acquisition data not only provide power state information but also reflect prior fault diagnosis information. At the same time, there is an initial power system fault diagnosis network, which consists of a first long-short-term memory network (LSTM) and a classifier.
The sample power acquisition data sequence may be, for example, data acquired by multiple power data acquisition activities, forming a series of power feature training data. For example, if data such as voltage, current, and frequency are collected once per second, continuously for 24 hours, a sequence of 86,400 data points is obtained.
The sample fault causal link data may describe information of causes and results of the power system faults and may be used to reflect a priori fault diagnosis data of the power signature training data. For example, if a sudden drop in voltage is found at some time of data acquisition and later ascertained to be due to a certain switch tripping, this "switch tripping-voltage drop" information is the fault causal link data.
The initial power system fault diagnosis network is a predetermined neural network model for diagnosing power system faults, and it includes a first long-short-term memory network (LSTM) and a classifier. The first long-short-term memory network is used for processing the time-series characteristics of power data, because in a power system many faults are related to the states that preceded them; the classifier is used for judging, according to the output of the LSTM, whether the current power system has a fault and which type of fault it is.
So, in brief, while collecting the operational data of the power system, the relevant prior fault diagnosis data is also recorded. These data are used to train a neural network model so that it can identify potential faults from the power data.
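For concreteness only (the text does not give an implementation), a minimal PyTorch sketch of such an initial network, a first LSTM that encodes the time-series power features followed by a classifier, might look as follows; the class name, feature count, hidden size, and number of fault classes are all assumptions:

```python
import torch
import torch.nn as nn

class InitialFaultDiagnosisNet(nn.Module):
    """Illustrative sketch of the initial power system fault diagnosis network:
    a first LSTM encoder followed by a classifier."""

    def __init__(self, n_features: int = 3, hidden_size: int = 128, n_fault_classes: int = 8):
        super().__init__()
        # First long-short-term memory network: encodes voltage/current/frequency sequences
        self.first_lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                                  num_layers=2, batch_first=True)
        # Classifier: maps the power state sequence relation vector to fault causal chain classes
        self.classifier = nn.Linear(hidden_size, n_fault_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features), e.g. 10 one-second readings of 3 quantities
        _, (h_n, _) = self.first_lstm(x)
        relation_vector = h_n[-1]                 # power state sequence relation vector, shape (batch, 128)
        return self.classifier(relation_vector)   # fault prediction logits


logits = InitialFaultDiagnosisNet()(torch.randn(4, 10, 3))   # 4 samples, 10 seconds, 3 features
```

In this sketch, the final hidden state of the LSTM plays the role of the 128-dimensional power state sequence relation vector discussed further below.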
And step S120, obtaining a second long-term and short-term memory network corresponding to each electric power data acquisition event.
In each power data acquisition event, a corresponding one of the second long-term and short-term memory networks may be acquired. These second long and short term memory networks are provided to better understand and learn the power data changes and failure modes in each event.
Step S130, for each electric power data acquisition event, performing a cyclic network knowledge learning operation on a second long-short-term memory network corresponding to the electric power data acquisition event according to a sample electric power acquisition data sequence corresponding to the electric power data acquisition event until the second long-short-term memory network meets the network convergence requirement, and generating a second long-short-term memory network corresponding to the electric power data acquisition event and completing knowledge learning.
Then, the sample power acquisition data sequence corresponding to each power data acquisition event can be used to make the second long-short-term memory network perform the cyclic network knowledge learning operation. The process is just like training the second long-short-term memory network, which continuously adjusts its own parameters according to the sample power acquisition data sequence so as to better understand and predict faults. This process continues until the second long-short-term memory network reaches the set convergence requirement, e.g., the error falls within an acceptable range.
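A minimal sketch of this cyclic loop follows, assuming a per-event second network module, a hypothetical train_pass function that performs one network knowledge learning operation over the event's sample sequence and returns the learning cost, and a simple cost-threshold convergence test (the text does not fix a particular criterion):

```python
def learn_event_knowledge(second_net, sample_sequence, train_pass,
                          tol: float = 1e-3, max_cycles: int = 200):
    """Cyclically run the network knowledge learning operation for one power data
    acquisition event until the convergence requirement is met (illustrative sketch)."""
    for cycle in range(max_cycles):
        cost = train_pass(second_net, sample_sequence)   # one knowledge learning operation
        if cost < tol:                                   # network convergence requirement met
            break
    return second_net                                    # second LSTM that completed knowledge learning
```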
Step S140, generating a target power system fault diagnosis network of the target power service partition according to the initial power system fault diagnosis network and the second long-short-term memory network which corresponds to each power data acquisition event and has completed knowledge learning.
In this embodiment, the second long-term and short-term memory network for completing knowledge learning combines the original fault diagnosis network and the network knowledge learned for each power data acquisition event, so that fault diagnosis can be performed more accurately.
Wherein, for each of the power data collection events, the network knowledge learning operation includes:
Step S101, for each electric power feature training data corresponding to the electric power data acquisition event, encoding the electric power feature training data corresponding to the electric power data acquisition event according to the first long-short-term memory network and the second long-short-term memory network corresponding to the electric power data acquisition event, and obtaining fault prediction data of the electric power feature training data corresponding to the electric power data acquisition event according to the classifier according to the electric power state sequence relation vector obtained by encoding.
For example, assume that a power data acquisition event is being processed, where the power profile training data contained therein is information such as voltage, current, and frequency at a certain time. First, the power signature training data is encoded using a first long-short-term memory network (LSTM) and a second long-short-term memory network corresponding to the power data acquisition event. The result of the encoding is a power state sequence relationship vector, whereby the time dependence between the power signature training data is captured. And then, processing the power state sequence relation vector by using a classifier to obtain the fault prediction data of the power characteristic training data corresponding to the power data acquisition event. That is, the type of fault that may occur based on the current power state is predicted.
Illustratively, assume that a power data acquisition event is being processed, and the collected power feature training data includes information such as voltage, current, and frequency, sampled once per second for 10 consecutive seconds. This power feature training data is the data to be encoded.
These power feature training data are first encoded using the first long-short-term memory network (LSTM) and the second long-short-term memory network corresponding to the power data acquisition event. The encoding aims to capture the time-series relationships in the power feature training data; for example, the current power state may be affected by the power state of the last few seconds or even longer, and if the current keeps rising, a subsequent voltage drop may result. The result of the encoding is a power state sequence relation vector, a high-dimensional vector in which each element represents a pattern or trend in the original data. As another example, if a continuous decrease in voltage is found during the evening peak period, this is an important pattern that the encoded power state sequence relation vector reflects.
The power state sequence relation vector is obtained by encoding time series data through a neural network (in this example, a long-short-term memory network, LSTM). The actual expression format depends on the specific neural network structure and the nature of the input data, and in general, is a high-dimensional numerical vector, and each element is a real number.
For example, assuming that the last hidden layer of the LSTM network has a size of 128, each input sample is encoded by the LSTM network to obtain a vector of length 128. This vector is the power state sequence relationship vector.
For example, one possible power state sequence relationship vector may be:
[0.1, -0.2, 0.4, 0.6, -0.5, ..., 0.3] (128 elements in total)
This power state sequence relationship vector represents time-dependent patterns and trends in the original power state sequence. The specific values of each element are learned by a neural network and reflect certain characteristics or information extracted from the original data. It is noted that the concrete meaning of these elements is often not easily understood intuitively, as they are abstract features that are represented in a high-dimensional space.
This power state sequence relationship vector is then processed by a classifier. The classifier is used for predicting the possible fault type according to the input power state sequence relationship vector. For example, if the pattern represented by the power state sequence relationship vector is "current continuously rising, voltage beginning to drop", the classifier may predict a "current rising - voltage dropping - power supply overload" fault chain. This prediction result can be understood as the fault prediction data of the power feature training data. As another example, if the input power state sequence relationship vector reflects a pattern of "evening-peak voltage drop", the classifier may predict a "voltage drop - current drop - power supply shortage" fault.
In this process, coding and failure prediction are closely related. The original power data is converted into a vector capable of representing the internal mode by encoding, and then the power state sequence relation vector is used for fault prediction.
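Purely for illustration, the classifier output can be turned into fault prediction data by taking the most probable fault causal chain class; the two chain labels below are invented placeholders, not classes defined in the text:

```python
import torch

FAULT_CHAINS = {0: "current rising - voltage dropping - power supply overload",
                1: "voltage drop - current drop - power supply shortage"}

logits = torch.tensor([[2.1, -0.4]])                  # classifier output for one relation vector
predicted = FAULT_CHAINS[int(logits.argmax(dim=1))]   # fault prediction data for this sample
print(predicted)
```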
Step S102, generating network learning cost parameters corresponding to the power data acquisition event according to the feature distance between the fault prediction data and the prior fault diagnosis data corresponding to the power feature training data corresponding to the power data acquisition event.
Next, a feature distance between the fault prediction data and the prior fault diagnosis data corresponding to each of the power feature training data is calculated. This can be understood as the gap between the predicted and the actual results. For example, if it is predicted that an overvoltage fault will occur, but in fact an overcurrent fault will occur, then this characteristic distance will be relatively large. From this feature distance, a cost parameter for network learning can be calculated.
For example, assume that the failure prediction data is "voltage drop-current rise-power supply overload". At the same time, there is also a priori fault diagnosis data corresponding to the power data acquisition event, i.e. a fault causal chain actually occurring, such as "voltage drop-current rise-power overload".
Next, a feature distance between the predicted fault causal link data and the prior fault diagnosis data is calculated. If the predicted and actually occurring fault cause and effect links are identical, the feature distance is 0, whereas if the two are completely different, the feature distance is large. This feature distance is a cost parameter for network learning, and the objective of this embodiment is to minimize this cost parameter by adjusting the network parameters, thereby improving the accuracy of the prediction.
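The text does not pin the "feature distance" to a specific metric; one common realization (an assumption, not the patent's definition) treats the prior fault causal chain as a class label and uses the cross-entropy between the classifier logits and that label, which shrinks toward zero as prediction and prior diagnosis agree:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.1, -0.4]])          # predicted fault causal chain scores
prior_label = torch.tensor([0])               # prior fault diagnosis data as a class index
cost = F.cross_entropy(logits, prior_label)   # network learning cost parameter for this sample
print(float(cost))                            # small when the prediction matches the prior diagnosis
```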
And step S103, if the network convergence requirement is not met, updating the parameter information of the second long-short-period memory network corresponding to the electric power data acquisition event according to the network learning cost parameter.
Finally, if the predicted outcome of the network has not yet reached the set convergence requirement (e.g., the feature distance is not small enough), the network parameters need to be updated to improve the prediction. Specifically, the parameters of the second long-short-term memory network corresponding to the power data acquisition event are updated according to the calculated network learning cost parameter.
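A sketch of this update step, under the assumption (consistent with the text, which only updates the second network's parameter information) that the optimizer is built over the second LSTM's parameters alone, leaving the first network and classifier untouched:

```python
import torch

def update_second_network(cost: torch.Tensor, optimizer: torch.optim.Optimizer) -> None:
    """One parameter update driven by the network learning cost parameter; the optimizer
    holds only the second LSTM's parameters, so the first network stays fixed."""
    optimizer.zero_grad()
    cost.backward()      # backpropagate the learning cost
    optimizer.step()     # update parameter information of the second long-short-term memory network

# e.g. optimizer = torch.optim.Adam(second_net.parameters(), lr=1e-3)   # second network only
```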
Based on the above steps, first, a sample power acquisition data sequence of a plurality of power data acquisition events of a target power service partition and an initial power system fault diagnosis network are acquired. And then, acquiring a second long-term and short-term memory network corresponding to each power data acquisition event. And then, according to the sample power acquisition data sequence of each power data acquisition event, knowledge learning is circularly carried out on the corresponding second long-term and short-term memory network until the network meets the convergence requirement, and the second long-term and short-term memory network for completing the knowledge learning is generated. And finally, generating a target power system fault diagnosis network of the target power service partition based on the initial power system fault diagnosis network and a second long-short-term memory network which is corresponding to each power data acquisition event and completes knowledge learning. Therefore, the accuracy and the efficiency of fault diagnosis of the power system can be effectively improved.
That is, the application can capture and understand the complex time sequence mode by adopting the long-term memory network to learn knowledge, thereby improving the accuracy of fault diagnosis of the power system. By carrying out specific network knowledge learning operation on each power data acquisition event, different power data acquisition events can be better adapted, and the fault diagnosis process is finer and optimized. By generating the target power system fault diagnosis network of the target power service partition, the power service partition can be effectively monitored and managed, faults can be timely found and processed, and therefore stability of the power system is enhanced.
In one possible embodiment, the first long-short-term memory network comprises a plurality of first coding units, and the second long-short-term memory network comprises second coding units arranged in parallel with some of the first coding units.
Assume that a power data acquisition event is being processed, and the collected power feature training data includes voltage, current, and frequency readings (one reading per second) for 10 consecutive seconds. In this example, it is assumed that the first long-short-term memory network (LSTM) comprises two first coding units and the second long-short-term memory network comprises one second coding unit arranged in parallel with a first coding unit.
In step S101, encoding the power feature training data corresponding to the power data acquisition event according to the first long-short-term memory network and the second long-short-term memory network corresponding to the power data acquisition event may include:
Firstly, feature dependency relation coding is carried out on the power feature training data according to a plurality of coding branches; each coding branch comprises a first coding unit, and some of the coding branches further comprise a second coding unit arranged in parallel with the first coding unit of that branch.
Wherein the feature dependency encoding comprises: and for the coding branches which do not cover the second coding unit, coding the loading data of the coding branches according to the first coding unit of the coding branches, and taking the power state sequence relation vector obtained by coding as the generation data of the coding branches.
The loading data of the first coding branch is power characteristic training data, the loading data of the coding branches except for the first coding branch is the generating data of the previous coding branch of the coding branch, and the generating data of the final coding unit is used as the loading data of the classifier.
Then, for the coding branch covering the second coding unit, according to the first coding unit and the second coding unit, respectively coding the loading data of the coding branch, carrying out feature integration on the power state sequence relation vector obtained by coding of the first coding unit and the second coding unit, and taking the power state sequence relation vector after feature integration as the generation data of the coding branch.
First, the first coding branch (comprising only one first coding unit) codes the power feature training data. This process is just like extracting patterns or trends in the power data, such as whether the voltage and current have a synchronous rising or falling trend. The result of the encoding is a power state sequence relationship vector, which is the generated data of the encoding branch.
The loading data of the second coding branch is then the generated data of the first coding branch, i.e., that power state sequence relation vector. Since the second coding branch covers a second coding unit, it uses both the first coding unit and the second coding unit to encode the loading data. This step is like extracting higher-level patterns, such as whether the synchronized rising trend of voltage and current persists for a period of time. The first coding unit and the second coding unit each produce a power state sequence relation vector; these two vectors are then feature-integrated, and the feature-integrated power state sequence relation vector is used as the generated data of this coding branch.
Finally, the generated data of the final coding unit (i.e. the second coding branch), i.e. the power state sequence relation vector after that feature integration, is sent to the classifier for fault prediction.
Through the above process, the input data can be abstracted and encoded at different levels, enabling the network to capture more complex patterns and trends. The output of each coding branch can be used by the next coding branch, forming a multi-stage feature extraction and fusion process.
It should be noted that, in the above embodiment, the encoding unit refers to a unit or node in the long-short-term memory network, which is capable of processing input data and outputting the encoded result. The coding units form a layer-by-layer structure in the neural network, and deep learning is performed on complex input data through cooperative work.
Specifically for the LSTM, each coding unit is actually a structure comprising four interacting parts: the input gate, the forget gate, the output gate, and the cell state. These parts together determine how the state of the network is updated.
For example, assume that a first LSTM includes two first coding units and a second LSTM includes one second coding unit.
1. A first encoding unit: when the electric power characteristic training data such as voltage, current and frequency are input into the first LSTM, each first coding unit receives the data and calculates the data according to the weight and the bias parameters of the first coding unit to obtain a coding result. This result can be seen as an abstract representation of the raw data, capturing a certain pattern or trend.
2. A second encoding unit: for the coding branches covering the second coding unit, the second coding unit will work in parallel with the first coding unit, jointly processing the input data. The second encoding unit may capture a different pattern or trend than the first encoding unit so that the input data may be understood from more angles.
In this way, each coding unit is independently learning and extracting features from the input data, and the final output result is an integration of all the coding unit results.
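The parallel arrangement of first and second coding units described above could be sketched roughly as below; this is an illustration under assumptions (each coding unit is modelled as a one-layer LSTM, and feature integration is done by element-wise averaging, which is only one possible integration):

```python
import torch
import torch.nn as nn

class CodingBranch(nn.Module):
    """One coding branch: a first coding unit, optionally with a parallel second coding unit
    whose encodings are feature-integrated (illustrative sketch)."""

    def __init__(self, in_size: int, hidden_size: int, has_second_unit: bool = False):
        super().__init__()
        self.first_unit = nn.LSTM(in_size, hidden_size, batch_first=True)
        self.second_unit = nn.LSTM(in_size, hidden_size, batch_first=True) if has_second_unit else None

    def forward(self, loading_data: torch.Tensor) -> torch.Tensor:
        out1, _ = self.first_unit(loading_data)
        if self.second_unit is None:
            return out1                      # generated data of a branch without a second unit
        out2, _ = self.second_unit(loading_data)
        return 0.5 * (out1 + out2)           # feature integration by element-wise averaging


# Loading data of the first branch is the power feature training data; later branches
# take the previous branch's generated data, and the last output feeds the classifier.
branch1 = CodingBranch(in_size=3, hidden_size=64)
branch2 = CodingBranch(in_size=64, hidden_size=64, has_second_unit=True)
generated = branch2(branch1(torch.randn(4, 10, 3)))   # shape (4, 10, 64)
```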
In one possible implementation, step S120 may include:
step S121, for each power data acquisition event, determines a first statistical value of power feature training data in a sample power acquisition data sequence corresponding to the power data acquisition event.
For example, assume that three power data acquisition events are being processed, power data acquisition event a, power data acquisition event B, and power data acquisition event C, respectively. Each power data acquisition event has a corresponding sample power acquisition data sequence that includes voltage, current, and frequency readings over a continuous period of time.
First, for each power data acquisition event, a first statistical value of the power feature training data corresponding to the power data acquisition event is calculated. This statistical value may be some measure, such as the number of data points. For example, the power feature training data of power data acquisition event A comprises 2000 data points, that of power data acquisition event B comprises 3000 data points, and that of power data acquisition event C comprises 4000 data points.
Step S122, obtaining a preset mapping table, where the preset mapping table includes a plurality of reference value intervals, and a target statistic value corresponding to each reference value interval in the plurality of reference value intervals.
For example, the preset mapping table may specify: if the number of data points is between 2000 and 2500, the corresponding target statistical value is 2; if the number of data points is between 2500 and 3500, the corresponding target statistical value is 3; if the number of data points is between 3500 and 4500, the corresponding target statistical value is 4.
Step S123, for each of the electric power data collection events, determining a target interval associated with the first statistic value corresponding to the electric power data collection event in the multiple reference value intervals, determining a target statistic value corresponding to the target interval as a second statistic value corresponding to the electric power data collection event, where the second statistic value is a statistic value of a second coding unit covered in a second long-short-term memory network, and the first statistic value and the second statistic value are positively associated.
For example, the second statistic of the power data collection event a is 2, the second statistic of the power data collection event B is 3, and the second statistic of the power data collection event C is 4.
Step S124, for each electric power data acquisition event, generating a second long-term and short-term memory network corresponding to the electric power data acquisition event according to a second coding unit of a second statistical value corresponding to the electric power data acquisition event.
For example, it may be determined how many second encoding units to generate according to the second statistics of each power data acquisition event, so as to generate the second LSTM corresponding to the power data acquisition event. For example, if the second statistic is 2, two second coding units are generated; if the second statistic is 3, three second coding units are generated, and so on.
Thus, each power data acquisition event has a specific second LSTM dynamically generated according to the quantity of the power characteristic training data, so that the structure of the neural network can adapt to the data processing requirements of different scales.
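To make the interval lookup in steps S121-S124 concrete, the following is a minimal Python sketch, not the claimed implementation: the interval boundaries and unit counts come from the example above, while the function names and the placeholder representation of a second LSTM are assumptions for illustration only.

```python
# Hypothetical sketch of steps S121-S124: deriving the number of second encoding
# units for each power data acquisition event from a preset mapping table, then
# "generating" a second LSTM of that size. Interval boundaries follow the example
# above; all function and variable names are illustrative.

PRESET_MAPPING_TABLE = [
    ((2000, 2500), 2),   # first statistic in [2000, 2500) -> 2 second encoding units
    ((2500, 3500), 3),
    ((3500, 4500), 4),
]

def second_statistic(first_statistic: int) -> int:
    """Map the first statistic (number of data points) to the target statistic."""
    for (low, high), target in PRESET_MAPPING_TABLE:
        if low <= first_statistic < high:
            return target
    raise ValueError(f"no reference value interval covers {first_statistic}")

def build_second_lstm(num_units: int) -> list:
    """Stand-in for generating a second LSTM containing `num_units` second encoding units."""
    return [f"second_encoding_unit_{i}" for i in range(num_units)]

events = {"A": 2000, "B": 3000, "C": 4000}   # first statistics from the example
second_lstms = {name: build_second_lstm(second_statistic(n)) for name, n in events.items()}
print({name: len(units) for name, units in second_lstms.items()})   # {'A': 2, 'B': 3, 'C': 4}
```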
In a possible implementation manner, the embodiment may further determine an event trigger tag of each of the plurality of power data collection events, if a power data collection event matching the event trigger tag exists in the plurality of power data collection events, take the power data collection event matching the event trigger tag as a power data collection event cluster, and combine sample power collection data sequences corresponding to the power data collection events matching the event trigger tag, so as to generate a sample power collection data sequence corresponding to the power data collection event cluster.
For example, suppose four power data acquisition events are being processed: power data acquisition event A, power data acquisition event B, power data acquisition event C, and power data acquisition event D. Each power data acquisition event has a corresponding sample power acquisition data sequence that includes voltage, current, and frequency readings over a continuous period of time. For each power data acquisition event, its event trigger tag is determined. An event trigger tag may be some feature or condition, such as "voltage drops more than 10%" or "current exceeds a threshold". In this example, it is assumed that the trigger tags for both power data acquisition event A and power data acquisition event B are "voltage drop exceeding 10%", while the trigger tags for power data acquisition event C and power data acquisition event D are both "current exceeding the threshold".
Therefore, the power data acquisition events with the same event triggering label can be classified into one type and used as one power data acquisition event cluster. In this example, power data acquisition event a and power data acquisition event B would be grouped into one power data acquisition event cluster, and power data acquisition event C and power data acquisition event D would be grouped into another event cluster. And then merging the sample electric power collection data sequences in each electric power data collection event cluster to generate a sample electric power collection data sequence corresponding to the electric power data collection event cluster.
On this basis, step S120 may further include:
and acquiring a second long-term and short-term memory network corresponding to each power data acquisition event cluster and a second long-term and short-term memory network corresponding to each power data acquisition event except the power data acquisition event cluster in the plurality of power data acquisition events.
For example, this step is the same as the process described above, and a second statistic is determined based on the statistic of the power characteristic training data corresponding to the power data collection event (here, the sample power collection data sequence corresponding to the event cluster), and a second LSTM is generated based thereon.
In this way, events with similar characteristics can be categorized, so that the neural network can learn and extract features more effectively, thereby improving the accuracy of prediction.
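As an illustrative sketch only, clustering events by trigger tag and merging their sample sequences could look as follows; the tag strings and sequence values mirror the example above, and all identifiers are hypothetical.

```python
# Illustrative sketch (not the claimed implementation) of clustering power data
# acquisition events by event trigger tag and merging their sample sequences.
from collections import defaultdict

events = {
    "A": {"tag": "voltage drop > 10%", "sequence": [230.1, 206.4, 205.9]},
    "B": {"tag": "voltage drop > 10%", "sequence": [231.0, 207.2]},
    "C": {"tag": "current exceeds threshold", "sequence": [15.2, 18.7]},
    "D": {"tag": "current exceeds threshold", "sequence": [16.0, 19.1]},
}

clusters = defaultdict(list)
for name, event in events.items():
    clusters[event["tag"]].append(name)

# Merge the sample power acquisition data sequences within each cluster.
cluster_sequences = {
    tag: [x for name in members for x in events[name]["sequence"]]
    for tag, members in clusters.items()
}
print(dict(clusters))      # {'voltage drop > 10%': ['A', 'B'], 'current exceeds threshold': ['C', 'D']}
print(cluster_sequences)   # merged sequences per cluster
```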
In one possible implementation, step S140 may include any one of the following:
1. Performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to the plurality of power data acquisition events, and generating a feature-integrated second long-short-term memory network. Then, determining the initial power system fault diagnosis network and the feature-integrated second long-short-term memory network as the target power system fault diagnosis network corresponding to one of the plurality of power data acquisition events, where the second long-short-term memory networks corresponding to the plurality of power data acquisition events share the same model architecture.
2. For each power data acquisition event, determining the initial power system fault diagnosis network and the second long-short-term memory network that corresponds to the power data acquisition event and has completed knowledge learning as the target power system fault diagnosis network corresponding to that power data acquisition event.
For example, assume that there is an initial power system fault diagnosis network, and four power data collection events A, B, C and D, each corresponding to a second LSTM that has completed knowledge learning.
Firstly, feature integration is carried out on second network function layer parameter information of a second LSTM corresponding to the four events, and the second LSTM after feature integration is generated. Feature integration may be achieved by a variety of methods, such as averaging, weighted averaging, or more complex machine learning models.
Then, there are two ways to generate the target power system fault diagnosis network:
1. Determining the initial power system fault diagnosis network and the feature-integrated second LSTM as the target power system fault diagnosis network corresponding to one of the power data acquisition events (such as event A). In this case, the second LSTMs corresponding to all power data acquisition events need to have the same model architecture.
2. For each power data acquisition event, determining the initial power system fault diagnosis network and the second LSTM corresponding to that event as the target power system fault diagnosis network corresponding to that event. In this case, events A, B, C and D would each obtain a customized target power system fault diagnosis network.
In this way, a target power system fault diagnosis network suitable for processing the corresponding event can be generated according to the characteristics of each power data acquisition event and the learned knowledge, so that the prediction of the neural network is more accurate and effective.
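A minimal sketch of the two generation modes follows, under the assumption that a learned network can be represented by a parameter dictionary; `integrate_params` is a hypothetical stand-in for the feature-integration step, and simple averaging is only one possible choice.

```python
# Hypothetical sketch of the two ways of building target fault diagnosis networks.
# `integrate_params` stands in for the feature-integration step; all names are illustrative.

def integrate_params(param_sets: list) -> dict:
    """Simple element-wise average of identically structured parameter dicts."""
    keys = param_sets[0].keys()
    return {k: sum(p[k] for p in param_sets) / len(param_sets) for k in keys}

initial_net = {"classifier_bias": 0.0}                       # placeholder initial diagnosis network
second_lstm_params = {                                       # learned second-LSTM parameters per event
    "A": {"w": 0.10}, "B": {"w": 0.30}, "C": {"w": 0.20}, "D": {"w": 0.40},
}

# Mode 1: one shared target network built from the feature-integrated second LSTM
# (requires identical model architectures across events).
shared_target = (initial_net, integrate_params(list(second_lstm_params.values())))

# Mode 2: one customized target network per power data acquisition event.
per_event_targets = {e: (initial_net, p) for e, p in second_lstm_params.items()}

print(shared_target)            # integrated w = 0.25
print(per_event_targets["A"])
```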
In a possible implementation manner, performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to a plurality of power data acquisition events among the plurality of power data acquisition events includes:
1. Acquiring the acquisition path of each power data acquisition event.
For example, assume there are four power data collection events A, B, C and D, each corresponding to a second long-short-term memory network (LSTM) that has completed knowledge learning. Each event has an acquisition path, which can be understood as a route or process of power data acquisition.
First, the acquisition path of each power data acquisition event is acquired. For example, the acquisition path of event A may be "power plant -> substation -> user", the acquisition path of event B may be "power plant -> user", and the acquisition paths of event C and event D may both be "power plant -> relay -> user".
2. Determining a first dependency relationship value among the plurality of power data acquisition events according to the dependency relationship values among the acquisition paths of the power data acquisition events.
Then, a first dependency relationship value between the events is determined according to the dependency relationship values among the acquisition paths of the power data acquisition events. The dependency relationship value may be some measure, such as similarity or distance. In this example, since the acquisition paths of event C and event D are identical, the first dependency value between them may be the largest; the acquisition paths of event A and event B are different, so the first dependency value between them may be smaller.
For example, the dependency value between the acquisition paths of two power data acquisition events may be used to measure the degree of correlation of the two power data acquisition events under a specific scenario. The degree of such association may be based on a number of factors, such as the temporal order in which the events occur, their frequency, and their impact. One possible calculation formula is as follows:
Dependency(PathA, PathB) = α × P(PathB|PathA) + β × P(PathA|PathB)
where P(PathB|PathA) represents the probability that the acquisition path of event B occurs after the acquisition path of event A has occurred, and P(PathA|PathB) represents the probability that the acquisition path of event A occurs after the acquisition path of event B has occurred. α and β are weight parameters, adjusted according to the actual application scenario.
For example, if in 100 observations the acquisition path of event A occurs 30 times, the acquisition path of event B occurs 20 times after the acquisition path of event A, and conversely the acquisition path of event A occurs 15 times after the acquisition path of event B, then:
P(PathB|PathA) = 20 / 30 ≈ 0.67
P(PathA|PathB) = 15 / 20 = 0.75
Assuming that α is set to 0.6 and β is set to 0.4, the dependency value can be calculated as:
Dependency(PathA, PathB) = 0.6 × 0.67 + 0.4 × 0.75 ≈ 0.70
This value represents the degree of dependence between the acquisition path of event A and the acquisition path of event B. This is only a simple example; the actual calculation method may be more complex and take more factors into account. A short code sketch of this calculation is given after step 3 below.
3. Performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to the power data acquisition events whose first dependency relationship values meet the target requirement.
For example, the events whose first dependency values meet the target requirement (for example, exceed a certain threshold) may be selected, and feature integration may be performed on the second network function layer parameter information of the second LSTMs corresponding to those events. For example, if the goal is to select events with identical acquisition paths for feature integration, then event C and event D will be selected; if the goal is to select events whose acquisition paths have some similarity, then event A, event C, and event D may be selected.
By the method, feature integration can be performed according to the acquisition path and the dependency relationship value of the power data acquisition event, so that the neural network can learn and extract features better, and the prediction accuracy is improved.
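The following sketch reproduces the path dependency calculation from the worked example above (the sketch referred to after the formula); the observation counts and the weights α = 0.6, β = 0.4 are those of the example, and the function name is hypothetical.

```python
# Sketch of the path dependency value from the example above:
# Dependency(PathA, PathB) = alpha * P(PathB|PathA) + beta * P(PathA|PathB).
# The observation counts and weights are those of the example; names are illustrative.

def path_dependency(n_a: int, n_b_after_a: int, n_b: int, n_a_after_b: int,
                    alpha: float = 0.6, beta: float = 0.4) -> float:
    p_b_given_a = n_b_after_a / n_a   # P(PathB | PathA)
    p_a_given_b = n_a_after_b / n_b   # P(PathA | PathB)
    return alpha * p_b_given_a + beta * p_a_given_b

# Example: A's path observed 30 times, B's path follows A's 20 times,
# B's path observed 20 times, A's path follows B's 15 times.
value = path_dependency(n_a=30, n_b_after_a=20, n_b=20, n_a_after_b=15)
print(round(value, 2))   # 0.7
```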
In another possible implementation manner, performing feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to a plurality of power data acquisition events among the plurality of power data acquisition events may further include:
1. Acquiring event evaluation information of each of the plurality of power data acquisition events, and determining the event participation degree of each of the plurality of power data acquisition events based on the event evaluation information of each of the plurality of power data acquisition events, where the event evaluation information of one power data acquisition event comprises at least one of a statistical value of the power feature training data in the sample power acquisition data sequence of the power data acquisition event or an influence coefficient of the power data acquisition event. Alternatively, acquiring target adaptation scenario information corresponding to the target power system fault diagnosis network and determining the target power data acquisition event corresponding to the target adaptation scenario information; if the target power data acquisition event corresponding to the target adaptation scenario information is one of the plurality of power data acquisition events, the event participation degree corresponding to that power data acquisition event is determined as 1 and the event participation degrees corresponding to all other power data acquisition events are determined as 0; if the target power data acquisition event corresponding to the target adaptation scenario information does not belong to the plurality of power data acquisition events, second dependency relationship values between each of the plurality of power data acquisition events and the target power data acquisition event are determined, and the event participation degree of each power data acquisition event is determined based on these second dependency relationship values, where the second dependency relationship value and the event participation degree are positively correlated.
For example, assume there are four power data collection events A, B, C and D, each corresponding to a second long-short-term memory network (LSTM) that has completed knowledge learning.
First, event evaluation information of each power data acquisition event is acquired, where the event evaluation information may include statistical values of the power feature training data or influence coefficients. The event engagement of each event is then determined based on the event evaluation information. For example, if the impact coefficient of event A is the largest, its event engagement may be the highest; if the impact coefficient of event B is small, its event engagement may be low.
Alternatively, the target adaptation scenario information corresponding to the target power system fault diagnosis network can be obtained, the target power data acquisition event can be determined from it, and the event engagement of each event can be determined accordingly. If the target power data acquisition event corresponding to the target adaptation scenario information is one of the events (such as event C), the event engagement of that event is determined as 1 and the event engagements of the other events are determined as 0. If the target power data acquisition event is not any of the four events, then a second dependency relationship value between each event and the target power data acquisition event needs to be calculated, and the event engagement of each event is determined from it.
In this embodiment, the target power system fault diagnosis network is trained to solve a specific type of power problem (e.g., voltage abnormality). Assume that the target adaptation scenario information corresponding to this target power system fault diagnosis network is "voltage abnormality". Then, the target power data acquisition event corresponding to the scenario information "voltage abnormality" needs to be determined. Suppose there are four power data acquisition events A, B, C and D, and each event covers information on one or more power problems: event A covers voltage abnormality, event B covers excessive current, event C covers unstable frequency, and event D covers low power factor. In this case, event A is determined as the target power data acquisition event corresponding to the scenario information "voltage abnormality". Therefore, by obtaining the target adaptation scenario information corresponding to the target power system fault diagnosis network and determining the corresponding target power data acquisition event, the data acquisition event that best reflects the problem targeted by the target power system fault diagnosis network can be found, which improves the accuracy and efficiency of fault diagnosis.
The calculating process of the second dependency relationship value between each electric power data acquisition event and the target electric power data acquisition event in the plurality of electric power data acquisition events may be:
Assume that there are two power data acquisition events A and B. The dependency value of event A on event B can be calculated by the following formula:
Dependency(A, B) = P(B|A) × P(A)
where P(B|A) represents the probability that event B occurs given that event A has occurred, and P(A) represents the probability that event A occurs.
For example, if in 100 observations event A occurs 30 times and event B occurs 20 times after event A occurs, then:
P(A) = 30 / 100 = 0.3
P(B|A) = 20 / 30 ≈ 0.67
Then, the dependency value can be calculated as:
Dependency(A, B) = 0.67 × 0.3 ≈ 0.2
This value represents the dependency of event A on event B. It should be noted that this is only a simple example; the actual calculation method may be more complex and take more factors into account.
2. Fusing the second network function layer parameter information that has completed knowledge learning and corresponds to the plurality of power data acquisition events according to the event participation degree of each power data acquisition event in the plurality of power data acquisition events.
For example, the second network function layer parameter information of the second LSTM corresponding to each power data acquisition event is fused according to the event participation degree of that event. If event A has the highest event participation degree, the parameter information of the second LSTM corresponding to event A may be given a larger weight during the fusion. In this way, feature fusion can be performed according to the event evaluation information or the target adaptation scenario information of the power data acquisition events, so that the neural network can learn and extract features better, thereby improving prediction accuracy.
Illustratively, the following is a specific example:
assuming three power data collection events A, B and C, their event engagement is 0.3, 0.4 and 0.3, respectively. Each event corresponds to a second network (e.g., deep neural network) that completes knowledge learning, and each network has a set of functional layer parameter information.
When fusing these parameter information, the parameter information of each network may be given different weights according to the event participation degree of each event. For example, the network parameter information for event a will be given a weight of 0.3, the network parameter information for event B will be given a weight of 0.4, and the network parameter information for event C will be given a weight of 0.3.
These parameters may then be fused by weighted averaging or a similar method. For example, if each network has a parameter w, the fused parameter w' may be w' = 0.3 × w_A + 0.4 × w_B + 0.3 × w_C.
In this way, the network parameter information corresponding to each event can be fused according to the event participation degree of each power data acquisition event, so that a new network integrating all event information is generated. The new network can be better suitable for different power data acquisition events, so that the accuracy of fault diagnosis is improved.
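A minimal sketch of the engagement-weighted parameter fusion described above follows; the engagement values 0.3/0.4/0.3 come from the example, while the parameter names and values are illustrative placeholders rather than real network weights.

```python
# Minimal sketch of fusing second-network functional-layer parameters by event
# participation degree (weighted averaging, as in the example above). Parameter
# names and values are illustrative placeholders.

engagement = {"A": 0.3, "B": 0.4, "C": 0.3}

# Each event's learned second network exposes the same set of functional-layer parameters.
params = {
    "A": {"w": [0.10, 0.20], "b": [0.01]},
    "B": {"w": [0.30, 0.10], "b": [0.03]},
    "C": {"w": [0.20, 0.40], "b": [0.02]},
}

def fuse(params: dict, engagement: dict) -> dict:
    first_event = next(iter(params))
    fused = {}
    for name in params[first_event]:
        length = len(params[first_event][name])
        fused[name] = [
            sum(engagement[e] * params[e][name][i] for e in params)
            for i in range(length)
        ]
    return fused

print(fuse(params, engagement))
# w[0] = 0.3*0.10 + 0.4*0.30 + 0.3*0.20 = 0.21, and so on for the other entries.
```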
In one possible embodiment, the event evaluation information of a power data collection event includes a statistical value of power feature training data corresponding to the power data collection event and an influence coefficient of the power data collection event.
The determining the event participation degree of each power data acquisition event in the plurality of power data acquisition events based on the event evaluation information of each power data acquisition event in the plurality of power data acquisition events includes:
1. determining a first statistical value of the power characteristic training data corresponding to each power data acquisition event in the plurality of power data acquisition events and a total statistical value of the power characteristic training data corresponding to the plurality of power data acquisition events, and determining a first event participation degree corresponding to each power data acquisition event based on the proportion of the first statistical value corresponding to each power data acquisition event in the total statistical value.
For example, assume there are four power data acquisition events A, B, C and D, each of which corresponds to a set of power feature training data and an impact coefficient.
First, the first statistical value and the total statistical value of the power feature training data corresponding to each event are determined. For example, the power feature training data for event A comprises 2000 data points, the power feature training data for event B comprises 3000 data points, the power feature training data for event C comprises 4000 data points, and the power feature training data for event D comprises 5000 data points. The total statistical value is therefore 14000 (2000 + 3000 + 4000 + 5000). Then, the first event engagement of each event is determined based on the proportion of the first statistical value of that event in the total statistical value. For example, the first event engagement of event A may be 2000 / 14000 ≈ 0.14.
2. Determining a second event participation degree corresponding to each power data acquisition event based on the influence coefficient corresponding to each power data acquisition event in the plurality of power data acquisition events.
For example, the second event engagement of each power data acquisition event may be determined based on the influence coefficient of that event. The influence coefficient may be obtained by expert evaluation or calculated by an algorithm; for example, the influence coefficient of power data acquisition event A is 0.6, the influence coefficient of power data acquisition event B is 0.8, the influence coefficient of power data acquisition event C is 0.7, and the influence coefficient of power data acquisition event D is 0.9.
3. For each power data acquisition event, performing a weighted calculation on the first event participation degree and the second event participation degree corresponding to the power data acquisition event, and generating the event participation degree corresponding to the power data acquisition event.
For example, the specific weighting method may be determined according to actual requirements; for instance, the second event participation degree may be given a higher weight because it reflects the influence of the event.
In this way, the event engagement can be determined based on the power feature training data statistics and the influence coefficient of the power data acquisition event, so that the neural network can learn and extract the features better, and the prediction accuracy is improved.
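The following sketch illustrates how the two participation terms might be combined; the data-point counts and influence coefficients come from the example above, whereas the normalization of the influence coefficients and the 0.4/0.6 combination weights are assumptions (the text only says the influence-based term may be weighted more heavily).

```python
# Sketch of the engagement computation described above: a first participation from
# the share of training data points, a second from the influence coefficient, combined
# by a weighted sum. The 0.4 / 0.6 weights and the normalization of the influence
# coefficients are assumptions made for illustration.

data_points = {"A": 2000, "B": 3000, "C": 4000, "D": 5000}
influence = {"A": 0.6, "B": 0.8, "C": 0.7, "D": 0.9}

total_points = sum(data_points.values())            # 14000
first = {e: n / total_points for e, n in data_points.items()}

total_influence = sum(influence.values())
second = {e: c / total_influence for e, c in influence.items()}   # normalized influence (assumed)

W_FIRST, W_SECOND = 0.4, 0.6
participation = {e: W_FIRST * first[e] + W_SECOND * second[e] for e in data_points}
print({e: round(v, 3) for e, v in participation.items()})
# e.g. A: 0.4*(2000/14000) + 0.6*(0.6/3.0) ≈ 0.177
```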
In one possible embodiment, the method further comprises:
step S150, acquiring target power acquisition data corresponding to the target power service partition.
Step S160, performing fault diagnosis on the target power acquisition data according to the target power system fault diagnosis network of the target power service partition, and generating fault diagnosis data corresponding to the target power acquisition data.
For example, assume there are four power data collection events A, B, C and D, each corresponding to a second long-short-term memory network (LSTM) that has completed knowledge learning. There is also a target power service partition that contains some target power acquisition data.
First, target power acquisition data corresponding to a target power service partition is acquired, which may include readings of various power parameters, such as voltage, current, frequency, and the like.
Then, the target power system fault diagnosis network is used for carrying out fault diagnosis on the target power acquisition data, and corresponding fault diagnosis data are generated. Such fault diagnosis data may help understand the status of the target power service partition, such as whether a fault exists, the type and severity of the fault, and the like.
The training step of the fault diagnosis network of the target power system comprises the following steps:
Step A110, carrying out acquisition event demand analysis on the target power acquisition data to generate acquisition event demand data corresponding to the target power acquisition data, where the acquisition event demand data includes a demand probability value corresponding to each power data acquisition event in the plurality of power data acquisition events.
Step A120, determining the power system fault diagnosis network that has completed knowledge learning and corresponds to the power data acquisition event with the largest demand probability value as the target power system fault diagnosis network; or, carrying out feature integration on the second network function layer parameter information of the second long-short-term memory networks that have completed knowledge learning and correspond to the top-ranked demand probability values in descending order, generating a feature-integrated second long-short-term memory network, and generating the target power system fault diagnosis network according to the initial power system fault diagnosis network and the feature-integrated second long-short-term memory network.
For example, acquisition event demand analysis may be performed on the target power acquisition data to generate the corresponding acquisition event demand data, which includes the demand probability value of each power data acquisition event. Then, either the power system fault diagnosis network corresponding to the event with the largest demand probability value (such as event A) is selected as the target power system fault diagnosis network, or feature integration is performed on the second network function layer parameter information of the second LSTMs corresponding to the top N events ranked by descending demand probability value, a feature-integrated second LSTM is generated, and the target power system fault diagnosis network is then generated from the initial power system fault diagnosis network and the feature-integrated second LSTM.
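A brief sketch of the two options in step A120 follows, under assumed demand probability values; N and the way integration weights are derived from the probabilities are illustrative assumptions, not prescribed by the text.

```python
# Sketch of step A120: either pick the learned network of the event with the largest
# demand probability, or feature-integrate the second-LSTM parameters of the top-N
# events by demand probability. N, the probabilities and the weighting scheme are
# illustrative assumptions.

demand = {"A": 0.45, "B": 0.30, "C": 0.15, "D": 0.10}     # demand probability values

# Option 1: choose the event with the largest demand probability.
best_event = max(demand, key=demand.get)                   # -> "A"

# Option 2: integrate the top-N events, weighting by normalized demand probability.
N = 2
top_n = sorted(demand, key=demand.get, reverse=True)[:N]   # ["A", "B"]
weight_sum = sum(demand[e] for e in top_n)
integration_weights = {e: demand[e] / weight_sum for e in top_n}
print(best_event, integration_weights)                     # A {'A': 0.6, 'B': 0.4} (approximately)
```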
In a possible implementation manner, the target power collection data is first feedback power data of a power abnormality feedback user, and the fault diagnosis data is first feedback response data corresponding to the feedback power data.
In this way, fault diagnosis and network training can be performed according to the actual situation and requirements of the target power service partition, so that the requirements of users can be better met and the service quality can be improved.
In a possible implementation manner, after generating the fault diagnosis data corresponding to the target power collection data, the embodiment of the present application may further include the following steps:
and step B110, extracting feedback problem nodes from the first feedback power data, and generating feedback problem nodes corresponding to the first feedback power data.
And step B120, generating target solution information corresponding to the feedback problem node according to the feedback problem node.
And step B130, the first feedback response data and the target solution information are issued to the power abnormality feedback user.
And step B140, if the second feedback power data generated by the power abnormality feedback user and the confirmation request aiming at the target solution information are detected, taking the second feedback power data and the solution information selected by the power abnormality feedback user as new target power acquisition data, and performing fault diagnosis on the new target power acquisition data according to the target power system fault diagnosis network to generate second feedback response data.
And step B150, the second feedback response data is issued to the power abnormality feedback user.
For example, assume that there is a target power service partition in which there is a power anomaly feedback user. The user provides a first feedback of power data indicating that a power problem has occurred in their home.
Firstly, extracting feedback problem nodes from the first feedback power data to generate corresponding feedback problem nodes. For example, if the feedback data shows a voltage that is too low, then the "voltage" may be a feedback problem node.
Then, corresponding target solution information is generated according to the feedback problem node. For example, if the problem node is "voltage," the solution may include "check power line", "adjust power settings", and so on.
Next, the first feedback response data (i.e., the previous fault diagnosis data) and the target solution information are issued to the power abnormality feedback user.
If it is detected that the power anomaly feedback user generates second feedback power data and a confirmation request (for example, the user feedback says that they have operated according to the solution but the problem still exists), then the new feedback data and the solution information selected by the user are used as new target power collection data, and then the fault diagnosis is performed on the new data by using the target power system fault diagnosis network, so as to generate second feedback response data.
And finally, the second feedback response data is issued to the power abnormality feedback user.
In this way, the user's feedback data can be continuously processed, a solution provided, and adjusted based on the user's feedback to better address the user's problem.
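As a rough illustration of the feedback flow in steps B110-B150, the sketch below uses an assumed problem-node extraction rule, an assumed solution table, and a placeholder in place of the target power system fault diagnosis network.

```python
# Illustrative sketch (not the claimed implementation) of the feedback flow in
# steps B110-B150: extract a problem node from the feedback power data, look up
# solution information, and re-run diagnosis when the user sends further feedback.
# The node-extraction rule, thresholds and the solution table are assumptions.

SOLUTIONS = {
    "voltage": ["check power line", "adjust power settings"],
    "current": ["inspect load", "check for short circuits"],
}

def extract_problem_node(feedback_data: dict) -> str:
    # Naive rule: report the first quantity outside its nominal band (assumed thresholds).
    if feedback_data.get("voltage_v", 230) < 200:
        return "voltage"
    if feedback_data.get("current_a", 0) > 50:
        return "current"
    return "unknown"

def diagnose(data: dict) -> str:
    return f"diagnosis for {data}"   # placeholder for the target diagnosis network

first_feedback = {"voltage_v": 185}
node = extract_problem_node(first_feedback)
reply = {"response": diagnose(first_feedback), "solutions": SOLUTIONS.get(node, [])}

# If the user confirms a solution and reports again, the new data plus the chosen
# solution become the next diagnosis input.
second_feedback = {"voltage_v": 190, "chosen_solution": "check power line"}
second_reply = diagnose(second_feedback)
print(reply, second_reply)
```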
Fig. 2 schematically illustrates a power service system 100 that may be used to implement various embodiments described in the present disclosure.
For one embodiment, FIG. 2 shows a power service system 100, the power service system 100 having a plurality of processors 102, a control module (chipset) 104 coupled to one or more of the processor(s) 102, a memory 106 coupled to the control module 104, a non-volatile memory (NVM)/storage device 108 coupled to the control module 104, a plurality of input/output devices 110 coupled to the control module 104, and a network interface 112 coupled to the control module 104.
Processor 102 may include a plurality of single-core or multi-core processors, and processor 102 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some alternative embodiments, the power service system 100 can be used as a server device such as a gateway in the embodiments of the present application.
In some alternative embodiments, power service system 100 may include a plurality of computer-readable media (e.g., memory 106 or NVM/storage 108) having instructions 114 and a plurality of processors 102 combined with the plurality of computer-readable media configured to execute instructions 114 to implement modules to perform actions described in this disclosure.
For one embodiment, the control module 104 may include any suitable interface controller to provide any suitable interface to one or more of the processor(s) 102 and/or any suitable device or component in communication with the control module 104.
The control module 104 may include a memory controller module to provide an interface to the memory 106. The memory controller modules may be hardware modules, software modules, and/or firmware modules.
Memory 106 may be used, for example, to load and store data and/or instructions 114 for power service system 100. For one embodiment, memory 106 may comprise any suitable volatile memory, such as a suitable DRAM. In some alternative embodiments, memory 106 may comprise double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 104 may include a plurality of input/output controllers to provide interfaces to the NVM/storage 108 and the input/output device(s) 110.
For example, NVM/storage 108 may be used to store data and/or instructions 114. NVM/storage 108 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage(s).
NVM/storage 108 may include a storage resource that is physically part of the device on which power service system 100 is installed, or it may be accessible by the device without necessarily being part of the device. For example, NVM/storage 108 may be accessed over a network via input/output device(s) 110.
Input/output device(s) 110 may provide an interface for power service system 100 to communicate with any other suitable device, and input/output device 110 may include a communication component, an audio component, a sensor component, and the like. The network interface 112 may provide an interface for the power service system 100 to communicate over a plurality of networks, and the power service system 100 may wirelessly communicate with components of a wireless network based on any of a plurality of wireless network standards and/or protocols, such as WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, one or more of the processor(s) 102 may be packaged together with logic of a plurality of controllers (e.g., memory controller modules) of the control module 104. For one embodiment, one or more of the processor(s) 102 may be packaged together with logic of multiple controllers of the control module 104 to form a system in package. For one embodiment, one or more of the processor(s) 102 may be integrated on the same die with logic of multiple controllers of the control module 104. For one embodiment, one or more of the processor(s) 102 may be integrated on the same die with logic of multiple controllers of the control module 104 to form a system-on-chip.
In various embodiments, the power service system 100 may be, but is not limited to being: a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), and the like. In various embodiments, power service system 100 may have more or fewer components and/or different architectures. For example, in some alternative embodiments, power service system 100 includes multiple cameras, a keyboard, a liquid crystal display screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application specific integrated circuit, and speakers.
The foregoing describes the principles and embodiments of the present application by way of specific examples, which are intended only to facilitate understanding of the method of the present application and its core concepts. Those skilled in the art may make variations to the specific embodiments and the application scope in accordance with the ideas of the present application; accordingly, the contents of this description should not be construed as limiting the present application.
Claims (10)
1. An AI model-based power big data acquisition and processing method is characterized by being applied to a power service system, and comprises the following steps:
acquiring a sample power acquisition data sequence corresponding to each power data acquisition event in a plurality of power data acquisition events aiming at a target power service partition and an initial power system fault diagnosis network, wherein the sample power acquisition data sequence comprises a plurality of power characteristic training data carrying sample fault causal chain data, the sample fault causal chain data of one power characteristic training data is used for reflecting priori fault diagnosis data of the power characteristic training data, and the initial power system fault diagnosis network comprises a first long-term and short-term memory network and a classifier;
Acquiring a second long-term and short-term memory network corresponding to each electric power data acquisition event;
for each electric power data acquisition event, performing cyclic network knowledge learning operation on a second long-short-term memory network corresponding to the electric power data acquisition event according to a sample electric power acquisition data sequence corresponding to the electric power data acquisition event until the second long-short-term memory network meets the network convergence requirement, and generating a second long-short-term memory network corresponding to the electric power data acquisition event for completing knowledge learning;
generating a target power system fault diagnosis network of the target power service partition according to the initial power system fault diagnosis network and a second long-short-term memory network which is corresponding to each power data acquisition event and completes knowledge learning;
the performing, for each power data acquisition event, a cyclic network knowledge learning operation on a second long-term and short-term memory network corresponding to the power data acquisition event according to a sample power acquisition data sequence corresponding to the power data acquisition event, including:
aiming at each electric power characteristic training data corresponding to the electric power data acquisition event, encoding the electric power characteristic training data corresponding to the electric power data acquisition event according to the first long-term and short-term memory network and the second long-term and short-term memory network corresponding to the electric power data acquisition event, and obtaining fault prediction data of the electric power characteristic training data corresponding to the electric power data acquisition event according to the classifier according to the electric power state sequence relation vector obtained by encoding;
Generating network learning cost parameters corresponding to the power data acquisition events according to the feature distances between the fault prediction data and the priori fault diagnosis data corresponding to the power feature training data corresponding to the power data acquisition events;
and if the network convergence requirement is not met, updating the parameter information of a second long-short-period memory network corresponding to the power data acquisition event according to the network learning cost parameter.
2. The AI-model-based power big data acquisition processing method of claim 1, wherein the first long-short-term memory network includes a plurality of first encoding units, and the second long-short-term memory network includes a second encoding unit disposed in parallel with the local first encoding unit;
the encoding of the power feature training data corresponding to the power data acquisition event according to the first long-term and short-term memory network and the second long-term and short-term memory network corresponding to the power data acquisition event includes:
respectively carrying out feature dependency relation coding on the electric power feature training data according to a plurality of coding branches, wherein each coding branch comprises a first coding unit, and the local coding branch also comprises a second coding unit which is arranged in parallel with the first coding unit of the coding branch;
Wherein the feature dependency encoding comprises:
for the coding branch which does not cover the second coding unit, coding the loading data of the coding branch according to the first coding unit of the coding branch, and taking the power state sequence relation vector obtained by coding as the generating data of the coding branch;
the method comprises the steps that the loading data of a first coding branch are electric power characteristic training data, the loading data of coding branches except the first coding branch are the generating data of the previous coding branch of the coding branch, and the generating data of a final coding unit are used as the loading data of the classifier;
for a coding branch covering a second coding unit, respectively coding loading data of the coding branch according to the first coding unit and the second coding unit, carrying out feature integration on the power state sequence relation vector obtained by coding of the first coding unit and the second coding unit, and taking the power state sequence relation vector after feature integration as generated data of the coding branch.
3. The AI-model-based power big data collection processing method according to claim 2, wherein the acquiring the second long-term and short-term memory network corresponding to each of the power data collection events includes:
Determining a first statistical value of power characteristic training data in a sample power acquisition data sequence corresponding to each power data acquisition event;
acquiring a preset mapping table, wherein the preset mapping table comprises a plurality of reference value intervals and target statistical values corresponding to all the reference value intervals in the plurality of reference value intervals;
for each electric power data acquisition event, determining a target interval related to the first statistical value corresponding to the electric power data acquisition event in the multiple reference value intervals, determining a target statistical value corresponding to the target interval as a second statistical value corresponding to the electric power data acquisition event, wherein the second statistical value is a statistical value of a second coding unit covered in a second long-short-term memory network, and the first statistical value and the second statistical value are positively related;
and aiming at each electric power data acquisition event, generating a second long-term and short-term memory network corresponding to the electric power data acquisition event according to a second coding unit of a second statistical value corresponding to the electric power data acquisition event.
4. The AI model-based power big data acquisition processing method of claim 1, further comprising:
Determining an event trigger tag for each of the plurality of power data collection events;
if the electric power data acquisition event matched with the event triggering tag exists in the plurality of electric power data acquisition events, taking the electric power data acquisition event matched with the event triggering tag as an electric power data acquisition event cluster, merging sample electric power acquisition data sequences corresponding to the electric power data acquisition event matched with the event triggering tag, and generating a sample electric power acquisition data sequence corresponding to the electric power data acquisition event cluster;
the obtaining the second long-term and short-term memory network corresponding to each electric power data acquisition event comprises the following steps:
and acquiring a second long-term and short-term memory network corresponding to each power data acquisition event cluster and a second long-term and short-term memory network corresponding to each power data acquisition event except the power data acquisition event cluster in the plurality of power data acquisition events.
5. The AI-model-based power big data collection and processing method according to claim 1, wherein the generating the target power system fault diagnosis network of the target power service partition according to the initial power system fault diagnosis network and the second long-short-term memory network for completing knowledge learning corresponding to each power data collection event includes any one of the following:
Performing feature integration on the second network function layer parameter information of the second long-short-term memory network which is used for completing knowledge learning and corresponds to a plurality of power data acquisition events in the plurality of power data acquisition events, and generating a second long-short-term memory network after feature integration;
determining the initial power system fault diagnosis network and the second long-short-term memory network after feature integration as a target power system fault diagnosis network corresponding to one of the plurality of power data acquisition events, wherein the model architecture of the second long-short-term memory network corresponding to the plurality of power data acquisition events is the same;
or, for each power data acquisition event, determining the initial power system fault diagnosis network and a second long-term and short-term memory network which is corresponding to the power data acquisition event and completes knowledge learning as a target power system fault diagnosis network corresponding to the power data acquisition event.
6. The AI-model-based power big data collection processing method according to claim 5, wherein the feature integrating the second network function layer parameter information of the second long-short-term memory network for completing knowledge learning corresponding to a plurality of power data collection events in the plurality of power data collection events comprises:
Acquiring an acquisition path of each electric power data acquisition event;
determining a first dependency relationship value among the plurality of power data acquisition events according to the dependency relationship value among the acquisition paths of the power data acquisition events;
and carrying out feature integration on the second network function layer parameter information of the second long-short-term memory network which is used for completing knowledge learning and corresponds to a plurality of power data acquisition events with the first dependency relationship value meeting the target requirement.
7. The AI-model-based power big data collection processing method according to claim 5, wherein the feature integrating the second network function layer parameter information of the second long-short-term memory network for completing knowledge learning corresponding to a plurality of power data collection events in the plurality of power data collection events comprises:
acquiring event evaluation information of each power data acquisition event in the plurality of power data acquisition events, and determining the event participation degree of each power data acquisition event in the plurality of power data acquisition events based on the event evaluation information of each power data acquisition event in the plurality of power data acquisition events, wherein the event evaluation information of one power data acquisition event comprises at least one of a statistical value of power characteristic training data in the sample power acquisition data sequence of the power data acquisition event or an influence coefficient of the power data acquisition event; or, acquiring target adaptation scenario information corresponding to the target power system fault diagnosis network, determining a target power data acquisition event corresponding to the target adaptation scenario information, if the target power data acquisition event corresponding to the target adaptation scenario information is any one of the plurality of power data acquisition events, determining an event participation degree corresponding to the any one of the power data acquisition events as 1, determining event participation degrees corresponding to all other than the any one of the power data acquisition events as 0, and if the target power data acquisition event corresponding to the target adaptation scenario information does not belong to any one of the plurality of power data acquisition events, determining second dependency relation values of all the power data acquisition events and the target power data acquisition event respectively, and determining event participation degrees of all the power data acquisition events based on the second dependency relation values of all the power data acquisition events and the target power data acquisition event, wherein the second dependency relation values and the event participation degrees are positively correlated;
And fusing the second network function layer parameter information which is corresponding to the plurality of electric power data acquisition events and completes knowledge learning according to the event participation degree of each electric power data acquisition event in the plurality of electric power data acquisition events.
8. The AI-model-based power big data collection processing method of claim 7, wherein the event evaluation information of a power data collection event includes a statistical value of power feature training data corresponding to the power data collection event and an influence coefficient of the power data collection event;
the determining the event participation degree of each power data acquisition event in the plurality of power data acquisition events based on the event evaluation information of each power data acquisition event in the plurality of power data acquisition events includes:
determining a first statistical value of the power characteristic training data corresponding to each power data acquisition event in the plurality of power data acquisition events and a total statistical value of the power characteristic training data corresponding to the plurality of power data acquisition events, and determining a first event participation degree corresponding to each power data acquisition event based on the proportion of the first statistical value corresponding to each power data acquisition event in the total statistical value;
Determining a second event participation degree corresponding to each power data acquisition event based on the influence coefficient corresponding to each power data acquisition event in the plurality of power data acquisition events;
and for each power data acquisition event, carrying out weighted calculation on the first event participation degree and the second event participation degree corresponding to the power data acquisition event, and generating the event participation degree corresponding to the power data acquisition event.
9. The AI model-based power big data acquisition processing method of claim 1, further comprising:
acquiring target power acquisition data corresponding to a target power service partition;
performing fault diagnosis on the target power acquisition data according to a target power system fault diagnosis network of the target power service partition, and generating fault diagnosis data corresponding to the target power acquisition data;
the training step of the fault diagnosis network of the target power system comprises the following steps:
carrying out acquisition event demand analysis on the target power acquisition data to generate acquisition event demand data corresponding to the target power acquisition data, wherein the acquisition event demand data comprises demand probability values corresponding to each power data acquisition event in the plurality of power data acquisition events;
Determining a power system fault diagnosis network which is subjected to knowledge learning and corresponds to a power data acquisition event with the largest demand probability value as the target power system fault diagnosis network, or carrying out feature integration on second network function layer parameter information of a second long-short-term memory network which is subjected to knowledge learning and corresponds to a plurality of demand probability values arranged in front on the basis of descending order of the demand probability values, generating a second long-short-term memory network after feature integration, and generating the target power system fault diagnosis network according to the initial power system fault diagnosis network and the second long-short-term memory network after feature integration;
the target power acquisition data are first feedback power data of a power abnormality feedback user, and the fault diagnosis data are first feedback response data corresponding to the feedback power data;
after generating the fault diagnosis data corresponding to the target power acquisition data, the method further comprises:
extracting feedback problem nodes from the first feedback power data to generate feedback problem nodes corresponding to the first feedback power data;
generating target solution information corresponding to the feedback problem node according to the feedback problem node;
Issuing the first feedback response data and the target solution information to the power abnormality feedback user;
if the second feedback power data generated by the power abnormality feedback user and the confirmation request aiming at the target solution information are detected, the second feedback power data and the solution information selected by the power abnormality feedback user are used as new target power acquisition data, fault diagnosis is carried out on the new target power acquisition data according to the target power system fault diagnosis network, and second feedback response data is generated;
and transmitting the second feedback response data to the power abnormality feedback user.
10. An electrical power service system comprising a processor and a machine-readable storage medium having stored therein machine-executable instructions that are loaded and executed by the processor to implement the AI-model-based electrical power big data collection processing method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311394719.3A CN117131457B (en) | 2023-10-26 | 2023-10-26 | AI model-based electric power big data acquisition and processing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117131457A true CN117131457A (en) | 2023-11-28 |
CN117131457B CN117131457B (en) | 2024-01-26 |
Family
ID=88856745
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311394719.3A Active CN117131457B (en) | 2023-10-26 | 2023-10-26 | AI model-based electric power big data acquisition and processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117131457B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117572159A (en) * | 2024-01-17 | 2024-02-20 | 成都英华科技有限公司 | Power failure detection method and system based on big data analysis |
CN118035731A (en) * | 2024-04-11 | 2024-05-14 | 深圳华建电力工程技术有限公司 | Electricity safety monitoring and early warning method and service system |
CN118132987A (en) * | 2024-03-05 | 2024-06-04 | 京源中科科技股份有限公司 | Heat energy meter heat data acquisition method and system |
Patent Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108536123A (en) * | 2018-03-26 | 2018-09-14 | 北京交通大学 | Fault diagnosis method for train control on-board equipment based on a long short-term memory network
CN109033450A (en) * | 2018-08-22 | 2018-12-18 | 太原理工大学 | Lift facility failure prediction method based on deep learning |
CN110263846A (en) * | 2019-06-18 | 2019-09-20 | 华北电力大学 | Fault diagnosis method based on deep mining and learning of fault data
WO2021017416A1 (en) * | 2019-07-30 | 2021-02-04 | 重庆邮电大学 | Deep compression power lithium battery fault diagnosis method under perceptual adversarial generation |
CN110829417A (en) * | 2019-11-14 | 2020-02-21 | 电子科技大学 | Electric power system transient stability prediction method based on LSTM double-structure model |
US20210168021A1 (en) * | 2019-11-30 | 2021-06-03 | Huawei Technologies Co., Ltd. | Fault Root Cause Determining Method and Apparatus, and Computer Storage Medium |
CN111241748A (en) * | 2020-01-13 | 2020-06-05 | 华北电力大学 | Wind turbine fault diagnosis based on a long short-term memory recurrent neural network
CN111414477A (en) * | 2020-03-11 | 2020-07-14 | 科大讯飞股份有限公司 | Vehicle fault automatic diagnosis method, device and equipment |
CN111552609A (en) * | 2020-04-12 | 2020-08-18 | 西安电子科技大学 | Abnormal state detection method, system, storage medium, program and server |
US20210357282A1 (en) * | 2020-05-13 | 2021-11-18 | Mastercard International Incorporated | Methods and systems for server failure prediction using server logs |
CN112101431A (en) * | 2020-08-30 | 2020-12-18 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Electronic equipment fault diagnosis system |
CN112712205A (en) * | 2020-12-29 | 2021-04-27 | 南京后生远达科技有限公司 | Power distribution network fault prevention method based on long short-term memory neural network
CN113191556A (en) * | 2021-05-08 | 2021-07-30 | 上海核工程研究设计院有限公司 | Nuclear power Loca event fault prediction and diagnosis method |
CN113360679A (en) * | 2021-07-08 | 2021-09-07 | 北京国信会视科技有限公司 | Fault diagnosis method based on knowledge graph technology |
CN114528897A (en) * | 2021-07-27 | 2022-05-24 | 河北工业大学 | Equipment fault diagnosis method based on knowledge and data fusion drive |
CN114925809A (en) * | 2022-04-13 | 2022-08-19 | 北京印刷学院 | Printing machine bearing fault diagnosis method and device based on LSTM |
CN115062759A (en) * | 2022-05-27 | 2022-09-16 | 江苏大学 | Fault diagnosis method based on improved long short-term memory neural network
CN115166597A (en) * | 2022-06-24 | 2022-10-11 | 华北电力大学(保定) | Power transformer fault diagnosis method considering characteristic coupling relation |
CN115328088A (en) * | 2022-08-11 | 2022-11-11 | 北京理工大学重庆创新中心 | Cloud edge cooperation-based automobile fault diagnosis method and system and intelligent automobile |
CN115659583A (en) * | 2022-09-13 | 2023-01-31 | 王一凡 | Point switch fault diagnosis method |
CN115858825A (en) * | 2023-03-02 | 2023-03-28 | 山东能源数智云科技有限公司 | Equipment fault diagnosis knowledge graph construction method and device based on machine learning |
CN116304940A (en) * | 2023-03-02 | 2023-06-23 | 北京锐达芯集成电路设计有限责任公司 | Analog circuit fault diagnosis method based on long short-term memory neural network
CN116451148A (en) * | 2023-03-17 | 2023-07-18 | 广西大学 | Bearing fault diagnosis method for modal decomposition prediction bidirectional attention network |
CN116881737A (en) * | 2023-09-06 | 2023-10-13 | 四川川锅环保工程有限公司 | System analysis method in industrial intelligent monitoring system |
Non-Patent Citations (3)
Title |
---|
SOUFIANE BELAGOUNE et al.: "Deep learning through LSTM classification and regression for transmission line fault detection, diagnosis and location in large-scale multi-machine power systems", Measurement, pages 1-14 *
SUN Yongfeng: "Research on fault diagnosis methods for wind turbine rolling bearings based on convolutional and recurrent neural networks", China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2023, no. 2, pages 042-794 *
TONG Yizhi: "Bearing fault diagnosis combining residual shrinkage and long short-term memory networks", China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2023, no. 2, pages 029-751 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117572159A (en) * | 2024-01-17 | 2024-02-20 | 成都英华科技有限公司 | Power failure detection method and system based on big data analysis |
CN117572159B (en) * | 2024-01-17 | 2024-03-26 | 成都英华科技有限公司 | Power failure detection method and system based on big data analysis |
CN118132987A (en) * | 2024-03-05 | 2024-06-04 | 京源中科科技股份有限公司 | Heat energy meter heat data acquisition method and system |
CN118035731A (en) * | 2024-04-11 | 2024-05-14 | 深圳华建电力工程技术有限公司 | Electricity safety monitoring and early warning method and service system |
Also Published As
Publication number | Publication date |
---|---|
CN117131457B (en) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117131457B (en) | AI model-based electric power big data acquisition and processing method and system | |
US10317864B2 (en) | Systems and methods for adaptively updating equipment models | |
CN110413227B (en) | Method and system for predicting remaining service life of hard disk device on line | |
CN112118143B (en) | Traffic prediction model training method, traffic prediction method, device, equipment and medium | |
US20160239592A1 (en) | Data-driven battery aging model using statistical analysis and artificial intelligence | |
CN109992473B (en) | Application system monitoring method, device, equipment and storage medium | |
CN105637432A (en) | Identifying anomalous behavior of a monitored entity | |
CN113590429B (en) | Server fault diagnosis method and device and electronic equipment | |
CN117572159B (en) | Power failure detection method and system based on big data analysis | |
CN107426033B (en) | Method and device for predicting state of access terminal of Internet of things | |
CN115964211A (en) | Root cause positioning method, device, equipment and readable medium | |
CN116560794A (en) | Exception handling method and device for virtual machine, medium and computer equipment | |
KR101960755B1 (en) | Method and apparatus of generating unacquired power data | |
CN111614504A (en) | Power grid regulation and control data center service characteristic fault positioning method and system based on time sequence and fault tree analysis | |
RU2703874C1 (en) | Method of monitoring and predicting operation of a gas turbine plant using a matrix of defects | |
CN110347538A (en) | A kind of storage device failure prediction technique and system | |
CN112332529B (en) | Improved electronic protection device for an electric distribution network | |
CN114866438A (en) | Abnormal hidden danger prediction method and system under cloud architecture | |
CN118316190B (en) | Electric power system monitoring system based on Internet of things | |
CN117519052B (en) | Fault analysis method and system based on electronic gas production and manufacturing system | |
CN118069460B (en) | Automatic monitoring and optimizing method and system for application performance | |
CN118555510B (en) | Optical communication device resource allocation method and system based on virtualization technology | |
CN112512072B (en) | VoLTE network fault prediction method and equipment | |
US20220107878A1 (en) | Causal attention-based multi-stream rnn for computer system metric prediction and influential events identification based on metric and event logs | |
CN116840696A (en) | Method and device for predicting remaining life of storage battery, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||