CN116738354B - Method and system for detecting abnormal behavior of electric power Internet of things terminal - Google Patents
- Publication number: CN116738354B
- Application number: CN202311022009.8A
- Authority
- CN
- China
- Prior art keywords
- data
- self
- model
- layer
- organizing map
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Abstract
The invention provides a method and a system for detecting abnormal behavior of electric power Internet of things terminals. Data are collected on intelligent power sensing terminal devices close to the user side and the power production site; irrelevant features and noise are removed after preprocessing; positive and negative samples for judging whether behavior is normal are constructed through a self-organizing map model and a contrast learning model; and the model detection task is migrated to terminal nodes at the network edge and processed there. This reduces the pressure on network bandwidth, speeds up the response of terminal anomaly detection, ensures stable operation of the terminal devices, and realizes real-time processing and security detection of the data of electric power Internet of things terminal devices. Meanwhile, combining the self-organizing map model with the contrast learning model makes the features distinguishing positive and negative samples more obvious, which aids sample classification, and the accuracy and efficiency of detection can be significantly improved through self-learning and automatic updating of a knowledge base.
Description
Technical Field
The invention relates to the technical fields of electric power Internet of things and artificial intelligence, in particular to a method and a system for detecting abnormal behaviors of a terminal of the electric power Internet of things.
Background
With the rapid growth of energy-interconnection demand in the smart grid, the volume of data collected, stored and controlled by terminal devices in the grid's sensing layer is increasing rapidly, and power services are trending toward diversity and timeliness. The wide application of Internet of things technology, novel sensor technology and machine learning is driving electric intelligent terminals toward machine intelligence, perception intelligence and computational intelligence, which in turn generates massive heterogeneous terminal data.
Anomaly detection is an important means of guaranteeing the reliability and availability of the electric power Internet of things; finding anomalies in time can avoid or reduce their impact on user satisfaction. Behavior-based terminal anomaly detection stores and records terminal behaviors, analyzes them statistically with a corresponding algorithm, and compares the terminal's current behavior with the analysis and statistics results, thereby mining the terminal's abnormal behaviors.
For example, the Chinese invention patent with publication number CN115473671A discloses a method and system for detecting power terminal anomalies based on a flow baseline, comprising: collecting all network flow data of the power terminal, and identifying and analyzing it; storing the data in a big data platform and a structured database; performing anomaly detection on the flow dimension, the protocol dimension and the application-layer function-code dimension of the power terminal according to per-time-slice statistics of the upper and lower limits of IP flow, of the upper and lower limits of protocol-specification flow, and of application-layer function-code flow, obtaining first, second and third anomaly detection results; and calculating a final anomaly detection result from the first, second and third results to determine the likelihood that the power terminal is abnormal.
Existing methods for detecting anomalies of electric power Internet of things terminals collect the data of all terminal devices, send it to a master station management system, and process the big data centrally. However, with the explosive growth in the number of terminal devices and the diversification of device types, the diversity, real-time nature and multidimensional characteristics of the data produce complex data types, which pose great challenges to anomaly detection of terminal data behavior; since the devices are not always in an abnormal state, this further raises the performance requirements on the network architecture, demanding higher computing and storage capacity.
Therefore, it is necessary to develop a method and a system for detecting abnormal behaviors of the terminals of the electric power internet of things to improve the above problems.
Disclosure of Invention
Aiming at the low network-architecture performance of existing terminal behavior detection methods, the invention provides a method and a system for detecting abnormal terminal behavior in the electric power Internet of things. Data are collected on intelligent power sensing terminal devices close to the user side and the power production site; irrelevant features and noise are removed after preprocessing; positive and negative samples for judging whether behavior is normal are then constructed through a self-organizing map model and a contrast learning model; and the model detection task is migrated to terminal nodes at the network edge and processed on those edge nodes. This reduces the pressure on network bandwidth, speeds up the response of terminal anomaly detection, ensures stable operation of the terminal devices, and realizes real-time processing and security detection of the data of electric power Internet of things terminal devices. Meanwhile, combining the self-organizing map model with the contrast learning model makes the features distinguishing positive and negative samples more obvious, which aids sample classification, and the accuracy and efficiency of detection can be significantly improved through self-learning and automatic updating of a knowledge base.
In a first aspect, the invention provides a method for detecting abnormal behavior of a terminal of an electric power internet of things, which comprises the following steps:
preprocessing a data set;
constructing a self-organizing map model and a contrast learning model;
inputting the preprocessed data set into the self-organizing map model, and updating parameters of the self-organizing map model based on matching the output of the self-organizing map model against the contrast learning model, to obtain a trained self-organizing map model;
inputting the preprocessed data set into a trained self-organizing mapping model, outputting potential feature distribution, performing data processing to obtain reconstruction data, inputting a contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhancement reconstruction data to obtain a trained contrast learning model;
and issuing the trained self-organizing map model and the trained contrast learning model to an edge layer, collecting sample data to be detected, uploading the sample data to the edge layer and the cloud layer for real-time detection, and outputting an abnormal detection result.
The method for detecting abnormal behavior of electric power Internet of things terminals has the following beneficial effects: combining multiple models makes the features distinguishing positive and negative samples more obvious and aids sample classification; self-learning and automatic updating of the knowledge base significantly improve detection accuracy and efficiency; and introducing edge computing reduces the pressure on network bandwidth, speeds up the response of terminal anomaly detection, and realizes real-time processing and security detection of the data of terminal devices.
Preferably, when the real-time detection process is performed, the method includes: judging the matching degree of the edge layer and the sample data to be detected; when the edge layer is matched with the sample data to be detected, inputting the sample data to be detected into a trained contrast learning model, outputting an abnormal detection result, uploading the abnormal detection result to the cloud layer for storage and feedback to terminal equipment; otherwise, the cloud layer carries out anomaly detection on the sample data to be detected, outputs an anomaly detection result, stores the anomaly detection result and feeds the result back to the terminal equipment. Therefore, cloud computing resources can be effectively saved, the efficiency of terminal behavior exception handling is improved, and the network bandwidth pressure is reduced.
Preferably, the process of performing the judgment of the matching degree between the edge layer and the sample data to be detected includes:
and outputting a matching result when the calculation task of the sample data to be detected is smaller than or equal to the calculation capacity of the edge layer, and otherwise outputting a non-matching result.
Preferably, the process of performing the judgment of the matching degree between the edge layer and the sample data to be detected includes:
calculating the running time of processing the sample data to be detected based on the edge layer, and outputting a matching result when the running time is less than or equal to a threshold time, or else outputting a non-matching result. In this way, the judgment can be performed by any one of the capacity and time comparison, and the edge layer can be ensured to be smoothly performed in the process of performing the terminal behavior abnormality detection processing.
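The matching judgment above can be sketched in code. This is a hedged, minimal illustration: the patent allows either criterion on its own (compute demand versus edge capacity, or estimated runtime versus a time threshold), and all function names, parameters and numeric units here are illustrative assumptions rather than the patent's API.

```python
# Minimal sketch of the edge/cloud matching judgment (illustrative only).
def matches_edge(task_cost, edge_capacity=None, runtime_s=None, threshold_s=None):
    """Either criterion may be used alone, per the description above."""
    if edge_capacity is not None:          # capacity criterion
        return task_cost <= edge_capacity
    return runtime_s <= threshold_s        # time criterion

def dispatch(task_cost, edge_capacity):
    # Route to the edge layer on a match; otherwise fall back to the cloud layer.
    return "edge" if matches_edge(task_cost, edge_capacity) else "cloud"

r1 = dispatch(task_cost=2.0, edge_capacity=4.0)   # fits within edge capacity
r2 = dispatch(task_cost=8.0, edge_capacity=4.0)   # exceeds edge capacity
```

Either branch could equally be driven by the runtime criterion; the design point is only that the check is cheap enough to run before every detection task.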
Preferably, after the edge layer is matched with the sample data to be detected, the sample data to be detected are matched against the sample data stored in the cloud layer; on a match, the anomaly detection result stored in the cloud layer is retrieved and fed back to the terminal device, and otherwise the sample data to be detected are input into the trained contrast learning model. This avoids repeated calculation and the resulting waste of resources, and effectively improves data processing efficiency.
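The stored-result lookup described above behaves like a result cache. A hedged sketch, with invented names (`sample_key`, `cloud_cache`) standing in for whatever identifier and storage the cloud layer actually uses:

```python
# Sketch of reusing cloud-stored detection results to avoid repeated calculation.
def detect_with_cache(sample_key, cloud_cache, run_model):
    if sample_key in cloud_cache:          # sample matches stored cloud data
        return cloud_cache[sample_key]     # reuse the stored detection result
    result = run_model(sample_key)         # otherwise run the contrast model
    cloud_cache[sample_key] = result       # store for future reuse
    return result

cache = {"flow-001": "normal"}             # previously stored verdict
hit = detect_with_cache("flow-001", cache, lambda k: "abnormal")
miss = detect_with_cache("flow-002", cache, lambda k: "abnormal")
```

On a hit the model is never invoked, which is exactly the resource saving the paragraph claims.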
Preferably, the abnormality detection result includes a positive sample result indicating normal behavior and a negative sample result indicating abnormal behavior.
Preferably, the parameter update of the self-organizing map model includes an update of normal distribution weight coefficient, mean and variance. Therefore, the self-adaptive adjustment network can be realized by adjusting parameters in the self-organizing map model, and the classification of positive and negative samples is completed.
Preferably, the parameter update of the contrast learning model comprises parameter update of a query sequence encoder and a key value encoder. Therefore, the aim of momentum update can be achieved by adjusting the parameters of the encoder in the contrast learning model.
Preferably, the preprocessing of the data set includes converting the data set into numerical data by one-hot encoding and normalizing it. This facilitates model convergence during training, and normalizing the sample data reduces the influence of high-magnitude sample data.
In a second aspect, the invention provides a system for detecting abnormal behavior of a terminal of an electric power internet of things, which adopts the following technical scheme:
the data preprocessing layer is used for preprocessing the data set;
the building model layer is used for building a self-organizing map model and a contrast learning model;
the self-organizing map training layer is used for inputting the preprocessed data set into the self-organizing map model, and updating the parameters of the self-organizing map model based on matching the output of the self-organizing map model against the contrast learning model, to obtain a trained self-organizing map model;
the contrast learning training layer is used for inputting the preprocessed data set into a trained self-organizing mapping model, outputting potential feature distribution, performing data processing to obtain reconstruction data, inputting the contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhancement reconstruction data to obtain a trained contrast learning model;
and the anomaly detection layer is used for transmitting the trained self-organizing map model and the trained contrast learning model to an edge layer, collecting sample data to be detected, uploading the sample data to the edge layer and the cloud layer for real-time detection, and outputting an anomaly detection result.
The system for detecting abnormal behavior of electric power Internet of things terminals has the following beneficial effects: combining multiple models makes the features distinguishing positive and negative samples more obvious and aids sample classification; self-learning and automatic updating of the knowledge base significantly improve detection accuracy and efficiency; and introducing edge computing reduces the pressure on network bandwidth, speeds up the response of terminal anomaly detection, and realizes real-time processing and security detection of the data of terminal devices.
Drawings
Fig. 1 is a flowchart of a method for detecting an abnormality of a terminal of an electric power internet of things provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a Self-organizing map (SOM) model in the present embodiment;
fig. 3 is a general flowchart of a method for detecting an abnormality of an electric power internet of things terminal provided in the present embodiment;
fig. 4 is a block flow chart of a system for detecting abnormality of a terminal of an electric power internet of things provided in this embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention. Unless otherwise defined, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and the like means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof without precluding other elements or items.
The invention is described in detail below with reference to the drawings and the specific embodiments.
As shown in fig. 1, the embodiment of the invention provides a method for detecting abnormal behavior of a terminal of an electric power internet of things, which comprises the following steps:
S1, data preprocessing: preprocessing a data set;
S2, constructing models: constructing a self-organizing map model and a contrast learning model;
S3, training the self-organizing map model: inputting the preprocessed data set into the self-organizing map model, and updating parameters of the self-organizing map model based on matching its output against the contrast learning model, to obtain a trained self-organizing map model;
S4, training the contrast learning model: inputting the preprocessed data set into the trained self-organizing map model, outputting the potential feature distribution, performing data processing to obtain reconstructed data, inputting these into the contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhanced reconstructed data, to obtain a trained contrast learning model;
S5, outputting the anomaly detection result: issuing the trained self-organizing map model and the trained contrast learning model to the edge layer, collecting sample data to be detected, uploading them to the edge layer and the cloud layer for real-time detection, and outputting the anomaly detection result.
In practice, the data set in step S1 is collected from intelligent power sensing terminal devices close to the user side and the power production site.
In some embodiments, in executing step S1, data preprocessing techniques conventional in the art, such as data digitization and data normalization, may be applied to the data set. Preprocessing the data makes the models trained in steps S3 and S4 more robust and improves their generalization capability. Specifically, the data set may be converted into numerical data by one-hot encoding and then normalized.
In practice, in order to facilitate model convergence in step S3 and step S4 and to improve the quality of the sample data within the data set, enhancement processing may be performed on the sample data within the data set in advance.
Specifically, the sample data in the data set contain non-numerical fields such as label codes and IP addresses, and this non-numerical feature information can be converted into numerical feature information through one-hot encoding. For example, a sample in the data set is the five-tuple $(p_s, p_d, ip_s, ip_d, proto)$, where $p_s$ is the source port of the sample, $p_d$ the destination port, $ip_s$ the source IP address, $ip_d$ the destination IP address, and $proto$ the transport-layer protocol; after one-hot encoding, the sample is converted into a numerical vector, and the data set $D$ consists of $m$ data units.
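One-hot encoding of a categorical field such as the transport-layer protocol can be sketched as follows; the protocol vocabulary here is an invented example, not taken from the patent.

```python
import numpy as np

# One-hot encode a categorical value against a fixed vocabulary
# (illustrative sketch; the vocabulary below is an assumption).
def one_hot(value, vocabulary):
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0   # set the slot for this category
    return vec

protocols = ["TCP", "UDP", "ICMP"]       # assumed protocol vocabulary
tcp_vec = one_hot("TCP", protocols)
udp_vec = one_hot("UDP", protocols)
```

The same transformation applies to ports and IP addresses once each field's value range is enumerated.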
In fact, in the process of executing step S1, the average value and variance of the sample data in the data set may also be calculated, so as to normalize the data in the data set, thereby reducing the influence of the high-magnitude sample data on model training, and the data set may be divided into a training set and a test set, where the training set may be used to train the model in step S3 and step S4.
Specifically, when the average value and variance of the sample data are calculated in step S1, the calculation formula is as follows:
$$\mu_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij},$$

$$\sigma_j^2 = \frac{1}{n}\sum_{i=1}^{n} \left(x_{ij} - \mu_j\right)^2,$$

where $x_{ij}$ denotes the $j$-th feature value of the $i$-th sample, $\mu_j$ is the mean of the $j$-th sample feature, $n$ is the number of samples, and $\sigma_j^2$ is the variance of the $j$-th sample feature.
Specifically, after calculating the average value and variance of the sample data, the sample data is normalized, and the normalized calculation formula is as follows:
$$x'_{ij} = \frac{x_{ij} - \mu_j}{\sigma_j},$$

where $x'_{ij}$ denotes the normalized $j$-th feature value of the $i$-th sample.
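The per-feature mean/variance computation and normalization described above amount to a standard z-score. A minimal sketch (the small epsilon guarding against zero variance is an assumption, not part of the patent):

```python
import numpy as np

# Per-feature z-score normalization: subtract the column mean,
# divide by the column standard deviation.
def standardize(X: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    mu = X.mean(axis=0)            # mean of each feature column
    sigma = X.std(axis=0)          # std of each feature column (sqrt of variance)
    return (X - mu) / (sigma + eps)

X = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])
Z = standardize(X)                 # each column now has mean ~0, std ~1
```

After this step, high-magnitude features such as raw byte counts no longer dominate the distance computations inside the SOM model.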
In fact, referring to fig. 2, the Self-Organizing Map (SOM) model constructed in step S2 is a neural network model based on competitive learning that generates a discrete, low-dimensional mapping and optimizes its own network structure by learning the input data. Unlike conventional neural networks, the SOM model is trained with a competitive learning strategy rather than a back-propagation algorithm. The two-dimensional map of the SOM model maintains the relative distances between sample data points and thus the topology of the input space, i.e., adjacent samples are mapped to adjacent output cells.
The SOM model consists of an input layer and a competitive layer; the number of input-layer neurons depends on the dimension of the input, with one neuron typically representing one feature. The number of neurons in the competitive layer affects the granularity and scale of the model as a whole and has a great influence on the accuracy and generalization capability of the final model. Since the SOM model has strong generalization capability, it can recognize new, previously unseen input samples.
In practice, the SOM model can map, encode and enhance input data set data, perform dimension reduction processing on the data set data through encoding and decoding processes, extract potential characteristic values of the data set data, and output reconstructed data.
Specifically, the SOM model encodes the data of the input data set. A specific meaning and value range can be defined for each data field and associated with a data encoding mode, which ensures consistency of the data during transmission and decoding. The original data in the data set are converted according to the selected encoding format; depending on the data type and format, the data may be converted to binary, hexadecimal or another encoding. To guarantee data integrity and accuracy, check information such as a CRC check code or hash value can be added. The encoded data are packed into packets or frames, and after the necessary identifiers and header information are added, the receiver can accurately parse and process the data.
Specifically, when the SOM model is used for data enhancement, data from different terminals and sensors can be fused and integrated to form more comprehensive data; potential feature value information is calculated and analyzed to complete the reconstruction and enhancement of the sample data, and the mean and variance of the sample data are calculated to obtain the corresponding feature values, completing the data enhancement.
In some embodiments, the preprocessing data set is input into the SOM model in step S3, and after the input layer of the SOM model receives the sample data vector input by the data set, the input layer of the SOM model performs similarity measurement comparison with all the neuron weights in the competition layer to find out the most similar neuron as the winning neuron. Specifically, when similarity measurement comparison is performed, the similarity can be compared by comparing and calculating the distance between two different vectors.
In some embodiments, after the winning neuron is found, competitive learning marks the output of the winning neuron as 1 and the outputs of the remaining neurons as 0, and the weight of the winning neuron is adjusted as shown in the following formula:

$$w(t+1) = w(t) + \eta(t)\,\bigl(x - w(t)\bigr),$$

where $x$ is the normalized feature vector received by the input layer of the SOM model, $w(t)$ is the output weight of the neuron, the learning rate $\eta(t)$ takes values in $(0, 1]$ and decreases as the training time $t$ increases, and $w(t+1)$ is the adjusted output weight of the neuron.
Specifically, during data enhancement of the preprocessed data set input to the SOM model, the normalized feature vector in the input layer consists of $n$ data units and can be expressed as $X = \{x_1, x_2, \ldots, x_n\}$. After one round of encoding and decoding in the SOM model, the mean $\mu$ and variance $\sigma^2$ of the normalized feature vector are calculated to form a normal distribution, and the weight parameters are updated based on this distribution using the common $3\sigma$ criterion: an update threshold is determined such that when the data set data fall within the interval $[\mu - 3\sigma, \mu + 3\sigma]$ of the normal distribution, no update is performed; otherwise the weight coefficient, mean and variance are updated.
In some embodiments, the parameter updates of the self-organizing map model include updates of normal distribution weight coefficients, means, and variances.
In practice, the parameter updating process of the self-organizing map model is the process of enhancing data in the SOM model.
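One competitive-learning step of the SOM training described above (find the nearest neuron, pull its weight toward the input with a decaying learning rate) can be sketched as follows; the exponential decay schedule and its constants are assumptions for illustration.

```python
import numpy as np

# One SOM competitive-learning step: the neuron whose weight vector is
# closest to the input wins, and only its weight is adjusted.
def som_step(weights: np.ndarray, x: np.ndarray, t: int,
             eta0: float = 0.5, tau: float = 10.0) -> int:
    dists = np.linalg.norm(weights - x, axis=1)      # similarity measurement
    winner = int(np.argmin(dists))                   # most similar neuron wins
    eta = eta0 * np.exp(-t / tau)                    # eta(t) decreases with t
    weights[winner] += eta * (x - weights[winner])   # w(t+1) = w(t) + eta*(x - w(t))
    return winner

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                          # 4 competing neurons, 3-dim input
x = np.array([1.0, 0.0, 0.0])
before = np.linalg.norm(W - x, axis=1).min()
win = som_step(W, x, t=0)
after = np.linalg.norm(W[win] - x)                   # winner moved toward x
```

A full SOM would also update a neighborhood around the winner; this sketch keeps only the winner update the text describes.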
In some embodiments, in the process of executing step S4, the data set is input into the trained self-organizing map model, the potential feature distribution is output, and the reconstructed data are obtained after data processing; the dimension reduction of the data is then performed when the reconstructed data are input into the contrast learning model.
Specifically, the potential eigenvalue information after one encoding and decoding pass is analyzed; the potential eigenvalue is given by the following formula:

z = μ + σ · ε,

wherein ε is a random number, μ and σ are the mean and standard deviation (the square root of the variance) of the output data after the first encoding and decoding, and z is the potential eigenvalue.
In the process of performing depth encoding and depth decoding on the potential eigenvalues, the dimension reduction of the data is completed. Depth encoding by the encoder and depth decoding by the decoder can be expressed by the following formulas:

h = f(W_e · z + b_e),
x̂ = f(W_d · h + b_d),

wherein h represents the potential feature distribution obtained by depth encoding the potential eigenvalues with the encoder, x̂ represents the reconstructed data output after depth decoding of the encoded potential feature distribution by the decoder, f is the activation function, W_e is the weight matrix of the encoder, W_d the weight matrix of the decoder, b_e the bias vector of the encoder, and b_d the bias vector of the decoder. Specifically, the activation function f may be a Sigmoid function.
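The encoder/decoder pair above can be sketched in pure Python (hypothetical names; a sigmoid activation and dense affine layers, as the text describes):

```python
import math

def sigmoid(v):
    """Element-wise Sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def affine(W, x, b):
    """W is a list of rows; returns W @ x + b."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def encode(z, We, be):
    """Depth encoding: h = sigmoid(We @ z + be)."""
    return sigmoid(affine(We, z, be))

def decode(h, Wd, bd):
    """Depth decoding: x_hat = sigmoid(Wd @ h + bd)."""
    return sigmoid(affine(Wd, h, bd))
```

Dimension reduction happens when the encoder's weight matrix has fewer rows than the input has components.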
In some embodiments, a reconstructed data set is constructed when the reconstructed data are input into the contrast learning model. Training of the contrast learning model updates the model parameters with mini-batch stochastic gradient descent (SGD) or one of its variants. A batch of reconstructed data sample pairs is randomly sampled from the reconstructed data set to be trained; for the reconstructed data in each sample pair, a corresponding reconstructed positive sample and reconstructed negative samples are selected from the trained reconstructed data set, and the gap value between the reconstructed data in each pair and its corresponding reconstructed positive and negative samples is calculated through the InfoNCE loss function. Based on the parameter update, the next batch of reconstructed data sample pairs is randomly sampled and trained iteratively until all the reconstructed data to be trained have been trained. After model training is completed, the representation features of the reconstructed data set are extracted from the trained contrast learning model to optimize the model.
In practice, the InfoNCE loss function is defined as follows: for the reconstructed data in each pair of reconstructed data samples, one reconstructed positive sample and several reconstructed negative samples are randomly selected from the trained reconstructed data set; similarity scores between the reconstructed data and the reconstructed positive sample and all reconstructed negative samples are calculated using inner products or cosine similarity; the similarity score with the positive sample is divided by the sum of the similarity scores with all samples of the trained reconstructed data set to obtain a normalized similarity score; the normalized similarity scores are converted into a probability distribution by the softmax function; the uncertainty of the reconstructed positive sample under this probability distribution is measured with mutual information; and the mutual information between the reconstructed data and the reconstructed positive sample and the mutual information with all reconstructed negative samples are averaged or summed.
In fact, a reconstructed positive sample is a sample that belongs to the same class or has similar characteristics as the reconstructed data in the reconstructed data samples, while a reconstructed negative sample is a sample that belongs to a different class or has different characteristics than the reconstructed data.
Specifically, the InfoNCE loss function makes representations between similar reconstructed data closer by maximizing a lower bound of mutual information between vectors generated by different similar reconstructed data; the process of calculating the gap value by the InfoNCE loss function can be described as follows:
assume that there are reconstructed data x to be trained in a pair of reconstructed data samples, and a trained reconstructed data set X = {x_1, x_2, ..., x_n} consisting of n data units and containing 1 reconstructed positive sample and n − 1 reconstructed negative samples, with the selected reconstructed positive sample denoted x⁺; the mutual information measuring the uncertainty of the reconstructed positive sample under the probability distribution can be expressed by the following formula:

I(x; x⁺) = Σ p(x, x⁺) · log[ p(x, x⁺) / (p(x) · p(x⁺)) ],

wherein I(x; x⁺) represents the mutual information between the reconstructed data x and the reconstructed positive sample x⁺, p(x, x⁺) represents the joint distribution between x and x⁺, and p(x) and p(x⁺) represent the marginal distributions of x and the positive sample x⁺;
based on the above, I(x; x⁺) is maximized, quantified with cosine similarity, and represented by a density ratio; the density ratio of the mutual information between the reconstructed data x and the reconstructed positive sample x⁺ is expressed by the following formula:

f(x, x⁺) ∝ p(x⁺ | x) / p(x⁺),

wherein ∝ denotes a proportional relationship: the density ratio and the mutual information between x and x⁺ are not directly equivalent but are positively correlated in meaning. To ensure the normalization property of f, f(x, x⁺) can be defined by the log-bilinear model calculation formula as follows:

f(x, x⁺) = exp(xᵀ · W · z⁺),

wherein z⁺ is the representation of the reconstructed positive sample x⁺ in the vector space after feature extraction, and W is a linear transformation matrix that converts x into a representation in that vector space;
based on the above, the InfoNCE loss function between the reconstructed data and its corresponding reconstructed positive and negative samples in each pair of reconstructed data samples can be expressed as follows:

L = −E[ log( f(x, x⁺) / Σ_{x_j ∈ X} f(x, x_j) ) ].
in practice, based on density ratioReconstructing positive samples->Vector after feature extraction ∈ ->The self-supervision model can be modeled from a high latitude space; distribution->And->Can not directly obtain (I)>The comparison between the target sample and the random negative sample can still be used for calculation through noise comparison estimation.
In some further embodiments, in the process of training the model, the model parameters are updated and the reconstructed data set is learned by training the encoder. In this learning process, when the gap value is calculated by the InfoNCE loss function during model training, iteration proceeds by updating the parameters so that only reconstructed data of the same category that actually match become more similar. The parameter updates of the contrast learning model include parameter updates of a query sequence encoder and a key value encoder.
Specifically, the process of calculating the gap value and updating the parameters can be described as follows:
assume that there are reconstructed data q to be trained and a trained reconstructed data set K = {k_0, k_1, ..., k_m}, consisting of m + 1 data units, each of which can serve as a key value. The query obtained by the encoder can be expressed as q = f_q(x_q), where f_q denotes the query sequence encoder; a key value can be expressed as k = f_k(x_k), where f_k denotes the key value encoder and x_k represents a reconstructed data sample in the trained reconstructed data set used to generate the key value;
if the trained reconstructed data set K contains a specific key k₊ corresponding to q, then k₊ is the reconstructed positive sample selected for the reconstructed data q to be encoded, and the InfoNCE loss can be calculated from the similarity between q and the specific key k₊. A high similarity between q and k₊ indicates a high similarity between the reconstructed data to be encoded and its reconstructed positive sample, yielding a smaller loss function value. The specific calculation is:

L_q = −log[ exp(q · k₊ / τ) / Σ_{i=0}^{K} exp(q · k_i / τ) ],

wherein τ is a hyperparameter. The sum above contains 1 specific key k₊ (the reconstructed positive sample) and K reconstructed negative samples; q · k₊ represents the similarity between the reconstructed data q to be encoded and the specific key k₊, i.e. the similarity between q and its corresponding reconstructed positive sample, while the denominator represents the similarity between q and the entire trained reconstructed data set K.
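A minimal pure-Python sketch of this InfoNCE calculation (hypothetical names; the inner product q · k is used as the similarity score and τ as the temperature hyperparameter):

```python
import math

def info_nce_loss(q, keys, pos_idx, tau=0.07):
    """InfoNCE loss for one query q against a key set containing one
    positive key (at pos_idx) and the rest as negatives; similarity
    is the inner product q.k scaled by the temperature tau."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(q, k) / tau for k in keys]
    log_denom = math.log(sum(math.exp(l) for l in logits))
    # cross-entropy of the softmax over keys, evaluated at the positive
    return -(logits[pos_idx] - log_denom)
```

As expected, the loss is smaller when the query is more similar to its positive key than to the negatives.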
Therefore, based on the calculated gap value, the information of similar reconstructed data in a specific vector space can be preserved, and the representation features shared by similar reconstructed data can be extracted;
while parameter updating is based on the query sequence encoder f_q; the key value encoder f_k is updated by momentum so that key values generated by the same sample remain consistent. The momentum update of the model is realized by the following formula:

θ_k ← m · θ_k + (1 − m) · θ_q,

wherein θ_k are the parameters of the key value encoder, θ_q are the parameters of the query sequence encoder, and m is a hyperparameter.
In fact, since only θ_q participates in the back-propagation calculation, the update amplitude of the key value encoder is relatively small.
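The momentum update described above can be sketched in one line per parameter (hypothetical function name; parameters are represented as flat lists):

```python
def momentum_update(theta_k, theta_q, m=0.999):
    """Momentum update of the key encoder parameters:
    theta_k <- m * theta_k + (1 - m) * theta_q.
    Only theta_q receives gradients from back-propagation, so the key
    encoder drifts slowly toward the query encoder."""
    return [m * tk + (1.0 - m) * tq for tk, tq in zip(theta_k, theta_q)]
```

With m close to 1 each update moves θ_k only slightly, which is why key values generated for the same sample stay nearly consistent across iterations.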
In fact, in extracting the representation features of the reconstructed data, the intermediate layer output of the model may be used as the feature representation, or the features may be processed by further dimension reduction or clustering methods.
In some embodiments, in performing step S5, the edge layer includes edge devices, edge nodes, and edge servers; considerations include the network topology, communication protocols, and security factors.
In fact, the cloud layer simplifies the periodic updating and maintenance of the edge devices, edge nodes, and the trained self-organizing map model and contrast learning model through remote management and automation tools.
Further, in the detection processing of the edge layer, the detection application is divided into N tasks (denoted as N = {1, 2, 3, ..., N}) that must be executed sequentially (i.e., a subsequent task requires the execution result of the preceding task). The mobile user can select the execution mode of each task, i.e., execute it locally or offload it to an edge computing server (such as a neighboring device or a cloud server) for processing. Let M = {0, 1, ..., M} denote the set of task execution modes selectable by the user (the mobile device can choose to offload tasks to M servers), and let u_i indicate the execution mode of the ith task: if u_i = 0, the ith task is executed locally on the device; otherwise, the task is offloaded to the u_i-th edge computing server for execution. Considering delay-sensitive applications, i.e., applications with a completion-time requirement for each task, denoted T = {t_1, t_2, ..., t_N}, wherein t_i is the completion-time requirement of the ith task, the task migration decision u must satisfy the following conditions:

u_i ∈ M, for i = 1, ..., N,
u_N = 0,

wherein u_N = 0 restricts the last (Nth) task (such as result visualization) to be executed locally on the device.
In mobile edge computing, task execution and offloading bring corresponding overhead and time delay. Let C_i(u_{i−1}, u_i) denote the overhead of the ith task executed on server u_i when the (i − 1)th task is executed on server u_{i−1}, which can be expressed as:

C_i(u_{i−1}, u_i) = C_i^trans(u_{i−1}, u_i) + C_i^exec(u_i),

wherein C_i^trans(u_{i−1}, u_i) represents the transmission loss of moving task i from server u_{i−1} to server u_i, and C_i^exec(u_i) represents the loss of executing task i on device/server u_i;
order theIndicate->The individual tasks are->Server execution, and->The individual tasks are->The latency of the server execution can be expressed as:
,
wherein,representing tasks->From->Delay of server transmission to server, but +.>Representing task->In the apparatus->Time delay of server execution.
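Summing the per-task transmission and execution terms described above gives the total cost of a migration decision; a sketch under hypothetical names (server 0 stands for local execution on the device, and the application is assumed to start there):

```python
def total_cost(u, trans_cost, exec_cost):
    """Total overhead of a task-migration decision u = [u_1, ..., u_N]:
    for each task i, add the transmission loss of moving it from the
    server that ran task i-1 to server u_i, plus the execution loss of
    running task i on server u_i.
    trans_cost[a][b]: loss of moving data from server a to server b.
    exec_cost[i][s]:  loss of running task i on server s."""
    cost, prev = 0.0, 0  # the application starts on the device (server 0)
    for i, s in enumerate(u):
        cost += trans_cost[prev][s] + exec_cost[i][s]
        prev = s
    return cost
```

The same structure applies to the delay D_i by swapping in transmission and execution delays.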
In some embodiments, the real-time detection process, when performed, includes:
judging the matching degree of the edge layer and the sample data to be detected;
when the edge layer is matched with the sample data to be detected, inputting the sample data to be detected into a trained contrast learning model, outputting an abnormal detection result, uploading the abnormal detection result to the cloud layer for storage and feedback to terminal equipment;
otherwise, the cloud layer carries out anomaly detection on the sample data to be detected, outputs an anomaly detection result, stores the anomaly detection result and feeds the result back to the terminal equipment.
In some further embodiments, the process of determining the matching degree between the edge layer and the sample data to be detected includes:
and outputting a matching result when the calculation task of the sample data to be detected is smaller than or equal to the calculation capacity of the edge layer, and otherwise outputting a non-matching result.
In some further embodiments, the process of determining the matching degree between the edge layer and the sample data to be detected includes:
calculating the running time of processing the sample data to be detected based on the edge layer, and outputting a matching result when the running time is less than or equal to a threshold time, or else outputting a non-matching result.
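Combining the two alternative matching criteria above (computation capacity and runtime threshold) into one check, a sketch with hypothetical names could look like:

```python
def edge_matches(task_load, edge_capacity, est_runtime, time_threshold):
    """Matching-degree check between the edge layer and the sample to
    be detected: the edge layer matches when the computation task does
    not exceed the edge layer's capacity and the estimated running
    time does not exceed the threshold time; otherwise the sample is
    handled by the cloud layer."""
    return task_load <= edge_capacity and est_runtime <= time_threshold
```

A non-matching result on either criterion routes the sample to the cloud layer for anomaly detection.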
Further, after the edge layer is matched with the sample data to be detected, the sample data to be detected are matched against the sample data stored in the cloud layer; when they match, the anomaly detection result stored in the cloud layer is retrieved and fed back to the terminal device; otherwise, the sample data to be detected are input into the trained contrast learning model.
In some embodiments, the anomaly detection results include a positive sample result indicating normal behavior and a negative sample result indicating abnormal behavior.
Referring to fig. 3, the invention provides a method for detecting abnormal behavior of an electric power internet of things terminal, comprising the following steps. Data of the data set are preprocessed and input into the self-organizing map model, and the mean and variance after one encoding and decoding pass are calculated. On the one hand, the potential eigenvalues are encoded to output the potential feature distribution; on the other hand, the mean and variance form a normal distribution, authentication is performed by a discriminator, the weight coefficients are updated and reduced, and the mean and variance are updated after weight adjustment. The output potential feature distribution is decoded by the decoder and the reconstructed data are output; the reconstructed data are trained and optimized in the contrast learning model; the trained self-organizing map model and contrast learning model are issued to the edge layer; and the sample data to be detected are uploaded to the edge layer and the cloud layer, real-time anomaly detection is carried out, and the anomaly detection result is output.
As shown in fig. 4, the present invention provides a system for detecting abnormal behavior of a terminal of an electric power internet of things, including:
the data preprocessing layer is used for preprocessing the data set;
the building model layer is used for building a self-organizing map model and a contrast learning model;
the self-organizing map training layer is used for inputting the preprocessed data set data into the self-organizing map model, and updating the parameters of the self-organizing map model based on matching and comparing the output of the self-organizing map model with the learning model to obtain a trained self-organizing map model;
the contrast learning training layer is used for inputting the preprocessed data set into a trained self-organizing mapping model, outputting potential feature distribution, performing data processing to obtain reconstruction data, inputting the contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhancement reconstruction data to obtain a trained contrast learning model;
and the anomaly detection layer is used for transmitting the trained self-organizing map model and the trained contrast learning model to an edge layer, collecting sample data to be detected, uploading the sample data to the edge layer and the cloud layer for real-time detection, and outputting an anomaly detection result.
While embodiments of the present invention have been described in detail hereinabove, it will be apparent to those skilled in the art that various modifications and variations can be made to these embodiments. It is to be understood that such modifications and variations are within the scope and spirit of the present invention as set forth in the following claims. Moreover, the invention described herein is capable of other embodiments and of being practiced or of being carried out in various ways.
Claims (8)
1. The method for detecting the abnormal behavior of the terminal of the electric power Internet of things is characterized by comprising the following steps of:
preprocessing a data set;
constructing a self-organizing map model and a contrast learning model;
inputting the preprocessed data set data into the self-organizing map model, and updating parameters of the self-organizing map model based on matching the output of the self-organizing map model with a comparison learning model to obtain a trained self-organizing map model;
inputting the preprocessed data set into a trained self-organizing mapping model, outputting potential feature distribution, performing data processing to obtain reconstruction data, inputting a contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhancement reconstruction data to obtain a trained contrast learning model;
issuing the trained self-organizing map model and the trained contrast learning model to an edge layer, collecting sample data to be detected, uploading the sample data to the edge layer and a cloud layer for real-time detection, and outputting an abnormal detection result;
after the preprocessed data set data are input into the self-organizing map model, the self-organizing map model receives sample data vectors, performs similarity measurement comparison with all neuron weights in a competition layer by calculating distances between different vectors, and finds the most similar neuron as the winning neuron; the output weight of the winning neuron is marked as 1 by competitive learning and the output weights of the remaining neurons are marked as 0; the output weight is adjusted based on the winning neuron, and the adjusted neuron weight is given by:

w(t + 1) = w(t) + η(t) · (x − w(t)),

wherein x denotes the normalized feature vector value input to the receiving layer of the self-organizing map model, w(t) is the output weight of the neuron at training time t, the learning rate η(t) takes values in (0, 1] and decreases as training progresses, and w(t + 1) is the adjusted output weight of the neuron;
updating the parameters of the self-organizing map model based on matching the output of the self-organizing map model with the contrast learning model comprises calculating the mean μ and variance σ² of the output data after the normalized feature vector values are encoded and decoded for the first time; when the data set data fall within the interval (μ − 3σ, μ + 3σ) of the normal distribution, no update is performed; otherwise, the normal distribution weight coefficients, mean, and variance are updated;
and when the step of outputting the potential feature distribution and performing data processing to obtain the reconstructed data is executed, the potential eigenvalues after one encoding and decoding pass are analyzed based on the following formula:

z = μ + σ · ε,

wherein ε is a random number, μ and σ are the mean and standard deviation (square root of the variance) of the output data after the normalized feature vector values are encoded and decoded once, and z is the potential eigenvalue;
the potential eigenvalues are depth encoded and depth decoded based on the following formulas:

h = f(W_e · z + b_e),
x̂ = f(W_d · h + b_d),

wherein h represents the potential feature distribution obtained by depth encoding the potential eigenvalues with the encoder, x̂ represents the reconstructed data output after depth decoding of the encoded potential feature distribution by the decoder, f is the Sigmoid activation function, W_e is the weight matrix of the encoder, W_d the weight matrix of the decoder, b_e the bias vector of the encoder, and b_d the bias vector of the decoder;
judging the matching degree of the edge layer and the sample data to be detected when the real-time detection processing is executed; when the edge layer is matched with the sample data to be detected, inputting the sample data to be detected into a trained contrast learning model, outputting an abnormal detection result, uploading the abnormal detection result to the cloud layer for storage and feedback to terminal equipment; otherwise, the cloud layer carries out anomaly detection on the sample data to be detected, outputs an anomaly detection result, stores the anomaly detection result and feeds the result back to the terminal equipment.
2. The method for detecting abnormal behavior of a terminal of the electric power internet of things according to claim 1, wherein the process of judging the matching degree of the edge layer and the sample data to be detected is performed, comprises:
and outputting a matching result when the calculation task of the sample data to be detected is smaller than or equal to the calculation capacity of the edge layer, and otherwise outputting a non-matching result.
3. The method for detecting abnormal behavior of a terminal of the electric power internet of things according to claim 1, wherein the process of judging the matching degree of the edge layer and the sample data to be detected is performed, comprises:
calculating the running time of processing the sample data to be detected based on the edge layer, and outputting a matching result when the running time is less than or equal to a threshold time, or else outputting a non-matching result.
4. The method for detecting the abnormal behavior of the terminal of the electric power internet of things according to claim 1, wherein after the edge layer is matched with the sample data to be detected, the sample data to be detected are matched against the sample data stored in the cloud layer; when they match, the anomaly detection result stored in the cloud layer is retrieved and fed back to the terminal device; otherwise, the sample data to be detected are input into the trained contrast learning model.
5. The method for detecting the abnormal behavior of the terminal of the electric power internet of things according to any one of claims 1 to 4, wherein the abnormal detection result comprises a positive sample result indicating normal behavior and a negative sample result indicating abnormal behavior.
6. The method for detecting abnormal behavior of a terminal of the electric power internet of things according to claim 1, wherein the parameter update of the comparison learning model comprises parameter update of a query sequence encoder and a key value encoder.
7. The method for detecting the abnormal behavior of the terminal of the electric power internet of things according to claim 1, wherein the preprocessing of the data set comprises: converting the data set into numerical data by one-hot encoding and performing normalization processing.
8. A system for implementing the method for detecting abnormal behavior of the terminal of the electric power internet of things according to any one of claims 1 to 7, comprising:
the data preprocessing layer is used for preprocessing the data set;
the building model layer is used for building a self-organizing map model and a contrast learning model;
the self-organizing map training layer is used for inputting the preprocessed data set data into the self-organizing map model, and updating the parameters of the self-organizing map model based on matching and comparing the output of the self-organizing map model with the learning model to obtain a trained self-organizing map model;
the contrast learning training layer is used for inputting the preprocessed data set into a trained self-organizing mapping model, outputting potential feature distribution, performing data processing to obtain reconstruction data, inputting the contrast learning model, and updating parameters of the contrast learning model based on positive and negative sample equalization of the enhancement reconstruction data to obtain a trained contrast learning model;
and the anomaly detection layer is used for transmitting the trained self-organizing map model and the trained contrast learning model to an edge layer, collecting sample data to be detected, uploading the sample data to the edge layer and the cloud layer for real-time detection, and outputting an anomaly detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311022009.8A CN116738354B (en) | 2023-08-15 | 2023-08-15 | Method and system for detecting abnormal behavior of electric power Internet of things terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116738354A CN116738354A (en) | 2023-09-12 |
CN116738354B true CN116738354B (en) | 2023-12-08 |
Family
ID=87904773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311022009.8A Active CN116738354B (en) | 2023-08-15 | 2023-08-15 | Method and system for detecting abnormal behavior of electric power Internet of things terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116738354B (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789593A (en) * | 2012-06-18 | 2012-11-21 | 北京大学 | Intrusion detection method based on incremental GHSOM (Growing Hierarchical Self-organizing Maps) neural network |
CN103488662A (en) * | 2013-04-01 | 2014-01-01 | 哈尔滨工业大学深圳研究生院 | Clustering method and system of parallelized self-organizing mapping neural network based on graphic processing unit |
CN109977227A (en) * | 2019-03-19 | 2019-07-05 | 中国科学院自动化研究所 | Text feature, system, device based on feature coding |
CN110245781A (en) * | 2019-05-14 | 2019-09-17 | 贵州科学院 | The modelling application predicted based on the extreme learning machine of self-encoding encoder in industrial production |
CN110376522A (en) * | 2019-09-03 | 2019-10-25 | 宁夏西北骏马电机制造股份有限公司 | A kind of Method of Motor Fault Diagnosis of the deep learning network of data fusion |
CN110765458A (en) * | 2019-09-19 | 2020-02-07 | 浙江工业大学 | Malicious software detection method and device based on deep learning |
CN111241289A (en) * | 2020-01-17 | 2020-06-05 | 北京工业大学 | SOM algorithm based on graph theory |
EP3667570A1 (en) * | 2018-12-12 | 2020-06-17 | Centre National De La Recherche Scientifique - Cnrs | Distributed cellular computing system and method for neural-based self-organized maps |
WO2020258010A1 (en) * | 2019-06-25 | 2020-12-30 | Oppo广东移动通信有限公司 | Image encoding method, image decoding method, encoder, decoder and storage medium |
CN112214788A (en) * | 2020-08-28 | 2021-01-12 | 国网江西省电力有限公司信息通信分公司 | Ubiquitous power Internet of things dynamic data publishing method based on differential privacy |
CN112345252A (en) * | 2020-11-22 | 2021-02-09 | 国家电网有限公司 | Rolling bearing fault diagnosis method based on EEMD and improved GSA-SOM neural network |
WO2021088377A1 (en) * | 2019-11-06 | 2021-05-14 | 北京工业大学 | Convolutional auto-encoding fault monitoring method based on batch imaging |
CN112862064A (en) * | 2021-01-06 | 2021-05-28 | 西北工业大学 | Graph embedding method based on adaptive graph learning |
CN114462554A (en) * | 2022-04-13 | 2022-05-10 | 华南理工大学 | Latent depression evaluation system based on multi-mode width learning |
CN114724043A (en) * | 2022-06-08 | 2022-07-08 | 南京理工大学 | Self-encoder anomaly detection method based on contrast learning |
EP4050518A1 (en) * | 2021-02-25 | 2022-08-31 | Siemens Aktiengesellschaft | Generation of realistic data for training of artificial neural networks |
CN115473671A (en) * | 2022-08-01 | 2022-12-13 | 博瑞得科技有限公司 | Power terminal anomaly detection method and system based on flow baseline |
CN116113967A (en) * | 2020-07-16 | 2023-05-12 | 强力交易投资组合2018有限公司 | System and method for controlling digital knowledge dependent rights |
CN116431966A (en) * | 2023-03-16 | 2023-07-14 | 浙江大学 | Reactor core temperature anomaly detection method of incremental characteristic decoupling self-encoder |
CN116436551A (en) * | 2021-12-31 | 2023-07-14 | 华为技术有限公司 | Channel information transmission method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10869610B2 (en) * | 2018-12-05 | 2020-12-22 | General Electric Company | System and method for identifying cardiac arrhythmias with deep neural networks |
TWI687063B (en) * | 2019-01-04 | 2020-03-01 | 財團法人工業技術研究院 | A communication system and codec method based on deep learning and channel state information |
CN113052203B (en) * | 2021-02-09 | 2022-01-18 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Anomaly detection method and device for multiple types of data |
US20230139718A1 (en) * | 2021-10-28 | 2023-05-04 | Oracle International Corporation | Automated dataset drift detection |
Non-Patent Citations (1)
Title |
---|
Aspect information extraction method based on a sequence labeling feedback model; Fan Shouxiang; Yao Junping; Li Xiaojun; Ma Kexin; Computer Engineering and Design (Issue 09); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN116738354A (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734355B (en) | Short-term power load parallel prediction method and system applied to power quality comprehensive management scene | |
CN111967343B (en) | Detection method based on fusion of simple neural network and extreme gradient lifting model | |
CN106600140B (en) | Gas pipeline fault prediction early warning system and method based on improved support vector machine | |
CN112966714B (en) | Edge time sequence data anomaly detection and network programmable control method | |
CN112308288A (en) | Particle swarm optimization LSSVM-based default user probability prediction method | |
CN110929843A (en) | Abnormal electricity consumption behavior identification method based on improved deep self-coding network | |
CN110751318A (en) | IPSO-LSTM-based ultra-short-term power load prediction method | |
Alrawashdeh et al. | Fast activation function approach for deep learning based online anomaly intrusion detection | |
CN113364751B (en) | Network attack prediction method, computer readable storage medium and electronic device | |
CN112949821B (en) | Network security situation awareness method based on dual-attention mechanism | |
CN115037805B (en) | Unknown network protocol identification method, system and device based on deep clustering and storage medium | |
CN111242351A (en) | Tropical cyclone track prediction method based on self-encoder and GRU neural network | |
CN111447217A (en) | Method and system for detecting flow data abnormity based on HTM under sparse coding | |
CN112364889A (en) | Manufacturing resource intelligent matching system based on cloud platform | |
CN113328755A (en) | Compressed data transmission method facing edge calculation | |
CN115051929A (en) | Network fault prediction method and device based on self-supervision target perception neural network | |
CN116738354B (en) | Method and system for detecting abnormal behavior of electric power Internet of things terminal | |
CN115713044B (en) | Method and device for analyzing residual life of electromechanical equipment under multi-condition switching | |
CN116862024A (en) | Credible personalized federal learning method and device based on clustering and knowledge distillation | |
CN115983497A (en) | Time sequence data prediction method and device, computer equipment and storage medium | |
Kang et al. | Classification method for network security data based on multi-featured extraction | |
CN106816871B (en) | State similarity analysis method for power system | |
CN113762591B (en) | Short-term electric quantity prediction method and system based on GRU and multi-core SVM countermeasure learning | |
CN114124437B (en) | Encrypted flow identification method based on prototype convolutional network | |
CN112766537B (en) | Short-term electric load prediction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||