CN111563684A - Load identification method and device and terminal - Google Patents

Load identification method and device and terminal Download PDF

Info

Publication number
CN111563684A
Authority
CN
China
Prior art keywords
solution
neural network
initial
network model
load
Prior art date
Legal status
Pending
Application number
CN202010385580.6A
Other languages
Chinese (zh)
Inventor
孙立明
余涛
Current Assignee
Guangzhou Shuimu Qinghua Technology Co ltd
Original Assignee
Guangzhou Shuimu Qinghua Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shuimu Qinghua Technology Co ltd
Priority to CN202010385580.6A
Publication of CN111563684A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Neurology (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Game Theory and Decision Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a load identification method, a load identification device and a terminal. The load identification method comprises the steps of establishing an initial neural network model, and obtaining parameters to be optimized in the initial neural network model; according to a particle swarm optimization algorithm, obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution; carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution; if the fitness value of the iterative solution is smaller than that of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability; performing the next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameters to be optimized as the initialization parameters of the initial neural network model; adjusting the initialization parameters to obtain a current neural network model; and identifying the input load by adopting a current neural network model to obtain the category of each input load.

Description

Load identification method and device and terminal
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a load identification method, apparatus, and terminal.
Background
In recent years, with the continued deepening of research on artificial intelligence, the power grid has gradually become more intelligent, and non-invasive load identification has become a research hotspot. Non-invasive load identification installs non-invasive terminal equipment at a user's power inlet so that the user's overall electricity consumption can be decomposed into individual loads. The significance of this technology is that, for the electric power company, it helps obtain fine-grained information on users' electricity usage, improves the scientific basis of grid planning schemes, and helps ensure the real-time safe and economic operation of the grid; for users, it allows them to adjust their electricity usage habits according to this information and thereby save electricity. The technology therefore has broad development prospects and research value.
In the implementation process, the inventor finds that at least the following problems exist in the conventional technology: the traditional load identification method has the problems of low efficiency and long time consumption.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus and a terminal for efficiently identifying a load.
In order to achieve the above object, in one aspect, an embodiment of the present invention provides a load identification method, including:
establishing an initial neural network model, and acquiring parameters to be optimized in the initial neural network model;
according to a particle swarm optimization algorithm, obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution;
carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution;
if the fitness value of the iterative solution is smaller than that of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability;
performing the next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameters to be optimized as the initialization parameters of the initial neural network model;
adjusting the initialization parameters to obtain a current neural network model;
and identifying the input load by adopting a current neural network model to obtain the category of each input load.
In one embodiment, the step of obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm includes:
initializing the position and the speed of each particle; taking the position of each particle as an initial solution of a parameter to be optimized;
obtaining a fitness function according to a forward propagation model in the neural network model;
and processing the initial solution by adopting a fitness function to obtain a fitness value of the initial solution.
In one embodiment, the iterative update process is performed on the initial solution, and the step of obtaining the iterative solution includes:
obtaining an individual extreme value and a group extreme value of the particle according to the fitness value of the initial solution;
acquiring the number of neurons of an input layer, the number of neurons of a hidden layer and the number of neurons of an output layer of an initial neural network model, and acquiring the dimensionality of particles according to the number of neurons of the input layer, the number of neurons of the hidden layer and the number of neurons of the output layer;
and carrying out iterative update processing on the initial solution according to the dimension, the individual extreme value and the group extreme value to obtain an iterative solution.
In one embodiment, in the step of obtaining an iterative solution by performing iterative update processing on the initial solution according to the dimension, the individual extremum and the group extremum, the iterative solution is obtained based on the following formula:
V(j,:)' = V(j,:) + c_1·rand·(gbest(j,:) − P(j,:)) + c_2·rand·(zbest − P(j,:))
V(j, find(V(j,:) > V_max)) = V_max
V(j, find(V(j,:) < V_min)) = V_min
wherein V(j,:) is the velocity of each dimension of the jth particle and V(j,:)' is the updated (iteration) velocity of each dimension of the jth particle; rand is a random number in (0,1); c_1 and c_2 are acceleration factors; V_max and V_min are respectively the maximum and minimum particle velocities; gbest is the individual extremum of the particle and zbest is the extremum of the particle swarm;
P(j,:)' = P(j,:) + c·V(j,:)
P(j, find(P(j,:) < P_min)) = P_min
P(j, find(P(j,:) > P_max)) = P_max
wherein P(j,:)' is the iteration position of each dimension of the jth particle, i.e. the iterative solution; P(j,:) is the position of each dimension of the jth particle; c is a learning factor; P_max and P_min are respectively the maximum and minimum particle positions.
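As an illustration, the update above can be sketched in Python/NumPy; drawing rand independently for each of the two velocity terms is an assumption, since the formula does not pin this down:

```python
import numpy as np

def pso_update(P, V, gbest, zbest, c1, c2, c, Vmin, Vmax, Pmin, Pmax):
    """One velocity/position update per particle, following the formulas above.
    P, V, gbest are (N, dim) arrays; zbest is the (dim,) swarm-best position."""
    for j in range(P.shape[0]):
        V[j] = (V[j]
                + c1 * np.random.rand() * (gbest[j] - P[j])
                + c2 * np.random.rand() * (zbest - P[j]))
        V[j] = np.clip(V[j], Vmin, Vmax)             # V(j, V > Vmax) = Vmax, etc.
        P[j] = np.clip(P[j] + c * V[j], Pmin, Pmax)  # P(j, P < Pmin) = Pmin, etc.
    return P, V
```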
In one embodiment, in the step of obtaining the dimension of the particle according to the number of neurons of the input layer, the number of neurons of the hidden layer, and the number of neurons of the output layer, the dimension is obtained based on the following formula:
dim=inputs*hidden+hidden+hidden*outputs+outputs
wherein dim is the dimension; inputs is the number of neurons of the input layer, hidden is the number of neurons of the hidden layer, and outputs is the number of neurons of the output layer.
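For example, this formula can be checked directly; the two-hidden-layer variant used in the embodiment described later simply repeats the hidden-layer terms:

```python
def particle_dim(inputs, hidden, outputs):
    # input->hidden weights + hidden biases + hidden->output weights + output biases
    return inputs * hidden + hidden + hidden * outputs + outputs

print(particle_dim(6, 20, 10))  # 350 for a single hidden layer of 20 neurons
# With two hidden layers of 20 neurons (as in the embodiment below):
# 6*20+20 + 20*20+20 + 20*10+10 = 770
```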
In one embodiment, the acceptance probability is based on the Metropolis criterion.
In one embodiment, the step of adjusting the initialization parameter to obtain the current neural network model includes:
and adjusting the initialization parameters by adopting a gradient descent method to obtain the current neural network model.
In one embodiment, the method further comprises the following steps:
detecting the generation of a load switching event, and acquiring a load to be switched;
carrying out normalization processing on the data of the input load;
the method for identifying the input loads by adopting the current neural network model to obtain the categories of the input loads comprises the following steps:
and identifying the load data after the normalization processing by adopting the current neural network model, and determining a classification result corresponding to the maximum value in the output values of the current neural network model as the category of each load.
In one aspect, an embodiment of the present invention further provides a load identification apparatus, including:
the model establishing module is used for establishing an initial neural network model and acquiring parameters to be optimized in the initial neural network model;
the particle swarm optimization module is used for acquiring an initial solution of the parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm;
the iterative update module is used for carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution;
the replacing module is used for replacing the initial solution with the iterative solution if the fitness value of the iterative solution is smaller than the fitness value of the initial solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability;
the initialization parameter acquisition module is used for carrying out the next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameter to be optimized as the initialization parameter of the initial neural network model;
the training module is used for adjusting the initialization parameters to obtain a current neural network model;
and the identification module is used for identifying the input loads by adopting the current neural network model to obtain the categories of the input loads.
On the other hand, the embodiment of the present invention further provides a load identification terminal, which includes a memory and a processor, where the memory stores a computer program, and is characterized in that, when the processor executes the computer program, the steps of any one of the above methods are implemented;
the system also comprises a voltage transformer and a current transformer which are connected with the processor.
One of the above technical solutions has the following advantages and beneficial effects:
according to the load identification method, a cooling mechanism is added in the particle swarm optimization algorithm, namely, the updating of the position of the particle swarm each time only depends on the size of the fitness value. If the new fitness value is greater than the original fitness value, it is not discarded at , but rather a probability is selected for acceptance, which decreases as the temperature continues to decrease. The method ensures that the particles can search more possible values in the initial searching stage, the searching range of the particles is expanded and cannot fall into local optimum, the particles can be quickly converged in the later stage, the oscillation phenomenon cannot occur, and the searched solution is closest to the global optimum solution. With the diversification and complication of the current data, the model of the artificial neural network tends to be complex. With the increase of the number of layers of the neural network, the problem of gradient disappearance or gradient explosion inevitably occurs, and the problem can cause the training of the neural network to be difficult and even cause the situation of no convergence. At the moment, if the initial value of the neural network model parameter is set to be close to the optimal solution, the training effect can be achieved only by a few training times. The neural network model parameters are optimized, namely pre-trained, and the obtained optimization result is used as an initial value of the neural network parameter model, so that a satisfactory neural network model can be obtained by using few computing resources and computing time in the training process, and the practicability of the neural network model is greatly improved. Therefore, the load can be identified with higher efficiency by adopting the neural network model.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular description of preferred embodiments of the application, as illustrated in the accompanying drawings. Like reference numerals refer to like parts throughout the drawings, and the drawings are not intended to be drawn to scale in actual dimensions, emphasis instead being placed upon illustrating the subject matter of the present application.
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a first schematic flow chart diagram of a method for load identification in one embodiment;
FIG. 2 is a flowchart illustrating the steps of obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution in one embodiment;
FIG. 3 is a flowchart illustrating the steps of iteratively updating an initial solution to obtain an iterative solution according to one embodiment;
FIG. 4 is a second schematic flow chart diagram illustrating a method for load identification in one embodiment;
FIG. 5 is a block diagram showing the structure of a load recognition apparatus according to an embodiment;
fig. 6 is a block diagram showing the structure of a load recognition terminal in one embodiment;
FIG. 7 is a diagram of a neural network model in one embodiment;
FIG. 8 is a graph comparing training results in one embodiment;
fig. 9 is a schematic diagram illustrating the effect of detecting a load-switching event by wavelet transform in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a load identification method, including the steps of:
s110, establishing an initial neural network model, and acquiring parameters to be optimized in the initial neural network model;
Specifically, to establish the neural network model, first determine the number of neurons in the input layer, the number of hidden layers, the number of neurons in each hidden layer, and the number of neurons in the output layer; then determine the input mode of the neurons and the activation function of the neurons; and finally determine the loss function and the back-propagation algorithm of the neural network.
The neural network input mode is to normalize the input data, and the formula is as follows:
x* = (x − x_min) / (x_max − x_min)
wherein x represents the original value, x* represents the normalized standard value, and x_min and x_max are the minimum and maximum values of the input data.
The functional formula of the neuron activation function ReLU is as follows:
f(x)=max(0,x)
the neural network forward propagation is as follows:
a_j^l = σ( Σ_k w_jk^l · a_k^(l−1) + b_j^l )
wherein w_jk^l is the weight from the kth neuron of layer l−1 to the jth neuron of layer l, b_j^l is the bias of the jth neuron in layer l, σ is the activation function, and a_j^l is the activation value of the jth neuron of layer l.
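A minimal NumPy sketch of this forward pass, applying the same activation at every layer per the formula above (in practice the output layer may be left linear before the cross-entropy loss; the weight shapes are illustrative):

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), applied element-wise
    return np.maximum(0.0, x)

def forward(a, weights, biases):
    """Computes a^l = sigma(W^l a^(l-1) + b^l) layer by layer.
    weights[l] has shape (n_l, n_(l-1)); biases[l] has shape (n_l,)."""
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a
```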
The loss function is a cross entropy loss function suitable for multi-classification problems. The cross entropy can measure the difference degree of two different probability distributions in the same random variable, and is expressed as the difference between the real probability distribution and the prediction probability distribution in machine learning, and the smaller the value of the cross entropy is, the better the model prediction effect is. The cross entropy calculation formula is as follows:
loss(x, class) = −x[class] + log( Σ_j exp(x[j]) )
wherein x is the vector of input values for which the cross entropy is to be calculated (the raw output scores of the network) and class is the classification label corresponding to the input value.
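Read this way, with x holding the network's raw output scores and class an integer label, the loss can be sketched as:

```python
import numpy as np

def cross_entropy(x, cls):
    """loss(x, class) = -x[class] + log(sum_j exp(x[j]))."""
    x = x - x.max()  # shift for numerical stability; the loss value is unchanged
    return -x[cls] + np.log(np.exp(x).sum())
```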
In one particular example, the parameters to be optimized include weights.
S120, acquiring an initial solution of a parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm;
Specifically, each particle in the particle swarm optimization algorithm represents a potential solution of the problem; the position of each particle corresponds to a fitness value determined by a fitness function, and the position of each particle is the required initial solution. The velocity of a particle determines the direction and distance it moves, and the velocity is dynamically adjusted according to the particle's own experience and that of the other particles, so as to obtain an initial solution of the problem. In the present application, the fitness value of the initial solution may be obtained by any suitable means.
S130, carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution;
s140, if the fitness value of the iterative solution is smaller than that of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability;
Specifically, a cooling mechanism is added to the particle swarm optimization algorithm, i.e., each update of the particle swarm's positions depends only on the size of the fitness value. If the new fitness value is greater than the original fitness value, it is not discarded outright; instead, a certain probability is used to decide whether to accept it. It should be noted that this probability decreases as the temperature decreases.
S150, carrying out next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameters to be optimized as the initialization parameters of the initial neural network model;
specifically, when the iteration times reach a preset value, the current global optimal solution obtained by the particle swarm algorithm is confirmed as an initialization parameter of the initial neural network model.
S160, adjusting the initialization parameters to obtain a current neural network model;
any training algorithm in the field can be adopted to adjust the initialization parameters to obtain the current neural network model. It should be noted that the current neural network model is a trained neural network model.
And S170, identifying the input loads by adopting the current neural network model to obtain the types of the input loads.
And identifying the input load by the trained neural network model to obtain an identification result. The identification result is a category of each input load.
According to the above load identification method, a cooling mechanism is added to the particle swarm optimization algorithm, i.e., each update of the particle swarm's positions depends only on the size of the fitness value. If the new fitness value is greater than the original fitness value, it is not discarded outright; instead it is accepted with a certain probability, and this probability decreases as the temperature continues to fall. This ensures that the particles can search more possible values in the initial search stage, so that the search range of the particles is expanded and does not fall into a local optimum, while in the later stage the particles converge quickly without oscillation, and the solution found is closest to the global optimal solution. As current data become more diverse and complex, artificial neural network models tend to become more complex. As the number of layers of a neural network increases, the problem of vanishing or exploding gradients inevitably arises, which makes training difficult and may even prevent convergence. If, however, the initial values of the neural network model parameters are set close to the optimal solution, a good training effect can be achieved with only a few training iterations. Optimizing the neural network model parameters, that is, pre-training them, and using the optimization result as the initial values of the neural network parameters means that a satisfactory neural network model can be obtained with very little computing resource and computing time during training, greatly improving the practicability of the neural network model. Therefore, the load can be identified with higher efficiency by adopting this neural network model.
In one embodiment, as shown in fig. 2, the step of obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm includes:
s210, initializing the position and the speed of each particle; taking the position of each particle as an initial solution of a parameter to be optimized;
Further, the initialized parameters may also include the maximum number of iterations and the population size. Note that the initialized position and velocity of each particle are random.
S220, obtaining a fitness function according to a forward propagation model in the neural network model;
and S230, processing the initial solution by adopting a fitness function to obtain a fitness value of the initial solution.
It should be noted that the initial solution is the positions of the particles, and each position of the particles corresponds to a fitness value. In this embodiment, the initial solution is processed by using a fitness function to obtain a fitness value.
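As an illustrative sketch of such a fitness function (not the patent's verbatim code), a particle's position vector can be unpacked into the network's weights and biases and the forward-propagation loss evaluated on the training data; the single hidden layer and layer sizes here are assumptions for brevity:

```python
import numpy as np

def fitness(position, X, y, inputs=6, hidden=20, outputs=10):
    """Fitness of one particle: its position supplies the network parameters,
    and the average cross-entropy over the training set is returned."""
    i = 0
    W1 = position[i:i + inputs * hidden].reshape(hidden, inputs); i += inputs * hidden
    b1 = position[i:i + hidden]; i += hidden
    W2 = position[i:i + hidden * outputs].reshape(outputs, hidden); i += hidden * outputs
    b2 = position[i:i + outputs]
    total = 0.0
    for x, cls in zip(X, y):
        h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
        out = W2 @ h + b2
        out = out - out.max()              # stable log-sum-exp
        total += -out[cls] + np.log(np.exp(out).sum())
    return total / len(X)
```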
In one embodiment, as shown in fig. 3, the step of performing iterative update processing on the initial solution to obtain an iterative solution includes:
s310, obtaining an individual extreme value and a group extreme value of the particle according to the fitness value of the initial solution;
specifically, the individual extremum and the group extremum can be obtained by comparing the fitness values of the initial solutions.
S320, acquiring the number of neurons of an input layer, the number of neurons of a hidden layer and the number of neurons of an output layer of the initial neural network model, and acquiring the dimensionality of the particles according to the number of the neurons of the input layer, the number of the neurons of the hidden layer and the number of the neurons of the output layer;
in one embodiment, in the step of obtaining the dimension of the particle according to the number of neurons of the input layer, the number of neurons of the hidden layer, and the number of neurons of the output layer, the dimension is obtained based on the following formula:
dim=inputs*hidden+hidden+hidden*outputs+outputs
wherein dim is the dimension; inputs is the number of neurons of the input layer, hidden is the number of neurons of the hidden layer, and outputs is the number of neurons of the output layer.
And S330, performing iterative update processing on the initial solution according to the dimension, the individual extreme value and the group extreme value to obtain an iterative solution.
In one embodiment, in the step of obtaining an iterative solution by performing iterative update processing on the initial solution according to the dimension, the individual extremum and the group extremum, the iterative solution is obtained based on the following formula:
V(j,:)' = V(j,:) + c_1·rand·(gbest(j,:) − P(j,:)) + c_2·rand·(zbest − P(j,:))
V(j, find(V(j,:) > V_max)) = V_max
V(j, find(V(j,:) < V_min)) = V_min
wherein V(j,:) is the velocity of each dimension of the jth particle and V(j,:)' is the updated (iteration) velocity of each dimension of the jth particle; rand is a random number in (0,1); c_1 and c_2 are acceleration factors; V_max and V_min are respectively the maximum and minimum particle velocities; gbest is the individual extremum of the particle and zbest is the extremum of the particle swarm;
P(j,:)' = P(j,:) + c·V(j,:)
P(j, find(P(j,:) < P_min)) = P_min
P(j, find(P(j,:) > P_max)) = P_max
wherein P(j,:)' is the iteration position of each dimension of the jth particle, i.e. the iterative solution; P(j,:) is the position of each dimension of the jth particle; c is a learning factor; P_max and P_min are respectively the maximum and minimum particle positions.
In one embodiment, the acceptance probability is based on the Metropolis criterion.
The Metropolis criterion is a rule for accepting a new state with a certain probability.
Specifically, obtain the cooling coefficient, the initial temperature T_0 and the termination temperature T_end. Calculate the fitness value d(S_1) of the iterative solution and the fitness value d(S_0) of the initial solution, and the fitness increment df = d(S_1) − d(S_0).
If df < 0, accept S_1 as the new current solution; otherwise calculate the acceptance probability of S_1 according to the Metropolis criterion.
The specific process is as follows: generate a random number p uniformly distributed on the interval (0,1); if
p < exp(−df / T_0)
then accept S_1 as the new current solution; otherwise retain the current solution S_0.
Further, cooling is performed after each iteration, i.e. T_0 = T_0·a, wherein a is the cooling coefficient. In one specific example, when the temperature T_0 < T_end, the algorithm ends, and the global optimal solution obtained is output.
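A small sketch of this acceptance-and-cooling step (the variable names are illustrative):

```python
import numpy as np

def metropolis_accept(df, T):
    """Accept an improvement (df < 0) unconditionally; otherwise accept with
    probability exp(-df / T) by comparing against a uniform p in (0,1)."""
    return df < 0 or np.random.rand() < np.exp(-df / T)

T0, Tend, a = 100.0, 10.0, 0.8   # values used in the embodiment described later
while T0 >= Tend:
    # ... produce a new solution, compute df, call metropolis_accept(df, T0) ...
    T0 *= a                      # cooling after each iteration: T0 = T0 * a
```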
In one embodiment, the step of adjusting the initialization parameter to obtain the current neural network model includes:
and adjusting the initialization parameters by adopting a gradient descent method to obtain the current neural network model.
The back propagation algorithm adopts a gradient descent method: each weight is finely adjusted by calculating the derivative of the loss function with respect to that weight in the network model, and after a series of iterative calculations the minimum of the loss function can be approached, thereby obtaining the final model parameters. The parameter update formulas are as follows:
ω^l = ω^l − a·∂L/∂ω^l
b^l = b^l − a·∂L/∂b^l
wherein ω^l is the weight from layer l−1 neurons to layer l neurons, b^l is the bias of layer l neurons, L is the loss function, and a is the learning rate.
In one embodiment, as shown in fig. 4, the method further includes the steps of:
s410, detecting the generation of a load switching event, and acquiring a load to be switched;
Specifically, load switch-on and switch-off events (i.e., switching events) may be detected through the discrete wavelet transform. When a load switching event occurs, the electrical characteristics collected by the terminal equipment show a sudden change in current, reflected in the current amplitude, current waveform and so on. The discrete wavelet transform decomposes the signal at different scales, and a sudden change in the signal appears as a maximum value or a zero crossing at each scale after the wavelet transform, so load switching events can be effectively detected by detecting the maxima at each scale after the wavelet transform. The specific principle of load switching event detection by discrete wavelet transform is as follows:
For the current sequence under consideration, P = {i(k)}, k = 1, 2, …, a load switching event detection sliding window containing N sampling points is defined. A 3-layer discrete wavelet transform is performed on the current signal within the window using the Mallat algorithm, giving the detail coefficients d1, d2 and d3 of layers 1 to 3 and the approximation coefficient a3 of layer 3. The discrete wavelet transform calculation formulas are as follows:
ψ_j,k(t) = 2^(−j/2) · ψ(2^(−j)·t − k)
d_j,k = ⟨f(t), ψ_j,k(t)⟩
a_j,k = ⟨f(t), φ_j,k(t)⟩
wherein f(t) is the signal to be decomposed, φ(t) is the approximation (scaling) function, ψ(t) is the wavelet function, and a_j,k and d_j,k are the approximation coefficient and detail coefficient, with j the decomposition level and k the time-shift coefficient.
After the coefficients of each layer are obtained from the wavelet transform, the mean μ and standard deviation σ of the first-layer detail coefficients d_1 are calculated. If measurement points outside μ ± 3σ are found in d_1, a sudden change in the signal has occurred at that moment, i.e., a load switching event has occurred.
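A sketch of this detection step using the PyWavelets library; the db5 wavelet follows the embodiment described later, and the exact window handling is an assumption:

```python
import numpy as np
import pywt  # PyWavelets

def detect_switching_event(window):
    """3-level discrete wavelet decomposition of one sliding window of current
    samples; flags any first-level detail coefficient outside mu +/- 3*sigma."""
    a3, d3, d2, d1 = pywt.wavedec(window, 'db5', level=3)
    mu, sigma = d1.mean(), d1.std()
    return bool(np.any(np.abs(d1 - mu) > 3 * sigma))
```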
S420, carrying out normalization processing on the data of the input load;
in one specific example, the normalization process may be based on the following formula:
x* = (x − x_min) / (x_max − x_min)
wherein x represents the original value, x* represents the normalized standard value, and x_min and x_max are the minimum and maximum values of the input data.
The method for identifying the input loads by adopting the current neural network model to obtain the categories of the input loads comprises the following steps:
and identifying the load data after the normalization processing by adopting the current neural network model, and determining a classification result corresponding to the maximum value in the output values of the current neural network model as the category of each load.
Specifically, the inputs to the neural network model are the electrical characteristics of the load to be identified, including the voltage effective value V_rms, the current effective value I_rms, the power factor PF, the active power P, the reactive power Q and the apparent power S; the number of output values of the neural network model equals the number of load classes, and the value corresponding to each class represents the probability that the input belongs to that class: the larger the value, the higher the probability. Therefore, the classification result with the highest probability among the output values corresponding to the input values is taken as the recognition result of the neural network.
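A sketch of this final classification step (model stands for any trained forward function returning one score per appliance class; the normalization bounds are assumed to come from the training data):

```python
import numpy as np

def identify_load(model, features, feat_min, feat_max):
    """Normalize the six electrical features, run the trained network, and take
    the class whose output value is largest as the identification result."""
    x = (features - feat_min) / (feat_max - feat_min)
    return int(np.argmax(model(x)))
```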
It should be understood that although the various steps in the flowcharts of figs. 1-4 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 1-4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a load identifying apparatus including:
the model establishing module is used for establishing an initial neural network model and acquiring parameters to be optimized in the initial neural network model;
the particle swarm optimization module is used for acquiring an initial solution of the parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm;
the iterative update module is used for carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution;
the replacing module is used for replacing the initial solution with the iterative solution if the fitness value of the iterative solution is smaller than the fitness value of the initial solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability;
the initialization parameter acquisition module is used for carrying out the next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameter to be optimized as the initialization parameter of the initial neural network model;
the training module is used for adjusting the initialization parameters to obtain a current neural network model;
and the identification module is used for identifying the input loads by adopting the current neural network model to obtain the categories of the input loads.
For specific limitations of the load identification device, reference may be made to the above limitations of the load identification method, which are not repeated here. The modules in the load identification device can be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or be stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, as shown in fig. 6, there is also provided a load identification terminal, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program;
the system also comprises a voltage transformer and a current transformer which are connected with the processor.
It will be understood by those skilled in the art that the configuration shown in fig. 6 is a block diagram of only a portion of the configuration relevant to the present application, and does not constitute a limitation on the load identification terminal to which the present application is applied, and a particular load identification terminal may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
In one embodiment, there is also provided a load identification terminal, including a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of any one of the above methods when executing the computer program;
the system also comprises a voltage transformer and a current transformer which are connected with the processor.
In order to further illustrate the load identification method of the present application, the following description is given with a specific example:
step 1: determining the number of parameters to be optimized in the network according to a neural network model established by non-invasive terminal equipment;
the neural network model firstly needs to determine the number of neurons of an input layer, the number of hidden layers, the number of neurons of the hidden layers and the number of neurons of an output layer, then determines an input mode of the neurons and an activation function of the neurons, and finally determines a loss function and a back propagation algorithm of the neural network. The neural network model is shown in fig. 7.
The neural network input mode is to normalize the input data, and the formula is as follows:
x* = (x − x_min) / (x_max − x_min)
wherein x represents the original value, x* represents the normalized standard value, and x_min and x_max are the minimum and maximum values of the input data.
The functional formula of the neuron activation function ReLU is as follows:
f(x)=max(0,x)
the neural network forward propagation is as follows:
a_j^l = σ( Σ_k w_jk^l · a_k^(l−1) + b_j^l )
wherein w_jk^l is the weight from the kth neuron of layer l−1 to the jth neuron of layer l, b_j^l is the bias of the jth neuron in layer l, σ is the activation function, and a_j^l is the activation value of the jth neuron of layer l.
The loss function is a cross entropy loss function suitable for multi-classification problems. The cross entropy can measure the difference degree of two different probability distributions in the same random variable, and is expressed as the difference between the real probability distribution and the prediction probability distribution in machine learning, and the smaller the value of the cross entropy is, the better the model prediction effect is. The cross entropy calculation formula is as follows:
loss(x, class) = −x[class] + log( Σ_j exp(x[j]) )
wherein x is the vector of input values for which the cross entropy is to be calculated (the raw output scores of the network) and class is the classification label corresponding to the input value.
The back propagation algorithm adopts a gradient descent method: each weight is finely adjusted by calculating the derivative of the loss function with respect to that weight in the network model, and after a series of iterative calculations the minimum of the loss function can be approached, thereby obtaining the final model parameters. The parameter update formulas are as follows:
ω^l = ω^l − a·∂L/∂ω^l
b^l = b^l − a·∂L/∂b^l
wherein ω^l is the weight from layer l−1 neurons to layer l neurons, b^l is the bias of layer l neurons, L is the loss function, and a is the learning rate.
The input of the neural network is the electrical characteristics of the load to be identified, including the voltage effective value V_rms, the current effective value I_rms, the power factor PF, the active power P, the reactive power Q and the apparent power S; the number of output values of the neural network equals the number of load classes, and the value corresponding to each class represents the probability that the input belongs to that class: the larger the value, the higher the probability. Therefore, the classification result with the highest probability among the output values corresponding to the input values is taken as the recognition result of the neural network.
In this embodiment, the neural network model is determined to have 2 hidden layers, with 6 neurons in the input layer, 20 neurons in hidden layer 1, 20 neurons in hidden layer 2, and 10 neurons in the output layer. The model is then trained and tested on a data set containing six types of data (the voltage effective value, current effective value, power factor, active power, reactive power and apparent power) with 500 records covering 10 types of electric appliances.
Step 2: establishing a particle swarm optimization algorithm model, and initializing algorithm parameters;
Each particle in the particle swarm optimization algorithm represents a potential solution of the problem, and the position of each particle corresponds to a fitness value determined by a fitness function. The velocity of a particle determines the direction and distance it moves, and the velocity is dynamically adjusted according to the particle's own experience and that of the other particles, so as to search the solution space. In the invention, the weights of the neural network model are the parameters to be optimized by the particle swarm, and the position of each particle represents a possible solution. The particle swarm algorithm comprises the following steps:
(1) Initialize the particle swarm, including the total number of particles N and the maximum iteration number D;
(2) Randomly initialize the positions P and velocities V of the particles, and calculate the fitness of each particle according to the fitness function. The fitness function of each particle is determined by the forward propagation process of the neural network model in the non-invasive load identification algorithm described above; the position P of each particle is a possible solution being sought; the dimension dim of the position P and velocity V of each particle equals the number of parameters to be optimized, and the calculation formula is as follows:
dim=inputs*hidden+hidden+hidden*outputs+outputs
wherein inputs is the number of neurons of the input layer in the neural network model, hidden is the number of neurons of the hidden layer, and outputs is the number of neurons of the output layer.
(3) Find and record the individual extrema and the group extremum according to the initial particle fitness values;
(4) Each particle performs iterative optimization guided by its own search experience and that of the other particles: in each iteration the position and velocity of the particle are updated, the individual extremum and group extremum are updated according to the new particle fitness values, and the iteration counter is incremented (l = l + 1). The position and velocity update formulas of the particles are as follows:
V(j,:) = V(j,:) + c_1·rand·(gbest(j,:) − P(j,:)) + c_2·rand·(zbest − P(j,:))
V(j, find(V(j,:) > V_max)) = V_max
V(j, find(V(j,:) < V_min)) = V_min
wherein V(j,:) is the velocity of each dimension of the jth particle; rand is a random number in (0,1); c_1 and c_2 are acceleration factors; V_max and V_min are the maximum and minimum particle velocities; gbest is the individual extremum of each particle and zbest is the extremum of the particle swarm.
P(j,:) = P(j,:) + c·V(j,:)
P(j, find(P(j,:) < P_min)) = P_min
P(j, find(P(j,:) > P_max)) = P_max
wherein P(j,:) is the position of each dimension of the jth particle, c is a learning factor, and P_max and P_min are the maximum and minimum particle positions.
(5) When the iteration count l exceeds the maximum iteration number, the algorithm ends, and the global optimal solution found is output.
In this embodiment, according to the neural network model determined in step 1, the number of parameters to be optimized by the particle swarm algorithm is 6×20+20 + 20×20+20 + 20×10+10 = 770; c_1 and c_2 take the value 1.5; V_max and V_min take the values 1 and −1; P_max and P_min take the values 1.5 and −1.5; and c takes the value 0.6.
And step 3: establishing a simulated annealing algorithm model, and initializing algorithm parameters;
The simulated annealing algorithm is an optimization algorithm that simulates the annealing process of solid matter. It can be used to solve various nonlinear problems, can optimize nonsmooth or discontinuous functions, and can find the global optimal solution with high probability. The flow of the simulated annealing algorithm is as follows:
(1) Initialize the initial temperature T_0 and the termination temperature T_end;
(2) Apply a random perturbation to the current solution S_0 of the optimization problem to produce a new solution S_1;
(3) Calculate the fitness values d(S_0) and d(S_1) of S_0 and S_1, and the fitness increment df = d(S_1) − d(S_0). If df < 0, accept S_1 as the new current solution; otherwise calculate the acceptance probability of S_1 according to the Metropolis criterion. The specific process is as follows: generate a random number p uniformly distributed on the interval (0,1); if
p < exp(−df / T_0)
then accept S_1 as the new current solution; otherwise retain the current solution S_0.
(4) Cool after each iteration, i.e. T_0 = T_0·a, wherein a is the cooling coefficient. When the temperature T_0 < T_end, the algorithm ends, and the global optimal solution obtained is output.
In the present embodiment, T_0 and T_end take the values 100 and 10, and a takes the value 0.8.
And 4, step 4: combining the two by using the algorithm models established in the steps 2 and 3 to realize the collaborative optimization;
the particle swarm optimization algorithm and the simulated annealing optimization algorithm inevitably fall into a local optimal solution when being used independently, so that a global optimal solution cannot be obtained. The specific implementation method comprises the following steps: a cooling mechanism is added in the particle swarm optimization algorithm, namely, the position of the particle swarm is updated each time only depending on the size of the fitness value. If the new fitness value is greater than the original fitness value, it is not discarded at , but rather a probability is selected for acceptance, which decreases as the temperature continues to decrease. The combination of the two algorithms ensures that the particles can search more possible values in the initial searching stage, the searching range of the particles is expanded and cannot fall into local optimum, the particles can be quickly converged in the later stage, the oscillation phenomenon cannot occur, and the searched solution is closest to the global optimum solution;
and 5: and (4) taking the optimizing result of the step (4) as an initialization parameter of neural network training, and then finely adjusting the parameter by using a conventional neural network training algorithm. In the embodiment, an AdaGrad algorithm is used to realize the adaptive adjustment of the learning rate in the neural network model training. In the present embodiment, the comparison between the neural network training effect optimized by using the particle swarm-simulated annealing-based hybrid algorithm and the non-optimized neural network training effect is shown in fig. 8.
Step 6: and taking the model parameters trained in the steps as final parameters of the model in the non-invasive terminal equipment. However, the terminal equipment needs to go through the stages of event detection, data acquisition and extraction to complete non-invasive load identification.
Event detection is realized by performing a discrete wavelet transform on the load current. The main idea is as follows: when a load switching event occurs, the electrical characteristics collected by the terminal equipment show a sudden change in current, reflected in the current amplitude, current waveform and so on. The discrete wavelet transform decomposes the signal at different scales, and a sudden change in the signal appears as a maximum value or a zero crossing at each scale after the wavelet transform, so load switching events can be effectively detected by detecting the maxima at each scale. The specific principle of load switching event detection by discrete wavelet transform is as follows:
For the current sequence under consideration, P = {i(k)}, k = 1, 2, …, a load switching event detection sliding window containing N sampling points is defined. A 3-layer discrete wavelet transform is performed on the current signal within the window using the Mallat algorithm, giving the detail coefficients d1, d2 and d3 of layers 1 to 3 and the approximation coefficient a3 of layer 3. The discrete wavelet transform calculation formulas are as follows:
ψ_j,k(t) = 2^(−j/2) · ψ(2^(−j)·t − k)
d_j,k = ⟨f(t), ψ_j,k(t)⟩
a_j,k = ⟨f(t), φ_j,k(t)⟩
wherein f(t) is the signal to be decomposed, φ(t) is the approximation (scaling) function, ψ(t) is the wavelet function, and a_j,k and d_j,k are the approximation coefficient and detail coefficient, with j the decomposition level and k the time-shift coefficient.
After the coefficients of each layer are obtained from the wavelet transform, the mean μ and standard deviation σ of the first-layer detail coefficients d_1 are calculated. If measurement points outside μ ± 3σ are found in d_1, a sudden change in the signal has occurred at that moment, i.e., a load switching event has occurred.
In the present embodiment, the db5 wavelet is used to decompose the current signal, and the detection effect is shown in fig. 9.
Step 6: the non-invasive terminal equipment has the functions of data acquisition and extraction, and completes non-invasive load identification on the basis of the load identification algorithm and load switching event detection.
The non-invasive load terminal equipment can monitor the voltage effective value, current effective value, current harmonics, active power, reactive power and power factor in real time; based on the measured data it can detect load switching events, carry out the pre-training optimization based on the particle swarm-simulated annealing hybrid algorithm, and use the neural network to perform deep-learning non-invasive load identification. The non-invasive terminal equipment comprises a power supply module, a voltage transformer, a current transformer, an ADC module and a central processing module.
The non-invasive terminal equipment obtains secondary-side signals of the electrical parameters through the voltage and current transformers, sends them to the ADC module to obtain the corresponding digital quantities, and then passes these to the central processing module for calculation and processing; finally, pre-training optimization is carried out by the particle swarm-simulated annealing hybrid algorithm running on the central processing module, and load identification is completed by the deep-learning non-invasive load identification algorithm using the neural network.
In this embodiment, the processing module is an ARM tablet running a Linux system; while providing sufficient numerical computing capability, the tablet supports the construction and training of a Python-based neural network model. A block diagram of the structure of the non-invasive terminal device is shown in fig. 6.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
establishing an initial neural network model, and acquiring parameters to be optimized in the initial neural network model;
according to a particle swarm optimization algorithm, obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution;
carrying out iterative update processing on the initial solution to obtain an iterative solution and obtain a fitness value of the iterative solution;
if the fitness value of the iterative solution is smaller than that of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to the acceptance probability;
performing the next iteration until the iteration times reach a preset value, and confirming the current global optimal solution of the parameters to be optimized as the initialization parameters of the initial neural network model;
adjusting the initialization parameters to obtain a current neural network model;
and identifying the input load by adopting a current neural network model to obtain the category of each input load.
In one embodiment, the processor further performs the following steps when executing the step of obtaining the initial solution of the parameter to be optimized and the fitness value of the initial solution according to the particle swarm optimization algorithm:
initializing the position and the speed of each particle; taking the position of each particle as an initial solution of a parameter to be optimized;
obtaining a fitness function according to a forward propagation model in the neural network model;
and processing the initial solution by adopting a fitness function to obtain a fitness value of the initial solution.
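As an illustrative sketch of this step, assuming a single-hidden-layer network: the fitness function below unpacks a particle position into the network's weights and biases and returns the forward-propagation error on the training set, so a smaller fitness value is better. The tanh activation and MSE loss are assumptions, not choices fixed by this embodiment.

import numpy as np

def make_fitness(X, y_onehot, inputs, hidden, outputs):
    def fitness(position):
        k = 0
        W1 = position[k:k + inputs * hidden].reshape(inputs, hidden); k += inputs * hidden
        b1 = position[k:k + hidden]; k += hidden
        W2 = position[k:k + hidden * outputs].reshape(hidden, outputs); k += hidden * outputs
        b2 = position[k:k + outputs]
        h = np.tanh(X @ W1 + b1)               # hidden layer
        out = h @ W2 + b2                      # output layer
        return np.mean((out - y_onehot) ** 2)  # forward-propagation error as fitness
    return fitness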
In one embodiment, when executing the step of iteratively updating the initial solution to obtain the iterative solution, the processor further implements the following steps:
obtaining an individual extreme value and a group extreme value of the particle according to the fitness value of the initial solution;
acquiring the number of neurons of an input layer, the number of neurons of a hidden layer and the number of neurons of an output layer of an initial neural network model, and acquiring the dimensionality of particles according to the number of neurons of the input layer, the number of neurons of the hidden layer and the number of neurons of the output layer;
and carrying out iterative update processing on the initial solution according to the dimension, the individual extreme value and the group extreme value to obtain an iterative solution.
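The particle dimension follows directly from the layer sizes (see also the dim formula in claim 5); a short Python illustration follows, with purely illustrative layer sizes.

def particle_dimension(inputs, hidden, outputs):
    # input-to-hidden weights, hidden biases, hidden-to-output weights, output biases
    return inputs * hidden + hidden + hidden * outputs + outputs

dim = particle_dimension(6, 40, 11)  # 240 + 40 + 440 + 11 = 731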
In one embodiment, when executing the step of adjusting the initialization parameters to obtain the current neural network model, the processor further implements the following step:
and adjusting the initialization parameters by adopting a gradient descent method to obtain the current neural network model.
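A minimal sketch of this fine-tuning step, continuing the single-hidden-layer assumption above: plain batch gradient descent on the MSE loss, starting from the PSO-SA initialization. The learning rate and epoch count are illustrative.

import numpy as np

def fine_tune(W1, b1, W2, b2, X, y_onehot, lr=0.01, epochs=200):
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        err = 2.0 * (out - y_onehot) / len(X)   # gradient of the mean squared error
        dW2, db2 = h.T @ err, err.sum(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)      # backpropagate through tanh
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1          # gradient descent update
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2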
In one embodiment, the processor, when executing the computer program, further performs the following steps:
detecting the occurrence of a load switching event, and acquiring the input load;
normalizing the data of the input load;
and identifying the normalized load data by using the current neural network model, and determining the classification result corresponding to the maximum value among the output values of the current neural network model as the category of each load.
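As a sketch of these identification steps, the fragment below applies min-max normalization with the training-set extrema and takes the class whose network output is largest; model, feature_min, feature_max and class_names are illustrative assumptions.

import numpy as np

def identify(features, model, feature_min, feature_max, class_names):
    x = (features - feature_min) / (feature_max - feature_min)  # min-max normalization
    scores = model(x)                            # current neural network outputs
    return class_names[int(np.argmax(scores))]   # category = largest output value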
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps:
establishing an initial neural network model, and acquiring parameters to be optimized in the initial neural network model;
according to a particle swarm optimization algorithm, obtaining an initial solution of a parameter to be optimized and a fitness value of the initial solution;
iteratively updating the initial solution to obtain an iterative solution, and obtaining a fitness value of the iterative solution;
if the fitness value of the iterative solution is smaller than that of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than that of the initial solution, replacing the initial solution with the iterative solution according to an acceptance probability;
proceeding to the next iteration until the number of iterations reaches a preset value, and confirming the current global optimal solution of the parameters to be optimized as the initialization parameters of the initial neural network model;
adjusting the initialization parameters to obtain a current neural network model;
and identifying the input load by adopting a current neural network model to obtain the category of each input load.
In one embodiment, the step of obtaining an initial solution of the parameter to be optimized and a fitness value of the initial solution according to the particle swarm optimization algorithm further implements the following steps when executed by the processor:
initializing the position and the speed of each particle; taking the position of each particle as an initial solution of a parameter to be optimized;
obtaining a fitness function according to a forward propagation model in the neural network model;
and processing the initial solution by adopting a fitness function to obtain a fitness value of the initial solution.
In one embodiment, the step of iteratively updating the initial solution to obtain the iterative solution, when executed by the processor, further implements the following steps:
obtaining an individual extreme value and a group extreme value of the particle according to the fitness value of the initial solution;
acquiring the number of neurons of an input layer, the number of neurons of a hidden layer and the number of neurons of an output layer of an initial neural network model, and acquiring the dimensionality of particles according to the number of neurons of the input layer, the number of neurons of the hidden layer and the number of neurons of the output layer;
and carrying out iterative update processing on the initial solution according to the dimension, the individual extreme value and the group extreme value to obtain an iterative solution.
In one embodiment, the step of adjusting the initialization parameters to obtain the current neural network model, when executed by the processor, further implements the following step:
and adjusting the initialization parameters by adopting a gradient descent method to obtain the current neural network model.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
detecting the occurrence of a load switching event, and acquiring the input load;
carrying out normalization processing on the data of the input load;
and identifying the load data after the normalization processing by adopting the current neural network model, and determining a classification result corresponding to the maximum value in the output values of the current neural network model as the category of each load.
it will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus DRAM (RDRAM), and interface DRAM (DRDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of the present specification.
The embodiments described above express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for load identification, comprising the steps of:
establishing an initial neural network model, and acquiring parameters to be optimized in the initial neural network model;
according to a particle swarm optimization algorithm, obtaining an initial solution of the parameter to be optimized and a fitness value of the initial solution;
iteratively updating the initial solution to obtain an iterative solution, and obtaining a fitness value of the iterative solution;
if the fitness value of the iterative solution is smaller than the fitness value of the initial solution, replacing the initial solution with the iterative solution; if the fitness value of the iterative solution is larger than the fitness value of the initial solution, replacing the initial solution with the iterative solution according to an acceptance probability;
proceeding to the next iteration until the number of iterations reaches a preset value, and confirming the current global optimal solution of the parameter to be optimized as the initialization parameters of the initial neural network model;
adjusting the initialization parameters to obtain a current neural network model;
and identifying input loads by adopting the current neural network model to obtain the category of each input load.
2. The load identification method according to claim 1, wherein the step of obtaining the initial solution of the parameter to be optimized and the fitness value of the initial solution according to a particle swarm optimization algorithm comprises:
initializing the position and the speed of each particle; taking the position of each particle as an initial solution of the parameter to be optimized;
obtaining a fitness function according to a forward propagation model in the neural network model;
and processing the initial solution by adopting the fitness function to obtain a fitness value of the initial solution.
3. The load identification method according to claim 2, wherein the step of iteratively updating the initial solution to obtain an iterative solution comprises:
obtaining an individual extreme value and a group extreme value of the particle according to the fitness value of the initial solution;
acquiring the number of neurons of an input layer, the number of neurons of a hidden layer and the number of neurons of an output layer of the initial neural network model, and acquiring the dimensionality of the particles according to the number of neurons of the input layer, the number of neurons of the hidden layer and the number of neurons of the output layer;
and according to the dimension, the individual extreme value and the group extreme value, carrying out iterative update processing on the initial solution to obtain an iterative solution.
4. The load identification method according to claim 3, wherein in the step of iteratively updating the initial solution according to the dimension, the individual extremum and the group extremum to obtain an iterative solution, the iterative solution is obtained based on the following formulas:

V(j,:)' = V(j,:) + c1*rand*(gbest(j,:) - P(j,:)) + c2*rand*(zbest - P(j,:));
V(j, find(V(j,:) > Vmax)) = Vmax;
V(j, find(V(j,:) < Vmin)) = Vmin;

wherein V(j,:) is the velocity of each dimension of the jth particle, and V(j,:)' is the iterated velocity of each dimension of the jth particle; rand is a random number in (0,1); c1 and c2 are acceleration factors; Vmax and Vmin are respectively the maximum and minimum of the particle velocity; gbest is the individual extreme value of the particle, and zbest is the group extreme value of the particle swarm;

P(j,:)' = P(j,:) + c*V(j,:);
P(j, find(P(j,:) < Pmin)) = Pmin;
P(j, find(P(j,:) > Pmax)) = Pmax;

wherein P(j,:)' is the iterated position of each dimension of the jth particle, namely the iterative solution; P(j,:) is the position of each dimension of the jth particle; c is a learning factor; Pmax and Pmin are respectively the maximum and minimum of the particle position.
5. The load identification method according to claim 3, wherein in the step of obtaining the dimension of the particles from the number of neurons of the input layer, the number of neurons of the hidden layer and the number of neurons of the output layer, the dimension is obtained based on the following formula:
dim=inputs*hidden+hidden+hidden*outputs+outputs
wherein dim is the dimension; inputs is the number of neurons of the input layer, hidden is the number of neurons of the hidden layer, and outputs is the number of neurons of the output layer.
6. The load identification method according to claim 1, wherein the acceptance probability is based on the Metropolis criterion.
7. The load identification method according to claim 1, wherein the step of adjusting the initialization parameters to obtain the current neural network model comprises:
and adjusting the initialization parameters by adopting a gradient descent method to obtain the current neural network model.
8. The load identification method according to claim 1, further comprising the steps of:
detecting the occurrence of a load switching event, and acquiring the input load;
normalizing the data of the input load;
the step of identifying the input loads by adopting the current neural network model to obtain the category of each input load comprises the following steps:
and identifying the load data after the normalization processing by adopting the current neural network model, and determining a classification result corresponding to the maximum value in the output values of the current neural network model as the category of each load.
9. A load identification device, comprising:
the model establishing module is used for establishing an initial neural network model and acquiring parameters to be optimized in the initial neural network model;
the particle swarm optimization module is used for acquiring an initial solution of the parameter to be optimized and a fitness value of the initial solution according to a particle swarm optimization algorithm;
the iterative update module is used for iteratively updating the initial solution to obtain an iterative solution, and acquiring a fitness value of the iterative solution;
the replacement module is used for replacing the initial solution with the iterative solution if the fitness value of the iterative solution is smaller than the fitness value of the initial solution, and for replacing the initial solution with the iterative solution according to an acceptance probability if the fitness value of the iterative solution is larger than the fitness value of the initial solution;
the initialization parameter acquisition module is used for proceeding to the next iteration until the number of iterations reaches a preset value, and confirming the current global optimal solution of the parameter to be optimized as the initialization parameters of the initial neural network model;
the training module is used for adjusting the initialization parameters to obtain a current neural network model;
and the identification module is used for identifying input loads by adopting the current neural network model to obtain the category of each input load.
10. A load identification terminal comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8;
the terminal further comprises a voltage transformer and a current transformer connected to the processor.
CN202010385580.6A 2020-05-09 2020-05-09 Load identification method and device and terminal Pending CN111563684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010385580.6A CN111563684A (en) 2020-05-09 2020-05-09 Load identification method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010385580.6A CN111563684A (en) 2020-05-09 2020-05-09 Load identification method and device and terminal

Publications (1)

Publication Number Publication Date
CN111563684A true CN111563684A (en) 2020-08-21

Family

ID=72074603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010385580.6A Pending CN111563684A (en) 2020-05-09 2020-05-09 Load identification method and device and terminal

Country Status (1)

Country Link
CN (1) CN111563684A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830411A (en) * 2022-11-18 2023-03-21 智慧眼科技股份有限公司 Biological feature model training method, biological feature extraction method and related equipment
CN115830411B (en) * 2022-11-18 2023-09-01 智慧眼科技股份有限公司 Biological feature model training method, biological feature extraction method and related equipment

Similar Documents

Publication Publication Date Title
CN110728360B (en) Micro-energy device energy identification method based on BP neural network
CN106600059B (en) Intelligent power grid short-term load prediction method based on improved RBF neural network
Isa et al. Using the self organizing map for clustering of text documents
CN110544177A (en) Load identification method based on power fingerprint and computer readable storage medium
US11042802B2 (en) System and method for hierarchically building predictive analytic models on a dataset
Dudek Pattern similarity-based methods for short-term load forecasting–Part 2: Models
Ri et al. G-mean based extreme learning machine for imbalance learning
CN111832825A (en) Wind power prediction method and system integrating long-term and short-term memory network and extreme learning machine
CN111222689A (en) LSTM load prediction method, medium, and electronic device based on multi-scale temporal features
Suresh et al. A sequential learning algorithm for meta-cognitive neuro-fuzzy inference system for classification problems
CN113836823A (en) Load combination prediction method based on load decomposition and optimized bidirectional long-short term memory network
CN111563684A (en) Load identification method and device and terminal
Subramanian et al. A meta-cognitive interval type-2 fuzzy inference system classifier and its projection based learning algorithm
Urgun et al. Composite power system reliability evaluation using importance sampling and convolutional neural networks
Yu et al. Short-term load forecasting using deep belief network with empirical mode decomposition and local predictor
Zhang et al. A network traffic prediction model based on quantum-behaved particle swarm optimization algorithm and fuzzy wavelet neural network
Mau et al. Kernel-based k-representatives algorithm for fuzzy clustering of categorical data
Shrestha Natural Gradient Methods: Perspectives, Efficient-Scalable Approximations, and Analysis
Ma et al. The pattern classification based on fuzzy min-max neural network with new algorithm
CN115564155A (en) Distributed wind turbine generator power prediction method and related equipment
CN112418564A (en) Charging and battery replacing load prediction method for charging and battery replacing station based on LSTM and related components thereof
Anowar et al. Incremental Learning with Self-labeling of Incoming High-dimensional Data.
CN112285565A (en) Method for predicting SOH (State of health) of battery by transfer learning based on RKHS (remote keyless entry) domain matching
Qing et al. A new clustering algorithm based on artificial immune network and K-means method
Neukart et al. A Machine Learning Approach for Abstraction Based on the Idea of Deep Belief Artificial Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 509, building D, 39 Ruihe Road, Huangpu District, Guangzhou City, Guangdong Province 510000

Applicant after: Guangzhou Shuimu Qinghua Technology Co.,Ltd.

Address before: 510000 room 936, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou City, Guangdong Province

Applicant before: Guangzhou Shuimu Qinghua Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200821
