CN118152923A - Artificial intelligence-based carbon emission monitoring method and system - Google Patents


Info

Publication number
CN118152923A
Authority
CN
China
Prior art keywords
parameters
eigenmode
optimal
complexity
eigenmode function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410291600.1A
Other languages
Chinese (zh)
Inventor
刘翔 (Liu Xiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University Of Finance
Original Assignee
Guangdong University Of Finance
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University Of Finance filed Critical Guangdong University Of Finance
Priority to CN202410291600.1A priority Critical patent/CN118152923A/en
Publication of CN118152923A publication Critical patent/CN118152923A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an artificial intelligence-based carbon emission monitoring method and system, belonging to the technical field of carbon emission monitoring.

Description

Artificial intelligence-based carbon emission monitoring method and system
Technical Field
The invention belongs to the technical field of carbon emission monitoring, and particularly relates to a carbon emission monitoring method and system based on artificial intelligence.
Background
The artificial intelligence-based carbon emission monitoring method and system are mainly used to predict and monitor carbon emissions so that carbon emissions can be evaluated and managed accurately.
Such methods and systems typically use artificial intelligence techniques to collect an original data sequence related to carbon emissions, decompose that sequence by empirical mode decomposition, and predict carbon emissions with an extreme learning machine. However, the existing prediction approach still has several shortcomings. First, some parameters of empirical mode decomposition generally need to be set manually, so the decomposition result is strongly influenced by personal subjectivity; the optimal parameters are difficult to determine quickly in this way, and a large number of experiments and adjustments are usually needed to obtain the optimal decomposition effect. Second, because the original data sequence is generally nonlinear and non-stationary, existing frequency-domain analysis methods such as empirical mode decomposition cannot accurately analyze the carbon emission information, and residual noise such as residual signals is difficult to eliminate. Third, when predicting carbon emissions, the particle swarm optimization-based extreme learning machine is suitable for predicting high-complexity eigenmode functions but not low-complexity ones; the original extreme learning machine has higher prediction accuracy for low-complexity eigenmode functions but is not suitable for high-complexity ones; and the distributed self-adaptive extreme learning machine can predict both high- and low-complexity eigenmode functions but suffers from over-training and slow convergence, which affects the accuracy of the prediction result. It is therefore difficult to predict carbon emissions accurately with an extreme learning machine alone.
Disclosure of Invention
Aiming at the technical problems that, because some parameters of empirical mode decomposition must be set manually, the decomposition result is strongly influenced by personal subjectivity, the optimal parameters are difficult to determine quickly, and a large number of experiments and adjustments are usually needed to obtain the optimal decomposition effect, this scheme optimizes the parameter K3 and the parameter α of empirical mode decomposition, which normally need to be set manually, through a gray wolf optimization algorithm, so that the optimal parameter K3 and parameter α are determined quickly and an optimized empirical mode decomposition is obtained without manual setting, thereby solving these problems. Aiming at the technical problems that, because the original data sequence is nonlinear and non-stationary, existing frequency-domain analysis methods such as empirical mode decomposition cannot accurately analyze the carbon emission information and residual noise such as residual signals is difficult to eliminate, this scheme applies the optimized empirical mode decomposition to the data sequence after preprocessing and adding the Gaussian white noise sequence to obtain an eigenmode function set and a residual signal sequence, and updates the eigenmode function set to obtain an updated eigenmode function set; this operation effectively removes residual noise such as residual signals, while the complexity of the eigenmode functions in the updated set is quantified and, according to this complexity, the eigenmode functions are classified into high-complexity and low-complexity eigenmode functions, so that the carbon emission information can be analyzed accurately. Aiming at the technical problems that the existing particle swarm optimization-based extreme learning machine is not suitable for predicting low-complexity eigenmode functions, the original extreme learning machine is not suitable for predicting high-complexity eigenmode functions, and the distributed self-adaptive extreme learning machine suffers from over-training and slow convergence, this scheme calculates the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculates the optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and the low-complexity eigenmode functions, and optimizes the parameters of the original extreme learning machine according to the optimal weight and threshold parameters and the optimal matrix weight parameters, thereby obtaining the optimized original extreme learning machine.
The technical scheme adopted by the invention is as follows: the invention provides an artificial intelligence-based carbon emission monitoring method, which comprises the following steps:
step S1: the data acquisition is specifically to acquire an original data sequence;
Step S2: optimizing parameters, namely optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain an optimized empirical mode decomposition;
step S3: preprocessing data, namely preprocessing an original data sequence to remove trend and periodic components, obtaining a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence, and obtaining a data sequence subjected to preprocessing and Gaussian white noise sequence addition, wherein the formula is as follows:
Xi(t)=X(t)+βE(δi(t)),i=1,2,…,K;
Wherein X(t) represents the original data sequence at time t, t represents the time variable, Xi(t) represents the data sequence after preprocessing and adding the Gaussian white noise sequence at time t, β represents the Gaussian white noise weight coefficient, δi(t) represents the ith Gaussian white noise sequence at time t, E(δi(t)) represents the eigenmode function obtained by applying the optimized empirical mode decomposition to δi(t), and K represents the number of Gaussian white noise sequences;
Step S4: decomposing a data sequence, namely, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, and updating the eigenmode function set to obtain an updated eigenmode function set;
Step S5: quantifying complexity, namely quantifying the complexity of the eigenmode functions in the updated eigenmode function set, and classifying the eigenmode functions in the updated eigenmode function set into eigenmode functions with high complexity and eigenmode functions with low complexity according to the complexity;
Step S6: calculating optimal parameters, namely, calculating optimal weights and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and low-complexity eigen mode functions, and optimizing the parameters of the original extreme learning machine through the optimal weights, the threshold parameters and the optimal matrix weight parameters to obtain the optimized original extreme learning machine;
step S7: and predicting the carbon emission, namely inputting the high-complexity eigenmode function and the low-complexity eigenmode function into an optimized original extreme learning machine for training to obtain a carbon emission prediction result.
As a further improvement of the present solution, in step S2, the step of optimizing the parameter includes:
Step S21: initializing the maximum optimization iteration times, a parameter alpha and a parameter K 3;
Step S22: calculating a decomposition fitness, wherein the calculation formula of the decomposition fitness is as follows:
Tn=-Kt+Hpe
Wherein T n represents decomposition fitness, K t represents kurtosis, and H pe represents permutation entropy;
Step S23: updating the parameter alpha and the parameter K 3 according to the decomposition fitness;
Step S24: and (3) iteratively executing the steps S52-S53, stopping iteration after the maximum optimization iteration number is reached, and obtaining optimized empirical mode decomposition by updating parameters K 3 and parameters alpha generated in the last iteration to parameters K 3 and parameters alpha of empirical mode decomposition.
As a further improvement of the present solution, in step S4, the step of decomposing the data sequence includes:
step S41: calculating a first group of residual signals, wherein the calculation formula of the first group of residual signals is as follows:
R1=N(Xi(t));
wherein R 1 represents a first group of residual signals, X i (t) represents a data sequence subjected to preprocessing and Gaussian white noise sequence addition at a time t, and N (X i (t)) represents a local mean value of X i (t);
Step S42: calculating a first group of mode components, wherein the calculation formula of the first group of mode components is as follows:
d1=Xi(t)-R1
Wherein d 1 represents a first group of mode components, R 1 represents a first group of residual signals, and X i (t) represents a data sequence after preprocessing and adding a Gaussian white noise sequence at a time t;
Step S43: and adding the Gaussian white noise sequence again, and calculating a second group of residual signals through the local mean value, wherein the calculation formula of the second group of residual signals is as follows:
R2=N(R1+βE(δi(t)));
Wherein R 2 represents the second set of residual signals, R 1 represents the first set of residual signals, δ i (t) represents the i-th gaussian white noise sequence at time t, and E (δ i (t)) represents the eigenmode function obtained by empirical mode decomposition of δ i (t);
step S44: calculating a second set of mode components, wherein the calculation formula of the second set of mode components is as follows:
d2=R1-R2=R1-(N(R1+βE(δi(t))));
Wherein d2 denotes the second set of modal components, R1 denotes the first set of residual signals, R2 denotes the second set of residual signals, β denotes the Gaussian white noise weight coefficient, δi(t) denotes the ith Gaussian white noise sequence at time t, E(δi(t)) denotes the eigenmode function obtained by empirical mode decomposition of δi(t), and N(R1+βE(δi(t))) denotes the local mean of R1+βE(δi(t));
Step S45: presetting an initial value of k to be 3, and setting a deviation threshold value;
Step S46: calculating a kth residual signal and a kth modal component, and turning to step S47, wherein the calculation formula of the kth residual signal and the kth modal component is as follows:
Rk=N(Rk-1+βE(δk(t))),k=3,4,…,K;
dk=Rk-1-Rk,k=3,4,…,K;
Wherein Rk represents the kth residual signal, dk represents the kth modal component, K represents the number of Gaussian white noise sequences, δk(t) represents the kth Gaussian white noise sequence at time t, E(δk(t)) represents the eigenmode function obtained by empirical mode decomposition of δk(t), i.e., the kth eigenmode function, β represents the Gaussian white noise weight coefficient, Rk-1 represents the (k-1)th residual signal, and N(Rk-1+βE(δk(t))) represents the local mean of Rk-1+βE(δk(t));
Step S47: forming a residual signal sequence according to the residual signals, setting k=k+1, judging whether a termination condition is met, if so, turning to step S48, otherwise, turning to step S46;
The termination condition is that the residual signal sequence decreases monotonically and the standard deviation is smaller than the deviation threshold, and the calculation formula of the standard deviation is as follows:
σ=||Ik+1-Ik||2/||Ik||2;
Wherein σ represents the standard deviation, Ik represents the kth eigenmode function, Ik+1 represents the (k+1)th eigenmode function, ||Ik||2 represents the L2 norm of the kth eigenmode function, and ||Ik+1-Ik||2 represents the L2 norm of the difference between the (k+1)th eigenmode function and the kth eigenmode function;
Step S48: presetting a mean square threshold, constructing an eigenmode function set according to eigenmode functions, calculating the root mean square value of each eigenmode function, traversing all eigenmode functions, removing the eigenmode function from the eigenmode function set if the root mean square value of the eigenmode function is smaller than or equal to the mean square threshold, and adding the eigenmode function into the rest items;
step S49: and adding the residual items into the eigenmode function set as eigenmode functions to obtain an updated eigenmode function set.
As a further improvement of the present solution, in step S5, the step of quantifying complexity includes:
Step S51: presetting an embedding parameter t 1 and a space offset w 1 of an original data sequence;
Step S52: traversing the eigenmode functions in the updated eigenmode function set, and constructing a recursion chart by calculating Chebyshev distances of the eigenmode functions;
Step S53: analyzing the recursion diagram by using the binary symbiotic matrix to obtain a recursion mapping result
Step S54: increasing the value of the embedding parameter t 1, i.e. t 1=t1 +1, and repeating steps S52 and S53 to obtain a new recursive mapping result
Step S55: calculating the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function, and evaluating the complexity of each eigenmode function according to the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function, wherein the calculation formula of the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function is as follows:
Wherein SDRSAMPEN (t 1,w1) represents the spatial correlation recursive sample entropy of the eigenmode function, Representing recursive mapping results,/>Representing the new recursive mapping result, t 1 representing the embedding parameters, w 1 representing the spatial offset of the original data sequence;
step S56: each eigenmode function is classified into a high-complexity eigenmode function and a low-complexity eigenmode function according to the complexity of each eigenmode function.
As a further improvement of the present solution, in step S6, the step of calculating the optimal parameter includes:
Step S61: calculating optimal weight and threshold parameters, namely calculating the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm;
Step S62: calculating an optimal matrix weight parameter of the original extreme learning machine, specifically, calculating the optimal matrix weight parameter of the original extreme learning machine through a myxobacteria algorithm and an eigenvalue function with low complexity;
Step S63: and optimizing the parameters of the original extreme learning machine according to the optimal weight, the threshold value parameter and the optimal matrix weight parameter to obtain the optimized original extreme learning machine.
As a further improvement of the present solution, in step S61, the step of calculating the optimal weight and the threshold parameter includes:
Step S611: initializing parameters, namely initializing parameters of a particle swarm optimization algorithm and an original extreme learning machine, wherein the parameters of the particle swarm optimization algorithm comprise maximum iteration times, particle number N', learning factors c 1 and c 2 and inertia weight factors w 2, and the parameters of the original extreme learning machine comprise a weight matrix from an input layer to an implicit layer and a weight matrix from the implicit layer to an output layer;
Step S612: generating an initial particle group, specifically, randomly generating the velocity V ' and the position X ' of N ' particles, and initializing the optimal position P best and the global optimal position G best of each particle;
Step S613: the velocity and position of each particle are updated using the following formula:
X′i(t+1)=X′i(t)+V′i(t+1);
Wherein w 2 represents an inertial weight factor, c 1 and c 2 represent learning factors, P best represents an optimal position of each particle, G best represents a global optimal position, V 'i (t) represents a speed of an ith particle at time t, V' i (t+1) represents a speed of an ith particle at time t+1, i.e., an updated speed of an ith particle, X 'i (t) represents a position of an ith particle at time t, X' i (t+1) represents a position of an ith particle at time t+1, i.e., an updated position of an ith+1 particle, and rand () represents a random number between 0 and 1;
Step S614: the method comprises the steps of updating parameters of an original extreme learning machine, namely setting the updated position of each particle as values of a weight matrix from an input layer to an implicit layer and a weight matrix from the implicit layer to an output layer, training the original extreme learning machine by using a training data set, calculating performance indexes of each particle on a verification set, updating the optimal position P best of the particle to the updated position of the particle if the performance indexes of the particle on the verification set are superior to the optimal position P best of the particle, and updating the global optimal position G best to the updated position of the particle if the performance indexes of the particle on the verification set are superior to the global optimal position G best;
step S615: and iteratively executing the steps S613 to S614, ending the iterative process after the maximum iterative times are reached, and taking the values of the weight matrix from the input layer to the hidden layer and the weight matrix from the hidden layer to the output layer at the last iteration as optimal weight and threshold parameters.
As a further improvement of the present solution, in step S62, the step of calculating an optimal matrix weight parameter of the original extreme learning machine includes:
Step S621: initializing parameters of a myxobacteria algorithm, wherein the parameters of the myxobacteria algorithm comprise low complex iteration times and weight coefficients w 3 of the myxobacteria algorithm;
Step S622: initializing an individual position X ' and an individual velocity V ', the individual position X ' being a matrix of M rows and N columns, an element X ' ij in the individual position X ' representing the position of the ith individual in the jth dimension, the individual velocity V "is a matrix of M rows and N columns, and the element V" ij in the individual velocity V "represents the velocity of the ith individual in the jth dimension;
Step S623: calculating the fitness value of the individual position X';
Step S624: updating the individual position X 'and the individual velocity V' to obtain an updated individual position X 'and an updated individual velocity V', the calculation formula for updating the individual position X 'and the individual speed V' is as follows:
x″ij(t+1)=x″ij(t)+v″ij(t+1);
v″ij(t+1)=w3*v″ij(t)+c3*r3*(pbestij(t)-x″ij(t))+c4*r4*(gbestij(t)-x″ij(t));
Wherein x″ij(t) represents the position of the ith individual in the jth dimension at time t, x″ij(t+1) represents the position of the ith individual in the jth dimension at time t+1, i.e., the updated position of the ith individual in the jth dimension, v″ij(t) represents the velocity of the ith individual in the jth dimension at time t, v″ij(t+1) represents the velocity of the ith individual in the jth dimension at time t+1, i.e., the updated velocity of the ith individual in the jth dimension, w3 represents the weight coefficient of the myxobacteria algorithm, c3 and c4 represent learning factors, r3 and r4 represent random numbers between 0 and 1, pbestij(t) represents the individual optimal position of the ith individual in the jth dimension at time t, and gbestij(t) represents the global optimal position in the jth dimension at time t;
Step S625: Calculating the fitness value of the updated individual position X″ according to the updated individual position X″;
Step S626: Iteratively executing steps S624 to S625, ending the iteration process after the low-complexity iteration number is reached, and selecting the optimal individual position X″ as the optimal matrix weight parameter of the original extreme learning machine according to the fitness value.
The invention provides an artificial intelligence-based carbon emission monitoring system, which comprises a data acquisition module, a data preprocessing module, a sequence decomposition module, a complexity quantization module, a parameter optimization module, an optimal parameter calculation module and a carbon emission prediction module;
the data acquisition module is used for data acquisition, specifically, acquiring an original data sequence and sending the original data sequence to the data preprocessing module;
The parameter optimization module is used for parameter optimization, specifically, optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain an optimized empirical mode decomposition;
The data preprocessing module is used for preprocessing data, specifically, preprocessing an original data sequence to remove trend and periodic components, obtaining a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence to obtain a data sequence subjected to preprocessing and the Gaussian white noise sequence, and sending the data sequence subjected to preprocessing and the Gaussian white noise sequence to the sequence decomposition module;
The sequence decomposition module is used for decomposing a data sequence, specifically, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, updating the eigenmode function set to obtain an updated eigenmode function set, and sending the updated eigenmode function set to the complexity quantization module;
The complexity quantization module is used for quantizing the complexity, specifically, quantizing the complexity of the eigenmode functions in the updated eigenmode function set, classifying the eigenmode functions in the updated eigenmode function set into high-complexity eigenmode functions and low-complexity eigenmode functions according to the complexity, sending the high-complexity eigenmode functions to the carbon emission prediction module, and sending the low-complexity eigenmode functions to the optimal parameter calculation module and the carbon emission prediction module;
The optimal parameter calculation module is used for calculating optimal parameters, specifically, calculating optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and low-complexity eigen mode functions, optimizing parameters of the original extreme learning machine through the optimal weight and threshold parameters and the optimal matrix weight parameters to obtain an optimized original extreme learning machine, and sending the optimized original extreme learning machine to the carbon emission prediction module;
The carbon emission prediction module is used for predicting the carbon emission, specifically, inputting the high-complexity eigenmode function and the low-complexity eigenmode function into the optimized original extreme learning machine for training, and obtaining a carbon emission prediction result.
By adopting the scheme, the beneficial effects obtained by the invention are as follows:
(1) Aiming at the technical problems that, because some parameters of empirical mode decomposition must be set manually, the decomposition result is strongly influenced by personal subjectivity, the optimal parameters are difficult to determine quickly, and a large number of tests and adjustments are usually needed to obtain the optimal decomposition effect, this scheme optimizes the parameter K3 and the parameter α of empirical mode decomposition, which normally need to be set manually, through a gray wolf optimization algorithm, so that the optimal parameter K3 and parameter α are determined quickly and an optimized empirical mode decomposition is obtained without manual setting, thereby solving the above technical problems.
(2) Aiming at the technical problems that, because the original data sequence is nonlinear and non-stationary, existing frequency-domain analysis methods such as empirical mode decomposition cannot accurately analyze the carbon emission information and residual noise such as residual signals is difficult to eliminate, this scheme applies the optimized empirical mode decomposition to the data sequence after preprocessing and adding the Gaussian white noise sequence to obtain an eigenmode function set and a residual signal sequence, and updates the eigenmode function set to obtain an updated eigenmode function set; this operation effectively removes residual noise such as residual signals, while the complexity of the eigenmode functions in the updated set is quantified and the eigenmode functions are classified into high-complexity and low-complexity eigenmode functions according to their complexity, so that the carbon emission information can be analyzed accurately.
(3) Aiming at the technical problems that the existing particle swarm optimization-based extreme learning machine is not suitable for predicting low-complexity eigenmode functions, the original extreme learning machine is not suitable for predicting high-complexity eigenmode functions, and the distributed self-adaptive extreme learning machine suffers from over-training and slow convergence, this scheme calculates the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculates the optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and the low-complexity eigenmode functions, and optimizes the parameters of the original extreme learning machine according to the optimal weight and threshold parameters and the optimal matrix weight parameters, thereby obtaining the optimized original extreme learning machine.
Drawings
FIG. 1 is a schematic flow chart of an artificial intelligence based carbon emission monitoring method provided by the invention;
FIG. 2 is a flow chart of step S2;
FIG. 3 is a flow chart of step S4;
fig. 4 is a flow chart of step S5;
fig. 5 is a flow chart of step S6;
fig. 6 is a flow chart of step S61;
Fig. 7 is a flow chart of step S62;
fig. 8 is a schematic structural diagram of an artificial intelligence-based carbon emission monitoring system according to the present invention.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
In an embodiment, referring to fig. 1, the method for monitoring carbon emission based on artificial intelligence provided by the invention includes:
step S1: the data acquisition is specifically to acquire an original data sequence;
Step S2: optimizing parameters, namely optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain an optimized empirical mode decomposition;
step S3: preprocessing data, namely preprocessing an original data sequence to remove trend and periodic components, obtaining a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence, and obtaining a data sequence subjected to preprocessing and Gaussian white noise sequence addition, wherein the formula is as follows:
Xi(t)=X(t)+βE(δi(t)),i=1,2,…,K;
Wherein X(t) represents the original data sequence at time t, t represents the time variable, Xi(t) represents the data sequence after preprocessing and adding the Gaussian white noise sequence at time t, β represents the Gaussian white noise weight coefficient, δi(t) represents the ith Gaussian white noise sequence at time t, E(δi(t)) represents the eigenmode function obtained by applying the optimized empirical mode decomposition to δi(t), and K represents the number of Gaussian white noise sequences;
Step S4: decomposing a data sequence, namely, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, and updating the eigenmode function set to obtain an updated eigenmode function set;
Step S5: quantifying complexity, namely quantifying the complexity of the eigenmode functions in the updated eigenmode function set, and classifying the eigenmode functions in the updated eigenmode function set into eigenmode functions with high complexity and eigenmode functions with low complexity according to the complexity;
Step S6: calculating optimal parameters, namely, calculating optimal weights and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and low-complexity eigen mode functions, and optimizing the parameters of the original extreme learning machine through the optimal weights, the threshold parameters and the optimal matrix weight parameters to obtain the optimized original extreme learning machine;
step S7: the carbon emission is predicted, specifically, the high-complexity eigenmode function and the low-complexity eigenmode function are input into an optimized original extreme learning machine for training, and a carbon emission prediction result is obtained;
In the above operation, aiming at the technical problems that, because some parameters of empirical mode decomposition must be set manually, the decomposition result is strongly influenced by personal subjectivity, the optimal parameters are difficult to determine quickly, and a large number of tests and adjustments are usually needed to obtain the optimal decomposition effect, this scheme optimizes the parameter K3 and the parameter α of empirical mode decomposition, which normally need to be set manually, through a gray wolf optimization algorithm, so that the optimal parameter K3 and parameter α are determined quickly and an optimized empirical mode decomposition is obtained without manual setting, thereby solving these problems; meanwhile, aiming at the technical problems that, because the original data sequence is nonlinear and non-stationary, existing frequency-domain analysis methods such as empirical mode decomposition cannot accurately analyze the carbon emission information and residual noise such as residual signals is difficult to eliminate, this scheme applies the optimized empirical mode decomposition to the data sequence after preprocessing and adding the Gaussian white noise sequence to obtain an eigenmode function set and a residual signal sequence, and updates the eigenmode function set to obtain an updated eigenmode function set, an operation that effectively removes residual noise such as residual signals.
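To make the noise-injection of step S3 concrete, the following Python sketch (an illustration only, not part of the claimed method) builds the K noisy copies Xi(t) = X(t) + βE(δi(t)); for simplicity the raw white-noise realization stands in for E(δi(t)), whereas the method obtains E(δi(t)) from the optimized empirical mode decomposition of the noise, and the function and parameter names here are assumptions.

```python
import numpy as np

def add_noise_realizations(x, K=10, beta=0.2, seed=0):
    """Build K noisy copies of the preprocessed sequence x following
    Xi(t) = X(t) + beta * E(delta_i(t)); the raw Gaussian white-noise
    sequence is used as a stand-in for E(delta_i(t))."""
    rng = np.random.default_rng(seed)
    noisy = []
    for _ in range(K):
        delta = rng.standard_normal(len(x))      # i-th Gaussian white-noise sequence
        noisy.append(np.asarray(x, dtype=float) + beta * delta)
    return np.asarray(noisy)

# usage: a 200-sample synthetic emission series, K = 10 noise realizations
x = np.sin(np.linspace(0, 8 * np.pi, 200))
ensemble = add_noise_realizations(x, K=10, beta=0.2)
print(ensemble.shape)   # (10, 200)
```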
In a second embodiment, referring to fig. 2, in step S2, the step of optimizing the parameter includes:
Step S21: initializing the maximum optimization iteration times, a parameter alpha and a parameter K 3;
Step S22: calculating a decomposition fitness, wherein the calculation formula of the decomposition fitness is as follows:
Tn=-Kt+Hpe
Wherein T n represents decomposition fitness, K t represents kurtosis, and H pe represents permutation entropy;
Step S23: updating the parameter alpha and the parameter K 3 according to the decomposition fitness;
Step S24: and (3) iteratively executing the steps S52-S53, stopping iteration after the maximum optimization iteration number is reached, and obtaining optimized empirical mode decomposition by updating parameters K 3 and parameters alpha generated in the last iteration to parameters K 3 and parameters alpha of empirical mode decomposition.
An embodiment III, which is based on the above embodiment, referring to FIG. 3, in step S4, the step of decomposing the data sequence includes:
step S41: calculating a first group of residual signals, wherein the calculation formula of the first group of residual signals is as follows:
R1=N(Xi(t));
wherein R 1 represents a first group of residual signals, X i (t) represents a data sequence subjected to preprocessing and Gaussian white noise sequence addition at a time t, and N (X i (t)) represents a local mean value of X i (t);
Step S42: calculating a first group of mode components, wherein the calculation formula of the first group of mode components is as follows:
d1=Xi(t)-R1
Wherein d 1 represents a first group of mode components, R 1 represents a first group of residual signals, and X i (t) represents a data sequence after preprocessing and adding a Gaussian white noise sequence at a time t;
Step S43: and adding the Gaussian white noise sequence again, and calculating a second group of residual signals through the local mean value, wherein the calculation formula of the second group of residual signals is as follows:
R2=N(R1+βE(δi(t)));
Wherein R 2 represents the second set of residual signals, R 1 represents the first set of residual signals, δ i (t) represents the i-th gaussian white noise sequence at time t, and E (δ i (t)) represents the eigenmode function obtained by empirical mode decomposition of δ i (t);
step S44: calculating a second set of mode components, wherein the calculation formula of the second set of mode components is as follows:
d2=R1-R2=R1-(N(R1+βE(δi(t))));
Wherein d2 denotes the second set of modal components, R1 denotes the first set of residual signals, R2 denotes the second set of residual signals, β denotes the Gaussian white noise weight coefficient, δi(t) denotes the ith Gaussian white noise sequence at time t, E(δi(t)) denotes the eigenmode function obtained by empirical mode decomposition of δi(t), and N(R1+βE(δi(t))) denotes the local mean of R1+βE(δi(t));
Step S45: presetting an initial value of k to be 3, and setting a deviation threshold value;
Step S46: calculating a kth residual signal and a kth modal component, and turning to step S47, wherein the calculation formula of the kth residual signal and the kth modal component is as follows:
Rk=N(Rk-1+βE(δk(t))),k=3,4,…,K;
dk=Rk-1-Rk,k=3,4,…,K;
Wherein Rk represents the kth residual signal, dk represents the kth modal component, K represents the number of Gaussian white noise sequences, δk(t) represents the kth Gaussian white noise sequence at time t, E(δk(t)) represents the eigenmode function obtained by empirical mode decomposition of δk(t), i.e., the kth eigenmode function, β represents the Gaussian white noise weight coefficient, Rk-1 represents the (k-1)th residual signal, and N(Rk-1+βE(δk(t))) represents the local mean of Rk-1+βE(δk(t));
Step S47: forming a residual signal sequence according to the residual signals, setting k=k+1, judging whether a termination condition is met, if so, turning to step S48, otherwise, turning to step S46;
The termination condition is that the residual signal sequence decreases monotonically and the standard deviation is smaller than the deviation threshold, and the calculation formula of the standard deviation is as follows:
σ=||Ik+1-Ik||2/||Ik||2;
Wherein σ represents the standard deviation, Ik represents the kth eigenmode function, Ik+1 represents the (k+1)th eigenmode function, ||Ik||2 represents the L2 norm of the kth eigenmode function, and ||Ik+1-Ik||2 represents the L2 norm of the difference between the (k+1)th eigenmode function and the kth eigenmode function;
Step S48: presetting a mean square threshold, constructing an eigenmode function set according to eigenmode functions, calculating the root mean square value of each eigenmode function, traversing all eigenmode functions, removing the eigenmode function from the eigenmode function set if the root mean square value of the eigenmode function is smaller than or equal to the mean square threshold, and adding the eigenmode function into the rest items;
step S49: and adding the residual items into the eigenmode function set as eigenmode functions to obtain an updated eigenmode function set.
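The recursion of steps S41 to S49 can be sketched as follows. This is an illustrative simplification, assuming the local mean operator N(·) is replaced by a moving average and the noise modes E(δk(t)) are supplied by the caller; it mirrors the structure of the update Rk = N(Rk-1 + βE(δk(t))), dk = Rk-1 - Rk and the RMS-based pruning rather than reproducing the optimized decomposition itself.

```python
import numpy as np

def local_mean(x, w=11):
    # Stand-in for the local mean operator N(.); the actual method would use
    # the envelope mean produced by the optimized empirical mode decomposition.
    return np.convolve(x, np.ones(w) / w, mode="same")

def decompose(x_noisy, noise_modes, beta=0.2, dev_threshold=0.2, rms_threshold=1e-3):
    """Peel modal components d_k off the noisy sequence following
    R_k = N(R_{k-1} + beta*E(delta_k(t))), d_k = R_{k-1} - R_k, then prune
    near-zero modes by their RMS value and fold them into the remaining items."""
    residuals, modes = [], []
    R = local_mean(x_noisy)                     # R_1 = N(X_i(t))
    residuals.append(R)
    modes.append(x_noisy - R)                   # d_1 = X_i(t) - R_1
    for k in range(1, len(noise_modes)):
        R_new = local_mean(R + beta * noise_modes[k])     # R_k
        modes.append(R - R_new)                           # d_k
        residuals.append(R_new)
        # termination check: residual energies decrease and modes change little
        sigma = np.linalg.norm(modes[-1] - modes[-2]) / (np.linalg.norm(modes[-2]) + 1e-12)
        energies = [np.linalg.norm(r) for r in residuals]
        monotone = all(a >= b for a, b in zip(energies, energies[1:]))
        if monotone and sigma < dev_threshold:
            break
        R = R_new
    kept, remainder = [], np.zeros_like(x_noisy)
    for d in modes:
        if np.sqrt(np.mean(d ** 2)) <= rms_threshold:
            remainder += d                      # fold low-RMS modes into the remaining items
        else:
            kept.append(d)
    kept.append(remainder)                      # remaining items re-added as a final mode
    return kept, residuals

# usage with a synthetic series; raw noise stands in for the modes E(delta_k(t))
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 6 * np.pi, 300))
noise = rng.standard_normal((8, 300))
imfs, residuals = decompose(x + 0.2 * noise[0], noise)
print(len(imfs), len(residuals))
```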
Embodiment four, which is based on the above embodiment, referring to fig. 4, in step S5, the step of quantifying complexity includes:
Step S51: presetting an embedding parameter t 1 and a space offset w 1 of an original data sequence;
Step S52: traversing the eigenmode functions in the updated eigenmode function set, and constructing a recursion chart by calculating Chebyshev distances of the eigenmode functions;
Step S53: analyzing the recursion diagram by using the binary symbiotic matrix to obtain a recursion mapping result
Step S54: increasing the value of the embedding parameter t 1, i.e. t 1=t1 +1, and repeating steps S52 and S53 to obtain a new recursive mapping result
Step S55: calculating the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function, and evaluating the complexity of each eigenmode function according to the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function, wherein the calculation formula of the spatial correlation recursive sample entropy SDRSAMPEN (t 1,w1) of each eigenmode function is as follows:
Wherein SDRSAMPEN (t 1,w1) represents the spatial correlation recursive sample entropy of the eigenmode function, Representing recursive mapping results,/>Representing the new recursive mapping result, t 1 representing the embedding parameters, w 1 representing the spatial offset of the original data sequence;
step S56: each eigenmode function is classified into a high-complexity eigenmode function and a low-complexity eigenmode function according to the complexity of each eigenmode function.
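A sketch of the complexity quantification in steps S51 to S56. The exact SDRSampEn(t1, w1) expression is not reproduced in the text above, so this illustration falls back to a classical sample-entropy-style estimate, the negative log ratio of Chebyshev-distance recurrence counts at embedding sizes t1+1 and t1; the tolerance choice, the median split and all helper names are assumptions.

```python
import numpy as np

def embed(x, m, delay=1):
    """Delay-embed a 1-D series into vectors of length m."""
    n = len(x) - (m - 1) * delay
    return np.array([x[i:i + m * delay:delay] for i in range(n)])

def recurrence_count(x, m, r):
    """Number of vector pairs whose Chebyshev distance is below r, i.e. the
    sum over the off-diagonal entries of a binary recurrence matrix."""
    E = embed(x, m)
    count = 0
    for i in range(len(E) - 1):
        d = np.max(np.abs(E[i + 1:] - E[i]), axis=1)   # Chebyshev distance
        count += int(np.sum(d < r))
    return count

def complexity(x, t1=2, w1=None):
    """Sample-entropy-style score: -ln(matches at t1+1 / matches at t1)."""
    x = np.asarray(x, dtype=float)
    if w1 is None:
        w1 = 0.2 * np.std(x)                           # assumed tolerance choice
    B = recurrence_count(x, t1, w1)
    A = recurrence_count(x, t1 + 1, w1)
    return -np.log((A + 1e-12) / (B + 1e-12))

def split_by_complexity(modes, threshold=None):
    """Split eigenmode functions into high- and low-complexity groups."""
    scores = np.array([complexity(d) for d in modes])
    thr = np.median(scores) if threshold is None else threshold
    high = [d for d, s in zip(modes, scores) if s > thr]
    low = [d for d, s in zip(modes, scores) if s <= thr]
    return high, low, scores

# usage: a noise-like mode should score as more complex than a smooth one
rng = np.random.default_rng(3)
high, low, scores = split_by_complexity([rng.standard_normal(300),
                                         np.sin(np.linspace(0, 4 * np.pi, 300))])
print(scores)
```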
Embodiment five, which is based on the above embodiment, referring to fig. 5, 6 and 7, in step S6, the step of calculating the optimal parameter includes:
Step S61: calculating optimal weight and threshold parameters, namely calculating the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm;
Step S62: calculating an optimal matrix weight parameter of the original extreme learning machine, specifically, calculating the optimal matrix weight parameter of the original extreme learning machine through a myxobacteria algorithm and an eigenvalue function with low complexity;
Step S63: optimizing parameters of the original extreme learning machine according to the optimal weight, the threshold value parameter and the optimal matrix weight parameter to obtain an optimized original extreme learning machine;
in step S61, the step of calculating the optimal weight and the threshold parameter includes:
Step S611: initializing parameters, namely initializing parameters of a particle swarm optimization algorithm and an original extreme learning machine, wherein the parameters of the particle swarm optimization algorithm comprise maximum iteration times, particle number N', learning factors c 1 and c 2 and inertia weight factors w 2, and the parameters of the original extreme learning machine comprise a weight matrix from an input layer to an implicit layer and a weight matrix from the implicit layer to an output layer;
Step S612: generating an initial particle group, specifically, randomly generating the velocity V ' and the position X ' of N ' particles, and initializing the optimal position P best and the global optimal position G best of each particle;
Step S613: the velocity and position of each particle are updated using the following formula:
X′i(t+1)=X′i(t)+V′i(t+1);
Wherein w 2 represents an inertial weight factor, c 1 and c 2 represent learning factors, P best represents an optimal position of each particle, G best represents a global optimal position, V 'i (t) represents a speed of an ith particle at time t, V' i (t+1) represents a speed of an ith particle at time t+1, i.e., an updated speed of an ith particle, X 'i (t) represents a position of an ith particle at time t, X' i (t+1) represents a position of an ith particle at time t+1, i.e., an updated position of an ith+1 particle, and rand () represents a random number between 0 and 1;
Step S614: the method comprises the steps of updating parameters of an original extreme learning machine, namely setting the updated position of each particle as values of a weight matrix from an input layer to an implicit layer and a weight matrix from the implicit layer to an output layer, training the original extreme learning machine by using a training data set, calculating performance indexes of each particle on a verification set, updating the optimal position P best of the particle to the updated position of the particle if the performance indexes of the particle on the verification set are superior to the optimal position P best of the particle, and updating the global optimal position G best to the updated position of the particle if the performance indexes of the particle on the verification set are superior to the global optimal position G best;
Step S615: step S613-S614 are executed in an iteration mode, after the maximum iteration times are reached, the iteration process is ended, and the values of the weight matrix from the input layer to the hidden layer and the weight matrix from the hidden layer to the output layer at the last iteration are used as optimal weight and threshold parameters;
in step S62, the step of calculating the optimal matrix weight parameters of the original extreme learning machine includes:
Step S621: initializing parameters of a myxobacteria algorithm, wherein the parameters of the myxobacteria algorithm comprise low complex iteration times and weight coefficients w 3 of the myxobacteria algorithm;
Step S622: initializing an individual position X ' and an individual velocity V ', the individual position X ' being a matrix of M rows and N columns, an element X ' ij in the individual position X ' representing the position of the ith individual in the jth dimension, the individual velocity V "is a matrix of M rows and N columns, and the element V" ij in the individual velocity V "represents the velocity of the ith individual in the jth dimension;
Step S623: calculating the fitness value of the individual position X';
Step S624: updating the individual position X 'and the individual velocity V' to obtain an updated individual position X 'and an updated individual velocity V', the calculation formula for updating the individual position X 'and the individual speed V' is as follows:
x″ij(t+1)=x″ij(t)+v″ij(t+1);
v″ij(t+1)=w3*v″ij(t)+c3*r3*(pbestij(t)-x″ij(t))+c4*r4*(gbestij(t)-x″ij(t));
Wherein x″ij(t) represents the position of the ith individual in the jth dimension at time t, x″ij(t+1) represents the position of the ith individual in the jth dimension at time t+1, i.e., the updated position of the ith individual in the jth dimension, v″ij(t) represents the velocity of the ith individual in the jth dimension at time t, v″ij(t+1) represents the velocity of the ith individual in the jth dimension at time t+1, i.e., the updated velocity of the ith individual in the jth dimension, w3 represents the weight coefficient of the myxobacteria algorithm, c3 and c4 represent learning factors, r3 and r4 represent random numbers between 0 and 1, pbestij(t) represents the individual optimal position of the ith individual in the jth dimension at time t, and gbestij(t) represents the global optimal position in the jth dimension at time t;
Step S625: Calculating the fitness value of the updated individual position X″ according to the updated individual position X″;
Step S626: Iteratively executing steps S624 to S625, ending the iteration process after the low-complexity iteration number is reached, and selecting the optimal individual position X″ as the optimal matrix weight parameter of the original extreme learning machine according to the fitness value;
In the above operation, aiming at the technical problems that the existing particle swarm optimization-based extreme learning machine is not suitable for predicting low-complexity eigenmode functions, the original extreme learning machine is not suitable for predicting high-complexity eigenmode functions, and the distributed self-adaptive extreme learning machine suffers from over-training and slow convergence, this scheme calculates the optimal weight and threshold parameters of the original extreme learning machine through the particle swarm optimization algorithm, calculates the optimal matrix weight parameters of the original extreme learning machine through the slime algorithm and the low-complexity eigenmode functions, and optimizes the parameters of the original extreme learning machine according to the optimal weight and threshold parameters and the optimal matrix weight parameters, so that the optimized original extreme learning machine can predict both high-complexity and low-complexity eigenmode functions while avoiding the defects of over-training and slow convergence.
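Steps S61 and S62 share the same pbest/gbest update rule (steps S613 and S624), so the sketch below implements that rule once and applies it twice: first to the extreme learning machine input weights and biases (the weight and threshold parameters), then to a flattened M×N matrix. The tanh activation, the validation-MSE fitness, the toy data and the quadratic placeholder fitness for the matrix search are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def swarm_optimize(fitness, dim, n=15, iters=40, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic pbest/gbest swarm search implementing the shared update rule
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), x <- x + v."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n, dim))
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_f = np.array([fitness(p) for p in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        f = np.array([fitness(p) for p in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

def elm_validation_mse(params, X_tr, y_tr, X_va, y_va, hidden):
    """Decode a particle into ELM input weights W and biases b, solve the output
    weights by least squares, and score the result on the validation set."""
    n_in = X_tr.shape[1]
    W = params[: n_in * hidden].reshape(n_in, hidden)
    b = params[n_in * hidden:]
    H = np.tanh(X_tr @ W + b)
    beta_out = np.linalg.pinv(H) @ y_tr
    pred = np.tanh(X_va @ W + b) @ beta_out
    return float(np.mean((pred - y_va) ** 2))

# toy lagged regression task (lag = 4, hidden = 10)
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 12 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
X_all = np.array([series[i:i + 4] for i in range(len(series) - 4)])
y_all = series[4:]
X_tr, y_tr, X_va, y_va = X_all[:300], y_all[:300], X_all[300:], y_all[300:]

# step S61 analogue: search the input weights and biases (weight and threshold parameters)
best_wb = swarm_optimize(
    lambda p: elm_validation_mse(p, X_tr, y_tr, X_va, y_va, hidden=10),
    dim=4 * 10 + 10)

# step S62 analogue: the same update rule searches a flattened M x N weight matrix,
# scored here by a placeholder quadratic fitness instead of the low-complexity modes
best_matrix = swarm_optimize(lambda p: float(np.sum((p - 0.3) ** 2)), dim=6 * 5).reshape(6, 5)
print(best_matrix.shape)
```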
An embodiment six is based on the above embodiment, referring to fig. 8, and the artificial intelligence-based carbon emission monitoring system provided by the invention includes a data acquisition module, a data preprocessing module, a sequence decomposition module, a complexity quantization module, a parameter optimization module, an optimal parameter calculation module and a carbon emission prediction module;
the data acquisition module is used for data acquisition, specifically, acquiring an original data sequence and sending the original data sequence to the data preprocessing module;
The parameter optimization module is used for parameter optimization, specifically, optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain an optimized empirical mode decomposition;
The data preprocessing module is used for preprocessing data, specifically, preprocessing an original data sequence to remove trend and periodic components, obtaining a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence to obtain a data sequence subjected to preprocessing and the Gaussian white noise sequence, and sending the data sequence subjected to preprocessing and the Gaussian white noise sequence to the sequence decomposition module;
The sequence decomposition module is used for decomposing a data sequence, specifically, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, updating the eigenmode function set to obtain an updated eigenmode function set, and sending the updated eigenmode function set to the complexity quantization module;
The complexity quantization module is used for quantizing the complexity, specifically, quantizing the complexity of the eigenmode functions in the updated eigenmode function set, classifying the eigenmode functions in the updated eigenmode function set into high-complexity eigenmode functions and low-complexity eigenmode functions according to the complexity, sending the high-complexity eigenmode functions to the carbon emission prediction module, and sending the low-complexity eigenmode functions to the optimal parameter calculation module and the carbon emission prediction module;
The optimal parameter calculation module is used for calculating optimal parameters, specifically, calculating optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating optimal matrix weight parameters of the original extreme learning machine through a slime algorithm and low-complexity eigen mode functions, optimizing parameters of the original extreme learning machine through the optimal weight and threshold parameters and the optimal matrix weight parameters to obtain an optimized original extreme learning machine, and sending the optimized original extreme learning machine to the carbon emission prediction module;
The carbon emission prediction module is used for predicting the carbon emission, specifically, inputting the high-complexity eigenmode function and the low-complexity eigenmode function into the optimized original extreme learning machine for training, and obtaining a carbon emission prediction result.
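For the carbon emission prediction module (step S7), the following sketch trains one extreme learning machine per eigenmode function and sums the per-mode outputs. The patent only states that the high- and low-complexity eigenmode functions are fed to the optimized extreme learning machine, so the summation recombination, the lag length and the random (non-optimized) input weights here are assumptions made for illustration.

```python
import numpy as np

def lagged(series, lag=6):
    """Turn a 1-D mode into (lagged-input, next-value) training pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

def train_predict_per_mode(modes, lag=6, hidden=16, seed=0):
    """Fit one single-hidden-layer ELM per eigenmode function and sum the
    per-mode fitted values to form the overall carbon-emission estimate."""
    rng = np.random.default_rng(seed)
    total = None
    for mode in modes:
        X, y = lagged(mode, lag)
        W = rng.uniform(-1, 1, (lag, hidden))   # in the full scheme these would be the
        b = rng.uniform(-1, 1, hidden)          # swarm-optimized weight/threshold values
        H = np.tanh(X @ W + b)
        beta_out = np.linalg.pinv(H) @ y        # output weights by least squares
        fit = H @ beta_out
        total = fit if total is None else total + fit
    return total

# usage: two toy modes standing in for the high- and low-complexity eigenmode functions
t = np.linspace(0, 10 * np.pi, 400)
modes = [np.sin(t), 0.3 * np.sin(5 * t)]
print(train_predict_per_mode(modes).shape)   # (394,)
```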
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if a person of ordinary skill in the art, enlightened by this disclosure, devises a structure or embodiment similar to this technical solution without departing from the gist of the invention, it shall fall within the scope of protection of the invention.

Claims (8)

1. The artificial intelligence-based carbon emission monitoring method is characterized by comprising the following steps of:
step S1: the data acquisition is specifically to acquire an original data sequence;
Step S2: optimizing parameters, namely optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain an optimized empirical mode decomposition;
step S3: preprocessing data, namely preprocessing an original data sequence to remove trend and periodic components, obtaining a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence, and obtaining a data sequence subjected to preprocessing and Gaussian white noise sequence addition, wherein the formula is as follows:
Xi(t)=X(t)+βE(δi(t)),i=1,2,…,K;
Wherein X(t) represents the original data sequence at time t, t represents the time variable, Xi(t) represents the data sequence after preprocessing and adding the Gaussian white noise sequence at time t, β represents the Gaussian white noise weight coefficient, δi(t) represents the ith Gaussian white noise sequence at time t, E(δi(t)) represents the eigenmode function obtained by applying the optimized empirical mode decomposition to δi(t), and K represents the number of Gaussian white noise sequences;
Step S4: decomposing a data sequence, namely, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, and updating the eigenmode function set to obtain an updated eigenmode function set;
Step S5: quantifying complexity, namely quantifying the complexity of the eigenmode functions in the updated eigenmode function set, and classifying the eigenmode functions in the updated eigenmode function set into eigenmode functions with high complexity and eigenmode functions with low complexity according to the complexity;
Step S6: calculating optimal parameters, namely calculating the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating the optimal matrix weight parameters of the original extreme learning machine through a slime mould algorithm and the low-complexity eigenmode functions, and optimizing the parameters of the original extreme learning machine through the optimal weight and threshold parameters and the optimal matrix weight parameters to obtain the optimized original extreme learning machine;
step S7: and predicting the carbon emission, namely inputting the high-complexity eigenmode function and the low-complexity eigenmode function into an optimized original extreme learning machine for training to obtain a carbon emission prediction result.
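As a rough illustration of the noise-injection formula in step S3, the sketch below builds K noise-added copies of a detrended sequence. Replacing E(δi(t)) with the raw white-noise draw rather than its eigenmode under the optimized decomposition, and using a linear detrend as the preprocessing step, are simplifying assumptions made here for brevity.

```python
import numpy as np
from scipy.signal import detrend

def preprocess_and_add_noise(x, beta=0.2, K=10, seed=0):
    """Step S3 sketch: remove the linear trend (stand-in for trend/periodic removal),
    then form X_i(t) = X(t) + beta * E(delta_i(t)) for i = 1..K.
    Here E(.) is approximated by the white-noise draw itself."""
    rng = np.random.default_rng(seed)
    x_pre = detrend(np.asarray(x, dtype=float))      # preprocessed original sequence
    noise = rng.standard_normal((K, x_pre.size))     # delta_i(t), i = 1..K
    return x_pre[None, :] + beta * noise             # X_i(t), shape (K, len(x))
```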
2. The artificial intelligence based carbon emission monitoring method of claim 1, wherein: in step S2, the step of optimizing the parameter includes:
Step S21: initializing the maximum number of optimization iterations, the parameter α and the parameter K3;
Step S22: calculating a decomposition fitness, wherein the calculation formula of the decomposition fitness is as follows:
Tn=-Kt+Hpe
Wherein Tn represents the decomposition fitness, Kt represents the kurtosis, and Hpe represents the permutation entropy;
Step S23: updating the parameter α and the parameter K3 according to the decomposition fitness;
Step S24: iteratively executing steps S22 to S23, stopping the iteration after the maximum number of optimization iterations is reached, and taking the parameter K3 and the parameter α produced in the last iteration as the parameter K3 and the parameter α of the empirical mode decomposition to obtain the optimized empirical mode decomposition.
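A compact sketch of the parameter search in claim 2 is given below. The decomposition routine `decompose`, the search bounds, and the choice to minimize the fitness Tn = -Kt + Hpe are assumptions; the claim fixes the fitness expression but not the bounds or the optimization direction.

```python
import numpy as np
from scipy.stats import kurtosis

def permutation_entropy(x, order=3):
    """Permutation entropy from order-3 ordinal patterns of a 1-D signal."""
    patterns = np.array([np.argsort(x[i:i + order]) for i in range(len(x) - order + 1)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def decomposition_fitness(signal, K3, alpha, decompose):
    """T_n = -K_t + H_pe evaluated on the first mode returned by `decompose`
    (a hypothetical callable standing in for the empirical mode decomposition)."""
    mode = decompose(signal, K3, alpha)[0]
    return -kurtosis(mode) + permutation_entropy(mode)

def gray_wolf_optimize(signal, decompose, n_wolves=8, iters=30,
                       bounds=((2.0, 12.0), (100.0, 5000.0)), seed=0):
    """Gray wolf search over (K3, alpha); lower fitness is treated as better here."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    wolves = lo + rng.random((n_wolves, 2)) * (hi - lo)

    def fit(w):
        return decomposition_fitness(signal, int(round(w[0])), w[1], decompose)

    for it in range(iters):
        order = np.argsort([fit(w) for w in wolves])
        alpha_w, beta_w, delta_w = wolves[order[:3]]     # three leading wolves
        a = 2.0 - 2.0 * it / iters                       # exploration factor decays to 0
        for i in range(n_wolves):
            moves = []
            for leader in (alpha_w, beta_w, delta_w):
                r1, r2 = rng.random(2), rng.random(2)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                moves.append(leader - A * np.abs(C * leader - wolves[i]))
            wolves[i] = np.clip(np.mean(moves, axis=0), lo, hi)
    best = min(wolves, key=fit)
    return int(round(best[0])), float(best[1])           # optimized K3 and alpha
```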
3. The artificial intelligence based carbon emission monitoring method of claim 1, wherein: in step S4, the step of decomposing the data sequence includes:
step S41: calculating a first group of residual signals, wherein the calculation formula of the first group of residual signals is as follows:
R1=N(Xi(t));
wherein R1 represents the first group of residual signals, Xi(t) represents the data sequence subjected to preprocessing and Gaussian white noise sequence addition at time t, and N(Xi(t)) represents the local mean value of Xi(t);
Step S42: calculating a first group of mode components, wherein the calculation formula of the first group of mode components is as follows:
d1=Xi(t)-R1
Wherein d1 represents the first group of mode components, R1 represents the first group of residual signals, and Xi(t) represents the data sequence subjected to preprocessing and Gaussian white noise sequence addition at time t;
Step S43: adding the Gaussian white noise sequence again, and calculating a second group of residual signals through the local mean value, wherein the calculation formula of the second group of residual signals is as follows:
R2=N(R1+βE(δi(t)));
Wherein R2 represents the second group of residual signals, R1 represents the first group of residual signals, β represents the Gaussian white noise weight coefficient, δi(t) represents the ith Gaussian white noise sequence at time t, E(δi(t)) represents the eigenmode function obtained by empirical mode decomposition of δi(t), and N(R1+βE(δi(t))) represents the local mean value of R1+βE(δi(t));
step S44: calculating a second set of mode components, wherein the calculation formula of the second set of mode components is as follows:
d2=R1-R2=R1-(N(R1+βE(δi(t))));
Where d2 denotes the second group of mode components, R1 denotes the first group of residual signals, R2 denotes the second group of residual signals, β denotes the Gaussian white noise weight coefficient, δi(t) denotes the ith Gaussian white noise sequence at time t, E(δi(t)) denotes the eigenmode function obtained by empirical mode decomposition of δi(t), and N(R1+βE(δi(t))) denotes the local mean value of R1+βE(δi(t));
Step S45: presetting an initial value of k to be 3, and setting a deviation threshold value;
Step S46: calculating a kth residual signal and a kth modal component, and turning to step S47, wherein the calculation formula of the kth residual signal and the kth modal component is as follows:
Rk=N(Rk-1+βE(δk(t))),k=3,4,…,K;
dk=Rk-1-Rk,k=3,4,…,K;
Wherein Rk represents the kth residual signal, dk represents the kth modal component, K represents the number of Gaussian white noise sequences, δk(t) represents the kth Gaussian white noise sequence at time t, E(δk(t)) represents the eigenmode function obtained by empirical mode decomposition of δk(t), i.e., the kth eigenmode function, β represents the Gaussian white noise weight coefficient, Rk-1 represents the (k-1)th residual signal, and N(Rk-1+βE(δk(t))) represents the local mean value of Rk-1+βE(δk(t));
Step S47: forming a residual signal sequence according to the residual signals, setting k=k+1, judging whether a termination condition is met, if so, turning to step S48, otherwise, turning to step S46;
The termination condition is that the residual signal sequence monotonically decreases and the standard deviation is smaller than a deviation threshold, and the calculation formula of the standard deviation is as follows:
σ=||Ik+1-Ik||2/||Ik||2;
Wherein σ represents the standard deviation, Ik represents the kth eigenmode function, Ik+1 represents the (k+1)th eigenmode function, ||Ik||2 represents the L2 norm of the kth eigenmode function, and ||Ik+1-Ik||2 represents the L2 norm of the difference between the (k+1)th eigenmode function and the kth eigenmode function;
Step S48: presetting a mean square threshold, constructing an eigenmode function set from the eigenmode functions, calculating the root mean square value of each eigenmode function, traversing all eigenmode functions, and, if the root mean square value of an eigenmode function is smaller than or equal to the mean square threshold, removing that eigenmode function from the eigenmode function set and adding it to the residual term;
Step S49: adding the residual term to the eigenmode function set as an eigenmode function to obtain the updated eigenmode function set.
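The following sketch mirrors the recursion in steps S41–S47. The local mean operator N(·) is stood in by a centered moving average, E(δk(t)) by a pre-computed list of noise modes, and the "monotonically decreasing" condition is interpreted as decreasing residual norms; steps S48–S49 (the RMS screening and the residual term) are omitted. All of these are simplifying assumptions, not the claimed implementation.

```python
import numpy as np

def local_mean(x, win=11):
    """Stand-in for the local mean operator N(.): a centered moving average."""
    return np.convolve(x, np.ones(win) / win, mode="same")

def iterative_decompose(xi, noise_modes, beta=0.2, dev_threshold=0.2):
    """Steps S41-S47 sketch: peel modal components d_k off the noise-added sequence xi
    until the residual norms decrease monotonically and the relative change between
    consecutive modes (sigma) drops below dev_threshold."""
    R = local_mean(xi)                         # R_1 = N(X_i(t))
    residuals, modes = [R], [xi - R]           # d_1 = X_i(t) - R_1
    for k in range(1, len(noise_modes)):
        R_next = local_mean(R + beta * noise_modes[k])   # R_k = N(R_{k-1} + beta*E(delta_k(t)))
        modes.append(R - R_next)                         # d_k = R_{k-1} - R_k
        residuals.append(R_next)
        sigma = np.linalg.norm(modes[-1] - modes[-2]) / (np.linalg.norm(modes[-2]) + 1e-12)
        monotone = all(np.linalg.norm(residuals[j + 1]) <= np.linalg.norm(residuals[j])
                       for j in range(len(residuals) - 1))
        if monotone and sigma < dev_threshold:           # termination condition of step S47
            break
        R = R_next
    return modes, residuals
```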
4. The artificial intelligence based carbon emission monitoring method of claim 1, wherein: in step S5, the step of quantifying complexity includes:
Step S51: presetting an embedding parameter t1 and a spatial offset w1 of the original data sequence;
Step S52: traversing the eigenmode functions in the updated eigenmode function set, and constructing a recurrence plot by calculating the Chebyshev distances of each eigenmode function;
Step S53: analyzing the recurrence plot by using a binary co-occurrence matrix to obtain a recursive mapping result φ(t1,w1);
Step S54: increasing the value of the embedding parameter t1, i.e., t1=t1+1, and repeating steps S52 and S53 to obtain a new recursive mapping result φ(t1+1,w1);
Step S55: calculating the spatial correlation recursive sample entropy SDRSAMPEN(t1,w1) of each eigenmode function, and evaluating the complexity of each eigenmode function according to its SDRSAMPEN(t1,w1), wherein the calculation formula of SDRSAMPEN(t1,w1) is as follows:
SDRSAMPEN(t1,w1)=-ln(φ(t1+1,w1)/φ(t1,w1));
Wherein SDRSAMPEN(t1,w1) represents the spatial correlation recursive sample entropy of the eigenmode function, φ(t1,w1) represents the recursive mapping result, φ(t1+1,w1) represents the new recursive mapping result, t1 represents the embedding parameter, and w1 represents the spatial offset of the original data sequence;
step S56: each eigenmode function is classified into a high-complexity eigenmode function and a low-complexity eigenmode function according to the complexity of each eigenmode function.
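Below is a small illustration of the complexity measure in claim 4. The recurrence threshold, the use of the recurrence rate as the "recursive mapping result" φ, the log-ratio form of SDRSAMPEN, and the classification threshold are assumptions made so the sketch runs end to end; the published claim text does not reproduce the exact formula.

```python
import numpy as np

def recurrence_rate(signal, embed_dim, delay=1, eps=None):
    """Embed the mode with dimension embed_dim, build a recurrence matrix from
    Chebyshev distances, and return the fraction of recurrent pairs
    (used here as the 'recursive mapping result' phi)."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (embed_dim - 1) * delay
    emb = np.stack([x[i:i + n] for i in range(0, embed_dim * delay, delay)], axis=1)
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)   # Chebyshev distance
    eps = 0.2 * np.std(x) if eps is None else eps
    return float(np.mean(dist < eps))

def sdr_sample_entropy(signal, t1=2, w1=1):
    """SDRSAMPEN(t1, w1) sketch: negative log-ratio of the recurrence rates at embedding
    dimensions t1+1 and t1 (a sample-entropy-style stand-in for the claimed measure)."""
    phi_t = recurrence_rate(signal, t1, delay=w1)
    phi_t1 = recurrence_rate(signal, t1 + 1, delay=w1)
    return float(-np.log((phi_t1 + 1e-12) / (phi_t + 1e-12)))

def classify_modes(modes, threshold=0.5):
    """Step S56 sketch: split modes into high- and low-complexity sets by thresholding
    SDRSAMPEN; the threshold value is an assumption."""
    high = [m for m in modes if sdr_sample_entropy(m) > threshold]
    low = [m for m in modes if sdr_sample_entropy(m) <= threshold]
    return high, low
```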
5. The artificial intelligence based carbon emission monitoring method of claim 1, wherein: in step S6, the step of calculating the optimal parameters includes:
Step S61: calculating optimal weight and threshold parameters, namely calculating the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm;
Step S62: calculating the optimal matrix weight parameters of the original extreme learning machine, specifically, calculating the optimal matrix weight parameters of the original extreme learning machine through a slime mould algorithm and the low-complexity eigenmode functions;
Step S63: and optimizing the parameters of the original extreme learning machine according to the optimal weight, the threshold value parameter and the optimal matrix weight parameter to obtain the optimized original extreme learning machine.
6. The artificial intelligence based carbon emission monitoring method of claim 5, wherein: in step S61, the step of calculating the optimal weight and the threshold parameter includes:
Step S611: initializing parameters, namely initializing the parameters of the particle swarm optimization algorithm and of the original extreme learning machine, wherein the parameters of the particle swarm optimization algorithm comprise the maximum number of iterations, the particle number N′, the learning factors c1 and c2 and the inertia weight factor w2, and the parameters of the original extreme learning machine comprise the weight matrix from the input layer to the hidden layer and the weight matrix from the hidden layer to the output layer;
Step S612: generating an initial particle swarm, specifically, randomly generating the velocity V′ and the position X′ of the N′ particles, and initializing the optimal position Pbest of each particle and the global optimal position Gbest;
Step S613: updating the velocity and position of each particle using the following formulas:
V′i(t+1)=w2*V′i(t)+c1*rand()*(Pbest-X′i(t))+c2*rand()*(Gbest-X′i(t));
X′i(t+1)=X′i(t)+V′i(t+1);
Wherein w2 represents the inertia weight factor, c1 and c2 represent the learning factors, Pbest represents the optimal position of each particle, Gbest represents the global optimal position, V′i(t) represents the velocity of the ith particle at time t, V′i(t+1) represents the velocity of the ith particle at time t+1, i.e., the updated velocity of the ith particle, X′i(t) represents the position of the ith particle at time t, X′i(t+1) represents the position of the ith particle at time t+1, i.e., the updated position of the ith particle, and rand() represents a random number between 0 and 1;
Step S614: updating the parameters of the original extreme learning machine, namely setting the updated position of each particle as the values of the weight matrix from the input layer to the hidden layer and the weight matrix from the hidden layer to the output layer, training the original extreme learning machine with the training data set, and calculating the performance index of each particle on the verification set; if the performance index of a particle on the verification set is better than that of its optimal position Pbest, updating Pbest to the updated position of the particle, and if the performance index of a particle on the verification set is better than that of the global optimal position Gbest, updating Gbest to the updated position of the particle;
Step S615: iteratively executing steps S613 to S614, ending the iteration process after the maximum number of iterations is reached, and taking the values of the weight matrix from the input layer to the hidden layer and the weight matrix from the hidden layer to the output layer at the last iteration as the optimal weight and threshold parameters.
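The sketch below instantiates steps S611–S615. The sigmoid hidden layer, the use of validation mean squared error as the performance index, and the choice to encode both weight matrices directly in each particle (skipping the intermediate re-training on the training set) are simplifications for illustration; only the velocity/position update of step S613 is taken directly from the claim.

```python
import numpy as np

def elm_forward(X, w_in, b, w_out):
    """Single-hidden-layer extreme learning machine forward pass (sigmoid hidden layer)."""
    H = 1.0 / (1.0 + np.exp(-(X @ w_in + b)))
    return H @ w_out

def pso_optimize_elm(X_val, y_val, n_features, n_hidden=16, n_particles=20, iters=50,
                     w2=0.7, c1=1.5, c2=1.5, seed=0):
    """Steps S611-S615 sketch: each particle encodes the input-to-hidden weights, the
    hidden biases and the hidden-to-output weights; fitness is validation MSE (an
    assumed performance index); the update follows step S613."""
    rng = np.random.default_rng(seed)
    dim = n_features * n_hidden + n_hidden + n_hidden    # w_in + b + w_out (single output)
    X = rng.uniform(-1.0, 1.0, (n_particles, dim))
    V = np.zeros_like(X)

    def unpack(p):
        i = n_features * n_hidden
        w_in = p[:i].reshape(n_features, n_hidden)
        b = p[i:i + n_hidden]
        w_out = p[i + n_hidden:].reshape(n_hidden, 1)
        return w_in, b, w_out

    def fitness(p):
        y_hat = elm_forward(X_val, *unpack(p)).ravel()
        return float(np.mean((y_hat - np.ravel(y_val)) ** 2))

    pbest = X.copy()
    pbest_fit = np.array([fitness(p) for p in X])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w2 * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # step S613 velocity update
        X = X + V                                                    # step S613 position update
        fit = np.array([fitness(p) for p in X])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return unpack(gbest)    # optimal weight and threshold parameters
```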
7. The artificial intelligence based carbon emission monitoring method of claim 5, wherein: in step S62, the step of calculating the optimal matrix weight parameters of the original extreme learning machine includes:
Step S621: initializing the parameters of the slime mould algorithm, wherein the parameters of the slime mould algorithm comprise the low-complexity iteration count and the weight coefficient w3 of the slime mould algorithm;
Step S622: initializing an individual position X″ and an individual velocity V″, wherein the individual position X″ is a matrix of M rows and N columns, the element x″ij in the individual position X″ represents the position of the ith individual in the jth dimension, the individual velocity V″ is a matrix of M rows and N columns, and the element v″ij in the individual velocity V″ represents the velocity of the ith individual in the jth dimension;
Step S623: calculating the fitness value of the individual position X″;
Step S624: updating the individual position X″ and the individual velocity V″ to obtain an updated individual position X″ and an updated individual velocity V″, wherein the calculation formulas for updating the individual position X″ and the individual velocity V″ are as follows:
x″ij(t+1)=x″ij(t)+v″ij(t+1);
v″ij(t+1)=w3*v″ij(t)+c3*r3*(pbestij(t)-x″ij(t))+c4*r4*(gbestij(t)-x″ij(t));
Where x″ij(t) represents the position of the ith individual in the jth dimension at time t, x″ij(t+1) represents the position of the ith individual in the jth dimension at time t+1, i.e., the updated position of the ith individual in the jth dimension, v″ij(t) represents the velocity of the ith individual in the jth dimension at time t, v″ij(t+1) represents the velocity of the ith individual in the jth dimension at time t+1, w3 represents the weight coefficient of the slime mould algorithm, c3 and c4 represent learning factors, r3 and r4 represent random numbers between 0 and 1, pbestij(t) represents the individual optimal position of the ith individual in the jth dimension at time t, and gbestij(t) represents the global optimal position in the jth dimension at time t;
Step S625: calculating the fitness value of the updated individual position X″;
Step S626: iteratively executing steps S624 to S625, ending the iteration process after the low-complexity iteration count is reached, and selecting, according to the fitness value, the optimal individual position X″ as the optimal matrix weight parameters of the original extreme learning machine.
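A condensed sketch of steps S621–S626 follows. The pbest/gbest update of step S624 is reproduced as written; the fitness direction (lower is better) and the interpretation of "the optimal individual position" as the best row of X″ are assumptions. Note that the claimed update has the same structure as the particle swarm update above, which is why the two sketches look alike.

```python
import numpy as np

def slime_matrix_search(fitness, M, N, iters=100, w3=0.6, c3=1.5, c4=1.5, seed=0):
    """Steps S621-S626 sketch: M individuals search an N-dimensional weight vector with
    the pbest/gbest update of step S624; `fitness` scores one candidate row
    (lower is better by assumption)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (M, N))            # individual positions x''_ij
    V = np.zeros((M, N))                          # individual velocities v''_ij
    pbest = X.copy()
    pbest_fit = np.array([fitness(row) for row in X])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iters):                        # low-complexity iteration count
        r3, r4 = rng.random((M, N)), rng.random((M, N))
        V = w3 * V + c3 * r3 * (pbest - X) + c4 * r4 * (gbest - X)   # step S624
        X = X + V
        fit = np.array([fitness(row) for row in X])                  # step S625
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest                                  # optimal matrix weight parameters (best row)
```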
8. An artificial intelligence based carbon emission monitoring system for implementing the artificial intelligence based carbon emission monitoring method according to any one of claims 1 to 7, characterized by comprising a data acquisition module, a data preprocessing module, a sequence decomposition module, a complexity quantization module, a parameter optimization module, an optimal parameter calculation module and a carbon emission prediction module;
the data acquisition module is used for data acquisition, specifically, acquiring an original data sequence and sending the original data sequence to the data preprocessing module;
The parameter optimization module is used for parameter optimization, specifically, optimizing the parameter K3 and the parameter α of the empirical mode decomposition through a gray wolf optimization algorithm to obtain the optimized empirical mode decomposition;
The data preprocessing module is used for preprocessing data, specifically, preprocessing the original data sequence to remove trend and periodic components to obtain a preprocessed original data sequence, adding a Gaussian white noise sequence into the preprocessed original data sequence to obtain a data sequence subjected to preprocessing and Gaussian white noise sequence addition, and sending the data sequence subjected to preprocessing and Gaussian white noise sequence addition to the sequence decomposition module;
The sequence decomposition module is used for decomposing a data sequence, specifically, performing optimized empirical mode decomposition on the data sequence subjected to pretreatment and Gaussian white noise sequence addition to obtain an eigenmode function set and a residual signal sequence, updating the eigenmode function set to obtain an updated eigenmode function set, and sending the updated eigenmode function set to the complexity quantization module;
The complexity quantization module is used for quantizing the complexity, specifically, quantizing the complexity of the eigenmode functions in the updated eigenmode function set, classifying the eigenmode functions in the updated eigenmode function set into high-complexity eigenmode functions and low-complexity eigenmode functions according to the complexity, sending the high-complexity eigenmode functions to the carbon emission prediction module, and sending the low-complexity eigenmode functions to the optimal parameter calculation module and the carbon emission prediction module;
The optimal parameter calculation module is used for calculating optimal parameters, specifically, calculating the optimal weight and threshold parameters of the original extreme learning machine through a particle swarm optimization algorithm, calculating the optimal matrix weight parameters of the original extreme learning machine through a slime mould algorithm and the low-complexity eigenmode functions, optimizing the parameters of the original extreme learning machine through the optimal weight and threshold parameters and the optimal matrix weight parameters to obtain the optimized original extreme learning machine, and sending the optimized original extreme learning machine to the carbon emission prediction module;
The carbon emission prediction module is used for predicting the carbon emission, specifically, inputting the high-complexity eigenmode function and the low-complexity eigenmode function into the optimized original extreme learning machine for training, and obtaining a carbon emission prediction result.
CN202410291600.1A 2024-03-14 2024-03-14 Artificial intelligence-based carbon emission monitoring method and system Pending CN118152923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410291600.1A CN118152923A (en) 2024-03-14 2024-03-14 Artificial intelligence-based carbon emission monitoring method and system

Publications (1)

Publication Number Publication Date
CN118152923A true CN118152923A (en) 2024-06-07

Family

ID=91301068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410291600.1A Pending CN118152923A (en) 2024-03-14 2024-03-14 Artificial intelligence-based carbon emission monitoring method and system

Country Status (1)

Country Link
CN (1) CN118152923A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination