CN113806172B - Method and device for processing equipment state parameters, electronic equipment and storage medium - Google Patents


Info

Publication number: CN113806172B (granted publication of application CN113806172A)
Application number: CN202111075693.7A
Authority: CN (China)
Language: Chinese (zh)
Inventor: 蒋冠莹
Assignee (original and current): Beijing Baidu Netcom Science and Technology Co Ltd
Legal status: Active (application granted)

Classifications

    • G06F11/3055: Monitoring arrangements for monitoring the status of the computing system or of a computing system component, e.g. monitoring if the computing system is on, off, available, not available (under G06F11/30 Monitoring; G06F11/00 Error detection, error correction, monitoring)
    • G06N20/00: Machine learning
    • G06N3/045: Combinations of networks (neural network architectures, G06N3/04)
    • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08: Learning methods for neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a method, an apparatus, an electronic device and a storage medium for processing equipment state parameters, relates to the field of artificial intelligence, and further relates to the technical field of industrial equipment management, so as at least to solve the technical problem of low prediction accuracy for the states of associated equipment in existing schemes. The specific implementation scheme is as follows: acquiring a first state parameter of a target device at a first moment; analyzing the first state parameter with a target neural network model to determine a second state parameter of the target device at a second moment, wherein the target neural network model is obtained through machine learning training on multiple groups of data, each group comprising state parameters of the same device at different moments; and performing state monitoring and prediction on the target device based on the second state parameter.

Description

Method and device for processing equipment state parameters, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of artificial intelligence, and further relates to the technical field of industrial equipment management, in particular to a method, a device, electronic equipment and a storage medium for processing equipment state parameters.
Background
Along with the development of the internet of things and edge computing technology, more and more industrial enterprises begin to pay attention to the construction of a data base and the comprehensive management of production equipment, and further hope to realize production automation and intellectualization through a large amount of collected data information. In the exploration of industrial enterprise data intelligence, predicting the status of associated devices is one of the core tasks.
In existing schemes, monitoring the state of industrial equipment requires manual data entry, which makes data specifications hard to unify; the data of associated devices cannot be computed online, and the associations between devices must be searched for manually, all of which degrades the prediction accuracy for the states of associated equipment.
Disclosure of Invention
The disclosure provides a method, a device, an electronic device and a storage medium for processing equipment state parameters, so as to at least solve the technical problem that prediction accuracy of associated equipment states is low in the existing scheme.
According to an aspect of the present disclosure, there is provided a method of processing a device state parameter, comprising: acquiring a first state parameter of target equipment at a first moment; analyzing the first state parameter by using a target neural network model to determine a second state parameter of the target equipment at a second moment, wherein the target neural network model is obtained by machine learning training by using a plurality of groups of data, and each group of data in the plurality of groups of data comprises: status parameters of the same equipment at different moments; and performing state monitoring and prediction on the target equipment based on the second state parameter.
According to yet another aspect of the present disclosure, there is provided an apparatus for processing a device state parameter, comprising: the acquisition module is used for acquiring a first state parameter of the target equipment at a first moment; the determining module is configured to analyze the first state parameter by using a target neural network model, and determine a second state parameter of the target device at a second moment, where the target neural network model is obtained by machine learning training using multiple sets of data, and each set of data in the multiple sets of data includes: status parameters of the same equipment at different moments; and the processing module is used for carrying out state monitoring and prediction on the target equipment based on the second state parameter.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of processing device state parameters set forth in the present disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of processing a device state parameter set forth in the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, performs a method of processing a device state parameter as set forth in the present disclosure.
In the present disclosure, a first state parameter of a target device at a first moment is acquired; the first state parameter is analyzed with a target neural network model to determine a second state parameter of the target device at a second moment, the target neural network model being obtained through machine learning training on multiple groups of data, each group comprising state parameters of the same device at different moments; and state monitoring and prediction are performed on the target device based on the second state parameter. This achieves the purpose of monitoring and predicting the state of the target device at the second moment from its first state parameter at the first moment and the target neural network model, improves the prediction accuracy for associated equipment states, and thereby solves the technical problem of low prediction accuracy for associated equipment states in existing schemes.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a block diagram of a hardware architecture of a computer terminal (or mobile device) for implementing a method of processing device state parameters according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of processing device status parameters according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the structure of a VSTDE model according to an embodiment of the present disclosure;
fig. 4 is a block diagram of an apparatus for processing device state parameters according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the intelligent exploration of industrial enterprise data, the prediction of the state of associated equipment is of great significance. The indexes for measuring the state of the equipment are various, for example, the quality of the product is particularly focused on the standard reaching rate, and the energy consumption of the equipment is more focused on stationarity, mutation points and the like.
The equipment status of an industrial enterprise generally has two characteristics: it is multidimensional and strongly associative. Monitoring of equipment state is multidimensional: it can generally cover equipment operating parameters, the quality of products in the production process, equipment energy-consumption levels, and so on; although these state indexes come from different source objects, their physical logic and operating-data laws are strongly related. Meanwhile, the states of different devices are associated rather than completely independent: if each device is monitored and predicted in isolation, the confidence ellipses become too large and outliers are easily masked, causing the monitoring of equipment states to fail.
In existing schemes, the following three approaches are generally adopted to monitor and predict equipment state:
Method one: dimensionality reduction. For example, dimensionality is reduced with methods such as principal component analysis (Principal Component Analysis, PCA) or t-distributed stochastic neighbor embedding (t-distributed stochastic neighbor embedding, t-SNE). However, these methods do not take the temporal dependency of the data into account, so interpretation of the state prediction results is easily confused.
Method two: a traditional time-series algorithm or a common machine learning model. A time-series algorithm can mine the dependency of a sequence in the time dimension, but lacks an understanding of spatial characteristics and is difficult to reduce to a low-dimensional space for interpretation; a common machine learning model, in turn, places certain requirements on data independence, otherwise the prediction results may be biased. Although these two kinds of models predict better than the first method, they cannot provide an index for measuring state differences or state transitions, and the low-dimensional space appears externally as a "black box" with poor interpretability.
Method three: a fusion model. For example, a parallel model architecture uses a deep time-series model such as Bi-LSTM to capture the time-dimension characteristics of a sequence, uses a graph model, a graph neural network, or a convolutional neural network (Convolutional Neural Network, CNN) to capture spatial topological characteristics, and then fuses the learning results of the two models. However, such fusion models find it difficult to map the raw data directly into a spatio-temporal low-dimensional space, and their utilization of the data information is relatively limited.
In short, existing schemes cannot achieve timely monitoring and accurate prediction of the states of associated equipment.
In accordance with an embodiment of the present disclosure, a method of processing device state parameters is provided, it being noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer executable instructions, and although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The method embodiments provided by the embodiments of the present disclosure may be performed in a mobile terminal, a computer terminal, or a similar electronic device. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein. Fig. 1 shows a block diagram of a hardware architecture of a computer terminal (or mobile device) for implementing a method of processing device state parameters.
As shown in fig. 1, the computer terminal 100 includes a computing unit 101 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 102 or a computer program loaded from a storage unit 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data required for the operation of the computer terminal 100 can also be stored. The computing unit 101, ROM 102, and RAM 103 are connected to each other by a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
Various components in computer terminal 100 are connected to I/O interface 105, including: an input unit 106 such as a keyboard, a mouse, etc.; an output unit 107 such as various types of displays, speakers, and the like; a storage unit 108 such as a magnetic disk, an optical disk, or the like; and a communication unit 109 such as a network card, modem, wireless communication transceiver, etc. The communication unit 109 allows the computer terminal 100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 101 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 101 performs the method of processing device state parameters described herein. For example, in some embodiments, the method of processing device state parameters may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the computer terminal 100 via the ROM 102 and/or the communication unit 109. One or more steps of the methods described herein for processing device state parameters may be performed when a computer program is loaded into RAM 103 and executed by computing unit 101. Alternatively, in other embodiments, the computing unit 101 may be configured to perform the method of processing device state parameters in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
It should be noted here that, in some alternative embodiments, the electronic device shown in fig. 1 described above may include hardware elements (including circuits), software elements (including computer code stored on a computer readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a specific example, and is intended to illustrate the types of components that may be present in the above-described electronic devices.
In the above-described operating environment, the present disclosure provides a method of processing device state parameters as shown in fig. 2, which may be performed by a computer terminal or similar electronic device as shown in fig. 1. Fig. 2 is a flow chart of a method of processing device state parameters provided in accordance with an embodiment of the present disclosure. As shown in fig. 2, the method may include the steps of:
step S20, acquiring a first state parameter of target equipment at a first moment;
the target device may be an industrial device in an industrial scenario, and the first state parameter may include target time dimension state data and space dimension state data. The first state parameters of the target devices corresponding to different industrial scenes at the first moment are different.
For example, in a power distribution network scenario, because electric power has transient and high-frequency properties, the spatial dimension state data in the first state parameter of a power device include inheritance relationships, electrical distances, and the like. In a dye-house scenario, because the steam used for dyeing cloth is markedly inert, the spatial dimension state data in the first state parameter of the dyeing equipment include pipe-network attribute information, such as pipe-network line length, pipe material, number of bends, and valve type.
Specifically, the implementation process of obtaining the first state parameter of the target device at the first time may refer to further description of the embodiments of the present disclosure, which is not repeated.
S22, analyzing the first state parameter by using the target neural network model, and determining a second state parameter of the target equipment at a second moment;
the target neural network model is obtained through machine learning training by using a plurality of groups of data, and each group of data in the plurality of groups of data comprises: status parameters of the same device at different times.
The target neural network model comprises a first partial network model and a second partial network model. The first partial network model can be used to perform dimension-reduction processing on the first state parameter and to reconstruct the dimension-reduction result; the second partial network model can be used to sample the result of the reconstruction processing and to perform network reconstruction processing on the sampling result, thereby determining the second state parameter of the target device at the second moment.
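For illustration only, the two-part structure described above can be sketched as follows in NumPy. The linear encoder/decoder, the layer sizes, and the fixed unit variance are assumptions made for brevity, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 raw state features reduced to 2 latent dimensions.
D_IN, D_LATENT = 8, 2
W_enc = rng.normal(size=(D_IN, D_LATENT)) * 0.1   # inference-network weights
W_dec = rng.normal(size=(D_LATENT, D_IN)) * 0.1   # generative-network weights

def infer(x):
    """First part: reduce the first state parameter to a low-dimensional code."""
    mu = x @ W_enc                 # dimension-reduction result
    log_var = np.zeros_like(mu)    # fixed unit variance, for the sketch only
    return mu, log_var

def generate(mu, log_var):
    """Second part: sample the code, then reconstruct the predicted state."""
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)  # sampling result
    return z @ W_dec               # network reconstruction -> state at the second moment

x_t1 = rng.normal(size=D_IN)       # first state parameter at the first moment
x_t2_hat = generate(*infer(x_t1))  # predicted second state parameter
print(x_t2_hat.shape)              # (8,)
```

In an actual embodiment the two parts would be trained networks; the sketch only shows how the dimension-reduction, sampling, and reconstruction stages compose.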
Specifically, the implementation process of analyzing the first state parameter by using the target neural network model to determine the second state parameter of the target device at the second moment may refer to further description of the embodiments of the present disclosure, which is not repeated.
And step S24, performing state monitoring and prediction on the target equipment based on the second state parameter.
According to the above steps S20 to S24 of the present disclosure, a first state parameter of the target device at a first moment is acquired; the first state parameter is analyzed with the target neural network model to determine a second state parameter of the target device at a second moment, the model having been obtained through machine learning training on multiple groups of data, each group comprising state parameters of the same device at different moments; and state monitoring and prediction are performed on the target device based on the second state parameter. The purpose of monitoring and predicting the state of the target device at the second moment from its first state parameter at the first moment and the target neural network model is thus achieved, the prediction accuracy for associated equipment states is improved, and the technical problem of low prediction accuracy for associated equipment states in existing schemes is solved.
The method of processing the status parameters of the device according to the above embodiment is further described below.
As an alternative embodiment, in step S20, acquiring the first state parameter of the target device at the first moment includes:
Step S201, acquiring time dimension state data and space dimension state data of a target device at a first moment;
specifically, the time dimension status data of the target device at the first time may include real-time target device operation parameters, target device production product quality, and target device energy consumption level. For example, the target device operating parameters may include bearing speed and pressure, valve opening, etc.; the quality of the product produced by the target equipment can comprise the material, texture, color, size, qualification rate and the like of the product; the target plant energy consumption level may include steam consumption, water consumption, electricity consumption, real-time flow, temperature, pressure, etc. The spatial dimension state data of the target device at the first moment may include a static pipe network topology relationship, pipe network attribute information, and the like.
It should be noted that the spatial dimension state data of the target device at the first moment are generally uniform under the same working condition, and may also be adjusted in real time according to industrial requirements; the spatial dimension state data given in the present disclosure are merely examples and are not limiting.
Step S202, preprocessing the time dimension state data and the space dimension state data to obtain the first state parameter, wherein the preprocessing includes: time alignment processing and data completion (imputation) processing.
Specifically, time alignment refers to aligning the timestamps of the acquisition gateways and unifying sampling frequencies, and mainly concerns the time dimension state data. Data completion refers to filling in various missing values according to data operation rules, and is commonly needed when the network or the power supply is unstable. Completion should restore the distribution of the data as far as possible without affecting subsequent model predictions.
Optionally, the data completion is performed with multiple imputation. For example, the time dimension state data of target devices on the same production line are strongly correlated and thus well suited to multiple imputation. If a target device has no strong spatial association, its time dimension state data are also unlikely to share sequence characteristics such as volatility, stationarity, trend, or periodicity.
Optionally, the missing values handled in data completion may include sheet-like (contiguous block) missing and discrete missing. Sheet-like missing needs to be filled with multiple imputation, which relies on the correlation between different sequences; discrete missing may be filled either with multiple imputation or with statistical features of the historical data itself, for example the mean, median, mean of the upper and lower quartiles, mode, or extrema of the historical data.
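The discrete-missing case can be sketched with NumPy. The median-fill rule below is just one of the statistical options listed above, and the sensor readings are hypothetical:

```python
import numpy as np

def fill_discrete_missing(series: np.ndarray) -> np.ndarray:
    """Fill isolated NaN gaps with the median of the observed history."""
    filled = series.copy()
    mask = np.isnan(filled)
    filled[mask] = np.nanmedian(series)  # median computed over non-missing values
    return filled

# hypothetical temperature readings with two discrete gaps
readings = np.array([20.1, 19.8, np.nan, 20.3, np.nan, 20.0])
print(fill_discrete_missing(readings))
```

Sheet-like gaps would instead require multiple imputation across correlated sequences, which is not shown here.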
As an alternative embodiment, the target neural network model includes: a first partial network model and a second partial network model.
For example, the first partial network model may be an inferred network (Inference Network) and the second partial network model may be a generated network (Generative Network).
In step S22, analyzing the first state parameter with the target neural network model, and determining the second state parameter of the target device at the second moment includes:
step S221, performing dimension reduction processing on the first state parameter by using the first part of network model to obtain a dimension reduction result, and performing reconstruction processing on the dimension reduction result to obtain a reconstruction result;
and step S222, sampling the reconstruction result by using the second partial network model to obtain a sampling result, and performing network reconstruction processing on the sampling result to obtain a second state parameter.
As an optional embodiment, the method for processing a device status parameter set forth in the present disclosure further includes:
step S26, determining model loss based on the first state parameter, the second state parameter and a third state parameter, wherein the third state parameter is a real state parameter corresponding to the second state parameter;
And S28, performing iterative optimization on the initial neural network model by using model loss to obtain a target neural network model.
As an alternative embodiment, determining the model loss based on the first state parameter, the second state parameter, and the third state parameter in step S26 includes:
step S261, acquiring a first loss, a second loss and a third loss based on the first state parameter, the second state parameter and the third state parameter;
the first loss is the sum of an error parameter and a similarity parameter, the error parameter is the mean square error of a second state parameter and a third state parameter, the similarity parameter is the similarity between the variation posterior distribution and the prior distribution, the second loss is a time sequence autocorrelation loss, and the third loss is a space topology loss.
Step S262, determining model loss by using the first loss, the second loss and the third loss.
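The composition of the model loss in steps S261 to S262 can be sketched as follows. The disclosure specifies only the structure (MSE plus a posterior/prior similarity term, a time-series autocorrelation loss, and a topology-weighted spatial loss); the concrete functional forms below, such as a standard-normal KL term, 1 minus the Pearson correlation of the latent codes, and weighted squared deviation from associated devices, are illustrative assumptions:

```python
import numpy as np

def first_loss(x_hat, x_true, mu, log_var):
    """Error parameter (MSE of predicted vs. real state) plus a KL-style
    similarity between the variational posterior N(mu, exp(log_var)) and a
    standard-normal prior (assumed prior)."""
    mse = np.mean((x_hat - x_true) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return mse + kl

def second_loss(z_t1, z_t2):
    """Time-series autocorrelation loss: 1 - Pearson correlation between the
    hidden-space representations at the two moments (illustrative choice)."""
    return 1.0 - np.corrcoef(z_t1, z_t2)[0, 1]

def third_loss(x_hat, x_neighbors, weights):
    """Spatial topology loss: deviation of the predicted state from the states
    of associated devices, weighted by the target (penalty) weights."""
    diffs = np.mean((x_neighbors - x_hat) ** 2, axis=1)
    return float(np.dot(weights, diffs))

def model_loss(x_hat, x_true, mu, log_var, z_t1, z_t2, x_neighbors, weights):
    return (first_loss(x_hat, x_true, mu, log_var)
            + second_loss(z_t1, z_t2)
            + third_loss(x_hat, x_neighbors, weights))

# smoke example: perfect reconstruction and perfectly correlated latent codes
x0 = np.zeros(4)
print(model_loss(x0, x0, np.zeros(2), np.zeros(2),
                 np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0, 2.0]),
                 np.zeros((2, 4)), np.array([0.5, 0.5])))
```

In training, this combined loss would drive the iterative optimization of step S28.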
As an alternative embodiment, obtaining the similarity parameter based on the first state parameter includes:
step S2611, acquiring a first sampling parameter corresponding to the first state parameter, where the first sampling parameter is a hidden space representation of the first state parameter in the target dimension;
in step S2612, the similarity parameter is calculated using the first sampling parameter.
As an alternative embodiment, obtaining the second loss based on the first state parameter and the third state parameter comprises:
step S2613, acquiring a first sampling parameter corresponding to the first state parameter and a second sampling parameter corresponding to the third state parameter, where the second sampling parameter is a hidden space representation of the third state parameter in the target dimension;
in step S2614, the second loss is calculated using the first sampling parameter and the second sampling parameter.
As an alternative embodiment, obtaining the third loss based on the second state parameter comprises:
step S2615, determining a target weight based on the spatial topology information, wherein the target weight is used to determine a spatial association degree between different devices;
Optionally, the spatial topology information may be obtained from a pre-built theoretical topology map, or from an actual topology adjusted in real time.
The above-described target weights may also be described as penalty weights for measuring the degree of spatial association between different devices. For example, in a printing and dyeing process, the penalty weights may take into account the pipe-network line distance, the number of bends, the pipe materials, the valve types, and the like, each of which may contribute positively or negatively.
In step S2616, a third loss is calculated using the target weight and the second state parameter.
As an alternative embodiment, the time difference between the second time and the first time is a custom hysteresis time order, and the time difference is used to adjust the dependency of the target device in the time dimension.
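As a rough illustration of how such lagged training pairs could be assembled, the sketch below pairs each first state parameter x(t) with the real state x(t+δ) observed δ steps later. The helper name `make_lagged_pairs` and the toy readings are assumptions for illustration, not part of the disclosure:

```python
import numpy as np

def make_lagged_pairs(series, delta):
    """Pair each state x(t) with the real state x(t+delta) observed delta
    steps later; delta is the custom hysteresis time order (delta >= 1)."""
    if delta < 1:
        raise ValueError("hysteresis time order delta must be >= 1")
    x_t = series[:-delta]       # first state parameters x(t)
    x_t_delta = series[delta:]  # third (real) state parameters x(t+delta)
    return x_t, x_t_delta

# 10 time steps of readings for one device, 3 sensor channels
readings = np.arange(30, dtype=float).reshape(10, 3)
x_t, x_td = make_lagged_pairs(readings, delta=2)
```

Increasing δ stretches the temporal dependency the model is asked to capture, at the cost of fewer usable training pairs.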
The implementation process of analyzing the first state parameter by using the target neural network model to determine the second state parameter of the target device at the second time is described below, taking as an example a target neural network model that is a Variational Spatial-Temporal Dynamics Encoder (VSTDE) model.
The VSTDE model proposed by the present disclosure is built on the basis of the Variational Auto-Encoder (VAE) model and the Variational Dynamics Encoder (VDE) model. The VAE model can reduce the original data to a low-dimensional hidden space through an unsupervised inference network and use its generative capability to generate or repair images or texts; it has extensive research and application results in fields such as natural language processing (Natural Language Processing, NLP), speech, and recommendation algorithms. The VDE model is essentially a variant of the VAE model that introduces time sequence information: instead of generating simulated data for the same moment, it approximates the data after a delay of δ, with the aim of capturing the nonlinear dynamic transfer of molecules in molecular dynamics, and it has good interpretability and generation capability. That is, the VDE model learns a state transition process rather than the self-reconstruction process of the VAE model.
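The difference between the two objectives can be sketched numerically. The identity "encoder/decoder" below is purely illustrative (a stand-in, not the disclosed networks); it only shows that the VAE target is the same x(t) while the VDE target is the later x(t+δ):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))        # device states at time t
x_next = np.roll(x, -1, axis=0)      # states one step later (delta = 1)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Identity encoder/decoder stand-ins, just to contrast the two objectives.
z = x.copy()                 # "latent" code
vae_loss = mse(z, x)         # VAE target: reconstruct the same x(t)
vde_loss = mse(z, x_next)    # VDE target: approximate x(t+delta)
```

A perfect self-reconstruction (here trivially achieved by the identity) still leaves the state-transition objective unsatisfied, which is exactly the gap the VDE-style target addresses.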
FIG. 3 is a schematic diagram of a VSTDE model according to an embodiment of the present disclosure. As shown in FIG. 3, the model includes an inference network containing 3 hidden layers and a generation network containing 3 hidden layers, each hidden layer being a user-defined deep neural network (Deep Neural Network, DNN), and D_* denotes a production line/device/apparatus or a measurement point of a production line/device/apparatus. The implementation process of the VSTDE model is shown in the following formula (1):
in formula (1), p, q represent a conditional probability distribution (probability distribution); delta, delta is larger than or equal to 1, which represents the hysteresis time order, can be subjected to custom setting according to actual requirements and is used for adjusting the dependency relationship of associated equipment in the time dimension; x is x (t) Representing a first state parameter at a time t;representing a second state parameter at the generated t+delta time; x is x (t+δ) Representing a third state parameter at a time t+delta, wherein the third state parameter is a real state parameter corresponding to the second state parameter; z (t) Is a first sampling parameter; mu (·) function and sigma 2 The (-) function can be used to reconstruct the hidden space in conjunction with gaussian noise interference; q φ (z (t) |x (t) ) Is to infer a first state parameter x at a time t by the network (t) Performing dimension reduction processing to obtain dimension reduction result, namely a first state parameter x (t) Hidden spatial representation z in the target dimension (t) Phi is a dimension reduction parameter; />Is to generate a hidden space representation z of the moment t of the network (t) Under the action of the generation parameter theta, carrying out network reconstruction processing according to the original dimension to obtain a second state parameter ++delta at the moment t+delta>Second state parameter->May be combined with a third state parameter x (t+δ) And (5) performing tuning.
The original data is split into N batches, each batch comprising the first state parameters x^(t) and the third state parameters x^(t+δ) corresponding to M target devices or measurement points. The encoder of the inference network computes the low-dimensional hidden space representation z^(t) of the first state parameter, i.e. the first sampling parameter, and the low-dimensional hidden space representation z^(t+δ) of the third state parameter, i.e. the second sampling parameter. Gaussian noise is added to z^(t) at time t, and the decoder of the generation network then computes the second state parameter x̂^(t+δ) at time t+δ.
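A minimal sketch of this encode–sample–decode pipeline follows. The `tanh` encoder, fixed log-variance, and linear decoder are toy stand-ins for the DNN hidden layers; all names, weights, and shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def inference_network(x, W_enc):
    """Toy encoder producing the mean of the low-dimensional hidden
    representation; a fixed log-variance stands in for the sigma^2(.) head."""
    mu = np.tanh(x @ W_enc)
    log_var = np.full_like(mu, -2.0)
    return mu, log_var

def sample_latent(mu, log_var):
    """Add Gaussian noise via the reparameterisation z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def generation_network(z, W_dec):
    """Toy decoder reconstructing back to the original dimension."""
    return z @ W_dec

x_t = rng.normal(size=(8, 6))            # batch of first state parameters x(t)
W_enc = 0.1 * rng.normal(size=(6, 2))
W_dec = 0.1 * rng.normal(size=(2, 6))
mu, log_var = inference_network(x_t, W_enc)
z_t = sample_latent(mu, log_var)         # first sampling parameter z(t)
x_hat = generation_network(z_t, W_dec)   # generated second state parameter
```

The reparameterised sampling step is what lets gradient information flow through the Gaussian noise during training.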
Further, the model loss of the VSTDE model is determined based on the first state parameter x^(t), the second state parameter x̂^(t+δ) and the third state parameter x^(t+δ).
Specifically, the first loss L_reconstruction, the second loss L_temporal and the third loss L_spatial of the VSTDE model are determined based on the first state parameter x^(t), the second state parameter x̂^(t+δ) and the third state parameter x^(t+δ). The calculation process of the model loss L of the VSTDE model is shown in the following formula (2):
L = L_reconstruction + L_temporal + L_spatial    formula (2)
In formula (2), the first loss L_reconstruction is the sum of an error parameter and a similarity parameter, where the error parameter is the mean square error of the second state parameter x̂^(t+δ) and the third state parameter x^(t+δ), and the similarity parameter is the similarity between the variational posterior distribution and the prior distribution; the second loss L_temporal is the time sequence autocorrelation loss; the third loss L_spatial is the space topology loss. The calculation process of the first loss L_reconstruction is shown in the following formula (3):
In formula (3), the first term represents the mean square error of the second state parameter x̂^(t+δ) and the third state parameter x^(t+δ), and the second term represents the similarity parameter, which comprises the similarity between the variational posterior distribution and the prior distribution.
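Assuming the similarity term is the usual closed-form KL divergence between a diagonal-Gaussian posterior and a standard-normal prior — a common choice in VAE-family models, though formula (3) is not reproduced here and may differ — the first loss could be sketched as:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior: the usual
    closed-form posterior/prior similarity term (an assumed choice here)."""
    per_sample = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)
    return float(np.mean(per_sample))

def reconstruction_loss(x_hat, x_true, mu, log_var):
    """Error parameter (MSE) plus similarity parameter (KL)."""
    mse = float(np.mean((x_hat - x_true) ** 2))
    return mse + kl_to_standard_normal(mu, log_var)

# When the posterior equals the prior and the reconstruction is exact,
# both terms vanish.
mu = np.zeros((4, 2)); log_var = np.zeros((4, 2))
x_hat = np.ones((4, 3)); x_true = np.ones((4, 3))
loss = reconstruction_loss(x_hat, x_true, mu, log_var)
```

Any mismatch in either term (imperfect reconstruction, or a posterior drifting from the prior) makes the loss strictly positive.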
The first sampling parameter z^(t) corresponding to the first state parameter x^(t) and the second sampling parameter z^(t+δ) corresponding to the third state parameter x^(t+δ) are acquired, where the second sampling parameter z^(t+δ) is the hidden space representation of the third state parameter x^(t+δ) in the target dimension; the second loss L_temporal is calculated using the first sampling parameter z^(t) and the second sampling parameter z^(t+δ). The calculation process of the second loss L_temporal is shown in the following formula (4):
In formula (4), the function ρ(·) used to calculate the second loss L_temporal may follow the autocorrelation function of the VDE model, a robust autocorrelation function, or the like, and δ is the lag time order. For example, in the inference network, Gaussian noise based on μ(·) and σ²(·) can add a degree of randomness to the hidden space representation, where the Gaussian noise parameters can be obtained through model training.
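One plausible stand-in for ρ(·) is a Pearson-style autocorrelation between latent codes δ steps apart, with the loss rewarding strong correlation. This is an illustrative sketch, not the patented definition of formula (4):

```python
import numpy as np

def autocorrelation(z_t, z_t_delta):
    """Per-dimension Pearson-style correlation between latent codes that
    are delta steps apart (one plausible rho(.) stand-in)."""
    a = z_t - z_t.mean(axis=0)
    b = z_t_delta - z_t_delta.mean(axis=0)
    num = np.sum(a * b, axis=0)
    den = np.sqrt(np.sum(a**2, axis=0) * np.sum(b**2, axis=0)) + 1e-12
    return num / den

def temporal_loss(z_t, z_t_delta):
    """Penalise weak autocorrelation so the latent space preserves dynamics."""
    return float(-np.mean(autocorrelation(z_t, z_t_delta)))

# Smoothly evolving latents are highly autocorrelated at a small lag.
t = np.linspace(0, 4 * np.pi, 200)
z = np.stack([np.sin(t), np.cos(t)], axis=1)
loss = temporal_loss(z[:-1], z[1:])       # delta = 1
```

Minimising this term pushes the encoder toward latent coordinates whose dynamics are slow and predictable over the chosen lag δ.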
A target weight W is determined based on the spatial topology information, and the third loss L_spatial is calculated using the target weight W and the second state parameter x̂^(t+δ). The calculation process of the third loss L_spatial is shown in the following formula (5):
In formula (5), W is the target weight determined based on the spatial topology information, where the target weight is used to determine the spatial association degree between different devices. The calculation process of W is shown in the following formula (6):
In formula (6), W_{p,p′} denotes the distance weight between devices p and p′, where p′ denotes a device other than p that has a direct connection relation to p, for example, a device on the same apparatus or production line. dist(p, p′) is a measure of the "distance" between devices p and p′, and may specifically be a function of the attribute information of the pipe network connecting the devices; for example, in the printing and dyeing process, the function should increase with an increase of the pipe-network line distance, an increase of the number of bends, common pipe-network materials, and the like, and should otherwise decrease. The function is similar in other industrial scenarios and will not be repeated here.
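Since formula (6) itself is not reproduced here, the following is only one plausible reading: inverse-distance weights over the devices directly connected to p, with `dist` growing with line length and number of bends as the text describes. The function names and all coefficients are illustrative assumptions:

```python
import numpy as np

def pipe_distance(line_len, n_bends, alpha=1.0, beta=0.5):
    """Hypothetical dist(p, p'): grows with pipe-network line length and
    number of bends; alpha and beta are made-up coefficients."""
    return alpha * line_len + beta * n_bends

def target_weights(neighbors):
    """Inverse-distance weights over the devices directly connected to p,
    normalised over that neighborhood; one plausible reading of formula (6),
    not the patented definition."""
    d = np.array([pipe_distance(l, b) for l, b in neighbors])
    w = 1.0 / d
    return w / w.sum()

# Two devices directly connected to p: (line length, number of bends).
w = target_weights([(10.0, 2), (40.0, 8)])
```

Under this reading, a nearby, straight pipe run yields a large weight (strong spatial association), while a long, winding run yields a small one.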
For each batch of data, gradient information is back-propagated according to the loss function L and the network structure, and the network parameters are updated, so that the value of the loss function is continuously reduced.
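The per-batch update loop can be sketched with a toy linear model and an analytic gradient, a stand-in for back-propagation through the full VSTDE networks (all data and the learning rate here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))                     # one batch of inputs
target = x @ np.array([[2.0], [-1.0], [0.5]])    # surrogate "real" states
W = np.zeros((3, 1))                             # network parameters to learn

losses = []
for epoch in range(200):                         # mini-batching omitted
    pred = x @ W
    grad = 2 * x.T @ (pred - target) / len(x)    # back-propagated gradient
    W -= 0.1 * grad                              # parameter update
    losses.append(float(np.mean((pred - target) ** 2)))
```

The loss trace decreases monotonically toward zero here because the toy target is exactly realisable; the real model follows the same loop with the composite loss L of formula (2).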
The application scenario of the VSTDE model proposed in the present disclosure may include:
(1) State assessment by low-dimensional hidden space representation: the original data is mapped to a low-dimensional hidden space representation through the inference network, and the relative distances of associated devices in the low-dimensional space can be seen through t-SNE visualization in a two-dimensional or three-dimensional space, that is, the device's own state and relative state become known (after visualization, strongly associated points converge into clusters, and each point in the low-dimensional space maps to original data points with different spatio-temporal characteristics).
(2) Sequence prediction by generating data: the generative capability of the generation network is used to predict device operation data δ steps ahead, as a production-level reference. The model can thus approximate the function of a prediction model and assist in judging the distribution interval of future data.
(3) The data is enhanced, and the hidden space representation can be used as a characteristic index for other modeling requirements.
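For scenario (1), the relative states that t-SNE would reveal visually can also be read off directly as pairwise distances between latent codes; the latent values below are hypothetical:

```python
import numpy as np

def relative_states(z):
    """Pairwise Euclidean distances between devices' latent codes: a small
    distance suggests strongly associated devices (the cluster structure a
    t-SNE plot would reveal visually)."""
    diff = z[:, None, :] - z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# Hypothetical latent codes for three devices; the first two are associated.
z = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
D = relative_states(z)
```

Here devices 0 and 1 sit close together in the hidden space while device 2 is far from both, mirroring the class-cluster picture described above.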
Specifically, the VSTDE can assist in industrial demand scenes such as process optimization, working condition identification, production safety, energy consumption level monitoring and the like in actual production.
Because the equipment states in the industrial enterprises have strong relevance in both time and space dimensions, the method and the device integrate the characteristics into the VSTDE model, and can remarkably improve the prediction accuracy and the effectiveness of the VSTDE model.
The method comprises the steps of obtaining a first state parameter of target equipment at a first moment; analyzing the first state parameter by using a target neural network model to determine a second state parameter of the target equipment at a second moment, wherein the target neural network model is obtained by machine learning training by using a plurality of groups of data, and each group of data in the plurality of groups of data comprises: status parameters of the same equipment at different moments; the target equipment is subjected to state monitoring and prediction based on the second state parameter, so that the purpose of carrying out state monitoring and prediction on the target equipment at the second moment based on the first state parameter of the target equipment at the first moment and the target neural network model is achieved, the effect of improving the prediction precision of the associated equipment state is achieved, and the technical problem that the prediction precision of the associated equipment state is low in the existing scheme is solved.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present disclosure may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the various embodiments of the present disclosure.
The disclosure further provides a device for processing a device status parameter, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of an apparatus for processing device status parameters according to one embodiment of the present disclosure. As shown in Fig. 4, an apparatus 400 for processing device status parameters includes: an obtaining module 401, a determining module 402, and a processing module 403.
An obtaining module 401, configured to obtain a first state parameter of a target device at a first moment;
the determining module 402 is configured to analyze the first state parameter with a target neural network model, and determine a second state parameter of the target device at a second time, where the target neural network model is obtained by machine learning training using multiple sets of data, and each set of data in the multiple sets of data includes: status parameters of the same equipment at different moments;
a processing module 403, configured to perform status monitoring and prediction on the target device based on the second status parameter.
Optionally, the acquiring module 401 is configured to acquire the first state parameter of the target device at the first moment by: acquiring time dimension state data and space dimension state data of the target device at the first moment; and preprocessing the time dimension state data and the space dimension state data to obtain the first state parameter, wherein the preprocessing comprises: time alignment processing and data alignment processing.
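A minimal sketch of the time-alignment step, using linear interpolation onto a common time grid. `np.interp` here is an assumed stand-in for whatever alignment the implementation actually performs, and the timestamps and values are made up:

```python
import numpy as np

def time_align(ts_a, val_a, ts_b, val_b, grid):
    """Align two sensors' readings onto a common time grid by linear
    interpolation, so device states become comparable per time step."""
    return np.interp(grid, ts_a, val_a), np.interp(grid, ts_b, val_b)

# Sensor A reports sparsely, sensor B irregularly; align both to 1 s steps.
grid = np.arange(0.0, 10.0, 1.0)
a_t = np.array([0.0, 5.0, 10.0]); a_v = np.array([0.0, 5.0, 10.0])
b_t = np.array([0.0, 2.0, 10.0]); b_v = np.array([1.0, 1.0, 1.0])
a_aligned, b_aligned = time_align(a_t, a_v, b_t, b_v, grid)
```

After alignment, both series share one timestamp axis and can be stacked into the (time, device/point) layout the batching step expects.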
Optionally, the target neural network model includes a first partial network model and a second partial network model, and the determining module 402 is configured to analyze the first state parameter using the target neural network model and determine the second state parameter of the target device at the second time by: performing dimension reduction processing on the first state parameter by using the first partial network model to obtain a dimension reduction result, and performing reconstruction processing on the dimension reduction result to obtain a reconstruction result; and sampling the reconstruction result by using the second partial network model to obtain a sampling result, and performing network reconstruction processing on the sampling result to obtain the second state parameter.
Optionally, the determining module 402 is further configured to determine a model loss based on the first state parameter, the second state parameter, and a third state parameter, where the third state parameter is a real state parameter corresponding to the second state parameter; the apparatus 400 for processing device state parameters further comprises: and the optimization module 404 is configured to perform iterative optimization on the initial neural network model by using model loss to obtain a target neural network model.
Optionally, the determining module 402 is configured to determine the model loss based on the first state parameter, the second state parameter, and the third state parameter by: acquiring a first loss, a second loss and a third loss based on the first state parameter, the second state parameter and the third state parameter, wherein the first loss is the sum of an error parameter and a similarity parameter, the error parameter is the mean square error of the second state parameter and the third state parameter, the similarity parameter is the similarity between the variational posterior distribution and the prior distribution, the second loss is a time sequence autocorrelation loss, and the third loss is a space topology loss; and determining the model loss using the first loss, the second loss and the third loss.
Optionally, the obtaining module 401 is further configured to obtain the similarity parameter based on the first state parameter includes: acquiring a first sampling parameter corresponding to a first state parameter, wherein the first sampling parameter is a hidden space representation of the first state parameter in a target dimension; and calculating a similarity parameter by adopting the first sampling parameter.
Optionally, the obtaining module 401 is further configured to obtain the second loss based on the first state parameter and the third state parameter includes: acquiring a first sampling parameter corresponding to the first state parameter and a second sampling parameter corresponding to the third state parameter, wherein the second sampling parameter is a hidden space representation of the third state parameter in the target dimension; and calculating a second loss by adopting the first sampling parameter and the second sampling parameter.
Optionally, the obtaining module 401 is further configured to obtain the third loss based on the second state parameter includes: determining target weights based on the spatial topology information, wherein the target weights are used for determining the spatial association degree between different devices; and calculating by using the target weight and the second state parameter to obtain a third loss.
Optionally, the time difference between the second time and the first time is a custom hysteresis time order, and the time difference is used for adjusting the dependency relationship of the target device in the time dimension.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
According to an embodiment of the present disclosure, there is also provided an electronic device comprising a memory having stored therein computer instructions and at least one processor configured to execute the computer instructions to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in the present disclosure, the above processor may be configured to perform the following steps by a computer program:
s1, acquiring a first state parameter of target equipment at a first moment;
s2, analyzing the first state parameter by using the target neural network model, and determining a second state parameter of the target equipment at a second moment;
and S3, performing state monitoring and prediction on the target equipment based on the second state parameter.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
According to an embodiment of the present disclosure, the present disclosure also provides a non-transitory computer readable storage medium having stored therein computer instructions, wherein the computer instructions are configured to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described non-transitory storage medium may be configured to store a computer program for performing the steps of:
s1, acquiring a first state parameter of target equipment at a first moment;
s2, analyzing the first state parameter by using the target neural network model, and determining a second state parameter of the target equipment at a second moment;
and S3, performing state monitoring and prediction on the target equipment based on the second state parameter.
Alternatively, in the present embodiment, the non-transitory computer readable storage medium described above may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product. Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the foregoing embodiments of the present disclosure, the descriptions of the various embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a usb disk, a read-only memory (ROM), a random-access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, etc., which can store program codes.
The foregoing is merely a preferred embodiment of the present disclosure, and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present disclosure, which are intended to be comprehended within the scope of the present disclosure.

Claims (9)

1. A method of processing device state parameters, comprising:
acquiring a first state parameter of target equipment at a first moment;
performing dimension reduction processing on the first state parameter by using a first part of network model in the target neural network model to obtain a dimension reduction result, and performing reconstruction processing on the dimension reduction result to obtain a reconstruction result;
sampling the reconstruction result by using a second part of network model in the target neural network model to obtain a sampling result, and performing network reconstruction processing on the sampling result to obtain a second state parameter of the target equipment at a second moment, wherein the target neural network model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: status parameters of the same equipment at different moments;
performing state monitoring and prediction on the target equipment based on the second state parameter;
Wherein the method further comprises:
acquiring a first loss, a second loss and a third loss based on the first state parameter, the second state parameter and the third state parameter, wherein the first loss is the sum of an error parameter and a similarity parameter, the error parameter is the mean square error of the second state parameter and the third state parameter, the similarity parameter is the similarity between a variational posterior distribution and a prior distribution, the second loss is a time sequence autocorrelation loss, the third loss is a space topology loss, and the third state parameter is a real state parameter corresponding to the second state parameter;
determining a model loss using the first loss, the second loss, and the third loss;
and carrying out iterative optimization on the initial neural network model by adopting the model loss to obtain the target neural network model.
2. The method of claim 1, wherein obtaining the first state parameter of the target device at the first time instant comprises:
acquiring time dimension state data and space dimension state data of the target equipment at the first moment;
preprocessing the time dimension state data and the space dimension state data to obtain the first state parameter, wherein the preprocessing comprises: time alignment processing and data alignment processing.
3. The method of claim 1, wherein obtaining the similarity parameter based on the first state parameter comprises:
acquiring a first sampling parameter corresponding to the first state parameter, wherein the first sampling parameter is a hidden space representation of the first state parameter in a target dimension;
and calculating the similarity parameter by adopting the first sampling parameter.
4. The method of claim 1, wherein obtaining the second penalty based on the first state parameter and the third state parameter comprises:
acquiring a first sampling parameter corresponding to the first state parameter and a second sampling parameter corresponding to the third state parameter, wherein the second sampling parameter is a hidden space representation of the third state parameter in a target dimension;
and calculating the second loss by adopting the first sampling parameter and the second sampling parameter.
5. The method of claim 1, wherein obtaining the third loss based on the second state parameter comprises:
determining target weights based on the spatial topology information, wherein the target weights are used for determining the spatial association degree between different devices;
and calculating the third loss by adopting the target weight and the second state parameter.
6. The method of claim 1, wherein a time difference between the second time instant and the first time instant is a custom hysteresis time order, the time difference being used to adjust a dependency of the target device in a time dimension.
7. An apparatus for processing device state parameters, comprising:
the acquisition module is used for acquiring a first state parameter of the target equipment at a first moment;
a determining module for: performing dimension reduction processing on the first state parameter by using a first part of network model in the target neural network model to obtain a dimension reduction result, and performing reconstruction processing on the dimension reduction result to obtain a reconstruction result; sampling the reconstruction result by using a second part of network model in the target neural network model to obtain a sampling result, and performing network reconstruction processing on the sampling result to obtain a second state parameter of the target equipment at a second moment, wherein the target neural network model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: status parameters of the same equipment at different moments;
the processing module is used for carrying out state monitoring and prediction on the target equipment based on the second state parameter;
The acquisition module is further configured to acquire a first loss, a second loss, and a third loss based on the first state parameter, the second state parameter, and the third state parameter, where the first loss is the sum of an error parameter and a similarity parameter, the error parameter is the mean square error of the second state parameter and the third state parameter, the similarity parameter is the similarity between a variational posterior distribution and a prior distribution, the second loss is a time sequence autocorrelation loss, the third loss is a space topology loss, and the third state parameter is a real state parameter corresponding to the second state parameter;
the determination module is further to determine a model loss using the first loss, the second loss, and the third loss;
the device also comprises an optimization module, which is used for carrying out iterative optimization on the initial neural network model by adopting the model loss to obtain the target neural network model.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
9. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
CN202111075693.7A 2021-09-14 2021-09-14 Method and device for processing equipment state parameters, electronic equipment and storage medium Active CN113806172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111075693.7A CN113806172B (en) 2021-09-14 2021-09-14 Method and device for processing equipment state parameters, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111075693.7A CN113806172B (en) 2021-09-14 2021-09-14 Method and device for processing equipment state parameters, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113806172A CN113806172A (en) 2021-12-17
CN113806172B true CN113806172B (en) 2024-02-06

Family

ID=78895320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111075693.7A Active CN113806172B (en) 2021-09-14 2021-09-14 Method and device for processing equipment state parameters, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113806172B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192172A1 (en) * 2018-04-04 2019-10-10 Goertek Inc. Attitude prediction method and apparatus, and electronic device
CN111860763A (en) * 2020-06-05 2020-10-30 Beijing Didi Infinity Technology and Development Co., Ltd. Model training method, parameter prediction method, model training device, parameter prediction device, electronic equipment and storage medium
CN113191478A (en) * 2020-01-14 2021-07-30 Alibaba Group Holding Limited Training method, device and system of neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Distributed parameter circuit model for transmission line; Jian Luo et al.; IEEE Xplore; full text *
Design and Implementation of a Target Intent Recognition Model on the TensorFlow Platform; Li Ning'an; Zhang Jian; Zhou Ti; Ship Electronic Engineering (05); full text *

Also Published As

Publication number Publication date
CN113806172A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
Ma et al. A hybrid attention-based deep learning approach for wind power prediction
CN111091233B (en) Short-term wind power prediction modeling method for wind power plant
Zhang et al. Energy theft detection in an edge data center using threshold-based abnormality detector
Sundararajan et al. Regression and generalized additive model to enhance the performance of photovoltaic power ensemble predictors
CN113033780A (en) Cloud platform resource prediction method based on double-layer attention mechanism
Liao et al. Data-driven missing data imputation for wind farms using context encoder
Wang et al. Swarm Intelligence‐Based Hybrid Models for Short‐Term Power Load Prediction
CN115983497A (en) Time sequence data prediction method and device, computer equipment and storage medium
Zhu et al. A coupled model for dam foundation seepage behavior monitoring and forecasting based on variational mode decomposition and improved temporal convolutional network
CN112241802A (en) Interval prediction method for wind power
CN117131022B (en) Heterogeneous data migration method of electric power information system
CN117688362A (en) Photovoltaic power interval prediction method and device based on multivariate data feature enhancement
CN113806172B (en) Method and device for processing equipment state parameters, electronic equipment and storage medium
CN117154680A (en) Wind power prediction method based on non-stationary transducer model
Ye et al. TS2V: A transformer-based Siamese network for representation learning of univariate time-series data
Dai et al. Multimodal deep learning water level forecasting model for multiscale drought alert in Feiyun River basin
CN113151842B (en) Method and device for determining conversion efficiency of wind-solar complementary water electrolysis hydrogen production
CN114372418A (en) Wind power space-time situation description model establishing method
CN111353523A (en) Method for classifying railway customers
Zhou et al. Mathematical model of yield forecast based on long and short-term memory image neural network
Liu et al. Enhancing short-term wind power forecasting accuracy for reliable and safe integration into power systems: A gray relational analysis and optimized support vector regression machine approach
KR102664053B1 (en) Apparatus and method for analyzing of load prediction model based on machine learning
CN114638555B (en) Power consumption behavior detection method and system based on multilayer regularization extreme learning machine
Wang Short-term Power Load Forecasting Based on Machine Learning
Li et al. A novel probabilistic framework with interpretability for generator coherency identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant