EP4354344A1 - Adaptive functional neural layer and network - Google Patents

Adaptive functional neural layer and network

Info

Publication number
EP4354344A1
EP4354344A1
Authority
EP
European Patent Office
Prior art keywords
physical asset
stage
stage vectors
vectors
processing signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22201404.5A
Other languages
German (de)
English (en)
Inventor
Jan Poland
James Robert OTTEWILL
Kai YUAN
Edyta KUK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Energy Ltd
Original Assignee
Hitachi Energy Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Energy Ltd filed Critical Hitachi Energy Ltd
Priority to EP22201404.5A priority Critical patent/EP4354344A1/fr
Priority to PCT/EP2023/078553 priority patent/WO2024079344A1/fr
Publication of EP4354344A1 publication Critical patent/EP4354344A1/fr
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0499Feedforward networks

Definitions

  • the present disclosure relates to a method, a determining system, and an industrial or power system for generating or updating a signal processing logic for processing signals, in particular time-series signals, that comprise measurements associated with a physical asset.
  • a Functional Neural Network, FNN, can be used for monitoring, diagnostics, and analytics, and as part of a logic for automatic decision making on industrial assets.
  • such industrial assets include assets used in the electricity industry, such as power generation assets and their parts, power transmission assets and their parts, and power distribution assets and their parts.
  • There can be different quantities of interest to monitor and/or estimate for an industrial asset such as operational performance, operational state, or information on external conditions or adjacent systems.
  • the information thus obtained can be used for informing human operators, managers, or stakeholders, to support their operational or other decisions, or to partly or fully automate the operation of the asset.
  • while being operated, industrial assets typically generate measurement data in the form of numerical and/or categorical time series. Such time series can be recorded continuously at regular time intervals (periodic sampling) or at irregular time intervals. Measurements may also be triggered by certain events, e.g., a time series may be recorded when a certain event occurs.
  • time series which are measured and sampled at regular or irregular time intervals constitute functional data: this data can be seen as a function over time. Analytics may be performed on the functional data by analyzing the functions themselves, which may be considered as comprising an infinite number of samples. The sampling is only a necessary technical property of the data, because infinite sampling is technically impossible.
  • ANN: Artificial Neural Networks
  • Functional Data Analysis is an approach for processing functional input data in statistical models. If functional input data is used in this way as an input to an ANN, the result is termed a Functional Neural Network (FNN). Functional Data Analysis processes the data in a way that is, as far as possible, sampling-time agnostic. In particular, an FNN can be applied to time series with irregular sampling and missing values.
  • FNN: Functional Neural Network
  • a conventional FNN is illustrated in Fig. 1 a).
  • only the first layer is a functional layer, taking functional input. It is annotated with ∫f·φ, which refers to the fact that the output of this layer is computed by the projection ∫ f_i(t) φ_j(t) dt of the functional input f_i(t), for all input channels i, onto basis functions φ_j(t), for all basis functions j.
  • it could be that different basis functions are chosen for each channel i, in which case the projections read ∫ f_i(t) φ_{i,j}(t) dt.
  • Fig. 1 b) illustrates a small example. Here, two functional inputs are projected onto two basis functions, giving rise to a four-dimensional output of the functional layer. After the first functional layer of the FNN, further conventional Multilayer Perceptron (MLP) layers follow. They may be of any known or new kind: fully connected, convolutional, recurrent, etc.
  • MLP: Multilayer Perceptron
  • the example in Fig. 1 b) shows one hidden fully connected layer with Rectified Linear Unit (ReLU) activation and one fully connected output layer with linear activation.
  • ReLU: Rectified Linear Unit
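For illustration, the projection performed by the functional layer of Fig. 1 can be sketched numerically. The following is a minimal NumPy sketch, not the claimed implementation: it approximates the projections ∫ f_i(t) φ_j(t) dt by trapezoidal quadrature on (possibly irregular) sample times, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def functional_layer(t, f, basis):
    """Project each input channel f_i(t) onto each basis function phi_j.

    t     : (T,) sample times, possibly irregularly spaced
    f     : (C, T) sampled values of the C functional inputs
    basis : list of J callables phi_j
    returns a (C * J,) vector of projections  integral f_i(t) phi_j(t) dt
    """
    phis = np.stack([phi(t) for phi in basis])                     # (J, T)
    # the trapezoidal rule handles irregular sampling directly
    proj = np.trapz(f[:, None, :] * phis[None, :, :], t, axis=-1)  # (C, J)
    return proj.ravel()

# Two functional inputs projected onto two basis functions give a
# four-dimensional output of the functional layer, as in Fig. 1 b).
t = np.sort(np.random.rand(50))                # irregular sample times
f = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
basis = [lambda t: np.ones_like(t), lambda t: t]
print(functional_layer(t, f, basis).shape)     # -> (4,)
```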
  • an FNN architecture may be implemented with an adaptive basis layer.
  • Fig. 2 illustrates a conventional FNN with adaptive basis layers (AdaFNN). Only the input layer is shown in detail and the remaining layers (e.g. fully connected MLP, convolutional, recurrent, etc.) are omitted for simplicity.
  • the input is visualized in two dimensions: there are channels (3 in this example) and time steps (7 in this example). The time steps need not be regularly spaced, and the number of time steps per channel need not be the same.
  • the basis functions φ_{i,j} may, but need not, depend on the channel i. They are fixed or adaptive.
  • an adaptive basis layer means that the choice of the basis functions is shifted: it is taken out of the hyperparameter selection, where it causes additional effort for the modeler and thus gives rise to higher modeling costs and risks, and into the usual training process, typically based on Stochastic Gradient Descent (SGD), leading to less effort, lower costs, and lower risks for the user configuring the network.
  • SGD: Stochastic Gradient Descent
  • when SGD is applied to sufficiently wide network architectures, these architectures will create hidden features that, depending on the task to be learned, with very high probability cause the trainable ANN weight vector to be initialized very close to an optimum that can be quickly reached with state-of-the-art SGD. This can be demonstrated with the smallest possible example of learning a 2-dimensional linear regression with SGD, as illustrated in Fig. 3.
  • any linear ANN will be functionally equivalent to a minimal ANN as shown in Fig. 3 a); however, a non-minimal architecture as illustrated in Fig. 3 b), with an additional wide (e.g., 128 neurons) hidden linear layer, will train much more efficiently with SGD.
  • training efficiency may be defined by three properties: a low number of training epochs to convergence, a low amount of time for training, and a high likelihood of a network being trained to a low generalization error when started from randomly initialized weights.
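The effect illustrated in Fig. 3 can be reproduced in a few lines. The following is a hedged sketch under assumed hyperparameters (full-batch gradient descent stands in for SGD; learning rate, width, and data are illustrative): both models represent the same linear function, but the variant with a wide hidden linear layer tends to reach a low loss in fewer epochs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = X @ np.array([3.0, -2.0])                   # 2-dimensional linear regression

def train_minimal(lr=0.01, epochs=100):
    # Fig. 3 a): minimal ANN, y_hat = X w with only two trainable weights
    w = rng.normal(size=2)
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(X)
    return np.mean((X @ w - y) ** 2)

def train_wide(width=128, lr=0.01, epochs=100):
    # Fig. 3 b): the same linear map factored through a wide hidden linear
    # layer, y_hat = X A b; functionally equivalent, but the
    # overparameterization changes the gradient descent dynamics
    A = rng.normal(size=(2, width)) / np.sqrt(2)
    b = rng.normal(size=width) / np.sqrt(width)
    for _ in range(epochs):
        r = X @ A @ b - y                        # residuals
        gA = 2 * np.outer(X.T @ r, b) / len(X)   # dL/dA
        gb = 2 * (X @ A).T @ r / len(X)          # dL/db
        A -= lr * gA
        b -= lr * gb
    return np.mean((X @ A @ b - y) ** 2)

print(train_minimal(), train_wide())  # the wide variant typically ends lower
```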
  • recurrent models, such as Recurrent Neural Networks (RNN), rely on the sampling of the time series. Typically, the sampling is required to be regular in order to make the recurrent models perform well.
  • RNN: Recurrent Neural Networks
  • the present disclosure provides a building block to facilitate learning, in particular for limited availability of data, i.e., a low amount and/or low quality of training data and/or labels. This applies to any learning setup and architecture, regardless of whether all, few, or no labels are available, whether autoencoder and/or reward labels are used, and whether the new adaptive functional layer is used within a small or a large ANN.
  • the present disclosure advantageously offers the opportunity of increasing the width of the hidden elementary layers in the per-channel processing, to achieve favourable initialization. Simultaneously, the number of resulting projections, i.e., the width of the input to the remaining (MLP or other) layers, can be tightly controlled. This is important: if this width is too big, then the risk of overfitting strongly increases. In particular if limited training data is available and/or the subsequent layers are hard to train (for instance because they are recurrent layers), this tight control may often be key to enabling successful training.
  • the present disclosure relates to a method of generating or updating a signal processing logic for processing signals, in particular time-series signals, that comprise measurements associated with a physical asset, the method comprising: training a machine learning, ML, model, in particular a functional neural network, FNN, wherein the ML model comprises performing: receiving the processing signals; determining a plurality of first-stage vectors based on the processing signals and a plurality of basis functions, in particular by projecting each of the processing signals onto a respective basis function of a plurality of basis functions, wherein each of the plurality of first-stage vectors is related to the respective projected processing signal through the respective basis function; and determining a plurality of second-stage vectors based on the plurality of first-stage vectors, in particular by projecting each of the plurality of first-stage vectors to a respective transformation matrix of a plurality of transformation matrices, wherein each of the plurality of second-stage vectors is related to the respective projected first-stage vector through the respective transformation matrix.
  • the method further comprises, in particular the ML model comprises performing: combining the second-stage vectors, in particular by concatenating the second-stage vectors into a concatenated vector; and/or determining at least one output of the FNN based on the plurality of second-stage vectors.
  • the method further comprises providing the trained ML model as at least part of a signal processing logic to a device that executes the signal processing logic to control, monitor, and/or analyze the physical asset.
  • the method further comprises: receiving the trained ML model; and executing the signal processing logic to control, monitor, and/or analyze the physical asset.
  • each of the plurality of transformation matrices comprises a plurality of trainable weights.
  • a dimensionality of at least one of the transformation matrices is different from the dimensionalities of the remaining transformation matrices, such that a dimensionality of at least one of the plurality of second-stage vectors is different from the dimensionalities of the remaining second-stage vectors.
  • the dimensionalities of the respective transformation matrices are determined based on at least one optimization method, in particular an optimization method over integer decisions, such as grid search or hill climbing.
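As an illustration of such an integer optimization, the following hedged sketch applies simple hill climbing to the per-channel output dimensionalities; the evaluation function is a toy stand-in for training and validating a model with the candidate dimensionalities, and all names are assumptions.

```python
import numpy as np

def hill_climb_dims(n_channels, eval_fn, d_init=4, d_max=16, iters=50, seed=0):
    """Greedy hill climbing over the integer output dimensionalities
    dims[i] of the per-channel transformation matrices A_i.

    eval_fn(dims) -> validation loss of a model built with these dims
    (in practice this would train and validate the full network).
    """
    rng = np.random.default_rng(seed)
    dims = [d_init] * n_channels
    best = eval_fn(dims)
    for _ in range(iters):
        cand = list(dims)
        i = rng.integers(n_channels)             # pick one channel ...
        cand[i] = int(np.clip(cand[i] + rng.choice([-1, 1]), 1, d_max))
        loss = eval_fn(cand)                     # ... and test a +/-1 step
        if loss < best:                          # accept only improvements
            dims, best = cand, loss
    return dims, best

# toy objective: pretend channel 0 prefers 8 dimensions and channel 1 prefers 2
toy = lambda d: (d[0] - 8) ** 2 + (d[1] - 2) ** 2
print(hill_climb_dims(2, toy))                   # typically -> ([8, 2], 0)
```

A grid search would instead evaluate eval_fn on all combinations of candidate dimensionalities, which is exhaustive but exponential in the number of channels.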
  • the method further comprises determining information related to the health of the physical asset, based on the trained ML model, comprising a health indicator, a time series of health indicator evolution, a remaining useful life, RUL, a failure probability, a time series of failure probability evolution, a reliability, and/or a time series of reliabilities.
  • determining the at least one output comprises processing the at least one second-stage intermediate value by a linear or nonlinear function that is subject to the plurality of trainable weights.
  • the method further comprises: receiving sensor measurement data captured during operation of the physical asset; and updating the prognostic asset health state based on the received sensor measurement data.
  • the physical asset is a power transformer, a distributed energy resource, DER, unit, or a power generator.
  • the present disclosure also relates to a method of generating or updating a signal processing logic for processing signals, in particular time-series signals, that comprise measurements associated with a physical asset, the method comprising providing the ML model, trained according to any one of the above-described embodiments, as at least part of a signal processing logic to a device that executes the signal processing logic to control, monitor, and/or analyze the physical asset.
  • the present disclosure further relates to a method of operating and/or maintaining a physical asset, comprising: performing a prognostic physical asset health analysis for the physical asset using the method of any one of the above-described embodiments; and automatically performing at least one of the following: scheduling a down-time of the physical asset based on the determined prognostic physical asset health state; scheduling maintenance work based on the determined prognostic physical asset health state; scheduling replacement work based on the determined prognostic physical asset health state; changing maintenance intervals based on the determined prognostic physical asset health state.
  • the present disclosure further relates to a determining system operative to generate or update a signal processing logic for processing signals, in particular time-series signals, that comprise measurements associated with a physical asset
  • the determining system comprising a processor being configured to: receive the processing signals; and train a machine learning, ML, model, in particular a functional neural network, FNN, wherein the ML model comprises performing: determining a plurality of first-stage vectors based on the processing signals and a plurality of basis functions, in particular by projecting each of the processing signals onto a respective basis function of a plurality of basis functions, wherein each of the plurality of first-stage vectors is related to the respective projected processing signal through the respective basis function; and determining a plurality of second-stage vectors based on the plurality of first-stage vectors, in particular by projecting each of the plurality of first-stage vectors to a respective transformation matrix of a plurality of transformation matrices, wherein each of the plurality of second-stage vectors is related to the respective projected first-stage vector through the respective transformation matrix.
  • the processor is configured to, in particular the ML model is configured to: combine the second-stage vectors, in particular by concatenating the second-stage vectors into a concatenated vector; and/or determine at least one output of the FNN based on the plurality of second-stage vectors.
  • the processor is configured to provide the trained ML model as at least part of a signal processing logic to a device that executes the signal processing logic to control, monitor, and/or analyze the physical asset.
  • the processor is configured to: receive the trained ML model; and execute the signal processing logic to control, monitor, and/or analyze the physical asset.
  • each of the plurality of transformation matrices comprises a plurality of trainable weights.
  • a dimensionality of at least one of the transformation matrices is different from the dimensionalities of the remaining transformation matrices, such that a dimensionality of at least one of the plurality of second-stage vectors is different from the dimensionalities of the remaining second-stage vectors.
  • the dimensionalities of the respective transformation matrices are determined based on at least one optimization method, in particular an optimization method over integer decisions, such as grid search or hill climbing.
  • the processor is configured to determine information related to the health of the physical asset, based on the trained ML model, comprising a health indicator, a time series of health indicator evolution, a remaining useful life, RUL, a failure probability, a time series of failure probability evolution, a reliability, and/or a time series of reliabilities.
  • determining the at least one output comprises processing the at least one second-stage intermediate value by a linear or nonlinear function that is subject to the plurality of trainable weights.
  • the processor is configured to: receive sensor measurement data captured during operation of the physical asset; and update the prognostic asset health state based on the received sensor measurement data.
  • the physical asset is a power transformer, a distributed energy resource, DER, unit, or a power generator.
  • the present disclosure further relates to an industrial or power system, comprising: a physical asset; and the determining system according to any one of the above-described embodiments, optionally wherein the determining system is a decentralized controller of the industrial or power system for controlling the asset.
  • the method according to any one of the embodiments disclosed herein may advantageously monitor and/or estimate quantities for an industrial asset, such as operational performance, operational state, or information on external conditions or adjacent systems.
  • one of the quantities to monitor and/or estimate is the state of health of an industrial asset, which allows the degradation of the asset to be understood, its remaining useful life (RUL) to be predicted, and decisions for operation, maintenance, and repair to be derived.
  • RUL: remaining useful life
  • the information thus obtained can be used for informing human operators, managers, or stakeholders, to support their operational or other decisions, or to partly or fully automate the operation of the asset.
  • the present disclosure is not limited to the exemplary embodiments and applications described and illustrated herein. Additionally, the specific order and/or hierarchy of steps in the methods disclosed herein are merely exemplary approaches. Based upon design preferences, the specific order or hierarchy of steps of the disclosed methods or processes can be re-arranged while remaining within the scope of the present disclosure. Thus, those of ordinary skill in the art will understand that the methods and techniques disclosed herein present various steps or acts in a sample order, and the present disclosure is not limited to the specific order or hierarchy presented unless expressly stated otherwise.
  • Fig. 4 illustrates a flowchart of a method according to an embodiment of the present disclosure.
  • the method according to an embodiment of the present disclosure comprises training a machine learning, ML, model, in particular a functional neural network, FNN, and Fig. 4 illustrates the steps performed by the ML model.
  • the processing signals are received.
  • the processing signals may be time-series signals that comprise measurements associated with at least one physical asset.
  • the processing signals may optionally be preprocessed (e.g., by scaling, data cleaning, etc.) before being received, in particular by the ML model.
  • a plurality of first-stage vectors is determined based on the processing signals and a plurality of basis functions, in particular by projecting each of the processing signals onto a respective basis function of a plurality of basis functions, wherein each of the plurality of first-stage vectors is related to the respective projected processing signal through the respective basis function.
  • a plurality of second-stage vectors is determined based on the plurality of first-stage vectors, in particular by projecting each of the plurality of first-stage vectors to a respective transformation matrix of a plurality of transformation matrices, wherein each of the plurality of second-stage vectors is related to the respective projected first-stage vector through the respective transformation matrix.
  • Fig. 5 illustrates a functional neural network architecture according to an embodiment of the present disclosure.
  • the functional neural network of the present disclosure is hereinafter referred to as an Adaptive-Equivalent FNN (AEFNN) layer.
  • AEFNN: Adaptive-Equivalent FNN
  • the basic linear AEFNN architecture is shown in Fig. 5 and comprises performing a first, fixed projection and a second, trainable projection per channel, as described below.
  • the outputs of the second projection, z_{i,j}, are projections of the functional inputs f_i(t) onto basis functions which are linear combinations of the fixed basis functions φ_k. Since the weight matrices A_i are trainable for each data channel i, the linear combinations of the fixed basis functions φ_k behave like the adaptive basis functions φ_{i,j} of Fig. 2. This relationship is illustrated in the mathematical derivation below; the basis functions effectively being used are circled (522).
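Written out, the derivation (a reconstruction from the notation above, with ψ_{i,j} denoting the effectively used adaptive basis functions, i.e., the circled quantity 522) reads:

```latex
z_{i,j} \;=\; \sum_{k} A^{(i)}_{j,k}\, u_{i,k}
      \;=\; \sum_{k} A^{(i)}_{j,k} \int f_i(t)\,\varphi_k(t)\,dt
      \;=\; \int f_i(t)\, \underbrace{\sum_{k} A^{(i)}_{j,k}\,\varphi_k(t)}_{=:\,\psi_{i,j}(t)} \; dt .
```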
  • the input sample may be referred to as the processing signals.
  • the outputs of the first projection, u_{i,k}, may be referred to as the plurality of first-stage vectors.
  • the outputs of the second projection, z_{i,j}, may be referred to as the plurality of second-stage vectors.
  • the matrices A_i may be referred to as the transformation matrices.
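Putting the two stages together, the basic linear AEFNN layer of Fig. 5 can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (trapezoidal quadrature for the projections, Gaussian initialization of the A_i, illustrative names), not the claimed implementation; a training loop updating the A_i is omitted.

```python
import numpy as np

class LinearAEFNNLayer:
    """Sketch of the basic linear Adaptive-Equivalent FNN layer (Fig. 5).

    Stage 1: u_i = [ integral f_i(t) phi_k(t) dt ]_k  (fixed basis, per channel)
    Stage 2: z_i = A_i @ u_i                          (trainable matrix per channel)
    Output : concatenation of all second-stage vectors z_i.
    """

    def __init__(self, n_channels, basis, out_dims, seed=0):
        rng = np.random.default_rng(seed)
        self.basis = basis                      # K fixed callables phi_k
        # one trainable matrix per channel; out_dims[i] may differ per channel
        self.A = [rng.normal(size=(out_dims[i], len(basis))) / np.sqrt(len(basis))
                  for i in range(n_channels)]

    def forward(self, times, values):
        """times[i], values[i]: samples of channel i; irregular sampling and
        different numbers of time steps per channel are allowed."""
        z = []
        for i, A_i in enumerate(self.A):
            t, f = times[i], values[i]
            u_i = np.array([np.trapz(f * phi(t), t) for phi in self.basis])
            z.append(A_i @ u_i)
        return np.concatenate(z)                # concatenated second-stage vectors

# three channels with different (irregular) sampling, as in Fig. 2
times = [np.sort(np.random.rand(n)) for n in (7, 5, 9)]
values = [np.sin(3 * t) for t in times]
basis = [np.ones_like, lambda t: t, np.sin]
layer = LinearAEFNNLayer(3, basis, out_dims=[2, 3, 2])
print(layer.forward(times, values).shape)       # -> (7,) = 2 + 3 + 2
```

The concatenated output would then feed the remaining (MLP or other) layers, whose input width is exactly the controlled sum of the per-channel output dimensionalities.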
  • the method further comprises, in particular the ML model comprises performing: combining the second-stage vectors, in particular by concatenating the second-stage vectors into a concatenated vector; and/or determining at least one output of the FNN based on the plurality of second-stage vectors.
  • the method further comprises providing the trained ML model as at least part of a signal processing logic to a device that executes the signal processing logic to control, monitor, and/or analyze the physical asset.
  • the method further comprises: receiving the trained ML model; and executing the signal processing logic to control, monitor, and/or analyze the physical asset.
  • each of the plurality of transformation matrices comprises a plurality of trainable weights.
  • a dimensionality of at least one of the transformation matrices is different from the dimensionalities of the remaining transformation matrices, such that a dimensionality of at least one of the plurality of second-stage vectors is different from the dimensionalities of the remaining second-stage vectors.
  • the dimensionalities of the respective transformation matrices are determined based on at least one optimization method, in particular an optimization method over integer decisions, such as grid search or hill climbing.
  • the method further comprises determining information related to the health of the physical asset, based on the trained ML model, comprising a health indicator, a time series of health indicator evolution, a remaining useful life, RUL, a failure probability, a time series of failure probability evolution, a reliability, and/or a time series of reliabilities.
  • determining the at least one output comprises processing the at least one second-stage intermediate value by a linear or nonlinear function that is subject to the plurality of trainable weights.
  • the method further comprises: receiving sensor measurement data captured during operation of the physical asset; and updating the prognostic asset health state based on the received sensor measurement data.
  • the physical asset is a power transformer, a distributed energy resource, DER, unit, or a power generator.
  • the method further comprises providing the ML model, trained according to any one of the above-described embodiments, as at least part of a signal processing logic to a device that executes the signal processing logic to control, monitor, and/or analyze the physical asset.
  • the method further comprises: performing a prognostic physical asset health analysis for the physical asset using the method of any one of the above-described embodiments; and automatically performing at least one of the following: scheduling a down-time of the physical asset based on the determined prognostic physical asset health state; scheduling maintenance work based on the determined prognostic physical asset health state; scheduling replacement work based on the determined prognostic physical asset health state; changing maintenance intervals based on the determined prognostic physical asset health state.
  • the fully general design of the Adaptive-Equivalent FNN layer is shown in Fig. 6.
  • the only difference to the linear layer described in Fig. 5 is the generalization of the per-channel layers, which may be nonlinear and may comprise multiple elementary layers; hence, the general layer comprises performing the same two-stage processing with generalized per-channel layers.
  • Adaptive-Equivalent FNN layer refers to a construction which may technically be a sequence of multiple non-trainable and trainable layers and other matrix manipulation operations (concatenating, reshaping, stacking, etc.).
  • the Adaptive-Equivalent FNN layer is usually just one part of the whole ANN. For instance, in Figures 5 and 6, after the Adaptive-Equivalent layer, there is a remaining MLP network (S530, S630). In general, the Adaptive-Equivalent FNN layer is typically the first layer of an ANN, followed by arbitrary further ANN layers.
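In the general design, the single matrix A_i is replaced by a small per-channel network. A hedged sketch of one such per-channel block (one wide hidden ReLU layer followed by a narrow linear output; all sizes and names are illustrative assumptions) could look as follows:

```python
import numpy as np

def per_channel_block(u_i, W1, b1, W2, b2):
    """Nonlinear per-channel processing replacing z_i = A_i @ u_i:
    the first-stage vector u_i passes through a wide hidden ReLU layer
    (favourable initialization) and a narrow linear output layer
    (tightly controlled projection width)."""
    h = np.maximum(0.0, W1 @ u_i + b1)   # wide hidden elementary layer
    return W2 @ h + b2                   # narrow second-stage vector z_i

rng = np.random.default_rng(0)
K, width, out = 3, 128, 2                # K basis projections per channel
u_i = rng.normal(size=K)                 # first-stage vector of one channel
W1, b1 = rng.normal(size=(width, K)) / np.sqrt(K), np.zeros(width)
W2, b2 = rng.normal(size=(out, width)) / np.sqrt(width), np.zeros(out)
print(per_channel_block(u_i, W1, b1, W2, b2).shape)   # -> (2,)
```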
  • a plurality of functional layers, including at least one Adaptive-Equivalent functional layer, may be stacked, in particular by performing: applying a functional layer to the functional input data; interpreting the numerical output vectors from the functional layer as functional data (e.g., sampled function evaluations), possibly after rearranging the vectors (reshaping, transposing, etc.); and applying a second functional layer to the data resulting from the previous operation.
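A hedged sketch of such stacking (reusing the quadrature-based projection from the earlier sketches; the pseudo-time points and basis choices are illustrative assumptions):

```python
import numpy as np

def project(t, f, basis):
    """One functional layer: project channels f (C, T) sampled at t onto basis."""
    return np.array([[np.trapz(fi * phi(t), t) for phi in basis] for fi in f])

# first functional layer applied to three functional input channels
t1 = np.linspace(0.0, 1.0, 64)
f1 = np.vstack([np.sin(2 * np.pi * k * t1) for k in (1, 2, 3)])
basis = [np.ones_like, lambda t: t, lambda t: t ** 2, np.cos]
z1 = project(t1, f1, basis)              # (3, 4) numerical output vectors

# interpret the numerical outputs as functional data again: each row is
# read as a function sampled at pseudo-time points (rearranging optional)
t2 = np.linspace(0.0, 1.0, z1.shape[1])
z2 = project(t2, z1, basis[:2])          # second functional layer, (3, 2)
print(z2.shape)
```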
  • the processing signals comprise at least two variables.
  • the basis functions are chosen for respective variables of the at least two variables.
  • the processing signals are projected to the respective basis functions chosen for the respective variables of the at least two variables.
  • the ML model is or is part of a trainable Neural Network system which, after training, is used to process data from industrial assets according to any one of the above-described embodiments.
  • the data processing is performed offline or online, wherein the term "online" means that the trained Neural Network system is evaluated on new data recorded from the asset(s), continuously or at regular or irregular time intervals.
  • Fig. 7 and Fig. 8 illustrate the performance on a simple RUL prediction task on the NASA Turbofan Data Set.
  • an ANN is set up and trained for predicting RUL labels on the first data set FD001.
  • the training dataset is randomly partitioned into 80% training and 20% validation data, and the training is repeated multiple times, in order to obtain statistical distributions of the validation loss and the training time.
  • Two different sets of input channels are used: (a) all available input channels in the data set, and (b) 6 input channels which have been preselected according to their significance to predict RUL.
  • RNN: Recurrent Neural Network
  • GRU: Gated Recurrent Unit
  • Fig. 7 compares the validation loss of the different architectures for the RUL prediction task, for both sets of input channels.
  • Fig. 7 a) illustrates the results when all input channels are used and Fig. 7 b) the results with 6 selected input channels. It can be seen that with a recurrent architecture, the network performs better than without. Comparing the AE FNN architecture without recurrence directly to the FNN with eigenfunctions dedicated to this task, it is observed that the performance is similar; hence, the AE FNN successfully learns to predict RUL well without dedicated eigenfunctions. In contrast, the performance of AdaFNN is less robust. In comparison with an RNN (GRU network), the AE FNN is able to compete.
  • Fig. 8 a) illustrates the results when all input channels are used and Fig. 8 b) the results with 6 selected input channels. It can be seen that the AEFNN without recurrence trains fastest, even faster than the FNN with dedicated eigenfunctions. Also in combination with an RNN, one AE FNN variant is fastest. This demonstrates that the AE FNN layer is a strong architectural choice with possibly favorable scaling properties for functional input data.
  • Fig. 9 a) illustrates a determining system 910 operative to generate or update a signal processing logic for processing signals, in particular time-series signals, that comprise measurements associated with a physical asset 920, the determining system 910 comprising a processor 911 being configured to: receive the processing signals; and train a machine learning, ML, model, in particular a functional neural network, FNN, wherein the ML model comprises performing: determining a plurality of first-stage vectors based on the processing signals and a plurality of basis functions, in particular by projecting each of the processing signals onto a respective basis function of a plurality of basis functions, wherein each of the plurality of first-stage vectors is related to the respective projected processing signal through the respective basis function; and determining a plurality of second-stage vectors based on the plurality of first-stage vectors, in particular by projecting each of the plurality of first-stage vectors to a respective transformation matrix of a plurality of transformation matrices, wherein each of the plurality of second-stage vectors is related to the respective projected first-stage vector through the respective transformation matrix.
  • the processor is configured to transmit/provide/transfer the received processing signals to the ML model.
  • the ML model comprises receiving the processing signals, in particular from the processor.
  • the processor is configured to execute a method according to any one of the above-described embodiments.
  • Fig. 9 b) illustrates an industrial or power system 900, comprising: a physical asset 920; and the determining system 910 according to any one of the above-described embodiments, optionally wherein the determining system 910 is a decentralized controller of the industrial or power system 900 for controlling the asset.
  • any reference to an element herein using a designation such as "first," "second," and so forth does not generally limit the quantity or order of those elements. Rather, these designations can be used herein as a convenient means of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements can be employed, or that the first element must precede the second element in some manner.
  • any of the various illustrative logical blocks, units, processors, means, circuits, methods and functions described in connection with the aspects disclosed herein can be implemented by electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two), firmware, various forms of program or design code incorporating instructions (which can be referred to herein, for convenience, as "software" or a "software unit"), or any combination of these techniques.
  • a processor, device, component, circuit, structure, machine, unit, etc. can be configured to perform one or more of the functions described herein.
  • IC: integrated circuit
  • DSP: digital signal processor
  • ASIC: application specific integrated circuit
  • FPGA: field programmable gate array
  • the logical blocks, units, and circuits can further include antennas and/or transceivers to communicate with various components within the network or within the device.
  • a general purpose processor can be a microprocessor, but in the alternative, the processor can be any conventional processor, controller, or state machine.
  • a processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other suitable configuration to perform the functions described herein. If implemented in software, the functions can be stored as one or more instructions or code on a computer-readable medium. Thus, the steps of a method or algorithm disclosed herein can be implemented as software stored on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that can be enabled to transfer a computer program or code from one place to another.
  • a storage media can be any available media that can be accessed by a computer.
  • such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • memory or other storage may be employed in embodiments of the present disclosure.
  • any suitable distribution of functionality between different functional units, processing logic elements or domains may be used without detracting from the present disclosure.
  • functionality illustrated to be performed by separate processing logic elements, or controllers may be performed by the same processing logic element, or controller.
  • references to specific functional units are only references to a suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
EP22201404.5A 2022-10-13 2022-10-13 Adaptive functional neural layer and network Pending EP4354344A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22201404.5A 2022-10-13 2022-10-13 Adaptive functional neural layer and network
PCT/EP2023/078553 2022-10-13 2023-10-13 Adaptive functional neural layer and network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22201404.5A 2022-10-13 2022-10-13 Adaptive functional neural layer and network

Publications (1)

Publication Number Publication Date
EP4354344A1 (fr) 2024-04-17

Family

ID=83692833

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22201404.5A Pending EP4354344A1 (fr) 2022-10-13 2022-10-13 Couche neuronale fonctionnelle adaptative et réseau

Country Status (2)

Country Link
EP (1) EP4354344A1 (fr)
WO (1) WO2024079344A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210382473A1 (en) * 2020-06-08 2021-12-09 Abb Power Grids Switzerland Ag Condition-Based Method for Malfunction Prediction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANIRUDDHA RAJENDRA RAO ET AL: "Non-linear Functional Modeling using Neural Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 April 2021 (2021-04-19), XP081941042 *
JUNWEN YAO ET AL: "Deep Learning for Functional Data Analysis with Adaptive Basis Layers", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 June 2021 (2021-06-19), XP081992256 *
LE XIE ET AL: "Massively Digitized Power Grid: Opportunities and Challenges of Use-inspired AI", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 May 2022 (2022-05-10), XP091224380 *

Also Published As

Publication number Publication date
WO2024079344A1 (fr) 2024-04-18

Similar Documents

Publication Publication Date Title
US20210011791A1 (en) Abnormality detection system, abnormality detection method, abnormality detection program, and method for generating learned model
Lee et al. Deep reinforcement learning for predictive aircraft maintenance using probabilistic remaining-useful-life prognostics
CN109657805B (zh) Hyperparameter determination method and apparatus, electronic device, and computer-readable medium
Huang et al. Real-time fault detection for IIoT facilities using GBRBM-based DNN
US11593618B2 (en) Data processing apparatus, data processing method, and storage medium
CN110571792A (zh) Method and system for analyzing and evaluating the operating state of a power grid regulation and control system
Nguyen et al. Probabilistic deep learning methodology for uncertainty quantification of remaining useful lifetime of multi-component systems
KR102531879B1 (ko) Event occurrence prediction and monitoring method, apparatus, and system for artificial intelligence-based maintenance of enterprise electronic equipment
Afzal Using faults-slip-through metric as a predictor of fault-proneness
Huang et al. An adversarial learning approach for machine prognostic health management
JP2023547849A (ja) Method or non-transitory computer-readable medium for automated real-time detection, prediction, and prevention of rare failures in industrial systems using unlabeled sensor data
Lutska et al. Forecasting the efficiency of the control system of the technological object on the basis of neural networks
EP4354344A1 (fr) Adaptive functional neural layer and network
Maleki et al. Improvement of credit scoring by lstm autoencoder model
Yang et al. ADT: Agent-based Dynamic Thresholding for Anomaly Detection
Aftabi et al. A Variational Autoencoder Framework for Robust, Physics-Informed Cyberattack Recognition in Industrial Cyber-Physical Systems
Benaddy et al. Evolutionary prediction for cumulative failure modeling: A comparative study
Petrlik et al. Multiobjective selection of input sensors for svr applied to road traffic prediction
Goyal et al. Strong α-cut and associated membership-based modeling for fuzzy time series forecasting
Zhou et al. Dynamic dispatching for re-entrant production lines—A deep learning approach
Javanmardi et al. Conformal Prediction Intervals for Remaining Useful Lifetime Estimation
Vachtsevanos et al. Prognosis: Challenges, Precepts, Myths and Applications
Benaddy et al. Recurrent neural network for software failure prediction
CN115456073B (zh) Generative adversarial network model modeling and analysis method based on long short-term memory
US20230196088A1 (en) Fan behavior anomaly detection using neural network

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR