EP4305552A1 - Technical system for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally - Google Patents

Technical system for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally

Info

Publication number
EP4305552A1
Authority
EP
European Patent Office
Prior art keywords
machine learning
decentral
unit
units
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22717402.6A
Other languages
German (de)
English (en)
Inventor
Mathias Duckheim
Stephan Merk
Andreas SCHILDORFER
Sigurd Spieckermann
Thomas Werner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Publication of EP4305552A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Definitions

  • the present invention relates to a technical system for a centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentral. Further, the invention relates to a corresponding computer- implemented method and a corresponding storage medium.
  • AI Artificial Intelligence
  • the AI systems are software programs whose behavior is learned from data instead of being explicitly programmed.
  • the learning process is called “training” which requires plenty of data and significant computational resources.
  • the trained AI system solves a specific task for which it was trained, such as state estimation.
  • the learned behavior of the AI systems highly depends on the data used during training. This means that the performance of an AI system will be high when the training data is a representative sample of the data on which the AI system will be applied later in the field, while the performance may be low when the training and in-field data differ in their characteristics. It is common that a trained AI system performs well after training and that its performance degrades over time as in-field data increasingly differs from the original training data. Therefore, operating AI systems in real-world distributed systems requires managing complete lifecycles including e.g. (a) continuous monitoring in order to detect performance degradation, (b) retraining when performance degradation is detected, (c) continuous collection of in-field data used for monitoring and retraining, and (d) deployment of the new versions.
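The lifecycle steps (a) to (d) above can be sketched as a simple control loop. The following Python sketch is illustrative only; the callables and the degradation threshold are assumptions, not part of the disclosure.

```python
def manage_lifecycle(model, collect_field_data, evaluate, retrain, deploy,
                     degradation_threshold=0.8):
    """One pass of the AI-system lifecycle: monitor, and retrain/deploy on degradation.

    All callables are placeholders for platform services; the threshold value
    is an illustrative assumption.
    """
    test_data = collect_field_data()       # (c) continuous in-field data collection
    score = evaluate(model, test_data)     # (a) continuous monitoring
    if score < degradation_threshold:      # performance degradation detected
        model = retrain(model, test_data)  # (b) retraining
        deploy(model)                      # (d) deployment of the new version
    return model, score
```

In a deployed system each callable would be backed by a platform microservice; here they are plain functions so the control flow can be tested in isolation.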
  • the platform core, data and training management require elastically scalable standard server infrastructure, such as cloud infrastructure (e.g. x86, 64 bit, standard Linux OS) whereas trained AI systems are deployed to and operated on resource-constrained edge devices (e.g. ARM, 32 or 64 bit, custom Linux OS).
  • cloud infrastructure e.g. x86, 64 bit, standard Linux OS
  • resource-constrained edge devices e.g. ARM, 32 or 64 bit, custom Linux OS
  • a document DEBAUCHE OLIVER ET AL "A new Edge Architecture for AI-IoT services deployment", PROCEDIA COMPUTER SCIENCE, ELSEVIER, AMSTERDAM, NL, vol. 175, 1 January 2020 (2020-01-01), pages 10-19, proposes a new architecture used to deploy at edge level micro services and adapted artificial intelligence algorithms and models.
  • the trained AI system can be used for AI-based state estimation.
  • the state estimation serves the purpose of stabilizing low-voltage power grids with decentralized production (e.g. photovoltaic) and new consumption patterns from e-mobility.
  • decentralized production and new consumption patterns cause increasing stress on the low-voltage grid (e.g. violation of mandatory voltage bands).
  • the data which is needed for the training is often only available in a central IT-system and not in the location where the field device which performs the state estimation is installed. Further, the field devices in general do not have sufficient computational power to process the training. Further, a continuous re-training of the AI-based state estimation is necessary since the behavior of the low voltage networks will change if new installations are added or existing installations are removed. Further, the AI-based state estimation has to be located in the field device, since it needs to work even if the communication between the field device and the central IT-system is disturbed.
  • a technical system for a centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentral, comprising: a. A communication link or a communication interface; wherein b. the communication link or the communication interface is configured to communicate data bidirectionally between a central machine learning platform with a machine learning unit and a plurality of decentral units; c. the central machine learning platform and the plurality of decentral units; wherein d. the central machine learning platform with the machine learning unit is connected to the communication link or the communication interface; wherein e.
  • the central machine learning platform is configured to train, retrain and/or monitor a respective machine learning model for each decentral unit of the plurality of decentral units based on training data, wherein the training data of the decentral unit is received from the respective decentral unit of the plurality of decentral units and/or at least one other unit; wherein f. the central machine learning platform is configured to provide the trained, retrained and/or monitored machine learning models via the communication link or the communication interface for the respective decentral units of the plurality of decentral units; wherein g. each decentral unit of the plurality of decentral units is connected to the communication link or the communication interface; wherein h.
  • each decentral unit of the plurality of decentral units is configured to receive the respective trained, retrained and/or monitored machine learning model of the generated plurality of machine learning models from the central machine learning platform via the communication link or the communication interface; wherein i. each decentral unit of the plurality of decentral units is configured to execute the received trained, retrained and/or monitored machine learning model.
  • the invention is directed to a technical system for a centralized generation of a plurality of trained, retrained and/or monitored machine learning models.
  • the training, retraining and/or monitoring steps are performed on the central machine learning platform.
  • the central machine learning platform is a cloud based platform or is installed on user's premise.
  • the resulting generated trained, retrained and/or monitored machine learning models are not executed directly on the central machine learning platform, but decentrally. Hence, the machine learning models are transmitted to and executed on the decentral units.
  • the decentral units are designed as edge devices.
  • the central machine learning platform comprises one or more units, including the machine learning unit for training, retraining and/or monitoring a respective machine learning model for each decentral unit of the plurality of decentral units based on training data.
  • the central machine learning platform can comprise e.g. storage units. The terms machine learning, machine learning model and the corresponding concepts such as training, retraining and/or monitoring can be interpreted in their usual sense.
  • the training data is provided by the decentral units and/or other units.
  • the training data can be received from smart meters or other IT systems as exemplary other units.
  • the training data can be provided by other IT-systems which provide the data via a machine to machine communication to the central machine learning platform.
  • the training data can comprise measuring data points.
  • the measuring data points can be derived from secondary substations and/or smart meters under a substation, wherein the secondary substations are located near the decentral units and the measuring data points are provided for the central machine learning platform by the decentral units.
  • the measuring data points can comprise electrical measurements like current, voltage, active and reactive power or any other measured value or parameter of interest e.g. in context of a power grid or an electricity network.
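A measuring data point of this kind might be represented as sketched below; the field names, units and the derived quantity are illustrative assumptions, not prescribed by the patent.

```python
from dataclasses import dataclass


@dataclass
class MeasuringDataPoint:
    """One electrical measurement from a secondary substation or smart meter.

    Field names and units are illustrative assumptions.
    """
    device_id: str             # decentral unit / edge device that reported the value
    timestamp: float           # seconds since epoch, as stamped by the device
    voltage_v: float           # voltage [V]
    current_a: float           # current [A]
    active_power_w: float      # active power P [W]
    reactive_power_var: float  # reactive power Q [var]

    def apparent_power_va(self) -> float:
        # |S| = sqrt(P^2 + Q^2)
        return (self.active_power_w ** 2 + self.reactive_power_var ** 2) ** 0.5
```
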
  • the machine learning model such as AI model
  • AI model is applied on input data for state estimation.
  • the state of the power grid or the electricity network is determined in an efficient and reliable manner using machine learning.
  • the decentral units each receive and execute their respective trained, retrained and/or monitored machine learning model.
  • the aforementioned units e.g. decentral units may be realized as any devices, or any means, for computing, in particular for executing a software, an app, or an algorithm.
  • the decentral units can be realized as any physical computing devices, means or cloud systems for executing a software, an app, or an algorithm.
  • the unit may consist of or comprise a central processing unit (CPU) and/or a memory operatively connected to the CPU.
  • the unit may also comprise an array of CPUs, an array of graphical processing units (GPUs), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array, or any combination of the foregoing.
  • the unit may comprise at least one module which in turn may comprise software and/or hardware. Some, or even all, modules of the unit may be implemented by a cloud computing platform.
  • the central machine learning platform and the decentral units are connected to the communication bus or link for communication of data, enabling bidirectional data exchange or transfer. Accordingly, the machine learning model and associated data are transmitted via the communication bus or link. Additionally, any other data can be transmitted as well.
  • the data can e.g. comprise the machine learning model and/or the container etc. This allows for efficient data transmission and secure data traffic.
  • the present invention provides a hybrid solution with central monitoring, training and/or retraining, but decentral execution of the monitored, trained and/or retrained machine learning model, such as AI system or AI model.
  • This hybrid approach integrates the advantages of both central and decentral prior art approaches. Contrary to prior art, the aforementioned complex complete lifecycle essential for operating AI systems in real-world distributed systems can be managed in an efficient and reliable manner using the technical system.
  • the machine learning model is an Artificial Intelligence model, preferably an Artificial Neural Network.
  • any other machine learning model can be used depending on the underlying user requirement, the use case such as the aforementioned power grid, the application case and/or any other condition.
  • the present invention allows for efficient, reliable and low-cost usage of machine learning systems, preferably the AI systems on the resource-constrained edge devices at large scale including full lifecycle management of the AI systems.
  • the full lifecycle management comprises e.g. monitoring, training, retraining, deployment aspects.
  • the advantage of the method according to the invention is that the method is independent from the application for which the machine learning models are trained.
  • the machine learning model can be used for state estimation or other applications, which require a central training and a distributed execution of individual AI systems on the edge devices.
  • the method according to the invention is also device-agnostic and is advantageously applicable on diverse edge devices with e.g. different hardware and basic software stacks and provided by different vendors, which enables scalability of the solution.
  • the communication link is a bus system, preferably a shared message bus. Accordingly, the communication link is a bus using a messaging protocol such as MQTT.
  • Alternatively, a communication interface can be used. The advantage is that the communication link or interface can be flexibly and efficiently selected and adapted depending on the underlying technical system. Alternatively, communication protocols used in energy automation, like IEC 61850, can be used.
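Message exchange over a shared message bus such as MQTT could follow a topic convention like the one sketched below; the topic scheme and the JSON payload layout are assumptions for illustration, not specified by the patent.

```python
import json

# Illustrative topic scheme for the shared message bus (e.g. MQTT);
# the naming convention is an assumption, not defined by the patent.
TOPIC_FMT = "platform/{direction}/{device_id}/{channel}"


def make_data_upload(device_id, measurements):
    """Build topic and JSON payload for a data upload from an edge device."""
    topic = TOPIC_FMT.format(direction="up", device_id=device_id, channel="data")
    return topic, json.dumps({"device": device_id, "measurements": measurements})


def make_deployment_event(device_id, container_ref):
    """Build the event message notifying an edge device of a new container."""
    topic = TOPIC_FMT.format(direction="down", device_id=device_id, channel="deploy")
    return topic, json.dumps({"device": device_id, "container": container_ref})
```

With a real broker, each returned pair would be handed to the publish call of an MQTT client library (e.g. Eclipse Paho); the functions above only prepare topic and payload so they can be tested without a network.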
  • the decentral unit is an edge device. Accordingly, the decentral unit is designed as edge device. Alternatively, any other computing devices can be used, which can be connected to the communication link or communication interface.
  • the plurality of decentral units can comprise distinct decentral units, including edge devices and other computing units.
  • training data comprises measuring data points.
  • the central machine learning platform is further configured to deploy the generated trained, retrained and/or monitored machine learning models on the respective decentral units of the plurality of decentral units in step f.
  • the central machine learning platform is further configured to initiate a build process of a respective container for each decentral unit of the plurality of decentral units comprising the respective trained, retrained and/or monitored machine learning model; notifying the respective decentral unit of the plurality of decentral units about the availability of the respective built container; and/or transmitting the built containers from the central machine learning platform to the respective decentral units of the plurality of decentral units.
  • each decentral unit of the plurality of decentral units is further configured to receive the respective container after notification of availability from the central machine learning platform via the communication link or communication interface in step h.
  • the generated trained, retrained and/or monitored machine learning models can be deployed on the respective decentral units. Therefore, a container is built via the build process for each machine learning model and hence for each decentral unit. In other words, the machine learning models are packaged into standardized units. Accordingly, a container deployment, e.g. orchestrated with Kubernetes, is performed in an efficient and reliable manner.
  • the containers are transmitted from the central machine learning platform to the decentral units. The advantage of the containers is that they can be quickly downloaded and put into use on the decentral units.
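The build, notify and transmit flow described above can be sketched with an in-memory stand-in for the container registry; the class, the reference naming and the notification mechanism are illustrative assumptions, not the patented implementation.

```python
class ContainerRegistry:
    """Minimal in-memory stand-in for a container registry (illustrative only)."""

    def __init__(self):
        self._images = {}

    def push(self, ref, image):
        self._images[ref] = image

    def pull(self, ref):
        return self._images[ref]


def deploy_models(models_by_device, registry, notify):
    """Build one container per decentral unit, push it, and notify the unit.

    `models_by_device` maps device id -> trained model; `notify` stands in for
    the availability event on the message bus. The notified unit would later
    pull the container itself.
    """
    refs = {}
    for device_id, model in models_by_device.items():
        ref = f"registry/{device_id}:latest"
        registry.push(ref, {"model": model, "runtime": "self-contained"})
        notify(device_id, ref)
        refs[device_id] = ref
    return refs
```
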
  • the central machine learning platform comprises at least one further unit, wherein the at least one further unit is configured to:
  • the central machine learning platform can comprise further additional units such as e.g. non-volatile or volatile storage units.
  • the monitoring comprises at least one of the sub steps: determining the analytical performance of the machine learning model, the trained, retrained and/or previously monitored machine learning model based on a test dataset, wherein the test dataset is independent from the training data and is not used during training and/or retraining;
  • Initiating training, retraining and/or monitoring depending on the evaluation result preferably after detection of the performance degradation. Accordingly, the performance of the machine learning model, the trained, retrained and/or previously monitored machine learning model can be determined and evaluated. The evaluation serves the purpose to detect any degradation in performance or deviation from the expected performance. After having identified the evaluation result, training, retraining and/or monitoring can be triggered.
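Evaluation on an independent test dataset and the retraining trigger might look as follows; the accuracy notion, the tolerance and the degradation policy are illustrative assumptions, not taken from the patent.

```python
def evaluate_on_holdout(predict, test_inputs, test_targets, tolerance):
    """Analytical performance on a test dataset kept independent from training.

    Returns the fraction of predictions within `tolerance` of the target;
    this accuracy notion is an illustrative choice.
    """
    hits = sum(1 for x, y in zip(test_inputs, test_targets)
               if abs(predict(x) - y) <= tolerance)
    return hits / len(test_targets)


def needs_retraining(score, baseline, max_degradation=0.1):
    """Trigger retraining when performance dropped more than `max_degradation`
    below the baseline recorded after the last training (assumed policy)."""
    return score < baseline - max_degradation
```
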
  • a machine learning model is trained on the central machine learning platform, then executed on the respective decentral unit, evaluated after execution and then retrained after evaluation depending on the evaluation result e.g. detection of performance degradation.
  • a machine learning model is evaluated on the central machine learning platform before being deployed on the respective decentral unit.
  • the machine learning model can be evaluated after new test data is available etc.
  • a further aspect of the invention is a computer-implemented method.
  • a further aspect of the invention is a non-transitory computer-readable data storage medium.
  • Fig. 1 shows a schematic representation of the technical system according to the invention.
  • Fig. 2 shows a schematic representation of the technical system with its units according to an embodiment of the invention.
  • Fig. 3 shows a schematic representation of the technical system with its units according to another embodiment of the invention.
  • FIG 1 illustrates the technical system according to the invention.
  • the technical system 1 comprises the central machine learning platform 20, preferably cloud based and the decentral units 30, preferably edge devices.
  • the central machine learning platform 20 comprises the machine learning unit 22.
  • the central machine learning platform 20 can comprise further units 24, such as units for storage of data or processing of data. Such additional exemplary units 24 are shown in Figures 2 and 3 and explained further below.
  • the central machine learning platform 20 performs the training, retraining and/or monitoring of the machine learning model 12 and is connected to the communication link or the communication interface 10 such as a bus.
  • the decentral units 30 each execute their communicated trained, retrained and/or monitored machine learning model 14 (after training, retraining and/or monitoring by the central machine learning platform 20) and are also connected to the communication link or the communication interface 10. They e.g. receive the trained, retrained and/or monitored machine learning model 14 via the communication link or the communication interface 10.
  • the present invention is directed to a hybrid scalable lifecycle management platform that automates the entire lifecycle for a large number of AI systems that shall be deployed to resource-constrained edge devices in the field, as depicted in Figures 1 to 3.
  • the core platform or central machine learning platform 20 comprises a multitude of microservices that run on an elastically scalable standard server infrastructure (e.g. x86, 64 bit, standard Linux OS).
  • the platform 20 comprises e.g. microservices for data collection and data integration etc. according to Figure 2.
  • the trained AI systems 14 such as Artificial Neural Network (ANN) run on resource-constrained edge devices 30 (e.g. ARM, 32 bit, 64 bit, custom Linux OS). Thereby, the edge devices 30 can be installed on secondary substations, as shown in Figure 2.
  • SGW1050 is the name of an exemplary edge device.
  • a central message bus (e.g. using MQTT) 10, to which the core platform 20 and the edge devices 30 are connected, is used for communication such as data upload from the edge devices 30 to the core platform 20 and event messages for updating the deployments of AI systems to edge devices 30.
  • FIG. 3 shows a schematic representation of the technical system 1 with the central machine learning platform 20 and the edge devices 30 in more detail, especially the platform 20 with a plurality of distinct units according to another embodiment of the invention.
  • the platform 20 comprises the following units, according to this embodiment:
  • the master data manager requests or receives the master data about the available measurements or data points (equally referred to as measuring data points) at edge devices 30 and from the other software systems via the shared message bus or other communication interfaces (e.g. REST APIs) 10, integrates them into a consistent master data model in a common namespace and inserts them into the master database.
  • the shared message bus or other communication interfaces e.g. REST APIs
  • the master database stores the integrated master data about e.g. the edge devices 30, their location, their installation status, any available measurements or data points at assets (e.g. secondary substations) near the edge devices 30, available measurements or data points (e.g. measurements from smart meters under a substation) from other software systems (e.g. a meter data management system) and/or services (e.g. state estimation) which are e.g. activated at the edge devices 30.
  • assets e.g. secondary substations
  • available measurements or data points e.g. measurements from smart meters under a substation
  • services e.g. state estimation
  • the edge device data manager receives the dynamic data from the edge devices on the shared message bus and inserts them into the edge device database.
  • the edge device database stores the dynamic data (e.g. timeseries data, log data, audio data, image data and/or video data) received from the edge devices.
  • dynamic data e.g. timeseries data, log data, audio data, image data and/or video data
  • the data manager for the dynamic external data receives the dynamic data (e.g. weather data that is required for model training) from the other software systems via a machine to machine communication 10 and inserts them into the database for external data.
  • the dynamic data e.g. weather data that is required for model training
  • the database for dynamic external data stores the dynamic external data received from the other software systems.
  • the dynamic data provided by the field devices and the external software systems are often not synchronized, since not all devices and/or software systems have an exact time reference and thus will not stamp the data with the exact time.
  • time series data collected in the edge devices and in other software systems often have different sampling rates.
  • the data synchronization manager is required in these cases which is configured to synchronize the dynamic data from the edge devices and the other software systems to a common time base and sampling rate.
  • the synchronized data is stored in the database for synchronized dynamic data.
  • the database for synchronized dynamic data stores the dynamic data synchronized by the data synchronization manager.
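Synchronization to a common time base and sampling rate can be sketched by resampling each series with linear interpolation; the interpolation scheme is an illustrative choice, not prescribed by the patent.

```python
def synchronize(series, t_start, t_end, step):
    """Resample a timestamped series onto a common time base.

    `series` is a list of (timestamp, value) pairs sorted by timestamp; values
    on the new grid are obtained by linear interpolation between the two
    bracketing samples. Timestamps outside the series range are skipped.
    """
    out = []
    t = t_start
    while t <= t_end:
        # find the bracketing samples for grid point t
        for (t0, v0), (t1, v1) in zip(series, series[1:]):
            if t0 <= t <= t1:
                w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append((t, v0 + w * (v1 - v0)))
                break
        t += step
    return out
```

Applying the same grid (`t_start`, `t_end`, `step`) to every incoming series from the edge devices and the external software systems yields data on one common time base and sampling rate.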
  • the application manager provides the public interfaces for managing the core platform e.g. activates AI services like the power grid state estimation on edge devices.
  • the training manager orchestrates the training process of an AI system for a specific edge device with the associated data. It extracts a relevant subset of the available data, determines the space of hyperparameters to explore for training the AI system, schedules training jobs on appropriate compute infrastructure e.g. in the cloud, and stores the best performing AI system among the set of explored possibilities in the model store.
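The orchestration of a training process over a hyperparameter space can be sketched as follows; the exhaustive grid search and the callable interfaces are illustrative assumptions standing in for the platform's training jobs.

```python
import itertools


def orchestrate_training(train, evaluate, data, hyperparameter_space):
    """Explore a hyperparameter space and keep the best-performing AI system.

    `train(data, **params)` returns a model and `evaluate(model)` a score to
    maximize; both are placeholders for scheduled training jobs, and the
    exhaustive grid search is an illustrative scheduling strategy.
    """
    names = sorted(hyperparameter_space)
    best_model, best_score, best_params = None, float("-inf"), None
    for values in itertools.product(*(hyperparameter_space[n] for n in names)):
        params = dict(zip(names, values))
        model = train(data, **params)
        score = evaluate(model)
        if score > best_score:
            best_model, best_score, best_params = model, score, params
    return best_model, best_score, best_params
```

The best model returned here would then be exported to the model store; in a real platform each `train` call would be scheduled as a job on appropriate compute infrastructure rather than run in a loop.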
  • the machine learning core defines the AI system to be trained and operated in the field.
  • the core is configured for data preprocessing, feature selection, feature engineering, training the AI system, evaluation of the trained AI system and/or exporting a trained AI system as an artifact to be stored in the model store.
  • the model store is a storage system for exported AI system artifacts i.e. AI models and their metadata (e.g. performance on test data, used training dataset, etc.). It supports adding new artifacts and retrieving existing artifacts based on a unique identifier of the artifact.
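The model store behavior (adding artifacts with metadata, retrieval by unique identifier) can be sketched in a few lines; this in-memory version is an illustrative assumption, since a real store would persist artifacts to durable storage.

```python
import uuid


class ModelStore:
    """Minimal model store: add artifacts with metadata, retrieve by unique id."""

    def __init__(self):
        self._artifacts = {}

    def add(self, artifact, metadata):
        """Store an exported AI-system artifact and return its unique identifier."""
        artifact_id = str(uuid.uuid4())
        self._artifacts[artifact_id] = (artifact, dict(metadata))
        return artifact_id

    def get(self, artifact_id):
        """Retrieve an existing artifact and its metadata by identifier."""
        return self._artifacts[artifact_id]
```
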
  • the model performance monitor uses the machine learning core (software library) and an exported AI system artifact from the model store in order to determine the performance of the AI system on a test dataset that was not used in the training process.
  • the performance of a deployed AI system is evaluated on a regular basis in order to detect performance degradation.
  • the results of each performance evaluation are stored in the master data DB.
  • the deployment manager initiates the build process of a container that contains a self-contained runtime environment of a trained AI system ready for execution on an edge device.
  • the container build manager builds a self-contained runtime environment for a trained AI system and stores it in a container registry.
  • The container registry stores the containers with the self-contained runtime environment for a trained AI system.
  • Machine learning model before Training, Retraining and/or Monitoring
  • Trained, retrained and/or monitored machine learning model after Training, Retraining and/or Monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

Technical systems for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally. The invention relates to technical systems for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models to be executed decentrally, comprising: a. a communication link or a communication interface; b. the communication link or the communication interface being configured for bidirectional communication of data between a central machine learning platform having a machine learning unit and a plurality of decentral units; c. the central machine learning platform and the plurality of decentral units; d. the central machine learning platform having the machine learning unit being connected to the communication link or the communication interface; e. the central machine learning platform being configured to train, retrain and/or monitor a respective machine learning model for each decentral unit of the plurality of decentral units on the basis of training data, the training data of the decentral unit being received from the respective decentral unit of the plurality of decentral units and/or from at least one other unit; f. the central machine learning platform being configured to provide the trained, retrained and/or monitored machine learning model via the communication link or the communication interface for the respective decentral unit of the plurality of decentral units; g. each decentral unit of the plurality of decentral units being connected to the communication link or the communication interface; h.
each decentral unit of the plurality of decentral units being configured to receive the trained, retrained and/or monitored machine learning model of the generated plurality of machine learning models from the central machine learning platform via the communication link or the communication interface; i. each decentral unit of the plurality of decentral units being configured to execute the received trained, retrained and/or monitored machine learning model. Furthermore, the invention relates to a computer-implemented method and a non-transitory computer-readable data storage medium.
EP22717402.6A 2021-04-09 2022-03-31 Technical system for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally Pending EP4305552A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP21167631.7A EP4071670A1 (fr) 2021-04-09 2021-04-09 Technical system for a centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally
PCT/EP2022/058668 WO2022214391A1 (fr) 2021-04-09 2022-03-31 Technical system for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally

Publications (1)

Publication Number Publication Date
EP4305552A1 (fr) 2024-01-17

Family

ID=75441790

Family Applications (2)

Application Number Title Priority Date Filing Date
EP21167631.7A Pending EP4071670A1 (fr) 2021-04-09 2021-04-09 Technical system for a centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally
EP22717402.6A Pending EP4305552A1 (fr) 2021-04-09 2022-03-31 Technical system for the centralized generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP21167631.7A Pending EP4071670A1 (fr) 2021-04-09 2021-04-09 Technical system for the central generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally

Country Status (2)

Country Link
EP (2) EP4071670A1 (fr)
WO (1) WO2022214391A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4365779A1 (fr) * 2022-11-04 2024-05-08 Helsing GmbH Procédé et dispositifs de surveillance de données de performance d'un modèle d'ia

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200372402A1 (en) * 2019-05-24 2020-11-26 Bank Of America Corporation Population diversity based learning in adversarial and rapid changing environments

Also Published As

Publication number Publication date
EP4071670A1 (fr) 2022-10-12
WO2022214391A1 (fr) 2022-10-13

Similar Documents

Publication Publication Date Title
CN111355606B (zh) Adaptive scaling system and method for container clusters serving web applications
CN114328198A (zh) System fault detection method, apparatus, device and medium
CN105046327A (zh) Smart grid information system and method based on machine learning technology
EP3748811A1 (fr) Method for configuring an intelligent electronic device and associated system
Yang et al. A novel PMU fog based early anomaly detection for an efficient wide area PMU network
CN115280741A (zh) System and method for autonomous monitoring and recovery in hybrid energy management
EP4305552A1 (fr) Technical system for the central generation of a plurality of trained, retrained and/or monitored machine learning models, wherein the generated machine learning models are executed decentrally
CN114138501B (zh) Processing method and apparatus for edge intelligence services for on-site safety monitoring
WO2020206699A1 (fr) Prédiction de défaillances d'attribution de machine virtuelle sur des grappes de nœuds de serveur
Sifat et al. Design, development, and optimization of a conceptual framework of digital twin electric grid using systems engineering approach
Rathfelder et al. Capacity planning for event-based systems using automated performance predictions
Liu et al. The design and implementation of the enterprise level data platform and big data driven applications and analytics
Shih et al. Implementation and visualization of a netflow log data lake system for cyberattack detection using distributed deep learning
CN116226067A (zh) Log management method, log management apparatus, processor and log platform
CN115220131A (zh) Meteorological data quality-control method and system
Li et al. An automated data engineering pipeline for anomaly detection of IoT sensor data
CN111476316B (zh) Method and system for mean clustering of power load feature data under cloud computing
CN114116252A (zh) Storage system and method for operational measurement data of a dispatching and control system
CN110999263B (zh) Hierarchical data processing for IoT device clusters
Sheeba et al. WFCM based big sensor data error detection and correction in wireless sensor network
Nechifor et al. Event detection for urban dynamic data streams
Pérez et al. Analysis of an edge-computing-based solution for local data processing at secondary substations
Han et al. Design of On-Orbit Monitoring Method for Small Satellite Payload Equipment Based on Grey Prediction
Wen et al. Orchestrating networked machine learning applications using Autosteer
Ruhe et al. Distributed Processing System for Monitoring using Digital Twins in Medium Voltage Grids

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231009

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR