WO2020260016A1 - Method and device for training a machine learning system - Google Patents

Method and device for training a machine learning system

Info

Publication number
WO2020260016A1
WO2020260016A1 (application PCT/EP2020/066033)
Authority
WO
WIPO (PCT)
Prior art keywords
machine learning
learning system
image
gen
dis
Application number
PCT/EP2020/066033
Other languages
German (de)
English (en)
Inventor
Nianlong GU
Lydia Gauerhof
Original Assignee
Robert Bosch Gmbh
Application filed by Robert Bosch GmbH
Priority to US17/610,669 (US20220245932A1)
Priority to CN202080046427.9A (CN113994349A)
Publication of WO2020260016A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/088 - Non-supervised learning, e.g. competitive learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • The invention relates to a method for training a machine learning system, a training device, a computer program and a machine-readable storage medium.
  • The paper "CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training" (arXiv preprint arXiv:1703.10155, 2017) by Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua offers an overview of known generative methods such as variational autoencoders and generative adversarial networks.
  • In particular, the invention relates to a computer-implemented method for generating an augmented data set of input images for training a machine learning system that is set up for the classification and/or semantic segmentation of input images, using a first machine learning system, in particular a first neural network, which is designed as the decoder of an autoencoder, and a second machine learning system, in particular a second neural network, which is designed as the encoder of the autoencoder. By means of the encoder, latent variables are determined from each of the input images; the input images are classified as a function of the determined characteristics of their image data; and an augmented input image of the augmented data set is determined from at least one of the input images as a function of average values of the determined latent variables in at least two of the classes, the image classes being selected in such a way that the input images classified therein correspond with regard to their characteristics in a predeterminable set of other features.
  • The augmented input image is determined by means of the decoder as a function of a determined augmented latent variable. In this way, a modified image can be generated efficiently. The augmented latent variable is determined from a specifiable one of the determined latent variables and a difference between the average values; the feature of the image that corresponds to the specifiable latent variable is thus varied.
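This relationship can be written compactly. The exact formula is not reproduced in the text, so the following is a sketch under the assumption that the augmented latent variable is obtained by shifting a determined latent variable by the scaled difference of the class-wise average latent variables (a is the scale factor introduced further below, and z̄_A, z̄_B are the average latent variables of the two classes):

```latex
z^{(i)}_{\mathrm{new}} = z^{(i)} + a\,\bigl(\bar{z}_B - \bar{z}_A\bigr),
\qquad
x^{(i)}_{\mathrm{new}} = \mathrm{GEN}\bigl(z^{(i)}_{\mathrm{new}}\bigr),
\qquad a \in [0, 1]
```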
  • The generated augmented data set can be used to check whether the machine learning system, in particular an already trained one, is robust, and the training is then continued as a function of this check; in particular, the machine learning system is trained with the generated augmented data set only if the check has shown that the machine learning system is not robust.
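A minimal sketch of this check-then-train logic; the helper names is_robust and train_fn are hypothetical and only stand for the robustness check and the continued training mentioned above:

```python
def check_and_continue_training(model, augmented_data, is_robust, train_fn):
    """Continue training only if the check on the augmented data set shows that
    the (in particular already trained) machine learning system is not robust."""
    if not is_robust(model, augmented_data):
        train_fn(model, augmented_data)
    return model
```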
  • Monitoring of the machine learning system is carried out by means of a monitoring unit which comprises the first machine learning system and the second machine learning system: the input image is fed to the second machine learning system, which uses it to determine a low-dimensional latent variable, from which the first machine learning system reconstructs the input image; depending on the input image and the reconstructed input image, it is then decided whether the machine learning system is robust or not.
  • The monitoring unit also includes a third machine learning system of a neural network system, the second machine learning system being designed to determine the latent variable again from the higher-dimensional constructed image, and the third machine learning system being designed to distinguish whether an image fed to it is a real image or not.
  • The decision can depend on which value the activation in a predeterminable feature map of the third machine learning system assumes when the input image is fed to it, and which value the activation in the predeterminable feature map of the third machine learning system assumes when the reconstructed input image is fed to it.
  • The first machine learning system is trained in such a way that the activation in a predeterminable feature map of the feature maps of the third machine learning system assumes, as far as possible, the same value when the third machine learning system is supplied with a real image or with an image of the real image reconstructed by a series connection of the second machine learning system and the first machine learning system. It has been shown that the training converges particularly well as a result.
  • The first machine learning system is also trained in such a way that the third machine learning system does not, as far as possible, recognize that an image generated by the first machine learning system is not a real image. This ensures particularly robust anomaly detection.
  • The second machine learning system, and in particular only the second machine learning system, is trained in such a way that a reconstruction of the latent variable determined by a series connection of the first machine learning system and the second machine learning system is as similar as possible to the latent variable. It was recognized that the convergence of the method is considerably improved if this reconstruction is chosen so that only the parameters of the second machine learning system are trained, since otherwise the cost functions of the encoder and of the generator are difficult to reconcile with one another.
  • the third machine learning system is trained so that it recognizes as much as possible that an image generated by the first machine learning system is not a real image and / or that the third machine learning system is also trained in such a way that it recognizes as far as possible that a real image fed to it is a real image.
  • the monitoring is particularly reliable, since it is particularly easy to ensure that the statistical distributions of the training data sets are comparable (namely: identical).
  • the invention relates to a computer program which is set up to carry out the above methods and to a machine-readable storage medium on which this computer program is stored.
  • FIG. 1 schematically shows a structure of an embodiment of the invention
  • FIG. 2 schematically shows an exemplary embodiment for controlling an at least partially autonomous robot
  • FIG. 3 schematically shows an exemplary embodiment for controlling a manufacturing machine
  • FIG. 4 schematically shows an exemplary embodiment for controlling an access system
  • FIG. 5 schematically shows an exemplary embodiment for controlling a monitoring system
  • FIG. 6 schematically shows an exemplary embodiment for controlling a personal assistant
  • FIG. 7 schematically shows an exemplary embodiment for controlling a medical imaging system
  • FIG. 8 shows a possible structure of the monitoring unit 61
  • FIG. 9 shows a possible structure of a first training device 141
  • FIG. 10 shows the neural network system
  • FIG. 11 shows a possible structure of a second training device 140.
  • FIG. 1 shows an actuator 10 in its surroundings 20 in interaction with a control system 40. At preferably regular time intervals, the surroundings 20 are sensed by a sensor 30, in particular an imaging sensor such as a video sensor, which can also be provided as a plurality of sensors, for example a stereo camera. Other imaging sensors are also conceivable, such as radar, ultrasound or lidar; a thermal imaging camera is also conceivable.
  • The sensor signal S (or, in the case of a plurality of sensors, one sensor signal S each) is transmitted from the sensor 30 to the control system 40.
  • the control system 40 thus receives a sequence of sensor signals S.
  • the control system 40 uses this to determine control signals A, which are transmitted to the actuator 10.
  • The control system 40 receives the sequence of sensor signals S from the sensor 30 in an optional receiving unit 50, which converts the sequence of sensor signals S into a sequence of input images x. The input image x can, for example, be a section or a further processing of the sensor signal S; for example, the input image x can comprise individual frames of a video recording. In other words, the input image x is determined as a function of the sensor signal S.
  • Input images x are fed to a machine learning system, in the exemplary embodiment an artificial neural network 60.
  • The artificial neural network 60 is preferably parameterized by parameters F, which are stored in a parameter memory P and provided by it.
  • The artificial neural network 60 determines output variables y from the input images x. The output variables y can in particular include a classification and/or semantic segmentation of the input images x.
  • Output variables y are fed to an optional conversion unit 80, which uses this to determine control signals A which are fed to the actuator 10 in order to control the actuator 10 accordingly.
  • Output variable y includes information about objects that sensor 30 has detected.
  • the control system 40 further comprises a monitoring unit 61 for monitoring the functioning of the artificial neural network 60.
  • The monitoring unit 61 is also supplied with the input image x. Depending on this, it determines a monitoring signal d, which is likewise supplied to the conversion unit 80.
  • the control signal A is also determined as a function of the monitoring signal d.
  • The monitoring signal d characterizes whether the neural network 60 determines the output variables y reliably or not. If the monitoring signal d indicates an unreliability, the control signal A is determined in accordance with a secured operating mode (while it is otherwise determined in a normal operating mode).
  • the secured operating mode can include, for example, that a dynamic of the actuator 10 is reduced, or that functionalities for controlling the actuator 10 are switched off.
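A minimal sketch of how the conversion unit 80 could take the monitoring signal d into account; the signal layout and the factor used to reduce the actuator dynamics are assumptions for illustration, not taken from the text:

```python
def form_control_signal(y: dict, d_reliable: bool) -> dict:
    """Determine the control signal A from output variables y and monitoring signal d."""
    A = {"throttle": float(y.get("throttle", 0.0)), "steer": float(y.get("steer", 0.0))}
    if not d_reliable:
        # secured operating mode: e.g. reduce the dynamics of the actuator 10
        A = {name: 0.5 * value for name, value in A.items()}
    return A  # otherwise the normal operating mode applies
```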
  • the actuator 10 receives the control signals A, is controlled accordingly and carries out a corresponding action.
  • the actuator 10 can include control logic (not necessarily structurally integrated), which determines a second control signal from the control signal A, with which the actuator 10 is then controlled.
  • In further embodiments, the control system 40 includes the sensor 30. In still further embodiments, the control system 40 alternatively or additionally also includes the actuator 10.
  • The control system 40 comprises one or a plurality of processors 45 and at least one machine-readable storage medium 46 on which instructions are stored which, when they are executed on the processors 45, cause the control system 40 to carry out the method according to the invention.
  • a display unit 10a is provided as an alternative or in addition to the actuator 10.
  • FIG. 2 shows how the control system 40 can be used to control an at least partially autonomous robot, here an at least partially autonomous motor vehicle 100.
  • the sensor 30 can, for example, be a video sensor preferably arranged in the motor vehicle 100.
  • The artificial neural network 60 is set up to reliably identify objects from the input images x.
  • The actuator 10, which is preferably arranged in the motor vehicle 100, can be, for example, a brake, a drive or a steering system of the motor vehicle 100.
  • The control signal A can then be determined in such a way that the actuator or actuators 10 are controlled so that the motor vehicle 100, for example, avoids a collision with the objects reliably identified by the artificial neural network 60, in particular when they are objects of certain classes, e.g. pedestrians.
  • the at least partially autonomous robot can also be another mobile robot (not shown), for example one that moves by flying, swimming, diving or stepping.
  • the mobile robot can also be, for example, an at least partially autonomous lawnmower or an at least partially autonomous cleaning robot.
  • The control signal A can be determined in such a way that the drive and/or steering of the mobile robot are controlled so that the at least partially autonomous robot, for example, avoids a collision with objects identified by the artificial neural network 60.
  • control signal A can be used to control the display unit 10a, and for example the determined safe areas can be displayed.
  • Alternatively or additionally, the display unit 10a can be activated with the control signal A in such a way that it emits an optical or acoustic warning signal if it is determined that the motor vehicle 100 is about to collide with one of the reliably identified objects.
  • FIG. 3 shows an exemplary embodiment in which the control system 40 is used to control a manufacturing machine 11 of a production system 200 by controlling an actuator 10 that controls this manufacturing machine 11.
  • the manufacturing machine 11 can, for example, be a machine for punching, sawing, drilling and / or cutting.
  • The sensor 30 can then be, for example, an optical sensor which records, e.g., properties of manufactured products 12a, 12b.
  • FIG. 4 shows an exemplary embodiment in which the control system 40 is used to control an access system 300.
  • the access system 300 may include a physical access control, for example a door 401.
  • The video sensor 30 is set up to detect a person. This captured image can be interpreted by means of the object identification system 60. The actuator 10 can be a lock which, depending on the control signal A, releases the access control or not, for example opens the door 401 or not.
  • the control signal A can be selected depending on the interpretation of the object identification system 60, for example depending on the identified identity of the person.
  • a logical access control can also be provided.
  • FIG. 5 shows an exemplary embodiment in which the control system 40 is used to control a monitoring system 400.
  • This exemplary embodiment differs from the exemplary embodiment shown in FIG. 4 in that instead of the actuator 10, the display unit 10a is provided, which is controlled by the control system 40.
  • For example, the artificial neural network 60 can reliably determine an identity of the objects recorded by the video sensor 30 in order to infer, for example, which of them are suspicious, and the control signal A can then be selected such that this object is shown highlighted in color by the display unit 10a.
  • FIG. 6 shows an exemplary embodiment in which the control system 40 is used to control a personal assistant 250.
  • the sensor 30 is preferably an optical sensor that receives images of a gesture by a user 249.
  • The control system 40 determines a control signal A for the personal assistant 250, for example by the neural network carrying out gesture recognition. This determined control signal A is then transmitted to the personal assistant 250, which is thus controlled accordingly.
  • The determined control signal A can in particular be selected such that it corresponds to a presumed desired control by the user 249. This presumed desired control can be determined as a function of the gesture recognized by the artificial neural network 60.
  • Depending on the presumed desired control, the control system 40 can then select the control signal A for transmission to the personal assistant 250.
  • This corresponding control can include, for example, that the personal assistant 250 retrieves information from a database and reproduces it for the user 249 in a perceptible manner.
  • FIG. 7 shows an exemplary embodiment in which the control system 40 is used to control a medical imaging system 500, for example an MRT, X-ray or ultrasound device.
  • The sensor 30 can, for example, be provided by an imaging sensor.
  • the display unit 10a is controlled by the control system 40.
  • the neural network 60 can determine whether an area recorded by the imaging sensor is conspicuous, and the control signal A can then be selected such that this area is shown highlighted in color by the display unit 10a.
  • FIG. 8 shows a possible structure of the monitoring unit 61
  • the input image x is fed to an encoder ENC, which uses this to determine a so-called latent variable z.
  • the latent variable z has a smaller dimensionality than the input image x.
  • This latent variable z is fed to a generator GEN, which uses it to generate a reconstructed image x̂.
  • the encoder ENC and generator GEN are each given by a convolutional neural network.
  • The input image x and the reconstructed image x̂ are fed to a discriminator DIS.
  • The discriminator DIS has been trained to determine, as well as possible, a variable which characterizes whether an image fed to the discriminator DIS is a real image or whether it was generated by the generator GEN. This is explained in more detail below in connection with FIG. 10.
  • Like the encoder ENC and the generator GEN, the discriminator DIS is also a convolutional neural network.
  • Feature maps of an l-th layer (where l is a predefinable number), which result when the discriminator DIS is supplied with the input image x or with the reconstructed image x̂, are denoted by DIS_l(x) and DIS_l(x̂), respectively. These are fed to an evaluator BE, which determines from them an anomaly value A(x), for example as a proportion of the input images of a reference data set (for example a training data set with which the discriminator DIS and/or the generator GEN and/or the encoder ENC was trained).
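A hedged sketch of the evaluator BE. The text above does not fully specify the comparison criterion, so the assumption here is that the anomaly value A(x) is the proportion of reference images whose feature-map distance between image and reconstruction does not exceed the distance obtained for the current input image x; the callables dis_l, gen and enc stand for DIS_l, GEN and ENC:

```python
import numpy as np

def feature_distance(dis_l, gen, enc, x):
    """Distance between the l-th discriminator feature maps of x and of its reconstruction GEN(ENC(x))."""
    x_hat = gen(enc(x))
    return float(np.mean((dis_l(x) - dis_l(x_hat)) ** 2))

def anomaly_value(dis_l, gen, enc, x, reference_images):
    """A(x): proportion of the reference data set whose distance does not exceed that of x."""
    d_x = feature_distance(dis_l, gen, enc, x)
    d_ref = [feature_distance(dis_l, gen, enc, r) for r in reference_images]
    return float(np.mean([d <= d_x for d in d_ref]))
```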
  • FIG. 9 shows a possible structure of a first training device 141 for training the monitoring unit 61. This is parameterized with parameters q, which comprise generator parameters q_GEN that parameterize the generator GEN, encoder parameters q_ENC that parameterize the encoder ENC, and discriminator parameters q_DIS that parameterize the discriminator DIS.
  • The training device 141 comprises a provider 71, which provides input images e from a training data set.
  • Input images e are fed to the monitoring unit 61 to be trained, which determines output variables a therefrom.
  • Output variables a and input images e are fed to an assessor 74 which, as described in connection with FIG. 10, determines new parameters q 'therefrom, which are transmitted to parameter memory P and replace parameters q there.
  • The methods executed by the training device 141 can be implemented as a computer program, stored on a machine-readable storage medium 146 and executed by a processor 145.
  • FIG. 10 illustrates the interaction of generator GEN, encoder ENC and discriminator DIS during training.
  • the arrangement of generator GEN, encoder ENC and discriminator DIS shown here is also referred to in this document as a neural network system.
  • First, the discriminator DIS is trained. The following steps for training the discriminator DIS can, for example, be repeated n_DIS times, where n_DIS is a specifiable whole number.
  • In each of these steps, input images x^(i) are provided; these input images x^(i) are real images that are made available, for example, from a database. The entirety of these input images is also referred to as the training data set.
  • Latent variables z^(i) are also drawn at random from a probability distribution p_z. In addition, a stack of random variables is drawn at random from a probability distribution p_ε, which is, for example, a uniform distribution over the interval [0; 1].
  • The latent variables z are fed to the generator GEN and yield a constructed input image x̂ = GEN(z).
  • The generator GEN and the encoder ENC are then trained. Here too, real input images and randomly chosen latent variables are provided, and a reconstruction cost function of the input image x and a reconstruction cost function of the latent variable z are determined.
  • New generator parameters, encoder parameters and discriminator parameters then replace the generator parameters q_GEN, encoder parameters q_ENC and discriminator parameters q_DIS.
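The following sketch puts the training steps described above together. It assumes PyTorch modules GEN, ENC and DIS, that DIS returns one logit per image, and a hypothetical helper DIS.feature_l that returns the activation of the predeterminable l-th feature map; the loss weighting is likewise an assumption:

```python
import torch
import torch.nn.functional as F

def training_step(GEN, ENC, DIS, opt_gen, opt_enc, opt_dis, real_images, z_dim, n_dis=1):
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator DIS n_dis times: real images should be recognised as real,
    #    images constructed by GEN from randomly drawn latent variables as not real.
    for _ in range(n_dis):
        fake = GEN(torch.randn(batch, z_dim)).detach()
        loss_dis = (F.binary_cross_entropy_with_logits(DIS(real_images), ones)
                    + F.binary_cross_entropy_with_logits(DIS(fake), zeros))
        opt_dis.zero_grad(); loss_dis.backward(); opt_dis.step()

    # 2) Train the generator GEN: the activation of the predeterminable feature map of DIS should be
    #    as equal as possible for a real image and for its reconstruction GEN(ENC(x)), and DIS should,
    #    as far as possible, not recognise generated images as not real.
    with torch.no_grad():
        z_x = ENC(real_images)  # the encoder is not updated through this loss
    recon = GEN(z_x)
    loss_feat = F.mse_loss(DIS.feature_l(recon), DIS.feature_l(real_images).detach())
    loss_adv = F.binary_cross_entropy_with_logits(DIS(GEN(torch.randn(batch, z_dim))), ones)
    loss_gen = loss_feat + loss_adv
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()

    # 3) Train only the encoder ENC: ENC(GEN(z)) should reconstruct the latent variable z
    #    (the generator output is detached so that only encoder parameters receive this gradient).
    z = torch.randn(batch, z_dim)
    loss_lat = F.mse_loss(ENC(GEN(z).detach()), z)
    opt_enc.zero_grad(); loss_lat.backward(); opt_enc.step()

    return loss_dis.item(), loss_gen.item(), loss_lat.item()
```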
  • FIG. 11 shows an exemplary second training device 140 for training the neural network 60.
  • Training device 140 comprises a provider 72 which provides input images x and target output variables ys, for example target classifications.
  • Input image x is fed to the artificial neural network 60 to be trained, which determines output variables y therefrom.
  • The output variables y and the desired output variables ys are fed to a comparator 75 which, depending on the match between the respective output variables y and desired output variables ys, determines new parameters therefrom, which are transmitted to the parameter memory P and replace the parameters F there.
  • The methods executed by the training system 140 can be implemented as a computer program, stored on a machine-readable storage medium 148 and executed by a processor 147.
  • A data set comprising input images x and associated target output variables ys can be augmented or generated (for example by the provider 72) as follows.
  • A data set of input images is provided. These are classified according to predeterminable characteristics (named "A" and "B", for example) of a feature; for example, vehicles can be classified according to the feature "headlights switched on" or "headlights switched off", or identified cars according to the body type "sedan" (Limousine) or "station wagon" (Kombi).
  • For images from the set I_A, new latent variables are now formed with a predeterminable scale factor a, which can, for example, assume values between 0 and 1. Analogously, new latent variables can be formed for images from the set I_B.
  • From these new latent variables, new images can be generated by means of the generator GEN.
  • the associated target output variable ys can be adopted unchanged.
  • the augmented data set can thus be generated and the neural network 60 can be trained with it. This ends the procedure.
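A sketch of this augmentation, assuming numpy arrays and callables ENC and GEN for the encoder and the generator; the exact update formula is not reproduced in the text above, so shifting by the scaled difference of the class-wise average latent variables is an assumption consistent with the description:

```python
import numpy as np

def augment_data_set(images_A, images_B, labels_A, ENC, GEN, a=0.5):
    """images_A / images_B: input images showing characteristic "A" / "B" of the chosen feature
    (e.g. headlights switched off / on) and agreeing in the predeterminable other features."""
    z_A = np.stack([ENC(x) for x in images_A])             # latent variables of the set I_A
    z_B = np.stack([ENC(x) for x in images_B])             # latent variables of the set I_B
    z_bar_A, z_bar_B = z_A.mean(axis=0), z_B.mean(axis=0)  # average latent variables per class

    # New latent variables for images from I_A, shifted towards class "B" with the scale factor a
    # in [0, 1]; for images from I_B the difference would be applied with the opposite sign.
    z_new = z_A + a * (z_bar_B - z_bar_A)

    # New images are generated by the decoder/generator GEN; the associated
    # target output variables ys are adopted unchanged.
    x_new = [GEN(z) for z in z_new]
    return x_new, list(labels_A)
```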

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for training a machine learning system (60), comprising steps for generating an augmented data set comprising input images (x^(i)) for training the machine learning system (60), which is set up for the classification and/or semantic segmentation of input images (x), with a first machine learning system (GEN), in particular a first neural network, which is designed as the decoder (GEN) of an autoencoder (ENC-GEN), and a second machine learning system (ENC), in particular a second neural network, which is designed as the encoder (ENC) of the autoencoder (ENC-GEN). Latent variables (z^(i)) are determined from each of the input images (x^(i)) by means of the encoder (ENC), the input images (x^(i)) are classified according to the determined characteristics of their image data, and an augmented input image (x^(i)_new) of the augmented data set is determined from at least one of the input images (x^(i)) as a function of average values (z̄_A, z̄_B) of the determined latent variables (z^(i)) in at least two classes, the image classes being selected such that the input images (x^(i)) classified therein agree, with regard to their characteristics, in a predeterminable set of other features.
PCT/EP2020/066033 2019-06-28 2020-06-10 Procédé et dispositif d'apprentissage d'un système d'apprentissage automatique WO2020260016A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/610,669 US20220245932A1 (en) 2019-06-28 2020-06-10 Method and device for training a machine learning system
CN202080046427.9A CN113994349A (zh) 2019-06-28 2020-06-10 用于训练机器学习系统的方法和设备

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019209566.6A DE102019209566A1 (de) 2019-06-28 2019-06-28 Verfahren und Vorrichtung zum Trainieren eines maschinellen Lernsystems
DE102019209566.6 2019-06-28

Publications (1)

Publication Number Publication Date
WO2020260016A1 true WO2020260016A1 (fr) 2020-12-30

Family

ID=71092522

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/066033 WO2020260016A1 (fr) 2019-06-28 2020-06-10 Procédé et dispositif d'apprentissage d'un système d'apprentissage automatique

Country Status (4)

Country Link
US (1) US20220245932A1 (fr)
CN (1) CN113994349A (fr)
DE (1) DE102019209566A1 (fr)
WO (1) WO2020260016A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467881A (zh) * 2021-09-01 2021-10-01 南方电网数字电网研究院有限公司 图表样式自动化调整方法、装置、计算机设备和存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210224511A1 (en) * 2020-01-21 2021-07-22 Samsung Electronics Co., Ltd. Image processing method and apparatus using neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017223166A1 (de) * 2017-12-19 2019-06-19 Robert Bosch Gmbh Verfahren zum automatischen Klassifizieren

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200144398A (ko) * 2019-06-18 2020-12-29 삼성전자주식회사 클래스 증가 학습을 수행하는 장치 및 그의 동작 방법

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017223166A1 (de) * 2017-12-19 2019-06-19 Robert Bosch Gmbh Verfahren zum automatischen Klassifizieren

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANDERS LARSEN ET AL: "Autoencoding beyond pixels using a learned similarity metric", 10 February 2016 (2016-02-10), arXiv.org, pages 1 - 8, XP055724529, Retrieved from the Internet <URL:https://arxiv.org/pdf/1512.09300.pdf> [retrieved on 20200824] *
CHOE JUNSUK ET AL: "Face Generation for Low-Shot Learning Using Generative Adversarial Networks", 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), IEEE, 22 October 2017 (2017-10-22), pages 1940 - 1948, XP033303655, DOI: 10.1109/ICCVW.2017.229 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467881A (zh) * 2021-09-01 2021-10-01 南方电网数字电网研究院有限公司 图表样式自动化调整方法、装置、计算机设备和存储介质
CN113467881B (zh) * 2021-09-01 2021-11-16 南方电网数字电网研究院有限公司 图表样式自动化调整方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
US20220245932A1 (en) 2022-08-04
DE102019209566A1 (de) 2020-12-31
CN113994349A (zh) 2022-01-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20732551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20732551

Country of ref document: EP

Kind code of ref document: A1