EP4276712A1 - Method and device for operating a technical system - Google Patents

Method and device for operating a technical system

Info

Publication number
EP4276712A1
EP4276712A1 (application EP22173335.5A)
Authority
EP
European Patent Office
Prior art keywords
latent variable
series data
time
fuel cell
cell stack
Prior art date
Legal status
Pending
Application number
EP22173335.5A
Other languages
English (en)
French (fr)
Inventor
Jakob Lindinger
Christoph Lippert
Barbara Rakitsch
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Priority to EP22173335.5A priority Critical patent/EP4276712A1/de
Priority to PCT/EP2023/062296 priority patent/WO2023217792A1/en
Publication of EP4276712A1 publication Critical patent/EP4276712A1/de
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]

Definitions

  • the invention relates to a method and device for operating a technical system.
  • Gaussian process state-space models use Gaussian processes as the transition function in a state-space model to describe time series data in a fully probabilistic manner. These models have two types of latent variables, the temporal states required for modelling noisy sequential observations, and the so-called inducing outputs which are needed to treat the Gaussian process part of the model in an efficient manner.
  • the computer-implemented method and the device according to the independent claims provide a model and a combination of these inference methods by treating the two different types of latent variables in the Gaussian process state-space model distinctly, applying variational inference to the Gaussian process part of the model and the Laplace approximation to the temporal states of the model.
  • the distinction of the two types of latent variables makes it possible to process the model efficiently.
  • the method does not require sequential sampling of the temporal states during inference and instead performs the Laplace approximation, which involves a joint optimization over those temporal states. This helps in optimizing the model.
  • the approximate posterior that is used in the model further assumes that dynamics can be locally linearly approximated.
  • the improvements in the optimization that are provided by this model also lead to better calibrated uncertainties for different time-series prediction tasks.
  • the computer-implemented method for machine learning with time-series data representing observations related to a technical system comprises providing the time-series data, and model parameters of a distribution over the time-series data and over a first latent variable and over a second latent variable, and variational parameters of an approximate distribution over the second latent variable, sampling a value of the second latent variable from the approximate distribution over the second latent variable, finding a value of the first latent variable depending on a density of the distribution over the time-series data and over the first latent variable and over the value of the second latent variable, in particular a value that maximizes this density, determining a Hessian depending on a second order Taylor approximation of the distribution over the time-series data and the first latent variable and the value of the second latent variable evaluated at the value of the first latent variable, determining a determinant of the Hessian, determining a Laplace approximation of a distribution over the time-series data conditioned on the value of the second latent variable depending on the determinant of the Hessian, evaluating an approximate lower bound that depends on the Laplace approximations determined for a plurality of values of the second latent variable, and updating the model parameters and the variational parameters depending on gradients of the approximate lower bound.
  • This method uses a distinction of two types of latent variables, and uses an approximate posterior, and assumes that the dynamics can be locally linearly approximated.
  • This method provides an improved way of doing inference in Gaussian process state-space models.
  • the method has the following advantages: The method does not require sequential sampling of the temporal states during inference and instead performs the Laplace approximation that involves a joint optimization over those temporal states.
  • providing the time-series data comprises receiving the time-series data or receiving a sensor signal comprising information about the technical system and determining the time-series data depending on the sensor signal.
  • the method preferably comprises determining an instruction for actuating the technical system depending on the time-series data, the model parameters and the variational parameters, and outputting the instruction to cause the technical system to act.
  • the technical system is a computer-controlled machine, like a robot, in particular a vehicle, a domestic appliance, a power tool, a manufacturing machine, a personal assistant or an access control system.
  • the technical system may comprise an engine or a part thereof, wherein the time-series data comprises as input to the technical system a speed and/or a load, and as output of the technical system an emission, a temperature of the engine, or an oxygen content in the engine.
  • the technical system may comprise a fuel cell stack or a part thereof, wherein the time-series data comprises as input to the technical system a current in the fuel cell stack, a hydrogen concentration in the fuel cell stack, a stoichiometry of an anode or a cathode of the fuel cell stack, a volume stream of a coolant for the fuel cell stack, an anode pressure for an anode of the fuel cell stack, a cathode pressure for a cathode of the fuel cell stack, an inlet temperature of a coolant for the fuel cell stack, an outlet temperature of a coolant for the fuel cell stack, an anode dew point temperature of an anode of the fuel cell stack, a cathode dew point temperature of a cathode of the fuel cell stack, and as output of the technical system (102) an average of the cell voltages across cells of the fuel cell stack, an anode pressure drop at an anode of the fuel cell stack, a cathode pressure drop at a cathode of the fuel cell stack, a coolant pressure drop between an inlet and an outlet for the coolant of the fuel cell stack, or a coolant temperature rise between an inlet and an outlet for the coolant of the fuel cell stack.
  • the instruction preferably comprises a target operating mode for the technical system.
  • the method may comprise determining the determinant of the Hessian depending on a factorization comprising a strictly upper triangular part of a part of the Hessian, a strictly lower triangular part of the part of the Hessian, and a block diagonal matrix of recursively defined blocks of a matrix. This is a very computing-resource-efficient way of determining the determinant of the Hessian.
  • the method may comprise determining the inverse of the Hessian depending on a factorization comprising a strictly upper triangular part of a part of the Hessian, a strictly lower triangular part of the part of the Hessian, and a block diagonal matrix of recursively defined blocks of a matrix. This is a very computing-resource-efficient way of determining the inverse of the Hessian.
  • Evaluating the approximate lower bound may comprise sampling with samples of the second latent variable that are drawn from the approximate distribution over the second latent variable.
  • the device for machine learning with time-series data representing observations related to a technical system comprises at least one processor and at least one memory, wherein the at least one processor is adapted to execute instructions that when executed by the at least one processor cause the device to perform steps in a method for operating the technical system.
  • This device provides advantages that correspond to the advantages the method provides.
  • the device may comprise an interface that is adapted to receive information about the technical system and/or that is adapted to output an instruction that causes the technical system to act. This device is capable of interacting with the technical system.
  • a computer program may comprise computer readable instructions that when executed by a computer cause the computer to perform the steps of the method.
  • Figure 1 depicts a device 100 for operating a technical system 102 schematically.
  • the device 100 comprises at least one processor 104 and at least one memory 106.
  • the at least one processor 104 is adapted to execute instructions that when executed by the at least one processor 104 cause the device 100 to perform steps in a method for operating the technical system 102.
  • the device 100 in the example comprises an interface 108.
  • the interface 108 is for example adapted to receive information about the technical system 102.
  • the interface 108 is for example adapted to output an instruction that causes the technical system 102 to act.
  • the technical system 102 may comprise an actuator 110.
  • the actuator 110 may be connected at least temporarily with the interface 108 via a signal line 112.
  • Figure 2 depicts steps of the method.
  • the time series data Y T comprises for example noisy observations from the technical system 102.
  • the method may consider additional d_u-dimensional time series data U_T with u_t ∈ R^{d_u}.
  • the technical system 102 may comprise an engine or a part thereof.
  • the time-series data may comprise as input to the technical system 102 a speed and/or a load, and as output of the technical system 102 an emission, a temperature of the engine, or an oxygen content in the engine.
  • the technical system 102 may comprise a fuel cell stack or a part thereof.
  • the time-series data may comprise as input to the technical system 102 a current in the fuel cell stack, a hydrogen concentration in the fuel cell stack, a stoichiometry of an anode or a cathode of the fuel cell stack, a volume stream of a coolant for the fuel cell stack, an anode pressure for an anode of the fuel cell stack, a cathode pressure for a cathode of the fuel cell stack, an inlet temperature of a coolant for the fuel cell stack, an outlet temperature of a coolant for the fuel cell stack, an anode dew point temperature of an anode of the fuel cell stack, a cathode dew point temperature of a cathode of the fuel cell stack, and as output of the technical system 102 an average of the cell voltages across cells of the fuel cell stack, an anode pressure drop at an anode of the fuel cell stack, a cathode pressure drop at a cathode of the fuel cell stack, a coolant pressure drop between an inlet and an outlet for the coolant of the fuel cell stack, or a coolant temperature rise between an inlet and an outlet for the coolant of the fuel cell stack.
  • the method operates on the given time-series data Y T for a given number I of iterations i and a given number N of samples n.
  • the method is based on a probabilistic model and an approximate model.
  • the approximate model is based on the fully independent training conditional, FITC, assumption. Details of this assumption are described e.g. in Edward Snelson and Zoubin Ghahramani. Sparse Gaussian Processes using Pseudo-inputs. In Advances in Neural Information Processing Systems, 2005.
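  • for illustration, the FITC conditional of a sparse Gaussian process can be sketched in a few lines of Python; the squared-exponential kernel and the names rbf_kernel and fitc_conditional are assumptions of this sketch, not prescribed by the method:

```python
import torch

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on inputs a: (n, d), b: (m, d).
    d2 = torch.cdist(a, b).pow(2)
    return variance * torch.exp(-0.5 * d2 / lengthscale ** 2)

def fitc_conditional(x, X_M, F_M, jitter=1e-6):
    # Under FITC, the GP evaluations at the inputs x are conditionally
    # independent given the inducing outputs F_M at inducing inputs X_M,
    # so only marginal means and variances are needed.
    K_MM = rbf_kernel(X_M, X_M) + jitter * torch.eye(X_M.shape[0])
    K_xM = rbf_kernel(x, X_M)
    L = torch.linalg.cholesky(K_MM)
    A = torch.cholesky_solve(K_xM.T, L)                # K_MM^{-1} K_Mx
    mean = (K_xM @ torch.cholesky_solve(F_M.unsqueeze(-1), L)).squeeze(-1)
    var = rbf_kernel(x, x).diagonal() - (K_xM * A.T).sum(-1)
    return mean, var
```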
  • the joint density of the model factorizes as p_θ(Y_T | X_{T_0}) = ∏_{t=1}^T p_θ(y_t | x_t). The emission densities p_θ(y_t | x_t) are left unspecified, and the transition model is given by p_θ(x_t | x_{t-1}, f_{t-1}) = N(x_t | f_{t-1}, Q), a Gaussian centred on the Gaussian process evaluation f_{t-1} = f(x_{t-1}) with process noise covariance Q.
  • the kernel of the Gaussian process accepts input pairs of dimension R^{d_x + d_u}.
  • the probabilistic model comprises a first latent variable.
  • Gaussian Process posteriors can be summarized by sparse Gaussian processes in which the information of the posterior is contained in the pseudo-dataset ( X M , F M ) where X M are the inducing inputs and F M are the inducing outputs.
  • the inducing outputs F_M and the latent function values F_T share a joint Gaussian distribution p_θ(F_T, F_M).
  • the model employs the fully independent training conditional approximation that assumes independence of the latent GP evaluations given the inducing outputs: p_θ(F_T | X_{T_0}, F_M) = ∏_{t=1}^{T-1} p_θ(f_t | F_M). The joint density then factorizes as p_θ(Y_T, X_{T_0}, F_M) = p_θ(Y_T, X_{T_0} | F_M) p(F_M).
  • the inducing output F M is a second latent variable.
  • the approximate model comprises a distribution over the time-series data Y_T and the first latent variable, e.g. the temporal states X_{T_0}, and the second latent variable, e.g. the inducing outputs F_M.
  • the approximate model comprises an approximate distribution q_φ(F_M) = N(F_M | m, S) over the inducing output F_M with mean m and covariance S. These are e.g. given initial variational parameters φ = {m, S}.
  • the inducing input X_M and the inducing output F_M are referred to as a pseudo data point.
  • the approximate model comprises a predictive distribution p(x_t | x_{t-1}, F_M) = N(x_t | μ(x_{t-1}), Σ(x_{t-1})) whose mean and covariance follow from conditioning the Gaussian process on the inducing outputs F_M.
  • the method comprises a step 200.
  • the step 200 comprises providing the time series data Y T .
  • the time series data U T may be provided and used additionally.
  • the method may comprise receiving the time series data Y T at the interface 108.
  • Step 200 may comprise receiving a sensor signal comprising information about the technical system 102 and determining the time-series data Y T depending on the sensor signal.
  • the time series data U T may be received or determined from a received sensor signal additionally.
  • the time series data u t is for example concatenated with the latent state x t and used as input to the transition model and kernel function.
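  • a minimal sketch of this concatenation, reusing the illustrative rbf_kernel from the sketch above (all shapes are assumed for the example):

```python
import torch

T, d_x, d_u, M = 50, 2, 1, 10
x_traj = torch.randn(T, d_x)             # latent state trajectory (illustrative)
u_traj = torch.randn(T, d_u)             # control trajectory (illustrative)
X_M = torch.randn(M, d_x + d_u)          # inducing inputs live in R^{d_x + d_u}

z = torch.cat([x_traj, u_traj], dim=-1)  # concatenated kernel inputs, (T, d_x + d_u)
K_zM = rbf_kernel(z, X_M)                # cross-covariance with the inducing inputs
```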
  • the step 200 comprises providing a given true distribution p_θ(Y_T, X_{T_0}, F_M) with given initial model parameters θ.
  • the method comprises an outer loop 202 and an inner loop 204.
  • the model and variational parameters are optimized.
  • the samples are used to obtain a stochastic approximation to the log-likelihood.
  • the inner loop 204 comprises a step 204-1.
  • the step 204-1 comprises sampling a value of the second latent variable from the approximate distribution over the second latent variable.
  • an inducing output sample F_M^(n) is determined from the distribution q_φ(F_M) over the inducing output F_M: F_M^(n) ~ q_φ(F_M).
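  • step 204-1 may be sketched as reparameterized sampling; parameterizing the covariance S by a Cholesky factor L_S is an assumption of this sketch:

```python
import torch

def sample_inducing_outputs(m, L_S, n_samples=1):
    # F_M^(n) ~ q_phi(F_M) = N(m, S) with S = L_S L_S^T, written so that
    # gradients can flow into the variational parameters m and L_S.
    eps = torch.randn(n_samples, m.shape[0])
    return m + eps @ L_S.T
```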
  • the inner loop 204 comprises a step 204-2.
  • the step 204-2 comprises finding a value of the first latent variable depending on the density of the distribution over the time-series data and the first latent variable and the value of the second latent variable.
  • step 204-2 comprises finding a value of the first latent variable for which the density of the distribution over the time-series data and the first latent variable and the value of the second latent variable is maximized.
  • in the example, the logarithmic density g_GP(X_{T_0} | F_M^(n)) is maximized with respect to the latent states X_{T_0}: X̂_{T_0}^(n) = argmax_{X_{T_0}} g_GP(X_{T_0} | F_M^(n)).
  • the method comprises finding a mode X̂_{T_0}^(n) that maximizes the logarithmic density.
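  • a sketch of this joint optimization over all temporal states; the use of L-BFGS is an assumption, and log_density stands for g_GP(· | F_M^(n)):

```python
import torch

def find_mode(log_density, X_init, max_iter=100):
    # Jointly optimize over all temporal states X_{T_0} at once;
    # no sequential sampling of the states is required.
    X = X_init.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([X], max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = -log_density(X)   # maximize the logarithmic density
        loss.backward()
        return loss

    opt.step(closure)
    return X.detach()
```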
  • the inner loop 204 comprises a step 204-3.
  • the step 204-3 comprises determining a Hessian of the logarithmic density g_GP(X_{T_0} | F_M^(n)) depending on the mode X̂_{T_0}^(n) of the first latent variable, the model parameters θ, and the value of the second latent variable F_M^(n).
  • the Hessian is used to provide a second order Taylor approximation of g_GP(X_{T_0} | F_M^(n)) around the mode X̂_{T_0}^(n). Note that Y_T is constant.
  • the non-zero elements of the Hessian are determined with only 3 d_x vector-Hessian products, reducing the memory and time requirements to O(T d_x^2).
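  • one way to realize this with automatic differentiation is to probe the block tridiagonal Hessian with structured vectors; the 3-colouring of the time index below is an illustrative implementation choice, not taken from the patent:

```python
import torch
from torch.autograd.functional import hvp

def hessian_columns(g, X_hat, d_x):
    # States more than one step apart do not interact, so the Hessian of g
    # at X_hat is block tridiagonal; vectors that activate every third time
    # step keep the block contributions separable, giving all non-zero
    # entries from 3 * d_x Hessian-vector products.
    T = X_hat.shape[0]
    X_flat = X_hat.reshape(-1)
    columns = []
    for offset in range(3):
        for j in range(d_x):
            v = torch.zeros(T * d_x)
            v[torch.arange(offset, T, 3) * d_x + j] = 1.0
            _, Hv = hvp(lambda x: g(x.reshape(T, d_x)), X_flat, v)
            columns.append(Hv.reshape(T, d_x))
    return columns   # O(T d_x^2) memory in total
```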
  • the inner loop 204 comprises a step 204-4.
  • the step 204-4 comprises determining a determinant of the Hessian.
  • the determinant of the Hessian is determined for example depending on a factorization comprising a strictly upper triangular part of a part of the Hessian, a strictly lower triangular part of the part of the Hessian, and a block diagonal matrix of recursively defined blocks of a matrix.
  • a determinant det H(A_t, B_t) of the Hessian H(A_t, B_t) is evaluated.
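  • a sketch of such a recursive evaluation for a symmetric block tridiagonal Hessian with diagonal blocks A_t and sub-diagonal blocks B_t; the particular recursion D_1 = A_1, D_t = A_t - B_{t-1} D_{t-1}^{-1} B_{t-1}^T is one standard choice and an assumption here:

```python
import torch

def block_tridiag_logdet(A, B):
    # A: list of T diagonal blocks (d_x, d_x); B: list of T-1 sub-diagonal
    # blocks. Then log|H| = sum_t log|D_t| over recursively defined blocks.
    logdet = torch.zeros(())
    D_prev = None
    for t, A_t in enumerate(A):
        D = A_t if t == 0 else A_t - B[t - 1] @ torch.linalg.solve(D_prev, B[t - 1].T)
        L = torch.linalg.cholesky(D)
        logdet = logdet + 2.0 * torch.log(torch.diagonal(L)).sum()
        D_prev = D
    return logdet
```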
  • the inner loop 204 comprises a step 204-5.
  • the step 204-5 comprises determining a Laplace approximation of a distribution over the time-series data conditioned on the value of the second latent variable depending on the determinant of the Hessian.
  • in the example, the Laplace approximation p̂_θ(Y_T | F_M^(n)) is evaluated.
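  • given the log density at the mode and the determinant, the Laplace approximation of step 204-5 reduces to one line; here k = T d_x counts the optimized state entries and H denotes the negative Hessian at the mode, assumed positive definite:

```python
import math

def laplace_log_marginal(g_at_mode, logdet_H, k):
    # log p_hat_theta(Y_T | F_M^(n)) ≈ g(X̂) + (k/2) log(2π) - (1/2) log|H|
    return g_at_mode + 0.5 * k * math.log(2.0 * math.pi) - 0.5 * logdet_H
```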
  • the inner loop 204 comprises a step 204-6.
  • the step 204-6 comprises determining an inverse of the Hessian.
  • the inverse of the Hessian is determined for example depending on the factorization comprising the strictly upper triangular part of the part of the Hessian, the strictly lower triangular part of the part of the Hessian, and the block diagonal matrix of recursively defined blocks of the matrix.
  • the inverse H^{-1} of the Hessian H is determined.
  • H^{-1} = (Λ + B)^{-1} Λ (Λ + B^T)^{-1}, where Λ is the block diagonal matrix of recursively defined blocks and B is the strictly triangular part of the factorization.
  • the inner loop 204 comprises a step 204-7.
  • the step 204-7 comprises determining a Jacobian of the distribution over the time-series data and the first latent variable and the value of the second latent variable, evaluated at the mode X̂_{T_0}^(n).
  • the outer loop 202 comprises a step 202-1.
  • the step 202-1 comprises evaluating an approximate lower bound that depends on the Laplace approximations that are determined for a plurality of values of the second latent variable.
  • an approximate lower bound L(θ, φ) = ∫ q_φ(F_M) log p̂_θ(Y_T | F_M) dF_M - KL(q_φ(F_M) ‖ p_θ(F_M)) is evaluated, which comprises a Kullback-Leibler term, KL-term, for comparing the approximate distribution q_φ(F_M) with the prior distribution p_θ(F_M).
  • the likelihood p̂_θ(Y_T | F_M) is given by the Laplace approximation around X̂_{T_0}.
  • the plurality of values of the second latent variable are the samples of the second latent variable that are determined in step 204-1 when processing the inner loop repeatedly. This means, the approximate lower bound is evaluated depending on samples of the second latent variable that are drawn from the approximate distribution over the second latent variable.
  • q ⁇ ( F M ) is a Gaussian distribution. This allows an analytical evaluation of the KL-term.
  • the other term of L(θ, φ) is analytically intractable. In the example the other term is approximated by sampling: ∫ q_φ(F_M) log p̂_θ(Y_T | F_M) dF_M ≈ (1/N) ∑_{n=1}^N log p̂_θ(Y_T | F_M^(n)).
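  • a sketch of the bound: the Monte Carlo average of the Laplace terms plus an analytic Gaussian KL-term; the prior parameters m0 and L0 are illustrative names:

```python
import torch

def elbo_estimate(laplace_logliks, m, L_S, m0, L0):
    # L(theta, phi) ≈ (1/N) sum_n log p_hat_theta(Y_T | F_M^(n))
    #                 - KL(q_phi(F_M) || p_theta(F_M));
    # both distributions are Gaussian, so the KL-term is closed form.
    q = torch.distributions.MultivariateNormal(m, scale_tril=L_S)
    p = torch.distributions.MultivariateNormal(m0, scale_tril=L0)
    kl = torch.distributions.kl_divergence(q, p)
    return torch.stack(laplace_logliks).mean() - kl
```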
  • the outer loop 202 comprises a step 202-2.
  • the step 202-2 comprises determining gradients of the Laplace approximations depending on the inverse Hessians and the Jacobians.
  • the outer loop 202 comprises a step 202-3.
  • the step 202-3 comprises updating the model parameters and variational parameters depending on the gradients.
  • model parameters ⁇ and variational parameters ⁇ ⁇ m , S ⁇ are updated.
  • the aforementioned steps of the method describe an inference method to learn the model parameters θ and variational parameters φ of a Gaussian process state-space model in a training. These steps may be performed in an offline phase, e.g. for given time-series data Y_T and optionally given additional time series data U_T.
  • the following steps of the method may be executed for a prediction e.g. in an online phase. These steps may be executed with a trained model, i.e. with given model parameters ⁇ and variational parameters ⁇ . These steps may be executed independently of the training, i.e. without training, or jointly with the training after the training.
  • model parameters ⁇ and variational parameters ⁇ that are determined in a last iteration of updating the model parameters ⁇ and variational parameters ⁇ are used for the prediction.
  • the method may comprise a step 206. The step 206 for example comprises determining an instruction for actuating the technical system 102 depending on the time-series data, the model parameters θ and the variational parameters φ.
  • the additional time series data U T may be used as well.
  • the time-series data comprises as input to the approximate model of the technical system 102 a speed and/or a load.
  • the output of the approximate model of the technical system 102 is an emission, a temperature of the engine, or an oxygen content in the engine.
  • the time-series data comprises as input to the approximate model of the technical system 102 a current in the fuel cell stack, a hydrogen concentration in the fuel cell stack, a stoichiometry of an anode or a cathode of the fuel cell stack, a volume stream of a coolant for the fuel cell stack, an anode pressure for an anode of the fuel cell stack, a cathode pressure for a cathode of the fuel cell stack, an inlet temperature of a coolant for the fuel cell stack, an outlet temperature of a coolant for the fuel cell stack, an anode dew point temperature of an anode of the fuel cell stack, a cathode dew point temperature of a cathode of the fuel cell stack.
  • the output of the approximate model of the technical system 102 is an average of the cell voltages across cells of the fuel cell stack, an anode pressure drop at an anode of the fuel cell stack, a cathode pressure drop at a cathode of the fuel cell stack, a coolant pressure drop between an inlet and an outlet for the coolant of the fuel cell stack, or a coolant temperature rise between an inlet and an outlet for the coolant of the fuel cell stack.
  • the instruction for example comprises a target operating mode for the technical system 102.
  • the target operating mode may be determined depending on the output of the approximate model, e.g. by a controller or a characteristic curve or a map that maps the output to the target operating mode.
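  • such a characteristic curve lookup might be sketched as linear interpolation; curve_x and curve_y are hypothetical sample points of the curve:

```python
import torch

def target_operating_mode(output, curve_x, curve_y):
    # Map the predicted output (e.g. a temperature) to the target operating
    # mode by interpolating a monotone characteristic curve.
    x = output.clamp(curve_x[0], curve_x[-1])
    idx = torch.searchsorted(curve_x, x).clamp(1, len(curve_x) - 1)
    w = (x - curve_x[idx - 1]) / (curve_x[idx] - curve_x[idx - 1])
    return curve_y[idx - 1] + w * (curve_y[idx] - curve_y[idx - 1])
```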
  • the method may comprise a step 208.
  • the step 208 comprises outputting the instruction to cause the technical system 102 to act.
  • the instruction for example comprises the target operating mode for the technical system 102.
  • the time series data Y T may be processed in the training in minibatches.
  • the method may be applied to minibatches.
  • the method may comprise drawing a minibatch for each sample from the approximate distribution q_φ(F_M) and approximating the term ∫ q_φ(F_M) log p̂_θ(Y_T | F_M) dF_M ≈ (T / T_b) (1/N) ∑_{n=1}^N log p̂_θ(Y_{T_b}^(n) | F_M^(n)), where Y_{T_b}^(n) is a sub-trajectory of length T_b.
  • the method is applied to one-dimensional or multidimensional latent states x t alike.
  • for multi-dimensional latent states x_t, an independent Gaussian process may be used for each dimension of the latent state x_t: p_θ(x_t | x_{t-1}, f_{t-1}) = ∏_{d=1}^{d_x} N(x_t^(d) | μ^(d), Σ^(d)), where x_t^(d) is the d-th dimension of the latent state, μ^(d) is the mean and Σ^(d) is the covariance of the Gaussian process of the d-th dimension.
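  • a sketch of the factorized transition density with one independent Gaussian process per latent dimension; using per-dimension standard deviations sigmas instead of full covariances is a simplification of this sketch:

```python
import torch

def transition_log_density(x_t, mus, sigmas):
    # log p_theta(x_t | ...) = sum_d log N(x_t^(d) | mu^(d), Sigma^(d)):
    # one independent Gaussian process per latent dimension d.
    log_p = torch.zeros(())
    for d in range(x_t.shape[-1]):
        log_p = log_p + torch.distributions.Normal(mus[d], sigmas[d]).log_prob(x_t[..., d])
    return log_p
```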

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
EP22173335.5A 2022-05-13 2022-05-13 Method and device for operating a technical system Pending EP4276712A1 (de)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22173335.5A EP4276712A1 (de) 2022-05-13 2022-05-13 Method and device for operating a technical system
PCT/EP2023/062296 WO2023217792A1 (en) 2022-05-13 2023-05-09 Method and the device for operating a technical system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22173335.5A EP4276712A1 (de) 2022-05-13 2022-05-13 Method and device for operating a technical system

Publications (1)

Publication Number Publication Date
EP4276712A1 2023-11-15

Family

ID=81653515

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22173335.5A Pending EP4276712A1 (de) 2022-05-13 Method and device for operating a technical system

Country Status (2)

Country Link
EP (1) EP4276712A1 (de)
WO (1) WO2023217792A1 (de)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210011447A1 (en) * 2018-01-30 2021-01-14 Robert Bosch Gmbh Method for ascertaining a time characteristic of a measured variable, prediction system, actuator control system, method for training the actuator control system, training system, computer program, and machine-readable storage medium
EP3716160A1 (de) * 2019-03-26 2020-09-30 Robert Bosch GmbH Learning parameters of a probabilistic model with Gaussian processes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IALONGO, Alessandro Davide; VAN DER WILK, Mark; HENSMAN, James; RASMUSSEN, Carl Edward: "Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models", International Conference on Machine Learning, 2019
SKAUG, Hans Julius; FOURNIER, David A.: "Automatic approximation of the marginal likelihood in non-Gaussian hierarchical models", Comput. Stat. Data Anal., vol. 51, 2006, pages 699-709, XP024955545, DOI: 10.1016/j.csda.2006.03.005

Also Published As

Publication number Publication date
WO2023217792A1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
US20230274125A1 (en) Learning observation representations by predicting the future in latent space
Barthelmé et al. Expectation propagation for likelihood-free inference
Ardia Financial Risk Management with Bayesian Estimation of GARCH Models Theory and Applications
Richard et al. Efficient high-dimensional importance sampling
Hajivassiliou et al. Classical estimation methods for LDV models using simulation
Ober et al. Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes
US7092920B2 (en) Method and apparatus for determining one or more statistical estimators of customer behavior
CN111191457A (zh) Natural language semantic recognition method and apparatus, computer device and storage medium
US11475279B2 (en) Computational implementation of gaussian process models
CN112712117A (zh) Multivariate time series classification method and system based on fully convolutional attention
Ranjan et al. Bayes analysis of some important lifetime models using MCMC based approaches when the observations are left truncated and right censored
CN112381079A (zh) Image processing method and information processing device
Titov et al. Constituent parsing with incremental sigmoid belief networks
Overstall et al. Bayesian optimal design for ordinary differential equation models with application in biological science
Herzog et al. Data-driven modeling and prediction of complex spatio-temporal dynamics in excitable media
Katsoulakis et al. Data-driven, variational model reduction of high-dimensional reaction networks
Gundersen et al. Active multi-fidelity Bayesian online changepoint detection
Koskela Neural network methods in analysing and modelling time varying processes
EP4276712A1 (de) Verfahren und vorrichtung zum betrieb eines technischen systems
Liu et al. A unified inference for predictive quantile regression
Liu et al. Hessian regularization of deep neural networks: A novel approach based on stochastic estimators of Hessian trace
Malefaki et al. An EM and a stochastic version of the EM algorithm for nonparametric Hidden semi-Markov models
Pearce et al. Bayesian neural network ensembles
de Feo The averaging principle for non-autonomous slow-fast stochastic differential equations and an application to a local stochastic volatility model
Griffin et al. Testing sparsity-inducing penalties

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240515

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR