CN110990766A - Data prediction method and storage medium - Google Patents

Data prediction method and storage medium

Info

Publication number
CN110990766A
CN110990766A
Authority
CN
China
Prior art keywords
data
covariance matrix
prediction
processing
prediction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910989996.6A
Other languages
Chinese (zh)
Inventor
王江林
方东城
马原
文述生
闫少霞
李宁
周光海
肖浩威
黄劲风
徐丹龙
杨艺
马然
丁永祥
庄所增
潘伟锋
张珑耀
刘国光
郝志刚
陶超
韦锦超
赵瑞东
潘军兆
闫志愿
陈奕均
黄海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South GNSS Navigation Co Ltd
Original Assignee
South GNSS Navigation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South GNSS Navigation Co Ltd filed Critical South GNSS Navigation Co Ltd
Priority to CN201910989996.6A priority Critical patent/CN110990766A/en
Publication of CN110990766A publication Critical patent/CN110990766A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/148Wavelet transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Abstract

The invention discloses a data prediction method and a storage medium. The data prediction method comprises the following steps: acquiring sample data of a time window; data preprocessing: performing centralization, DWT discrete wavelet transform, and periodic differencing on the global sample data; data regression analysis: acquiring a plurality of prior models for a time window, calculating their covariance matrices, fusing these covariance matrices into a new covariance matrix, and taking it as the covariance matrix of a new model; data prediction: predicting data with the new model, and applying inverse differencing, inverse wavelet transform, and de-centralization to the predicted data in sequence. Owing to the data preprocessing, the model can be computed with fewer data resources and low model-fitting complexity, making the method cheaper and more efficient than traditional prediction processes.

Description

Data prediction method and storage medium
Technical Field
The invention relates to the technical field of data processing, in particular to a data prediction method and a storage medium.
Background
The construction of smart cities requires big-data support, and new data processing methods keep emerging, spanning fields as broad as economics, construction, computing, the Internet of Things, and surveying and mapping. Growing attention is paid to problems such as how to separate good data from bad in a large data set, how to use existing theory to build mathematical models that obey physical laws, and how to use such models to predict data that may appear in the future, moving textbook principles into the application stage.
Current data prediction methods include trend extrapolation, regression prediction, Kalman-filter prediction models, BP neural-network prediction models, and deep-learning prediction models. The most common are trend extrapolation and regression prediction, which have the following drawbacks: data preprocessing is simplistic, large amounts of historical data are required, and the results overfit easily. The Kalman-filter prediction model is generally suited to high-speed prediction and performs poorly on low-speed targets. The BP neural-network prediction model is currently popular, but it handles time series poorly, and once the model is established it is costly to update. Deep learning has high training costs (large data requirements, long training times, and expensive hardware) and also shares the drawbacks of the BP neural network.
Disclosure of Invention
In view of the above technical problems, an object of the present invention is to provide a data prediction method that solves the prior-art problems of large data-volume requirements and low efficiency.
The technical scheme adopted by the invention is as follows:
a method of data prediction, comprising:
acquiring sample data of a time window;
the data preprocessing process comprises the following steps: performing centralization processing on global data of the sample data, and performing DWT discrete wavelet transform on the data after the centralization processing; carrying out set period difference processing on the data subjected to DWT discrete wavelet transform;
data regression analysis processing: the method comprises the steps of obtaining a plurality of prior models of a certain time window, calculating covariance matrixes of the prior models, carrying out fusion solving on the covariance matrixes to obtain a new covariance matrix, and taking the new covariance matrix as the covariance matrix of the new model.
And (3) data prediction: and performing data prediction through the new model, and sequentially performing inverse difference processing, wavelet inverse transformation and inverse centralization processing on the new model prediction data.
Further, the step of calculating the covariance matrices of the plurality of prior models and fusing them into a new covariance matrix comprises: taking the variation of the ARMA model parameters over the time-window data as a covariance matrix, and carrying that covariance matrix into the solution of the ARMA model parameters for the next time window to obtain the new covariance matrix.
Further, the step of obtaining a plurality of prior models for a time window comprises:
fitting different ARMA models to obtain different models;
and performing order weighting on the different models by comparing AIC values, selecting the several best models as prior local models under the minimum-AIC principle.
Further, the step of centralizing the data comprises:
randomly selecting a group of data and subtracting it from the global data, yielding the global data with that group removed;
the whole of the remaining global data is thereby moved to the vicinity of the origin of the high-dimensional space, yielding the centralized global data.
Further, the step of performing set period difference processing on the data includes:
setting a period interval, and subtracting from the data at a given moment the data one period interval earlier, to obtain the period-differenced data for that moment.
Further, the step of performing the DWT discrete wavelet transform comprises: decomposing the signal successively by the DWT, each decomposition yielding a high-frequency signal and a low-frequency signal, and decomposing the low-frequency signal again by the DWT until the specified order is reached.
Further, the step of performing inverse difference processing on the new model prediction data comprises: adding to the predicted data at a given moment the data one period interval earlier, to obtain the inverse-differenced prediction for that moment.
Further, the wavelet inverse transformation step comprises: and taking the predicted data as a low-frequency decomposition signal reaching a specified order, and performing DWT discrete wavelet inverse transformation of the specified order on the low-frequency decomposition signal to obtain wavelet inverse transformation predicted data.
Further, the de-centralization step comprises: adding the group of data subtracted during centralization back to the global predicted data, to obtain the de-centralized data.
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the data prediction method.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, through data preprocessing, the model can be calculated by using less data resources, the complexity of model fitting calculation is low, and the method is lower in cost and more efficient than the traditional prediction process. Furthermore, the effective method for updating the model by using the autoregressive coincidence and the weighted average is more accurate than the traditional prediction method.
Drawings
FIG. 1 is a flow chart illustrating a data prediction method according to the present invention;
FIG. 2 is a schematic diagram of a data prediction method according to the present invention;
FIG. 3 is a schematic diagram of DWT discrete wavelet transform performed by the data prediction method of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Embodiment:
referring to fig. 1-3, a data prediction method, as shown in fig. 1, includes:
step S1, obtaining sample data of a time window;
step S2, data preprocessing process: performing centralization processing on global data of the sample data, and performing DWT discrete wavelet transform on the data after the centralization processing; carrying out set period difference processing on the data subjected to DWT discrete wavelet transform;
step S3, data regression analysis processing: the method comprises the steps of obtaining a plurality of prior models of a certain time window, calculating covariance matrixes of the prior models, carrying out fusion solving on the covariance matrixes to obtain a new covariance matrix, and taking the new covariance matrix as the covariance matrix of the new model.
Step S4, data prediction: and performing data prediction through the new model, and sequentially performing inverse difference processing, wavelet inverse transformation and inverse centralization processing on the new model prediction data.
Further, the step of calculating the covariance matrices of the plurality of prior models and fusing them into a new covariance matrix comprises: taking the variation of the ARMA model parameters over the time-window data as a covariance matrix, and carrying that covariance matrix into the solution of the ARMA model parameters for the next time window to obtain the new covariance matrix.
Further, the step of obtaining a plurality of prior models for a time window comprises:
fitting different ARMA models to obtain different models;
and performing order weighting on the different models by comparing AIC values, selecting the several best models as prior local models under the minimum-AIC principle.
Further, the step of centralizing the data comprises:
randomly selecting a group of data and subtracting it from the global data, yielding the global data with that group removed;
the whole of the remaining global data is thereby moved to the vicinity of the origin of the high-dimensional space, yielding the centralized global data.
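As an illustrative sketch only (not part of the patent text), the centralization step and its inverse can be expressed as follows; the function names `centralize` and `decentralize` are hypothetical:

```python
import random

def centralize(groups):
    """Centralization: subtract one randomly chosen reference group from
    every group, moving the data toward the origin; return the shifted
    groups plus the reference needed later for de-centralization."""
    ref = random.choice(groups)
    centered = [[x - r for x, r in zip(g, ref)] for g in groups]
    return centered, ref

def decentralize(centered, ref):
    """Inverse step: add the reference group back."""
    return [[x + r for x, r in zip(g, ref)] for g in centered]
```

Keeping the subtracted reference group is essential, since the de-centralization step at the end of the pipeline must add exactly this group back.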
Further, the step of performing set period difference processing on the data includes:
setting a period interval, and subtracting from the data at a given moment the data one period interval earlier, to obtain the period-differenced data for that moment.
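A minimal sketch of the period-differencing step and its inverse, under the assumption of a fixed integer period; the helper names are hypothetical:

```python
def period_difference(series, period):
    """Period differencing: d[t] = x[t] - x[t - period]; the first
    `period` values have no predecessor and are dropped."""
    return [series[t] - series[t - period] for t in range(period, len(series))]

def inverse_period_difference(diffs, history, period):
    """Inverse differencing: rebuild values by adding each difference to
    the value one period earlier; `history` holds the `period` observed
    values immediately preceding the differenced run."""
    out = list(history)
    for d in diffs:
        out.append(out[-period] + d)
    return out[period:]
```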
Further, the step of performing the DWT discrete wavelet transform comprises: decomposing the signal successively by the DWT, each decomposition yielding a high-frequency signal and a low-frequency signal, and decomposing the low-frequency signal again by the DWT until the specified order is reached.
The significance of wavelet decomposition is that a signal can be decomposed at different scales, chosen according to the target at hand.
For many signals the low-frequency component is the important one, often carrying the identity of the signal, while the high-frequency component conveys detail or nuance. A human voice with its high-frequency components removed sounds different, but the words remain intelligible; remove enough low-frequency components, and only meaningless sound remains. Wavelet analysis therefore speaks of approximations and details: the approximation carries the large-scale, i.e. low-frequency, information of the signal, and the detail carries the small-scale, i.e. high-frequency, information. The original signal is passed through two complementary filters to produce these two signals. By repeatedly decomposing the approximation, the signal can be broken into many lower-resolution components. In theory the decomposition could continue indefinitely; in practice it can proceed only until the detail (high-frequency) part contains a single sample. In practical applications, an appropriate number of decomposition levels is therefore chosen according to the characteristics of the signal or a suitable criterion.
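The successive decomposition described above can be sketched with the Haar wavelet, chosen here only for simplicity; the patent does not fix a particular wavelet, so this is an illustrative assumption:

```python
def haar_dwt(signal):
    """One Haar level: pairwise means (low-frequency approximation) and
    pairwise half-differences (high-frequency detail)."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: recombine approximation and detail."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def decompose(signal, levels):
    """Repeatedly decompose the low-frequency part, as in Fig. 3,
    returning the final approximation and the per-level details."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, det = haar_dwt(approx)
        details.append(det)
    return approx, details
```

Each call to `decompose` halves the length of the approximation per level, which is exactly why the decomposition must stop once the detail contains a single sample.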
Further, the step of performing inverse difference processing on the new model prediction data comprises: adding to the predicted data at a given moment the data one period interval earlier, to obtain the inverse-differenced prediction for that moment.
Further, the wavelet inverse transformation step comprises: and taking the predicted data as a low-frequency decomposition signal reaching a specified order, and performing DWT discrete wavelet inverse transformation of the specified order on the low-frequency decomposition signal to obtain wavelet inverse transformation predicted data.
Further, the de-centralization step comprises: adding the group of data subtracted during centralization back to the global predicted data, to obtain the de-centralized data.
The aim of the prediction algorithm flow is adaptivity: when the model trained over a past period and the newly observed data are both fed into the algorithm, the result represents the latest model of the target, avoiding a model that drifts from reality when the target's motion pattern changes.
As an embodiment, the method of the present invention is composed of a data preprocessing module, a data regression analysis module, a data prediction module, and an adaptive module (as shown in fig. 2), and the following is specifically explained:
1. time window of sample data
The time window controls the amount of sample data; its function is to build prior local models so that, as the window slides, a benign self-adaptive cycle forms. The data form a multi-dimensional body along the time axis; a time window represents that body over a period of time, and the data are inextricably tied to time and change with it. The process therefore takes the variation of the ARMA model parameters over the sliding time window as a covariance matrix, and carries the covariance matrix of the previous time window (the prior local model) into the computation of the ARMA parameters of the next window to obtain a new covariance matrix (the posterior local model).
2. Data preprocessing module
Centralization compresses the data volume and reduces the storage burden during computation. A group of data is randomly selected and subtracted from the global data, moving the whole data set to the vicinity of the origin of the high-dimensional space; reducing repeated values in turn reduces repeated computation.
DWT, the discrete wavelet transform, treats the whole sample data set as a signal and separates its high- and low-frequency components, eliminating high-frequency vibration partly caused by accidental errors, as shown in Fig. 3.
X(t) is the input sample-data signal. After one DWT it is decomposed into a first-order high-frequency signal and a first-order low-frequency signal; the low-frequency signal is decomposed again into second and third orders, and so on until the specified order is reached. The order depends on the specific data; since this description does not analyze a specific data set, the problem of choosing the wavelet-transform order is not discussed here.
Periodic differencing removes certain systematic errors from the time series. For the ARMA regression algorithm, the MA part only requires the noise to be white in time, but real data are generally time-correlated. Setting a period difference, i.e., subtracting from each data point the data one period interval earlier, removes part of the unexplained time correlation and improves the applicability of the model.
3. Data regression analysis module
Model-order loop fitting. For the prior model of a time window, fitting must be performed over different ARMA orders to obtain candidate models, and the covariance matrix corresponding to each order is calculated.
The candidate orders are weighted by comparing AIC values, and the several best models are selected under the minimum-AIC principle to compute the covariance matrix. The AIC, i.e. the Akaike information criterion, is a standard measure of the goodness of fit of a statistical model; it was established and developed by the Japanese statistician Hirotugu Akaike, after whom it is named. It is grounded in the concept of entropy and balances the complexity of the estimated model against how well the model fits the data.
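A hedged sketch of minimum-AIC order selection. It is simplified to pure AR models fitted by least squares (the patent uses full ARMA models, which require more machinery); all function names are illustrative:

```python
import math

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p] + e.
    Pure-Python normal equations with a tiny ridge term to guard against
    singular systems; returns (coefficients, residual sum of squares)."""
    n = len(series)
    X = [[series[t - k] for k in range(1, p + 1)] for t in range(p, n)]
    y = series[p:]
    A = [[sum(row[j] * row[k] for row in X) for k in range(p)] for j in range(p)]
    b = [sum(row[j] * yi for row, yi in zip(X, y)) for j in range(p)]
    for j in range(p):
        A[j][j] += 1e-9  # ridge: keeps the system solvable
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, p))) / A[r][r]
    rss = sum((yi - sum(a[k] * row[k] for k in range(p))) ** 2
              for row, yi in zip(X, y))
    return a, rss

def aic(rss, n_obs, n_params):
    """Gaussian AIC: n * ln(RSS/n) + 2k (RSS clamped away from zero)."""
    return n_obs * math.log(max(rss, 1e-12) / n_obs) + 2 * n_params

def select_best_orders(series, max_p, keep):
    """Fit AR(1..max_p) and return the `keep` orders with the smallest AIC."""
    scored = sorted((aic(fit_ar(series, p)[1], len(series) - p, p), p)
                    for p in range(1, max_p + 1))
    return [p for _, p in scored[:keep]]
```

The `2k` penalty term is what balances goodness of fit against model complexity: a higher-order model always fits at least as well, but only survives selection if the fit improvement outweighs the extra parameters.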
The prior covariance matrices of the several models are fused to obtain the new model.
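The fusion rule is not given numerically in the text; one plausible reading, sketched here purely as an assumption, is an element-wise weighted average of the prior models' covariance matrices using Akaike weights:

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min) / 2),
    normalized to sum to one, so lower-AIC models weigh more."""
    m = min(aics)
    raw = [math.exp(-(a - m) / 2) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def fuse_covariances(cov_list, aics):
    """Element-wise weighted average of same-shape covariance matrices."""
    w = aic_weights(aics)
    rows, cols = len(cov_list[0]), len(cov_list[0][0])
    return [[sum(w[k] * cov_list[k][i][j] for k in range(len(cov_list)))
             for j in range(cols)] for i in range(rows)]
```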
4. Data prediction module
The predicted data are inverse-differenced to restore their original form. Since the data were differenced in step 2, the data one period interval earlier must be added back to return the data to their original state.
IDWT, the inverse wavelet transform, is applied to the predicted data. Because the predictions are made with the time-window prior local model on each signal frequency after the wavelet transform, the predictions on each decomposed signal are inverse-transformed to obtain the predicted values on the original signal.
The predicted data are de-centralized, i.e., the data subtracted during centralization are added back.
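Putting the three inverse steps together: they must mirror the forward preprocessing in reverse order. A compact self-contained round-trip sketch (single-level Haar, fixed reference value, hypothetical helper names):

```python
def forward(series, ref, period):
    """Toy forward pipeline: center by subtracting `ref`, apply one Haar
    level, then period-difference the low-frequency part."""
    c = [x - ref for x in series]
    low = [(c[2 * i] + c[2 * i + 1]) / 2 for i in range(len(c) // 2)]
    high = [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(len(c) // 2)]
    diff = [low[t] - low[t - period] for t in range(period, len(low))]
    return diff, low[:period], high

def inverse(diff, low_head, high, ref, period):
    """Inverse pipeline in reverse order: undo the differencing, then
    the wavelet transform, then add the centering reference back."""
    low = list(low_head)
    for d in diff:
        low.append(low[-period] + d)
    c = []
    for a, h in zip(low, high):
        c.extend([a + h, a - h])
    return [x + ref for x in c]
```

Applying the inverse steps in any other order would not reconstruct the original signal, which is why the description insists on the sequence inverse difference, inverse wavelet transform, de-centralization.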
The advantages of this algorithm flow are that the model is computed with fewer data resources, the complexity of model fitting is low, and the local model highlights the target's recent behavior; combining autoregressive fitting with weighted averaging forms an effective model-update method that, compared with the traditional prediction process, is cheaper, more efficient, and more accurate.
As an application, the invention can measure the historical geographic positions of a physical target over a period of time and, using discrete mathematics, mathematical modeling, and statistical decision methods, establish an algorithm flow that predicts the target's geographic position over a future period. The flow adapts itself to represent the latest position model of the target, better avoiding poor predictions caused by changes in the target's motion pattern.
The invention also provides a computer storage medium storing a computer program. If the method of the invention is implemented in the form of software functional units and sold or used as a stand-alone product, it can be stored in such a medium. On this understanding, all or part of the flow of the method embodiments may be implemented by a computer program that is stored in a computer storage medium and, when executed by a processor, carries out the steps of the method embodiments. The computer program comprises computer program code, which may take the form of source code, object code, an executable file, or some intermediate form. The computer storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of a computer storage medium may be increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer storage media exclude electrical carrier signals and telecommunications signals.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (10)

1. A method of data prediction, comprising:
acquiring sample data of a time window;
the data preprocessing process comprises the following steps: performing centralization processing on global data of sample data, performing DWT discrete wavelet transform on the data after the centralization processing, and performing set period difference processing on the data after the DWT discrete wavelet transform;
data regression analysis processing: acquiring a plurality of prior models of a certain time window, calculating covariance matrixes of the prior models, performing fusion solving on the covariance matrixes to obtain a new covariance matrix, and taking the new covariance matrix as the covariance matrix of the new model;
and (3) data prediction: and performing data prediction through the new model, and sequentially performing inverse difference processing, wavelet inverse transformation and inverse centralization processing on the new model prediction data.
2. The data prediction method of claim 1, wherein the step of calculating the covariance matrices of the plurality of prior models and fusing them into a new covariance matrix comprises: taking the variation of the ARMA model parameters over the time-window data as a covariance matrix, and carrying that covariance matrix into the solution of the ARMA model parameters for the next time window to obtain the new covariance matrix.
3. The data prediction method of claim 1, the step of obtaining a plurality of prior models for a time window comprising:
fitting different ARMA models to obtain different models;
and performing order weighting on the different models by comparing AIC values, selecting the several best models as prior local models under the minimum-AIC principle.
4. The data prediction method of claim 1, the step of centralizing the data comprising:
randomly selecting a group of data and subtracting it from the global data, yielding the global data with that group removed;
the whole of the remaining global data is thereby moved to the vicinity of the origin of the high-dimensional space, yielding the centralized global data.
5. The data prediction method of claim 1, wherein the step of performing the set period difference processing on the data comprises:
setting a period interval, and subtracting from the data at a given moment the data one period interval earlier, to obtain the period-differenced data for that moment.
6. The data prediction method of claim 1, the step of performing the DWT discrete wavelet transform comprising: decomposing the signal successively by the DWT, each decomposition yielding a high-frequency signal and a low-frequency signal, and decomposing the low-frequency signal again by the DWT until the specified order is reached.
7. The data prediction method of claim 5, the step of inverse differencing the new model prediction data comprising: adding to the predicted data at a given moment the data one period interval earlier, to obtain the inverse-differenced prediction for that moment.
8. The data prediction method of claim 6, the step of inverse wavelet transform comprising: and taking the predicted data as a low-frequency decomposition signal reaching a specified order, and performing DWT discrete wavelet inverse transformation of the specified order on the low-frequency decomposition signal to obtain wavelet inverse transformation predicted data.
9. The data prediction method of claim 4, the de-centralization step comprising: adding the group of data subtracted during centralization back to the global predicted data, to obtain the de-centralized data.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a data prediction method as claimed in any one of claims 1 to 9.
CN201910989996.6A 2019-10-17 2019-10-17 Data prediction method and storage medium Pending CN110990766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910989996.6A CN110990766A (en) 2019-10-17 2019-10-17 Data prediction method and storage medium

Publications (1)

Publication Number Publication Date
CN110990766A 2020-04-10

Family

ID=70082138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910989996.6A Pending CN110990766A (en) 2019-10-17 2019-10-17 Data prediction method and storage medium

Country Status (1)

Country Link
CN (1) CN110990766A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907064A (en) * 2021-02-08 2021-06-04 国网安徽省电力有限公司蚌埠供电公司 Electric quantity prediction method and device based on self-adaptive window, storage medium and terminal
CN112907064B (en) * 2021-02-08 2024-04-02 国网安徽省电力有限公司蚌埠供电公司 Electric quantity prediction method and device based on adaptive window, storage medium and terminal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination