CN110222840A - Cluster resource prediction method and device based on attention mechanism - Google Patents

Cluster resource prediction method and device based on attention mechanism

Info

Publication number
CN110222840A
CN110222840A (application number CN201910413227.1A; granted as CN110222840B)
Authority
CN
China
Prior art keywords
attention
weight
layer
unit
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910413227.1A
Other languages
Chinese (zh)
Other versions
CN110222840B (en)
Inventor
窦耀勇 (Dou Yaoyong)
唐家伟 (Tang Jiawei)
吴维刚 (Wu Weigang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN201910413227.1A
Publication of CN110222840A
Application granted
Publication of CN110222840B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a cluster resource prediction method and device based on an attention mechanism. An improved attention mechanism is integrated into an LSTM so that the correlations among multiple time series can be mined, and a scheme for forecasting the resource demand of a cluster from multiple time series is proposed. The scheme effectively improves prediction accuracy and assists resource planning more effectively, thereby raising the resource utilization of the cluster and reducing the operation and maintenance cost of the data center.

Description

Cluster resource prediction method and device based on attention mechanism
Technical field
The present invention relates to the technical field of cluster resource management, and in particular to a cluster resource prediction method and device based on an attention mechanism.
Background art
Data centers keep growing in scale. Managing the resources of the clusters in a data center effectively can improve the utilization of hardware resources, reduce operation and maintenance (O&M) costs, and increase operating profit. One effective way to improve resource utilization is to predict the cluster's future resource demand, so that resources can be planned in advance and waste is reduced.
Current cluster resource demand prediction mainly relies on time series data of cluster resources. Common time series prediction models include ARIMA (autoregressive integrated moving average), VAR (vector autoregression), GBRT (gradient boosted regression trees), and LSTM (long short-term memory network), all of which can be applied directly to predicting the resource demand of a cluster.
However, current cluster resource prediction methods have two main problems. First, they mostly use a single time series as the feature for prediction (e.g., ARIMA) and almost never use multiple time series, so prediction accuracy depends on whether the history of that one series contains an obvious pattern. Second, although many multivariate time series prediction models exist (e.g., VAR), these methods do not account for the characteristics of data center clusters; in particular, they ignore the correlation and mutual interference between the application loads in a cluster. Both problems make the cluster resource prediction results inaccurate.
Therefore, how to provide a method whose cluster resource prediction results are accurate is a problem that those skilled in the art urgently need to solve.
Summary of the invention
In view of this, the present invention provides a cluster resource prediction method and device based on an attention mechanism that uses multiple resource time series to predict future resource demand. Tailored to the way application loads in a cluster consume resources, it mines the correlations among multiple resource demand time series with an improved deep learning attention mechanism and can effectively improve the accuracy of resource prediction.
To achieve the above goal, the present invention adopts the following technical solution:
A cluster resource prediction method based on an attention mechanism, comprising:
S1: taking the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
S2: inputting the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
S3: inputting the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
S4: inputting the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
S5: applying a linear transformation to the current second hidden-layer state and the context vector to obtain the predicted value.
Preferably, step S1 specifically comprises:
S11: taking the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
S12: taking the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
S13: taking the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
S14: combining the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
Preferably, step S11 specifically comprises:
calculating a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
calculating a normalized deployment unit attention weight from the first attention weight using a softmax function;
calculating the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight.
Step S12 specifically comprises:
calculating, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and obtaining the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
calculating a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
obtaining a host unit attention weight based on the static temporal correlation weights and the second attention weight, and normalizing it to obtain a normalized host unit attention weight;
calculating the host unit attention output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight.
Step S13 specifically comprises:
calculating the correlation coefficients between the historical target time series in different time windows, and obtaining the corresponding autocorrelation weights;
calculating a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
obtaining an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and normalizing it to obtain a normalized autocorrelation unit attention weight;
calculating the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
Preferably, step S3 specifically comprises:
calculating a temporal attention layer weight based on the second hidden-layer state at the previous time step, and normalizing it to obtain a normalized temporal attention layer weight;
calculating the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
A cluster resource prediction device based on an attention mechanism, comprising:
a first input vector computing module, configured to take the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
a first hidden-layer state computing module, configured to input the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
a context vector computing module, configured to input the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
a second hidden-layer state computing module, configured to input the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
a linear transform module, configured to apply a linear transformation to the current second hidden-layer state and the context vector to obtain the predicted value.
Preferably, the first input vector computing module specifically comprises:
a first computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
a second computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
a third computing unit, configured to take the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
a combining unit, configured to combine the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
Preferably, the first computing unit specifically comprises:
a first attention weight calculating subunit, configured to calculate a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
a first normalized weight calculating subunit, configured to calculate a normalized deployment unit attention weight from the first attention weight using a softmax function;
a first attention layer output vector calculating subunit, configured to calculate the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight.
The second computing unit specifically comprises:
a static temporal correlation weight calculating subunit, configured to calculate, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and to obtain the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
a second attention weight calculating subunit, configured to calculate a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
a second normalized weight subunit, configured to obtain a host unit attention weight based on the static temporal correlation weights and the second attention weight, and to normalize it to obtain a normalized host unit attention weight;
a second attention layer output vector calculating subunit, configured to calculate the host unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight.
The third computing unit specifically comprises:
an autocorrelation weight calculating subunit, configured to calculate the correlation coefficients between the historical target time series in different time windows, and to obtain the corresponding autocorrelation weights;
a third attention weight calculating subunit, configured to calculate a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
a third normalized weight subunit, configured to obtain an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and to normalize it to obtain a normalized autocorrelation unit attention weight;
a third attention layer output vector subunit, configured to calculate the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
Preferably, the context vector computing module specifically comprises:
a fourth normalized weight subunit, configured to calculate a temporal attention layer weight based on the second hidden-layer state at the previous time step, and to normalize it to obtain a normalized temporal attention layer weight;
a context vector calculating subunit, configured to calculate the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
It can be seen from the above technical solutions that, compared with the prior art, the present disclosure provides a cluster resource prediction method and device based on an attention mechanism. An improved attention mechanism is integrated into an LSTM, the correlations among multiple time series can be mined, and a scheme for forecasting the resource demand of a cluster from multiple time series is proposed. This effectively improves prediction accuracy and assists resource planning more effectively, thereby raising the resource utilization of the cluster and reducing the O&M cost of the data center.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a cluster resource prediction method based on an attention mechanism provided by the present invention;
Fig. 2 is a detailed flowchart of calculating the first input vector provided by the present invention;
Fig. 3 is a schematic diagram of a cluster resource prediction device based on an attention mechanism provided by the present invention;
Fig. 4 is a schematic diagram of the composition of the first input vector computing module provided by the present invention;
Fig. 5 is a schematic diagram of the time series collection architecture provided by the present invention;
Fig. 6 is a schematic diagram of the attention-based prediction model provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, an embodiment of the present invention discloses a cluster resource prediction method based on an attention mechanism, comprising:
S1: taking the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
S2: inputting the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
S3: inputting the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
S4: inputting the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
S5: applying a linear transformation to the current second hidden-layer state and the context vector to obtain the predicted value.
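The five steps above can be written as a single forward pass in Python. This is a minimal sketch for illustration only: the five layer callables are hypothetical stand-ins, since the invention does not bind the steps to any particular implementation.
```python
import numpy as np

def forward_pass(h_enc_prev, h_dec_prev, X_host, X_deploy, y_hist, layers):
    """One S1-S5 prediction pass; `layers` bundles five callables."""
    input_attention, encoder, temporal_attention, decoder, linear_out = layers
    # S1: three attention layers fused into one input attention layer
    x_tilde = input_attention(h_enc_prev, X_host, X_deploy, y_hist)
    # S2: LSTM encoder -> current first hidden-layer state
    h_enc = encoder(x_tilde, h_enc_prev)
    # S3: temporal correlation attention -> context vector
    context = temporal_attention(h_enc, h_dec_prev)
    # S4: LSTM decoder -> current second hidden-layer state
    h_dec = decoder(context, h_dec_prev, y_hist)
    # S5: linear transform of the decoder state and context -> prediction
    return linear_out(np.concatenate([h_dec, context]))
```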
When predicting resources, the present invention uses multiple resource time series rather than a single time series. In addition, tailored to the way application loads in a cluster consume resources, the improved deep learning attention mechanism mines the correlations among the multiple resource time series, which ultimately and effectively improves the accuracy of resource prediction.
Referring to Fig. 2, to further optimize the above technical solution, the embodiment of the present invention further discloses that step S1 specifically comprises:
S11: taking the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
S12: taking the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
S13: taking the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
S14: combining the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
To further optimize the above technical solution, the embodiment of the present invention further discloses that step S11 specifically comprises:
calculating a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
calculating a normalized deployment unit attention weight from the first attention weight using a softmax function;
calculating the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight.
Step S12 specifically comprises:
calculating, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and obtaining the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
calculating a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
obtaining a host unit attention weight based on the static temporal correlation weights and the second attention weight, and normalizing it to obtain a normalized host unit attention weight;
calculating the host unit attention output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight.
Step S13 specifically comprises:
calculating the correlation coefficients between the historical target time series in different time windows, and obtaining the corresponding autocorrelation weights;
calculating a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
obtaining an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and normalizing it to obtain a normalized autocorrelation unit attention weight;
calculating the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
To further optimize the above technical solution, the embodiment of the present invention further discloses that step S3 specifically comprises:
calculating a temporal attention layer weight based on the second hidden-layer state at the previous time step, and normalizing it to obtain a normalized temporal attention layer weight;
calculating the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
Referring to Fig. 3, an embodiment of the present invention also discloses a cluster resource prediction device based on an attention mechanism, comprising:
a first input vector computing module, configured to take the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
a first hidden-layer state computing module, configured to input the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
a context vector computing module, configured to input the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
a second hidden-layer state computing module, configured to input the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
a linear transform module, configured to apply a linear transformation to the current second hidden-layer state and the context vector to obtain the predicted value.
Referring to Fig. 4, to further optimize the above technical solution, the embodiment of the present invention further discloses that the first input vector computing module specifically comprises:
a first computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
a second computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
a third computing unit, configured to take the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
a combining unit, configured to combine the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
To further optimize the above technical solution, the embodiment of the present invention further discloses that the first computing unit specifically comprises:
a first attention weight calculating subunit, configured to calculate a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
a first normalized weight calculating subunit, configured to calculate a normalized deployment unit attention weight from the first attention weight using a softmax function;
a first attention layer output vector calculating subunit, configured to calculate the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight.
The second computing unit specifically comprises:
a static temporal correlation weight calculating subunit, configured to calculate, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and to obtain the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
a second attention weight calculating subunit, configured to calculate a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
a second normalized weight subunit, configured to obtain a host unit attention weight based on the static temporal correlation weights and the second attention weight, and to normalize it to obtain a normalized host unit attention weight;
a second attention layer output vector calculating subunit, configured to calculate the host unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight.
The third computing unit specifically comprises:
an autocorrelation weight calculating subunit, configured to calculate the correlation coefficients between the historical target time series in different time windows, and to obtain the corresponding autocorrelation weights;
a third attention weight calculating subunit, configured to calculate a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
a third normalized weight subunit, configured to obtain an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and to normalize it to obtain a normalized autocorrelation unit attention weight;
a third attention layer output vector subunit, configured to calculate the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
To further optimize the above technical solution, the embodiment of the present invention further discloses that the context vector computing module specifically comprises:
a fourth normalized weight subunit, configured to calculate a temporal attention layer weight based on the second hidden-layer state at the previous time step, and to normalize it to obtain a normalized temporal attention layer weight;
a context vector calculating subunit, configured to calculate the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
The prediction method of the present invention uses an attention mechanism that can mine the correlations among multiple correlated time series and express, through weights, how strongly these time series correlate with the target time series; these correlations are then used to predict the target time series, effectively improving prediction accuracy. Applied to cluster resource prediction, the model provided by the present invention can predict the cluster's future resource demand more accurately and assist resource planning more effectively, thereby raising the resource utilization of the cluster and reducing the O&M cost of the data center.
The technical solution provided by the present invention is described in further detail below in combination with a concrete implementation.
In a modern cluster, an application is usually composed of multiple application instances, and the instances belonging to one application are grouped into a deployment unit. These instances are generally distributed across different physical hosts, so each physical host carries multiple different application instances, and the instances residing on one physical host are grouped into a host unit. Correlations are therefore likely to exist between the application instances within a deployment unit and within a host unit. For the target instance to be predicted, besides collecting its own time series data, one can also collect the time series data of the other application instances in the deployment unit of the target instance and of the other application instances in the host unit of the target instance, and finally use all of these time series in the prediction of the target instance.
Before the method provided by the present invention is described in detail, the mathematical symbols used for the inputs and outputs of the model in the present invention are listed in the following table:
Table 1
First, as shown in Fig. 5, the time series collection architecture is designed as follows: a local time series database is deployed on each host, and all hosts can upload their data to a global time series database. For a target instance whose resources are to be predicted, its own time series data (the target series) and all time series data Xi belonging to the same host unit as the target instance can be obtained locally, and all time series data Xo belonging to the same deployment unit as the target instance are then obtained by querying the global time series database.
Based on this data design, the present invention builds a prediction model based on the attention mechanism, named MLA-LSTM (Multi-level Attention LSTM, a long short-term memory network with multi-level attention). A time window of size T is set, and each time series uses the T values inside this window to predict the value of the target instance at the next time point, i.e., the value at time point T+1. This process can be abstracted as ŷ_{T+1} = F(Y, Xi, Xo), where F is the model to be trained.
The model contains two LSTMs. The first LSTM serves as the encoder: it processes the multiple input time series and outputs hidden states h_t. The second LSTM serves as the decoder: it processes the hidden states h_t output by the first LSTM and produces the final predicted value. A schematic diagram of this model is shown in Fig. 6.
I. LSTM encoder
The computing process of the LSTM encoder is defined as h_t = f_e(h_{t-1}, x̃_t), where h_t is the hidden state vector of the LSTM at time point t, with length m, and x̃_t is the input of the LSTM; this input is obtained from the computation of three attention layers, described in detail below. The LSTM encoder is unrolled along the time dimension, as shown in Fig. 6.
For the LSTM encoder, the three attention layers are merged into one input attention layer to mine the correlations among the time series. The three attention layers are:
(1) a deployment unit attention layer, which uses a common attention mechanism to mine the correlations among the multiple time series Xi within a deployment unit;
(2) a host unit attention layer, which uses an improved attention mechanism to mine the correlations among the multiple time series Xo within a host unit;
(3) an autocorrelation attention layer, which uses an improved attention mechanism to mine the autocorrelation of the target instance's own time series.
1. Deployment unit attention layer. Following step S11, this layer scores each time series in the deployment unit against the previous encoder hidden state h_{t-1} to obtain the first attention weights, normalizes them into the deployment unit attention weights with a softmax function, and weights the series values with the normalized weights. Its inputs are the previous hidden state and the deployment unit series, and its output is the weighted value vector at time t.
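Since the source text does not reproduce the exact formulas, the following Python sketch assumes the common additive attention form, score = vᵀ tanh(W h_{t-1} + U x); the parameter shapes are illustrative only:
```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def deployment_attention(h_prev, X, v, W, U):
    """X: [k, T] window of k deployment unit series; h_prev: length-m state.

    Additive scoring with v: [p], W: [p, m], U: [p, T] is an assumption;
    the patent only states that a common attention mechanism is used here.
    """
    scores = np.array([v @ np.tanh(W @ h_prev + U @ x) for x in X])
    alpha = softmax(scores)          # normalized deployment unit weights
    return alpha * X[:, -1]          # each series' value at time t, weighted
```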
2. The host unit attention layer is computed as follows:
(1) First, compute the first-order temporal correlation coefficient CORT (first order temporal correlation coefficient) of each time series relative to the target time series.
Taking the l-th time series x_{o,l} of the host unit as an example, to compute its first-order temporal correlation coefficient with the target sequence Y_T, the two sequences must first be cut:
first, remove the last value of x_{o,l};
then, remove the first value of the target sequence Y_T (the lagged values up to time T).
Then compute the absolute value of the CORT of the two cut sequences, c_{o,l} = |CORT(x̃_{o,l}, Ỹ_T)|, where x̃_{o,l} and Ỹ_T denote the cut sequences.
This absolute value is taken as the static temporal correlation weight between the series x_{o,l} and the target sequence. For two time series S_1 and S_2 of length q, with S_{1,t} and S_{2,t} their values at time t, CORT is computed (in its standard form) as:
CORT(S_1, S_2) = Σ_{t=1}^{q-1} (S_{1,t+1} - S_{1,t})(S_{2,t+1} - S_{2,t}) / ( sqrt(Σ_{t=1}^{q-1} (S_{1,t+1} - S_{1,t})²) · sqrt(Σ_{t=1}^{q-1} (S_{2,t+1} - S_{2,t})²) )
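A direct Python implementation of this coefficient and of the cut described above (a minimal sketch; the variable names are illustrative):
```python
import numpy as np

def cort(s1, s2):
    """First-order temporal correlation coefficient of two equal-length series."""
    d1, d2 = np.diff(s1), np.diff(s2)          # first-order differences
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    return 0.0 if denom == 0.0 else float((d1 * d2).sum() / denom)

def static_weight(x, y):
    """|CORT| of host series x (last value cut) vs target y (first value cut)."""
    return abs(cort(np.asarray(x)[:-1], np.asarray(y)[1:]))
```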
Finally, the static temporal correlation weights between all series in the host unit and the target time series are obtained and combined into a vector C_out.
(2) Compute attention weights using a common attention mechanism.
Again taking the l-th time series x_{o,l} as an example, compute this series' attention weight at time t; the attention weights at time t of all time series of the host unit then form an attention weight vector g_t.
(3) Combine the temporal correlation weight vector C_out and the attention weight vector g_t obtained in the two steps above. The combination is completed by a linear transformation, giving a new weight vector θ_t. To normalize all elements of this vector, a softmax function is applied once more, yielding the normalized weight of the l-th time series of the host unit at time t.
(4) Compute the weighted value vector of the host unit time series at time t: the normalized weight obtained in the previous step is multiplied by the value of the corresponding l-th host unit time series at time t, giving that series' weighted value. The weighted values at time t of all host unit time series form a vector.
In summary, the host unit attention layer takes as input the previous hidden state, the host unit series, and the target series, and outputs the weighted value vector at time t.
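Reusing cort and softmax from the sketches above, the host unit attention layer can be sketched as follows. The scalar mixing parameters w_c, w_g, and b are an assumed parameterization of the linear combination step, which the source text does not spell out:
```python
def host_attention(h_prev, X_host, y_hist, v, W, U, w_c, w_g, b):
    """X_host: [k, T] host unit series; y_hist: length-T target series."""
    # (1) static temporal correlation weights C_out
    c_out = np.array([abs(cort(x[:-1], y_hist[1:])) for x in X_host])
    # (2) learned attention scores g_t (assumed additive form)
    g_t = np.array([v @ np.tanh(W @ h_prev + U @ x) for x in X_host])
    # (3) linear combination of the two, then softmax normalization
    weights = softmax(w_c * c_out + w_g * g_t + b)
    # (4) weighted values of the host unit series at time t
    return weights * X_host[:, -1]
```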
3. The autocorrelation attention layer is computed as follows:
(1) Similarly to the host unit attention layer, first compute the correlation coefficients between the target time series in different time windows. Compute the CORT coefficient c_{a,r} between the target time sequence Y_r ending at time r and the target time sequence Y_T ending at time T:
c_{a,r} = |CORT(Y_T, Y_r)|
Within the time window T, the CORT coefficients of the target time sequences corresponding to each time step then form an autocorrelation vector C_auto of length T.
(2) Compute attention weights using a common attention mechanism.
(3) Combine the temporal correlation weight vector C_auto obtained above with the attention weight vector μ_t. The two weight vectors are converted into one weight vector φ_t by a linear transformation.
(4) Compute the weighted target time sequence vector. The normalized weight obtained in the previous step describes, within a time window T, how much the value of the target time series at time r influences its value at time T, i.e., the correlation of the target time series with itself at different times. The values at the r time steps within Y_t are weighted with these weights, giving the output vector at time t.
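The same pattern can be sketched for the autocorrelation layer over sub-windows of the target series. The sub-window length w used to compare the sequence ending at r with the one ending at T is an assumption, since the source text leaves the window alignment implicit:
```python
def autocorrelation_attention(h_prev, y_hist, v, W, U, w_c, w_g, b, w=8):
    """y_hist: length-T target series; one sub-window per ending time r."""
    T = len(y_hist)
    wins = [y_hist[r - w:r] for r in range(w, T + 1)]   # windows ending at r
    c_auto = np.array([abs(cort(win, wins[-1])) for win in wins])
    mu = np.array([v @ np.tanh(W @ h_prev + U @ win) for win in wins])
    phi = softmax(w_c * c_auto + w_g * mu + b)          # normalized weights
    return phi * np.array([win[-1] for win in wins])    # weighted y values
```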
Finally, the output vectors of the three attention layers are combined into the input vector x̃_t of the encoder LSTM at time t.
II. LSTM decoder
The decoder LSTM is defined analogously. Let h'_t be the hidden state vector of the decoder LSTM at time t, with n elements; note that it is different from the hidden state vector h_t of the encoder LSTM. This decoder LSTM is also unrolled along the time dimension, as shown in Fig. 6.
A temporal correlation attention layer is integrated into this LSTM, and the weights of this temporal attention layer are computed as follows:
the normalized weights at time t obtained by the method above are used in a weighted sum with the encoder hidden states h_p, giving a context vector c_t.
The context vector c_t at time t is merged with the target time series value y_t and passed through a linear transformation, giving the decoder input ỹ_t at time t.
As mentioned above, ỹ_t is fed into the decoder LSTM for computation. That is, it and the hidden state vector h'_t at the current time t are used together to update the decoder LSTM hidden state h'_{t+1} of the next time step.
The above update is repeated in a loop until time T, giving the hidden state vector h'_T and the cell state vector c_T at time T.
Finally, the predicted value at time T+1 output by the decoder LSTM is computed by a linear transformation of the final hidden state h'_T and the context vector, as in step S5.
In summary, the temporal correlation attention layer takes as input the decoder's previous hidden state and the encoder hidden states, and outputs the context vector.
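A sketch of the decoder-side temporal attention and the final linear output; the concrete parameterization again follows the common additive attention form as an assumption:
```python
def temporal_attention(h_dec_prev, H_enc, v, W, U):
    """H_enc: [T, m] encoder hidden states; returns the context vector c_t."""
    scores = np.array([v @ np.tanh(W @ h_dec_prev + U @ h_p) for h_p in H_enc])
    beta = softmax(scores)           # normalized temporal attention weights
    return beta @ H_enc              # weighted sum of encoder hidden states

def predict_t_plus_1(h_dec_T, c_T, W_y, b_w, v_y, b_v):
    """Linear transform of the final decoder state and context vector."""
    return float(v_y @ (W_y @ np.concatenate([h_dec_T, c_T]) + b_w) + b_v)
```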
Finally, MSE (mean squared error) is used as the training criterion of the model, i.e., the mean of the squared differences between the predicted and true values over the training samples.
The model is trained with a gradient descent algorithm, determining the specific values of the weight matrices, vectors, and biases of the above neural network.
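A minimal, self-contained training sketch in PyTorch, with a stand-in model in place of the full MLA-LSTM (the input width 57 matches the 33 + 23 + 1 series of the experiment below; all other names and sizes are illustrative):
```python
import torch
from torch import nn

class TinyModel(nn.Module):
    """Stand-in for MLA-LSTM: maps a window of series to one scalar."""
    def __init__(self, n_series=57, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_series, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):            # x: [batch, window, n_series]
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)

model = TinyModel()
criterion = nn.MSELoss()             # the MSE training criterion
optimizer = torch.optim.SGD(model.parameters(), lr=5e-4)

x = torch.randn(32, 25, 57)          # one synthetic mini-batch (window T=25)
y = torch.randn(32)
optimizer.zero_grad()
loss = criterion(model(x), y)        # one gradient descent step on MSE
loss.backward()
optimizer.step()
```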
Next, the technical solution of the present invention is further elaborated in combination with a specific example.
This example uses the cluster-trace dataset published by Alibaba in 2018. A container in it (id c_66550) was randomly selected as the target instance, and its CPU utilization time series data is used as the target instance's resource time series. The instances belonging to the same deployment unit and to the same host unit as this target instance were located, their time series data was extracted, and each series was resampled to an interval of 300 seconds.
This finally yields 33 time series from the same deployment unit, 23 from instances on the same host, and 1 from the target instance. These time series were aligned in time and split uniformly into three data sets: a training set with 10141 time points, a validation set with 563 time points, and a test set with 564 time points. Each data set contains the same number of time series.
The model has several hyperparameters: the window size is T = {25, 35, 45, 60}, and the hidden state and cell state vector sizes of the encoder and decoder LSTMs are m = n = {32, 64, 128}. MSE and MAE (mean absolute error) are used as error criteria, and the model is trained with mini-batch stochastic gradient descent with a learning rate of 5e-4.
Finally, the models are trained by grid search, and for each model the hyperparameters that achieve the best results on the validation set are taken as the model's optimal parameters; see the sketch below. Prediction is then performed on the test set, and the test set error is accumulated with MSE.
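The grid search itself can be sketched as follows; train_and_validate is a hypothetical helper that trains one configuration and returns its validation MSE:
```python
import itertools

best = None
for T, m in itertools.product([25, 35, 45, 60], [32, 64, 128]):
    val_mse = train_and_validate(window=T, hidden_size=m, lr=5e-4)
    if best is None or val_mse < best[0]:
        best = (val_mse, {"window": T, "hidden_size": m})
```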
In the experiments, to distinguish LSTM predicting from a single series from LSTM predicting from multiple series, the single-series variant is denoted LSTM-Un and the multi-series variant LSTM-Mul.
The experimental results are summarized as follows: relative to both the three single-series models and the two multi-series models, the error of the model proposed by the present invention is much smaller. Under MSE, it is 98.26% lower than the best of them (VAR); under MAE, it is 74.40% lower than the best (VAR). The model therefore has very high prediction accuracy, which verifies the effectiveness of the attention-based multivariate time series prediction model proposed by the present invention.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to each other. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively simple, and the relevant points can be found in the description of the method.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A cluster resource prediction method based on an attention mechanism, characterized by comprising:
S1: taking the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
S2: inputting the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
S3: inputting the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
S4: inputting the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
S5: applying a linear transformation to the current second hidden-layer state and the context vector to obtain a predicted value.
2. The cluster resource prediction method based on an attention mechanism according to claim 1, characterized in that step S1 specifically comprises:
S11: taking the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
S12: taking the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
S13: taking the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
S14: combining the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
3. The cluster resource prediction method based on an attention mechanism according to claim 2, characterized in that step S11 specifically comprises:
calculating a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
calculating a normalized deployment unit attention weight from the first attention weight using a softmax function;
calculating the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight;
step S12 specifically comprises:
calculating, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and obtaining the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
calculating a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
obtaining a host unit attention weight based on the static temporal correlation weights and the second attention weight, and normalizing it to obtain a normalized host unit attention weight;
calculating the host unit attention output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight;
step S13 specifically comprises:
calculating the correlation coefficients between the historical target time series in different time windows, and obtaining the corresponding autocorrelation weights;
calculating a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
obtaining an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and normalizing it to obtain a normalized autocorrelation unit attention weight;
calculating the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
4. The cluster resource prediction method based on an attention mechanism according to any one of claims 1 to 3, characterized in that step S3 specifically comprises:
calculating a temporal attention layer weight based on the second hidden-layer state at the previous time step, and normalizing it to obtain a normalized temporal attention layer weight;
calculating the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
5. A cluster resource prediction device based on an attention mechanism, characterized by comprising:
a first input vector computing module, configured to take the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, all time series data belonging to the same deployment unit as the target instance, and the historical values of the target time series as the input of an input attention layer, to obtain a first input vector;
a first hidden-layer state computing module, configured to input the first input vector into an LSTM encoder to obtain the current first hidden-layer state;
a context vector computing module, configured to input the current first hidden-layer state and the second hidden-layer state at the previous time step into a temporal correlation attention layer to obtain a context vector;
a second hidden-layer state computing module, configured to input the context vector, the second hidden-layer state at the previous time step, and the historical values of the target time series into an LSTM decoder to obtain the current second hidden-layer state;
a linear transform module, configured to apply a linear transformation to the current second hidden-layer state and the context vector to obtain a predicted value.
6. The cluster resource prediction device based on an attention mechanism according to claim 5, characterized in that the first input vector computing module specifically comprises:
a first computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance as the input of a deployment unit attention layer, to obtain a deployment unit attention layer output vector;
a second computing unit, configured to take the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance as the input of a host unit attention layer, to obtain a host unit attention output vector;
a third computing unit, configured to take the first hidden-layer state at the previous time step and the historical values of the target time series as the input of an autocorrelation attention layer, to obtain an autocorrelation attention layer output vector;
a combining unit, configured to combine the deployment unit attention layer output vector, the host unit attention output vector, and the autocorrelation attention layer output vector into the first input vector.
7. The cluster resource prediction device based on an attention mechanism according to claim 6, characterized in that the first computing unit specifically comprises:
a first attention weight calculating subunit, configured to calculate a first attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same host unit as the target instance;
a first normalized weight calculating subunit, configured to calculate a normalized deployment unit attention weight from the first attention weight using a softmax function;
a first attention layer output vector calculating subunit, configured to calculate the deployment unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same host unit as the target instance, and the normalized deployment unit attention weight;
the second computing unit specifically comprises:
a static temporal correlation weight calculating subunit, configured to calculate, for each time series belonging to the same deployment unit as the target instance, its first-order temporal correlation coefficient relative to the historical target time series, and to obtain the static temporal correlation weights between all time series in the corresponding host unit and the historical target time series;
a second attention weight calculating subunit, configured to calculate a second attention weight based on the first hidden-layer state at the previous time step and all time series data belonging to the same deployment unit as the target instance;
a second normalized weight subunit, configured to obtain a host unit attention weight based on the static temporal correlation weights and the second attention weight, and to normalize it to obtain a normalized host unit attention weight;
a second attention layer output vector calculating subunit, configured to calculate the host unit attention layer output vector based on the first hidden-layer state at the previous time step, all time series data belonging to the same deployment unit as the target instance, the historical values of the target time series, and the normalized host unit attention weight;
the third computing unit specifically comprises:
an autocorrelation weight calculating subunit, configured to calculate the correlation coefficients between the historical target time series in different time windows, and to obtain the corresponding autocorrelation weights;
a third attention weight calculating subunit, configured to calculate a third attention weight based on the first hidden-layer state at the previous time step and the target time series at different historical time steps;
a third normalized weight subunit, configured to obtain an autocorrelation unit attention weight based on the autocorrelation weights and the third attention weight, and to normalize it to obtain a normalized autocorrelation unit attention weight;
a third attention layer output vector subunit, configured to calculate the autocorrelation attention layer output vector based on the first hidden-layer state at the previous time step, the historical values of the target time series, and the normalized autocorrelation unit attention weight.
8. The cluster resource prediction device based on an attention mechanism according to any one of claims 5 to 7, characterized in that the context vector computing module specifically comprises:
a fourth normalized weight subunit, configured to calculate a temporal attention layer weight based on the second hidden-layer state at the previous time step, and to normalize it to obtain a normalized temporal attention layer weight;
a context vector calculating subunit, configured to calculate the context vector based on the current first hidden-layer state and the normalized temporal attention layer weight.
CN201910413227.1A 2019-05-17 2019-05-17 Cluster resource prediction method and device based on attention mechanism Active CN110222840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910413227.1A CN110222840B (en) 2019-05-17 2019-05-17 Cluster resource prediction method and device based on attention mechanism


Publications (2)

Publication Number Publication Date
CN110222840A true CN110222840A (en) 2019-09-10
CN110222840B CN110222840B (en) 2023-05-05

Family

ID=67821396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910413227.1A Active CN110222840B (en) 2019-05-17 2019-05-17 Cluster resource prediction method and device based on attention mechanism

Country Status (1)

Country Link
CN (1) CN110222840B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017097693A * 2015-11-26 2017-06-01 Kddi株式会社 Data prediction device, information terminal, program, and method for learning with data of different periodic layers
US20180143966A1 * 2016-11-18 2018-05-24 Salesforce.Com, Inc. Spatial Attention Model for Image Captioning
CN108304846A * 2017-09-11 2018-07-20 腾讯科技(深圳)有限公司 Image recognition method, device and storage medium
CN107730087A * 2017-09-20 2018-02-23 平安科技(深圳)有限公司 Prediction model training method, data monitoring method, device, equipment and medium
CN108154435A * 2017-12-26 2018-06-12 浙江工业大学 Stock index price prediction method based on recurrent neural network
CN108090558A * 2018-01-03 2018-05-29 华南理工大学 Automatic imputation method for time series missing values based on long short-term memory network
CN108182260A * 2018-01-03 2018-06-19 华南理工大学 Multivariate time series classification method based on semantic selection
CN108491680A * 2018-03-07 2018-09-04 安庆师范大学 Drug relation extraction method based on residual network and attention mechanism
CN108804495A * 2018-04-02 2018-11-13 华南理工大学 Automatic text summarization method based on semantic enhancement
CN109697304A * 2018-11-19 2019-04-30 天津大学 Construction method of a multi-sensor data prediction model for refrigeration units
CN109740419A * 2018-11-22 2019-05-10 东南大学 Video behavior recognition method based on Attention-LSTM network
CN109685252A * 2018-11-30 2019-04-26 西安工程大学 Building energy consumption prediction method based on recurrent neural network and multi-task learning model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CINAR Y G, ET AL: "Time series forecasting using RNNs: an extended attention mechanism to model periods and handle missing values", arXiv *
LIANG YUXUAN, ET AL: "GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction", Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence *
PABLO MONTERO ET AL: "An R Package for Time Series Clustering", Journal of Statistical Software *
SNEHA CHAUDHARI ET AL: "An Attentive Survey of Attention Models", arXiv *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909046A (en) * 2019-12-02 2020-03-24 上海舵敏智能科技有限公司 Time series abnormality detection method and device, electronic device, and storage medium
CN110909046B (en) * 2019-12-02 2023-08-11 上海舵敏智能科技有限公司 Time-series abnormality detection method and device, electronic equipment and storage medium
CN111638958A (en) * 2020-06-02 2020-09-08 中国联合网络通信集团有限公司 Cloud host load processing method and device, control equipment and storage medium
CN111638958B (en) * 2020-06-02 2024-04-05 中国联合网络通信集团有限公司 Cloud host load processing method and device, control equipment and storage medium

Also Published As

Publication number Publication date
CN110222840B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
Alexandrov et al. GluonTS: Probabilistic and neural time series modeling in Python
Yu et al. k-Nearest neighbor model for multiple-time-step prediction of short-term traffic condition
EP3446260B1 (en) Memory-efficient backpropagation through time
Li et al. The improved grey model based on particle swarm optimization algorithm for time series prediction
Wang et al. An effective estimation of distribution algorithm for the multi-mode resource-constrained project scheduling problem
Wang et al. Minimizing the total flow time in a flow shop with blocking by using hybrid harmony search algorithms
CN110476172A (en) Neural framework for convolutional neural networks is searched for
CN111324993B (en) Turbulent flow field updating method, device and related equipment
CN102346745B (en) Method and device for predicting user behavior number for words
Moya et al. Deeponet-grid-uq: A trustworthy deep operator framework for predicting the power grid’s post-fault trajectories
Zhang et al. Performance analysis of MAP/G/1 queue with working vacations and vacation interruption
Lehký et al. Reliability calculation of time-consuming problems using a small-sample artificial neural network-based response surface method
CN110019401A (en) Part amount prediction technique, device, equipment and its storage medium
Kourehpaz et al. Machine learning for enhanced regional seismic risk assessments
Pandit et al. Prediction of earthquake magnitude using adaptive neuro fuzzy inference system
CN104217258A (en) Method for power load condition density prediction
CN110222840A (en) A kind of cluster resource prediction technique and device based on attention mechanism
Zhang et al. Dynamic risk-aware patch scheduling
Cui et al. Study on parameters characteristics of NGM (1, 1, k) prediction model with multiplication transformation
CN104539601A (en) Reliability analysis method and system for dynamic network attack process
CN109816157A (en) Project plan optimization method, device, computer equipment and storage medium
CN109858681A (en) A kind of traffic based on IC card passenger flow forecasting and relevant apparatus in short-term
CN109583100A (en) A kind of gyroscope failure prediction method based on AGO-RVM
Beaucaire et al. Multi-point infill sampling strategies exploiting multiple surrogate models
CN110147284B (en) Supercomputer working load prediction method based on two-dimensional long-short term memory neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant