CN105550323A - Load balancing prediction method for a distributed database, and predictive analyzer - Google Patents


Info

Publication number
CN105550323A
CN105550323A (application CN201510938406.9A)
Authority
CN
China
Prior art keywords
data
neural network
network model
recurrent neural network
multilayer recurrent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510938406.9A
Other languages
Chinese (zh)
Other versions
CN105550323B (en)
Inventor
孙乔
王思宁
付兰梅
邓卜侨
吴舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Great Opensource Software Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
State Grid Zhejiang Electric Power Co Ltd
State Grid Jibei Electric Power Co Ltd
Beijing China Power Information Technology Co Ltd
Beijing Zhongdian Feihua Communication Co Ltd
Original Assignee
Beijing Great Opensource Software Co Ltd
State Grid Corp of China SGCC
State Grid Zhejiang Electric Power Co Ltd
State Grid Jibei Electric Power Co Ltd
Beijing Guodiantong Network Technology Co Ltd
Beijing Fibrlink Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Great Opensource Software Co Ltd, State Grid Corp of China SGCC, State Grid Zhejiang Electric Power Co Ltd, State Grid Jibei Electric Power Co Ltd, Beijing Guodiantong Network Technology Co Ltd, Beijing Fibrlink Communications Co Ltd filed Critical Beijing Great Opensource Software Co Ltd
Priority to CN201510938406.9A priority Critical patent/CN105550323B/en
Publication of CN105550323A publication Critical patent/CN105550323A/en
Application granted granted Critical
Publication of CN105550323B publication Critical patent/CN105550323B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a load balancing prediction method for a distributed database, and a predictive analyzer. The method comprises the following steps: collecting the load indicators on each local data node to form training set data; initializing a multilayer recurrent neural network model; extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model and the data of the subsequent time period of equal length as its output, and training the model; and extracting the data of the time period of the same length that follows as the input of the model to predict the load indicators of the local data node. The method and the predictive analyzer thereby describe the structure of the load balancing data more precisely and predict it effectively.

Description

Load balancing prediction method for a distributed database, and predictive analyzer
Technical field
The present invention relates to the field of computing, and in particular to a load balancing prediction method for a distributed database and a predictive analyzer.
Background art
At present, in order to improve the resource utilization and performance of a distributed database, it is important to use prediction techniques to estimate the resource usage of the distributed database in real time.
The common BP neural network load balancing prediction method still has shortcomings. In a distributed predictive analyzer, load balancing transfers tasks from overloaded servers to lightly loaded servers so that the tasks can use the computing power available there, thereby improving the performance and stability of the whole distributed predictive analyzer. When the workload of a particular server often remains heavy, or a particular server executes tasks much more slowly than the others, the load will frequently be distributed unevenly. Even in a homogeneous distributed predictive analyzer, differences in when tasks arrive at a server and in how long tasks take to complete still cause load balancing problems among the servers; in addition, the query frequency is often far higher than the data rewrite frequency.
A BP network has advantages such as a clear principle and being simple and practical, but real-time load prediction requires training the network model on large-scale data, and because the network uses the gradient method, convergence is slow and easily ends in a local minimum. Moreover, the learning rate and momentum factor, whose choice affects the convergence of the network, can usually only be set from personal experience. The BP network is therefore ill suited to load prediction for a predictive analyzer with high-dimensional inputs and outputs, especially when sudden changes in the real-time load must be handled.
Summary of the invention
In view of this, the object of the invention is to propose a load balancing prediction method for a distributed database and a predictive analyzer that can describe the structure of the load balancing data more accurately and predict it effectively.
Based on the above object, the load balancing prediction method for a distributed database provided by the invention comprises the steps of:
collecting the load indicators on each local data node to form training set data;
initializing a multilayer recurrent neural network model;
extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model, extracting the data of the subsequent time period of equal length from the training set data as the output of the multilayer recurrent neural network model, and training the multilayer recurrent neural network model;
extracting, from the training set data, the data of the time period of the same length that follows this equal time period as the input of the multilayer recurrent neural network model, and predicting the load indicators of the local data node.
In certain embodiments, the load indicators comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$; further, the load indicators are collected once per second for a total of 2T seconds, forming the training set data
$L = [R_c, R_m, S_d, S_u]$;
Further, initializing the multilayer recurrent neural network model comprises:
determining the number of hidden layers of the multilayer recurrent neural network model and the number of neurons $n_m$ in each layer, and randomly initializing the network connection weights between the input layer of the multilayer recurrent neural network model and the neurons of each hidden layer, denoted collectively as $w$.
In certain embodiments, when training the multilayer recurrent neural network model, the 1st to T-th records are extracted from the training set data, and each group of data is mapped by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
The mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained.
In certain embodiments, before the data of the time period of the same length that follows this equal time period are extracted from the training set data, the method further comprises:
computing the output of each hidden layer of the multilayer recurrent neural network model by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function;
calculating the output error of the multilayer recurrent neural network model according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
According to the calculated output error, all the weights of the multilayer recurrent neural network model are updated until the output error is within a preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
In certain embodiments, the (T+1)-th to 2T-th records are extracted from the training set data and used as the input of the multilayer recurrent neural network model, and the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
In another aspect of the present invention, a multilayer recurrent neural network predictive analyzer is also provided, comprising:
a data acquisition unit for collecting the load indicators on each local data node to form training set data;
a model initialization unit for initializing a multilayer recurrent neural network model;
a model training unit for extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model, extracting the data of the subsequent time period of equal length from the training set data as the output of the multilayer recurrent neural network model, and training the multilayer recurrent neural network model;
a load prediction unit for extracting, from the training set data, the data of the time period of the same length that follows this equal time period as the input of the multilayer recurrent neural network model, and predicting the load indicators of the local data node.
In certain embodiments, the load indicators handled by the data acquisition unit comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$; further, the data acquisition unit collects the load indicators once per second for a total of 2T seconds, forming the training set data $L = [R_c, R_m, S_d, S_u]$;
in addition, the model initialization unit, when initializing the multilayer recurrent neural network model, determines the number of hidden layers of the multilayer recurrent neural network model and the number of neurons $n_m$ in each layer, and randomly initializes the network connection weights between the input layer of the model and the neurons of each hidden layer, denoted collectively as $w$.
In certain embodiments, the model training unit extracts the 1st to T-th records from the training set data and maps each group of data by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
The mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained;
in addition, the load prediction unit extracts the (T+1)-th to 2T-th records from the training set data and uses them as the input of the multilayer recurrent neural network model, and the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
In certain embodiments, the model training unit is further configured to calculate the output of each hidden layer of the multilayer recurrent neural network model by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function;
the output error of the multilayer recurrent neural network model is calculated according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
According to the calculated output error, all the weights of the multilayer recurrent neural network model are updated until the output error is within a preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
In another aspect of the present invention, a distributed database is also provided, comprising: a distributed database management system (DDBMS), at least one data node connected to the DDBMS, and a multilayer recurrent neural network predictive analyzer installed on each data node;
wherein the DDBMS includes a resource management module and a job scheduling module, the resource management module manages the data resources of the at least one data node, and the job scheduling module performs job scheduling on the data nodes for a client's data requests according to the data resource status of each data node managed by the resource management module; further, each data node comprises a local resource allocation module and a local job scheduling module, the local resource allocation module manages the local data resources, and the local job scheduling module processes the data resources in the local resource allocation module according to the scheduling instructions of the job scheduling module of the DDBMS;
in addition, the multilayer recurrent neural network predictive analyzer corresponds to the local resource allocation module of each data node, and the local resource allocation module reports its current local load to the corresponding multilayer recurrent neural network predictive analyzer; according to this current local load, the multilayer recurrent neural network predictive analyzer predicts the load of the corresponding local resource allocation module and feeds the prediction result back to that local resource allocation module;
afterwards, each local resource allocation module sends the obtained prediction result together with its current load status to the resource management module of the DDBMS, the resource management module passes all the obtained prediction results and current load statuses to the job scheduling module, and the job scheduling module sends job scheduling instructions to the local job scheduling module of the corresponding data node according to the prediction result and current load status of each data node.
As can be seen from the above, the distributed database load balancing prediction method and predictive analyzer provided by the invention are simple and practical; compared with the common BP neural network prediction method, they are better suited to real-time load prediction when the network model must be trained on large-scale data, handle sudden changes in the real-time load more effectively, and are better suited to load prediction scenarios with high-dimensional inputs and outputs.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the distributed database load balancing prediction method in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the distributed database load balancing prediction method in a reference embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the multilayer recurrent neural network in a reference embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the multilayer recurrent neural network predictive analyzer in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the distributed database in an embodiment of the present invention;
Fig. 6 is a schematic diagram of instruction execution in the distributed database in an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
As an embodiment of the present invention, Fig. 1 shows a schematic flowchart of the distributed database load balancing prediction method in an embodiment of the present invention. The distributed database load balancing prediction method comprises:
Step 101: the load indicators on each local data node are collected to form training set data.
The load indicators comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$. Preferably, the load indicators are collected once per second for a total of 2T seconds, and the training set data $L = [R_c, R_m, S_d, S_u]$ are formed.
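By way of a non-limiting illustration (not part of the original filing), the per-second collection of the four load indicators could be sketched as below; the psutil library, the function name, and the byte-count-based speed estimates are assumptions made for the example.

```python
# Illustrative sketch only: collect the four load indicators once per second
# for 2T seconds, as described in step 101.  psutil and all names here are
# assumptions, not something the patent specifies.
import time
import psutil

def collect_training_data(T: int) -> list:
    """Return 2T samples of [R_c, R_m, S_d, S_u]."""
    samples = []
    prev = psutil.net_io_counters()
    for _ in range(2 * T):
        time.sleep(1.0)
        cur = psutil.net_io_counters()
        r_c = psutil.cpu_percent() / 100.0             # CPU utilization R_c since last sample
        r_m = psutil.virtual_memory().percent / 100.0  # memory utilization R_m
        s_d = cur.bytes_recv - prev.bytes_recv         # download speed S_d (bytes per second)
        s_u = cur.bytes_sent - prev.bytes_sent         # upload speed S_u (bytes per second)
        samples.append([r_c, r_m, s_d, s_u])
        prev = cur
    return samples
```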
Step 102: the multilayer recurrent neural network model is initialized.
Preferably, initializing the multilayer recurrent neural network model comprises determining the number of hidden layers of the multilayer recurrent neural network model and the number of neurons $n_m$ in each layer, and randomly initializing the network connection weights between the input layer of the model and the neurons of each hidden layer, denoted collectively as $w$.
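A minimal initialization sketch, assuming NumPy, two hidden layers, and a small random scale, is given below; the dictionary layout and every name in it are illustrative choices rather than part of the patent.

```python
# Illustrative sketch: randomly initialize the connection weights of a
# two-hidden-layer recurrent network with n_m neurons per hidden layer.
import numpy as np

def init_model(n_in: int, n_m: int, n_out: int, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    s = 0.1  # small random scale (assumed)
    return {
        "w_h1x":  rng.normal(0.0, s, (n_m, n_in)),   # input -> hidden layer 1
        "w_h1h1": rng.normal(0.0, s, (n_m, n_m)),    # hidden layer 1 recurrence
        "w_h1h2": rng.normal(0.0, s, (n_m, n_m)),    # hidden layer 1 -> hidden layer 2
        "w_h2h2": rng.normal(0.0, s, (n_m, n_m)),    # hidden layer 2 recurrence
        "w_yh2":  rng.normal(0.0, s, (n_out, n_m)),  # hidden layer 2 -> output
        "b_h1":   np.zeros(n_m),
        "b_h2":   np.zeros(n_m),
        "b_y":    np.zeros(n_out),
    }
```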
Step 103: the data of one time period are extracted from the training set data as the input of the multilayer recurrent neural network model, the data of the subsequent time period of equal length are extracted from the training set data as the output of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained.
In an embodiment, the 1st to T-th records are extracted from the training set data, and each group of data is mapped by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
The mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained.
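The sine/cosine mapping of a record can be sketched as follows; applying the functions element-wise to the four-component record L(i) and prepending the constant 1 is an assumed reading of the formula above.

```python
# Illustrative sketch of the input mapping x_i = [1, sin(L(i)), cos(L(i))].
# Applying sin/cos element-wise to the record and prepending a bias term of 1
# is an assumed interpretation, giving a 9-dimensional input vector.
import numpy as np

def map_record(record) -> np.ndarray:
    L_i = np.asarray(record, dtype=float)  # [R_c, R_m, S_d, S_u]
    return np.concatenate(([1.0], np.sin(L_i), np.cos(L_i)))
```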
Preferably, the output of each hidden layer of the multilayer recurrent neural network model is calculated by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function.
Then, the output error of the multilayer recurrent neural network model is calculated according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
According to the calculated output error, all the weights of the multilayer recurrent neural network model are updated until the output error is within the preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
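A self-contained sketch of the two-hidden-layer forward pass, the error term, and a weight update loop follows. It is an illustration only: the squared form of the error and the finite-difference gradient are assumptions (the patent states only that all weights are updated by gradient descent until the error is within the allowed range), and every function name here is invented.

```python
# Illustrative, self-contained sketch of the forward pass
#   h_i^1  = tanh(w_h1x x_i   + w_h1h1 h_{i-1}^1 + b_h1)
#   h_i^2  = tanh(w_h1h2 h_i^1 + w_h2h2 h_{i-1}^2 + b_h2)
#   yhat_i = tanh(w_yh2 h_i^2 + b_y)
# and of a training loop that updates all weights until the error is small.
import numpy as np

def forward(params: dict, xs: np.ndarray) -> np.ndarray:
    """Run the two-hidden-layer recurrent network over the input sequence xs."""
    n_m = params["b_h1"].size
    h1 = np.zeros(n_m)
    h2 = np.zeros(n_m)
    outputs = []
    for x in xs:
        h1 = np.tanh(params["w_h1x"] @ x + params["w_h1h1"] @ h1 + params["b_h1"])
        h2 = np.tanh(params["w_h1h2"] @ h1 + params["w_h2h2"] @ h2 + params["b_h2"])
        outputs.append(np.tanh(params["w_yh2"] @ h2 + params["b_y"]))
    return np.array(outputs)

def error(params: dict, xs: np.ndarray, ys: np.ndarray) -> float:
    # Squared reading of e = 1/2 * sum_i (yhat_i - y_i); the square is assumed
    # so that gradient descent has a well-behaved objective.
    return 0.5 * float(np.sum((forward(params, xs) - ys) ** 2))

def train(params: dict, xs: np.ndarray, ys: np.ndarray, lr: float = 0.05,
          eps: float = 1e-4, tol: float = 1e-2, max_iter: int = 200) -> dict:
    """Update all weights by gradient descent (finite-difference gradients,
    chosen purely to keep the sketch short) until the error is within tol."""
    for _ in range(max_iter):
        if error(params, xs, ys) < tol:  # "within the preset allowed range"
            break
        for key in params:
            w = params[key]
            grad = np.zeros_like(w)
            for idx in np.ndindex(w.shape):
                old = w[idx]
                w[idx] = old + eps
                e_plus = error(params, xs, ys)
                w[idx] = old - eps
                e_minus = error(params, xs, ys)
                w[idx] = old
                grad[idx] = (e_plus - e_minus) / (2.0 * eps)
            w -= lr * grad
    return params
```

Used together with the earlier sketches, train(init_model(9, n_m, 4), xs, ys) would fit the model on the mapped inputs and the raw target records.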
Step 104: the data of the time period of the same length that follows this equal time period are extracted from the training set data as the input of the multilayer recurrent neural network model, and the load indicators of the local data node are predicted.
As an embodiment, the (T+1)-th to 2T-th records are extracted from the training set data and used as the input of the multilayer recurrent neural network model; the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
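The index arithmetic used throughout (records 1..T as training input, records T+1..2T as training target and, later, as the prediction input for the 2T+1..3T horizon) can be illustrated with the small helper below; the function name and the 0-based slicing are assumptions.

```python
# Illustrative sketch of how the 2T collected records are split, following the
# indexing used in the text (converted to 0-based Python slices).
import numpy as np

def split_training_set(L: np.ndarray):
    """L has shape (2T, 4): one record [R_c, R_m, S_d, S_u] per second."""
    T = L.shape[0] // 2
    train_in = L[:T]         # records 1..T: training input (after mapping)
    train_out = L[T:2 * T]   # records T+1..2T: training target
    predict_in = L[T:2 * T]  # the same records are later fed to the trained
                             # model to predict the load for the 2T+1..3T period
    return train_in, train_out, predict_in
```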
As another reference embodiment of the present invention, referring to Fig. 2, the distributed database load balancing prediction method can proceed as follows:
Step 201: the load indicators on each local data node are collected. The load indicators comprise the CPU utilization, the memory utilization, the network download speed, and the network upload speed. Preferably, the load indicators are collected once per second, and preferably for 2T seconds.
Step 202: the collected load indicators form the training set data $L = [R_c, R_m, S_d, S_u]$, where $R_c$ is the CPU utilization, $R_m$ is the memory utilization, $S_d$ is the network download speed, and $S_u$ is the network upload speed.
Step 203: the multilayer recurrent neural network model is initialized, which comprises determining the number of hidden layers of the model and the number of neurons in each layer, and randomly initializing the network connection weights between the input layer of the model and the neurons of each hidden layer, denoted collectively as $w$.
Preferably, the number of hidden layers of the multilayer recurrent neural network model is 2, and the number of neurons in each layer is $n_m$.
Step 204: the 1st to T-th records are extracted from the training set data, and each group of data is mapped by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
The mapped data are used as the input $x_i$ of the multilayer recurrent neural network model.
Step 205: the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model.
Step 206: the output of each hidden layer of the multilayer recurrent neural network model is calculated by the following formulas (as shown in Fig. 3):
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function.
Step 207: the output error of the multilayer recurrent neural network model is calculated according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
Step 208: according to the output error calculated in step 207, all the weights of the multilayer recurrent neural network model are updated by gradient descent until the error is within the allowed range; the training of the multilayer recurrent neural network model then ends.
Preferably, the allowed range of the error is within ±5% of the input values of the multilayer recurrent neural network model.
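A small helper illustrating this preferred ±5% criterion might look like the following; comparing each absolute deviation against 5% of the mean input magnitude is an assumed interpretation, since the patent only states that the error must lie within ±5% of the model's input value.

```python
# Illustrative check of the preferred allowed range: a prediction is accepted
# when it deviates from the target by at most 5% of the mean input magnitude
# (an assumed reading of "positive and negative 5% of the input value").
import numpy as np

def within_allowed_range(y_hat, y, x, tol: float = 0.05) -> bool:
    y_hat, y, x = np.asarray(y_hat), np.asarray(y), np.asarray(x)
    return bool(np.all(np.abs(y_hat - y) <= tol * np.mean(np.abs(x))))
```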
Step 209: the (T+1)-th to 2T-th records are extracted from the training set data and used as the input of the multilayer recurrent neural network model to predict the load indicators of the local data node for the time period from 2T+1 to 3T.
In another aspect of the invention, a multilayer recurrent neural network predictive analyzer is provided. Referring to Fig. 4, the multilayer recurrent neural network predictive analyzer comprises a data acquisition unit 401 for collecting the load indicators on each local data node to form training set data; a model initialization unit 402 for initializing a multilayer recurrent neural network model; a model training unit 403 for extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model, extracting the data of the subsequent time period of equal length from the training set data as the output of the multilayer recurrent neural network model, and training the multilayer recurrent neural network model; and a load prediction unit 404 for extracting, from the training set data, the data of the time period of the same length that follows this equal time period as the input of the multilayer recurrent neural network model, and predicting the load indicators of the local data node.
The load indicators handled by the data acquisition unit 401 comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$. Preferably, the data acquisition unit collects the load indicators once per second for a total of 2T seconds, forming the training set data $L = [R_c, R_m, S_d, S_u]$.
As an embodiment, the model initialization unit 402, when initializing the multilayer recurrent neural network model, determines the number of hidden layers of the model and the number of neurons $n_m$ in each layer, and randomly initializes the network connection weights between the input layer of the model and the neurons of each hidden layer, denoted collectively as $w$.
As a preferred embodiment, the model training unit 403 extracts the 1st to T-th records from the training set data and maps each group of data by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
The mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
then the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained.
Finally, the load prediction unit 404 extracts the (T+1)-th to 2T-th records from the training set data and uses them as the input of the multilayer recurrent neural network model; the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
As a preferred embodiment, the model training unit 403 may also calculate the output of each hidden layer of the multilayer recurrent neural network model by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function.
Then, the output error of the multilayer recurrent neural network model is calculated according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
According to the calculated output error, all the weights of the multilayer recurrent neural network model are updated until the output error is within the preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
It should be noted that the specific implementation details of the distributed database load balancing predictive analyzer of the present invention have already been described in detail in the distributed database load balancing prediction method above, so the duplicated content is not repeated here.
As another aspect of the invention, a distributed database is provided, which may comprise a distributed database management system (DDBMS) and n data nodes connected to the DDBMS, with a multilayer recurrent neural network predictive analyzer installed on each data node, i.e. n multilayer recurrent neural network predictive analyzers in total. As shown in Fig. 5, a client sends data requests to the DDBMS through data I/O. Preferably, the DDBMS includes a resource management module and a job scheduling module. The resource management module manages the data resources of the n data nodes, and the job scheduling module performs job scheduling on the data nodes for the client's data requests according to the data resource status of each data node managed by the resource management module. Preferably, each data node comprises a local resource allocation module and a local job scheduling module. The local resource allocation module manages the local data resources, and the local job scheduling module processes the data resources in the local resource allocation module according to the scheduling instructions of the job scheduling module of the DDBMS.
Referring to Fig. 6, the n multilayer recurrent neural network predictive analyzers correspond to the local resource allocation modules of the n data nodes, and each local resource allocation module reports its current local load to the corresponding multilayer recurrent neural network predictive analyzer. According to the current local load, the predictive analyzer predicts the load of the corresponding local resource allocation module using the distributed database load balancing prediction method described above and feeds the prediction result back to that local resource allocation module.
Preferably, the n local resource allocation modules each send the obtained prediction result together with the current load status to the resource management module of the DDBMS, and the resource management module passes all the obtained prediction results and current load statuses to the job scheduling module. The job scheduling module then sends job scheduling instructions to the local job scheduling module of the corresponding data node according to the prediction result and current load status of each data node, and sends all current job scheduling statuses to the resource management module. Thus, the DDBMS achieves load-balanced job scheduling across the n data nodes.
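To picture how such scheduling could consume the per-node predictions, here is a hedged sketch that simply routes the next job to the node with the lowest predicted load; the equal weighting of the (assumed normalized) indicators and the pick-the-minimum policy are illustrative assumptions, as the patent does not fix a particular scheduling policy.

```python
# Illustrative sketch: choose a data node for the next job based on the
# predicted load indicators reported by each node's predictive analyzer.
# Equal weighting and "pick the minimum" are assumptions; indicators are
# assumed to be normalized to comparable scales.
from typing import Dict, Sequence

def pick_node(predicted: Dict[str, Sequence[float]],
              weights: Sequence[float] = (0.25, 0.25, 0.25, 0.25)) -> str:
    """predicted maps a node id to its predicted [R_c, R_m, S_d, S_u]."""
    def score(load: Sequence[float]) -> float:
        return sum(w * v for w, v in zip(weights, load))
    return min(predicted, key=lambda node: score(predicted[node]))

# Example with made-up, already normalized values:
# pick_node({"node-1": [0.7, 0.6, 0.5, 0.4], "node-2": [0.3, 0.4, 0.2, 0.1]})
```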
In summary, the distributed database load balancing prediction method, the predictive analyzer, and the distributed database provided by the invention creatively base load balancing prediction on a multilayer recurrent neural network. Compared with the common BP neural network algorithm, they avoid the slow convergence and the tendency to converge to a local minimum caused by the gradient method of that network; through the cyclic transfer of information across multiple hidden layers, the trained model describes the structure of the load balancing data more accurately and predicts it effectively; the prediction method and predictive analyzer suffer smaller losses when handling sudden changes in the real-time load; in most cases the load of each node becomes more balanced, and overloaded or severely underloaded nodes occur less often. At the same time, using prediction techniques to estimate the resource usage of the distributed database in real time improves its resource utilization and performance. Finally, the whole prediction method and predictive analyzer are simple, compact, and easy to implement.
Those of ordinary skill in the art should understand that the above are only specific embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A load balancing prediction method for a distributed database, characterized by comprising the steps of:
collecting the load indicators on each local data node to form training set data;
initializing a multilayer recurrent neural network model;
extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model, extracting the data of the subsequent time period of equal length from the training set data as the output of the multilayer recurrent neural network model, and training the multilayer recurrent neural network model;
extracting, from the training set data, the data of the time period of the same length that follows this equal time period as the input of the multilayer recurrent neural network model, and predicting the load indicators of the local data node.
2. The method according to claim 1, characterized in that the load indicators comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$; further, the load indicators are collected once per second for a total of 2T seconds, forming the training set data $L = [R_c, R_m, S_d, S_u]$;
further, initializing the multilayer recurrent neural network model comprises:
determining the number of hidden layers of the multilayer recurrent neural network model and the number of neurons $n_m$ in each layer, and randomly initializing the network connection weights between the input layer of the multilayer recurrent neural network model and the neurons of each hidden layer, denoted collectively as $w$.
3. The prediction method according to claim 2, characterized in that, when training the multilayer recurrent neural network model, the 1st to T-th records are extracted from the training set data and each group of data is mapped by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
the mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained.
4. The prediction method according to claim 3, characterized in that, before the data of the time period of the same length that follows this equal time period are extracted from the training set data, the method further comprises:
calculating the output of each hidden layer of the multilayer recurrent neural network model by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function;
calculating the output error of the multilayer recurrent neural network model according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
according to the calculated output error, updating all the weights of the multilayer recurrent neural network model until the output error is within a preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
5. The prediction method according to claim 3 or 4, characterized in that the (T+1)-th to 2T-th records are extracted from the training set data and used as the input of the multilayer recurrent neural network model, and the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
6. A multilayer recurrent neural network predictive analyzer, characterized by comprising:
a data acquisition unit for collecting the load indicators on each local data node to form training set data;
a model initialization unit for initializing a multilayer recurrent neural network model;
a model training unit for extracting the data of one time period from the training set data as the input of the multilayer recurrent neural network model, extracting the data of the subsequent time period of equal length from the training set data as the output of the multilayer recurrent neural network model, and training the multilayer recurrent neural network model;
a load prediction unit for extracting, from the training set data, the data of the time period of the same length that follows this equal time period as the input of the multilayer recurrent neural network model, and predicting the load indicators of the local data node.
7. The predictive analyzer according to claim 6, characterized in that the load indicators handled by the data acquisition unit comprise the CPU utilization $R_c$, the memory utilization $R_m$, the network download speed $S_d$, and the network upload speed $S_u$; further, the data acquisition unit collects the load indicators once per second for a total of 2T seconds, forming the training set data $L = [R_c, R_m, S_d, S_u]$;
in addition, the model initialization unit, when initializing the multilayer recurrent neural network model, determines the number of hidden layers of the multilayer recurrent neural network model and the number of neurons $n_m$ in each layer, and randomly initializes the network connection weights between the input layer of the model and the neurons of each hidden layer, denoted collectively as $w$.
8. The predictive analyzer according to claim 7, characterized in that the model training unit extracts the 1st to T-th records from the training set data and maps each group of data by the following formula:
$$x_i = [1, \sin(L(i)), \cos(L(i))]$$
the mapped data are used as the input $x_i$ of the multilayer recurrent neural network model;
the (T+1)-th to 2T-th records are extracted from the training set data as the output $y_i$ of the multilayer recurrent neural network model, and the multilayer recurrent neural network model is trained;
in addition, the load prediction unit extracts the (T+1)-th to 2T-th records from the training set data and uses them as the input of the multilayer recurrent neural network model, and the output of the multilayer recurrent neural network model is the predicted load indicator of the local data node for the time period from 2T+1 to 3T.
9. The predictive analyzer according to claim 8, characterized in that the model training unit is further configured to calculate the output of each hidden layer of the multilayer recurrent neural network model by the following formulas:
$$h_i^1 = \tanh(w_{h^1 x} x_i + w_{h^1 h^1} h_{i-1}^1 + b_{h^1})$$
$$h_i^2 = \tanh(w_{h^1 h^2} h_i^1 + w_{h^2 h^2} h_{i-1}^2 + b_{h^2})$$
$$\hat{y}_i = \tanh(w_{y h^2} h_i^2 + b_y)$$
where $x_i$ is the network input, $\hat{y}_i$ is the network output, $h_i^1$ and $h_i^2$ denote the outputs of the first and second hidden layers respectively, $w_{h^1 x}$ is the weight matrix between the input and the $h^1$ layer, $w_{h^1 h^1}$ is the weight matrix of the $h^1$ layer across time steps, $w_{h^1 h^2}$ is the weight matrix between the $h^1$ layer and the $h^2$ layer, $w_{h^2 h^2}$ is the weight matrix of the $h^2$ layer across time steps, $b_{h^1}$ is the bias of the $h^1$ layer, $b_{h^2}$ is the bias of the $h^2$ layer, $b_y$ is the bias of the output layer, and tanh is the activation function;
to calculate the output error of the multilayer recurrent neural network model according to the following formula:
$$e = \frac{1}{2}\sum_{i=1}^{T}(\hat{y}_i - y_i)$$
and, according to the calculated output error, to update all the weights of the multilayer recurrent neural network model until the output error is within a preset allowed range, at which point the training of the multilayer recurrent neural network model ends.
10. A distributed database, characterized by comprising: a distributed database management system (DDBMS), at least one data node connected to the DDBMS, and a multilayer recurrent neural network predictive analyzer installed on each data node;
wherein the DDBMS includes a resource management module and a job scheduling module, the resource management module manages the data resources of the at least one data node, and the job scheduling module performs job scheduling on the data nodes for a client's data requests according to the data resource status of each data node managed by the resource management module; further, each data node comprises a local resource allocation module and a local job scheduling module, the local resource allocation module manages the local data resources, and the local job scheduling module processes the data resources in the local resource allocation module according to the scheduling instructions of the job scheduling module of the DDBMS;
in addition, the multilayer recurrent neural network predictive analyzer corresponds to the local resource allocation module of each data node, the local resource allocation module reports its current local load to the corresponding predictive analyzer, and the predictive analyzer predicts the load of the corresponding local resource allocation module according to the current local load and feeds the prediction result back to that local resource allocation module;
afterwards, each local resource allocation module sends the obtained prediction result together with its current load status to the resource management module of the DDBMS, the resource management module passes all the obtained prediction results and current load statuses to the job scheduling module, and the job scheduling module sends job scheduling instructions to the local job scheduling module of the corresponding data node according to the prediction result and current load status of each data node.
CN201510938406.9A 2015-12-15 2015-12-15 Load balance prediction method and prediction analyzer for distributed database Active CN105550323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510938406.9A CN105550323B (en) 2015-12-15 2015-12-15 Load balance prediction method and prediction analyzer for distributed database

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510938406.9A CN105550323B (en) 2015-12-15 2015-12-15 Load balance prediction method and prediction analyzer for distributed database

Publications (2)

Publication Number Publication Date
CN105550323A true CN105550323A (en) 2016-05-04
CN105550323B CN105550323B (en) 2020-04-28

Family

ID=55829512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510938406.9A Active CN105550323B (en) 2015-12-15 2015-12-15 Load balance prediction method and prediction analyzer for distributed database

Country Status (1)

Country Link
CN (1) CN105550323B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502799A (en) * 2016-12-30 2017-03-15 南京大学 A kind of host load prediction method based on long memory network in short-term
CN106682217A (en) * 2016-12-31 2017-05-17 成都数联铭品科技有限公司 Method for enterprise second-grade industry classification based on automatic screening and learning of information
CN107426315A (en) * 2017-07-24 2017-12-01 南京邮电大学 A kind of improved method of the distributed cache system Memcached based on BP neural network
WO2017215339A1 (en) * 2016-06-14 2017-12-21 武汉斗鱼网络科技有限公司 Search cluster optimisation method and system based on rbf neural network
CN107515663A (en) * 2016-06-15 2017-12-26 北京京东尚科信息技术有限公司 The method and apparatus for adjusting central processor core running frequency
WO2018000991A1 (en) * 2016-06-30 2018-01-04 华为技术有限公司 Data balancing method and device
CN107743630A (en) * 2015-07-27 2018-02-27 谷歌有限责任公司 Meet the possibility of condition using Recognition with Recurrent Neural Network prediction
CN108632082A (en) * 2018-03-27 2018-10-09 北京国电通网络技术有限公司 A kind of prediction technique and device of the load information of server
CN109358959A (en) * 2018-10-23 2019-02-19 电子科技大学 Data distribution formula cooperative processing method based on prediction
CN109522129A (en) * 2018-11-23 2019-03-26 快云信息科技有限公司 A kind of resource method for dynamically balancing, device and relevant device
CN109936473A (en) * 2017-12-19 2019-06-25 华耀(中国)科技有限公司 Distributed computing system and its operation method based on deep learning prediction
CN109976908A (en) * 2019-03-15 2019-07-05 北京工业大学 A kind of server cluster dynamic retractility method based on RNN time series forecasting
CN110059858A (en) * 2019-03-15 2019-07-26 深圳壹账通智能科技有限公司 Server resource prediction technique, device, computer equipment and storage medium
CN110084380A (en) * 2019-05-10 2019-08-02 深圳市网心科技有限公司 A kind of repetitive exercise method, equipment, system and medium
CN110784555A (en) * 2019-11-07 2020-02-11 中电福富信息科技有限公司 Intelligent monitoring and load scheduling method based on deep learning
CN111143050A (en) * 2018-11-02 2020-05-12 中移(杭州)信息技术有限公司 Container cluster scheduling method and device
WO2021052140A1 (en) * 2019-09-17 2021-03-25 中国科学院分子细胞科学卓越创新中心 Anticipatory learning method and system oriented towards short-term time series prediction
WO2021139276A1 (en) * 2020-01-10 2021-07-15 平安科技(深圳)有限公司 Automatic operation and maintenance method and device for platform databases, and computer readable storage medium
CN113157814A (en) * 2021-01-29 2021-07-23 东北大学 Query-driven intelligent workload analysis method under relational database
CN114661463A (en) * 2022-03-09 2022-06-24 国网山东省电力公司信息通信公司 BP neural network-based system resource prediction method and system
CN114969209A (en) * 2022-06-15 2022-08-30 支付宝(杭州)信息技术有限公司 Training method and device, and method and device for predicting resource consumption

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060948A1 (en) * 2001-01-29 2013-03-07 Overland Storage, Inc. Systems and methods for load balancing drives and servers
CN103678004A (en) * 2013-12-19 2014-03-26 南京大学 Host load prediction method based on unsupervised feature learning
CN104239194A (en) * 2014-09-12 2014-12-24 上海交通大学 Task completion time prediction method based on BP (Back Propagation) neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060948A1 (en) * 2001-01-29 2013-03-07 Overland Storage, Inc. Systems and methods for load balancing drives and servers
CN103678004A (en) * 2013-12-19 2014-03-26 南京大学 Host load prediction method based on unsupervised feature learning
CN104239194A (en) * 2014-09-12 2014-12-24 上海交通大学 Task completion time prediction method based on BP (Back Propagation) neural network

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107743630A (en) * 2015-07-27 2018-02-27 谷歌有限责任公司 Meet the possibility of condition using Recognition with Recurrent Neural Network prediction
CN107743630B (en) * 2015-07-27 2021-12-17 谷歌有限责任公司 Predicting likelihood of satisfying a condition using a recurrent neural network
WO2017215339A1 (en) * 2016-06-14 2017-12-21 武汉斗鱼网络科技有限公司 Search cluster optimisation method and system based on rbf neural network
CN107515663B (en) * 2016-06-15 2021-01-26 北京京东尚科信息技术有限公司 Method and device for adjusting running frequency of central processing unit kernel
CN107515663A (en) * 2016-06-15 2017-12-26 北京京东尚科信息技术有限公司 The method and apparatus for adjusting central processor core running frequency
WO2018000991A1 (en) * 2016-06-30 2018-01-04 华为技术有限公司 Data balancing method and device
CN106502799A (en) * 2016-12-30 2017-03-15 南京大学 A kind of host load prediction method based on long memory network in short-term
CN106682217A (en) * 2016-12-31 2017-05-17 成都数联铭品科技有限公司 Method for enterprise second-grade industry classification based on automatic screening and learning of information
CN107426315A (en) * 2017-07-24 2017-12-01 南京邮电大学 A kind of improved method of the distributed cache system Memcached based on BP neural network
CN107426315B (en) * 2017-07-24 2020-07-31 南京邮电大学 Distributed cache system Memcached improvement method based on BP neural network
CN109936473A (en) * 2017-12-19 2019-06-25 华耀(中国)科技有限公司 Distributed computing system and its operation method based on deep learning prediction
CN109936473B (en) * 2017-12-19 2022-04-08 北京华耀科技有限公司 Deep learning prediction-based distributed computing system and operation method thereof
CN108632082A (en) * 2018-03-27 2018-10-09 北京国电通网络技术有限公司 A kind of prediction technique and device of the load information of server
CN109358959A (en) * 2018-10-23 2019-02-19 电子科技大学 Data distribution formula cooperative processing method based on prediction
CN111143050B (en) * 2018-11-02 2023-09-19 中移(杭州)信息技术有限公司 Method and equipment for dispatching container clusters
CN111143050A (en) * 2018-11-02 2020-05-12 中移(杭州)信息技术有限公司 Container cluster scheduling method and device
CN109522129A (en) * 2018-11-23 2019-03-26 快云信息科技有限公司 A kind of resource method for dynamically balancing, device and relevant device
CN110059858A (en) * 2019-03-15 2019-07-26 深圳壹账通智能科技有限公司 Server resource prediction technique, device, computer equipment and storage medium
CN109976908A (en) * 2019-03-15 2019-07-05 北京工业大学 A kind of server cluster dynamic retractility method based on RNN time series forecasting
CN110084380A (en) * 2019-05-10 2019-08-02 深圳市网心科技有限公司 A kind of repetitive exercise method, equipment, system and medium
WO2021052140A1 (en) * 2019-09-17 2021-03-25 中国科学院分子细胞科学卓越创新中心 Anticipatory learning method and system oriented towards short-term time series prediction
CN110784555A (en) * 2019-11-07 2020-02-11 中电福富信息科技有限公司 Intelligent monitoring and load scheduling method based on deep learning
WO2021139276A1 (en) * 2020-01-10 2021-07-15 平安科技(深圳)有限公司 Automatic operation and maintenance method and device for platform databases, and computer readable storage medium
CN113157814A (en) * 2021-01-29 2021-07-23 东北大学 Query-driven intelligent workload analysis method under relational database
CN113157814B (en) * 2021-01-29 2023-07-18 东北大学 Query-driven intelligent workload analysis method under relational database
CN114661463A (en) * 2022-03-09 2022-06-24 国网山东省电力公司信息通信公司 BP neural network-based system resource prediction method and system
CN114969209A (en) * 2022-06-15 2022-08-30 支付宝(杭州)信息技术有限公司 Training method and device, and method and device for predicting resource consumption

Also Published As

Publication number Publication date
CN105550323B (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN105550323A (en) Load balancing prediction method of distributed database, and predictive analyzer
CN103118124B (en) A kind of cloud computing load balancing method based on the many agencies of layering
CN104216782B (en) Dynamic resource management method in high-performance calculation and cloud computing hybird environment
Taher et al. New approach for optimal UPFC placement using hybrid immune algorithm in electric power systems
CN106534318B (en) A kind of OpenStack cloud platform resource dynamic scheduling system and method based on flow compatibility
CN102981910B (en) The implementation method of scheduling virtual machine and device
CN111966453B (en) Load balancing method, system, equipment and storage medium
CN113794494B (en) Edge computing system and computing unloading optimization method for low-orbit satellite network
CN109189553A (en) Network service and virtual resource multiple target matching process and system
CN102110021B (en) Automatic optimization method for cloud computing
CN105373432B (en) A kind of cloud computing resource scheduling method based on virtual resource status predication
CN103607466B (en) A kind of wide-area multi-stage distributed parallel grid analysis method based on cloud computing
CN105718364A (en) Dynamic assessment method for ability of computation resource in cloud computing platform
CN106020933A (en) Ultra-lightweight virtual machine-based cloud computing dynamic resource scheduling system and method
CN107038064A (en) Virtual machine management method and device, storage medium
CN107454105A (en) A kind of multidimensional network safety evaluation method based on AHP and grey correlation
CN109271257A (en) A kind of method and apparatus of virtual machine (vm) migration deployment
CN110389813A (en) A kind of dynamic migration of virtual machine method in network-oriented target range
CN103617067A (en) Electric power software simulation system based on cloud computing
CN105426241A (en) Cloud computing data center based unified resource scheduling energy-saving method
CN112329997A (en) Power demand load prediction method and system, electronic device, and storage medium
CN106681839A (en) Elasticity calculation dynamic allocation method
CN112261120A (en) Cloud-side cooperative task unloading method and device for power distribution internet of things
CN112651177A (en) Power distribution network flexible resource allocation method and system considering flexible service cost
CN104283717B (en) A kind of method and device for predicting virtual network resource state

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20170315

Address after: 100070 Fengtai District, Feng Feng Road, the era of wealth on the 1st floor of the world's 28 floor, Beijing

Applicant after: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant after: State Grid Corporation of China

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER Co.

Applicant after: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant after: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant after: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Address before: 100070 Fengtai District, Feng Feng Road, the era of wealth on the 1st floor of the world's 28 floor, Beijing

Applicant before: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant before: State Grid Corporation of China

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER Co.

Applicant before: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant before: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant before: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100070 Fengtai District, Feng Feng Road, the era of wealth on the 1st floor of the world's 28 floor, Beijing

Applicant after: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant after: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant after: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Address before: 100070 Fengtai District, Feng Feng Road, the era of wealth on the 1st floor of the world's 28 floor, Beijing

Applicant before: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant before: State Grid Corporation of China

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER Co.

Applicant before: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant before: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant before: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

Applicant before: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20190724

Address after: 100085 Beijing city Haidian District Qinghe small Camp Road No. 15

Applicant after: BEIJING CHINA POWER INFORMATION TECHNOLOGY Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant after: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant after: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Address before: 100070 Fengtai District, Feng Feng Road, the era of wealth on the 1st floor of the world's 28 floor, Beijing

Applicant before: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant before: STATE GRID CORPORATION OF CHINA

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: STATE GRID JIBEI ELECTRIC POWER Co.,Ltd.

Applicant before: Beijing Zhongdian Feihua Communications Co.,Ltd.

Applicant before: BEIJING GREAT OPENSOURCE SOFTWARE Co.,Ltd.

Applicant before: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

GR01 Patent grant
GR01 Patent grant