CN110059858A - Server resource prediction method, device, computer equipment and storage medium - Google Patents
- Publication number: CN110059858A
- Application number: CN201910198343.6A
- Authority
- CN
- China
- Prior art keywords
- performance parameter
- parameter sets
- historical performance
- historical
- usage amount
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
  - G06N3/00—Computing arrangements based on biological models
  - G06N3/02—Neural networks
  - G06N3/04—Architecture, e.g. interconnection topology
  - G06N3/045—Combinations of networks
  - G06N3/08—Learning methods
  - G06N3/084—Backpropagation, e.g. using gradient descent
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
  - G06Q10/00—Administration; Management
  - G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
  - G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
  - G06Q10/063—Operations research, analysis or management
  - G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
Abstract
The invention discloses a server resource prediction method, device, computer equipment and storage medium. The method comprises: taking a user selected from a user list as a target user, and collecting, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user; performing model training on a back-propagation neural network to be trained according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values; obtaining a current input sequence according to the historical performance parameter set and a received time point to be predicted; and inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted. The method builds a server resource prediction model from historical data and predicts the future usage of cloud server resources.
Description
Technical field
The present invention relates to the field of intelligent decision technology, and more particularly to a server resource prediction method, device, computer equipment and storage medium.
Background technique
Currently, when an enterprise deploys a cloud server, the cloud server can dynamically allocate a certain amount of storage and CPU to a team or enterprise according to actual usage demand (for example, for internal R&D testing), and bill by the actual amount of storage and CPU used. The storage and CPU of the cloud server (i.e. the database and application hosts) are managed only by the project leader of each team. Future capacity requirements cannot be judged accurately from the project leaders' working experience alone, nor can a large pool of resources be verified one by one with every project leader; this both hinders project development and makes it difficult to plan cloud server procurement.
Summary of the invention
Embodiments of the present invention provide a server resource prediction method, device, computer equipment and storage medium, aiming to solve the prior-art problem that, even when the current and historical actual usage of a cloud server's storage and CPU is known, future capacity requirements cannot be judged accurately from experience.
In a first aspect, an embodiment of the present invention provides a server resource prediction method, comprising:
taking a user selected from a user list as a target user, and collecting, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user, wherein the historical performance parameters include at least storage usage and CPU usage;
performing model training on a back-propagation neural network to be trained according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values;
obtaining a current input sequence according to the historical performance parameter set and a received time point to be predicted; and
inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In a second aspect, an embodiment of the present invention provides a server resource prediction device, comprising:
a historical set acquiring unit, configured to take a user selected from a user list as a target user and collect, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user, wherein the historical performance parameters include at least storage usage and CPU usage;
a model training unit, configured to perform model training on a back-propagation neural network to be trained according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values;
a current sequence acquiring unit, configured to obtain a current input sequence according to the historical performance parameter set and a received time point to be predicted; and
a predicted value acquiring unit, configured to input the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In a third aspect, an embodiment of the present invention provides a computer equipment, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the server resource prediction method described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the server resource prediction method described in the first aspect.
Embodiments of the present invention provide a server resource prediction method, device, computer equipment and storage medium. The method comprises: taking a user selected from a user list as a target user, and collecting, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user, wherein the historical performance parameters include at least storage usage and CPU usage; performing model training on a back-propagation neural network to be trained according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values; obtaining a current input sequence according to the historical performance parameter set and a received time point to be predicted; and inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted. The method builds a server resource prediction model from historical data and predicts the future usage of cloud server resources.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the server resource prediction method provided by an embodiment of the present invention;
Fig. 2 is a schematic sub-flowchart of the server resource prediction method provided by an embodiment of the present invention;
Fig. 3 is another schematic sub-flowchart of the server resource prediction method provided by an embodiment of the present invention;
Fig. 4 is another schematic sub-flowchart of the server resource prediction method provided by an embodiment of the present invention;
Fig. 5 is another schematic sub-flowchart of the server resource prediction method provided by an embodiment of the present invention;
Fig. 6 is a schematic block diagram of the server resource prediction device provided by an embodiment of the present invention;
Fig. 7 is a schematic block diagram of a subunit of the server resource prediction device provided by an embodiment of the present invention;
Fig. 8 is another schematic block diagram of a subunit of the server resource prediction device provided by an embodiment of the present invention;
Fig. 9 is another schematic block diagram of a subunit of the server resource prediction device provided by an embodiment of the present invention;
Fig. 10 is another schematic block diagram of a subunit of the server resource prediction device provided by an embodiment of the present invention;
Fig. 11 is a schematic block diagram of the computer equipment provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that when used in this specification and the appended claims, the terms "comprise" and "include" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the server resource prediction method provided by an embodiment of the present invention. The server resource prediction method is applied in a server and is executed by application software installed in the server. As shown in Fig. 1, the method comprises steps S110~S140.
S110: take a user selected from a user list as a target user, and collect, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user; wherein the historical performance parameters include at least storage usage and CPU usage.
In this embodiment, when an enterprise deploys a cloud server, the cloud server can dynamically allocate a certain amount of storage and CPU to a team or enterprise according to the actual usage demand of the user (for example, for internal R&D testing). For example, if team A of the enterprise has applied for and used 100 GB of storage and 4 CPUs on the cloud server, the cloud server can monitor and collect the historical performance parameters consumed by team A's use of the server within the preset historical time period, to obtain the historical performance parameter set corresponding to the target user; wherein the historical performance parameters include at least storage usage and CPU usage.
Since the cloud server can monitor the server resources applied for by each user (i.e. storage usage and CPU usage) in real time or at regular intervals, the server resource usage of a designated user (i.e. the target user) within a designated time period can be obtained from the cloud server. For example, when predicting the performance parameters of team A's use of the cloud server in the current year, the historical performance parameters of team A over the previous 5 years can be collected from historical data; e.g. if the current year is 2018, the historical performance parameters of team A within the historical time period (e.g. 2013-2017) are collected with a preset data collection period of 15 days.
In one embodiment, as shown in Fig. 2, step S110 includes:
S111: collect, at the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user;
S112: group the first historical performance parameter set by time, to obtain multiple storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set;
S113: collect, at the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user;
S114: group the second historical performance parameter set by time, to obtain multiple CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set;
S115: compose the historical performance parameter set corresponding to the target user from the storage usage sequence set and the CPU usage sequence set.
In this embodiment, suppose for example that team A's storage usage (taken as an average) in the first half of January 2013 is 70 GB and its CPU usage is 4, and in the second half of January 2013 the storage usage is 80 GB and the CPU usage is 6, and so on. The 2013 storage usage is then abbreviated as sequence 1, specifically [70, 80, 71, 60, 50, 60, 70, 60, 76, 80, 77, 79, 70, 80, 71, 60, 50, 60, 70, 60, 76, 80, 77, 79], and the 2013 CPU usage as sequence 2, specifically [4, 6, 3, 4, 5, 6, 3.5, 4, 3.8, 3.74, 4.1, 4.15, 3.55, 4.05, 3.55, 2.75, 2.3, 3.5, 3.85, 4, 3.55, 4.4, 6.1, 7]. Likewise, the storage usage and CPU usage for 2014-2017 can be obtained in the same manner: the 2014 storage usage is abbreviated as sequence 3 and the 2014 CPU usage as sequence 4; the 2015 storage usage as sequence 5 and the 2015 CPU usage as sequence 6; the 2016 storage usage as sequence 7 and the 2016 CPU usage as sequence 8; and the 2017 storage usage as sequence 9 and the 2017 CPU usage as sequence 10. Sequences 1, 3, 5, 7 and 9 constitute the first historical performance parameter set, and sequences 2, 4, 6, 8 and 10 constitute the second historical performance parameter set.
By the above means, the historical performance parameter set corresponding to the target user, composed of sequences 1-10, is obtained. Once the historical performance parameter set corresponding to the target user has been obtained, the prediction model can be trained with the historical performance parameter set as the training set.
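The per-year grouping of steps S111-S114 can be sketched as follows. This is a minimal illustration, not part of the patent text: the sample tuples and the function name `group_by_year` are hypothetical, and the 24-slot layout assumes the half-month collection period described above.

```python
from collections import defaultdict

def group_by_year(samples):
    """Group (year, half_month_index, value) samples into per-year
    sequences of 24 half-month values, like sequences 1-10 above."""
    years = defaultdict(lambda: [None] * 24)
    for year, idx, value in samples:
        years[year][idx] = value
    return dict(sorted(years.items()))

# e.g. storage usage: 2013 half-month 0 -> 70 GB, half-month 1 -> 80 GB, ...
samples = [(2013, 0, 70), (2013, 1, 80), (2014, 0, 72)]
grouped = group_by_year(samples)
print(grouped[2013][:2])  # [70, 80]
```

Each yearly list then plays the role of one sequence in the historical performance parameter set.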
S120: perform model training on a back-propagation neural network to be trained according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values.
In this embodiment, model training is performed on the back-propagation neural network to be trained with the data in the historical performance parameter set as the training set, and a prediction model for predicting the resource usage of the cloud server (i.e. the back-propagation neural network) is obtained.
In one embodiment, as shown in Fig. 3, step S120 includes:
S121: obtain the target performance parameter set corresponding to the selected parameter in the historical performance parameter set, and form a target matrix from the sequences of the target performance parameter set arranged by row;
S122: remove the row vector of the last row of the target matrix to obtain a training matrix; take each column vector of the training matrix as an input of the back-propagation neural network to be trained, and take the vector values of the last row of the target matrix, in one-to-one correspondence with the column vectors of the training matrix, as outputs of the back-propagation neural network to be trained; train the back-propagation neural network to be trained to obtain the back-propagation neural network.
In this embodiment, a back-propagation neural network is abbreviated as a BP network (Back-Propagation Network). Through training on sample data, the network weights and thresholds are continually corrected so that the error function declines along the direction of negative gradient, approaching the desired output. It is a widely used neural network model, mostly used for function approximation, pattern recognition and classification, data compression, and time series prediction.
A BP network consists of an input layer, hidden layers and an output layer, and there can be one or more hidden layers. For example, a three-layer m × k × n BP network model selects the S-type (sigmoid) transfer function f(x) = 1/(1 + e^(-x)); through the back-propagated error function E = (1/2)·Σ(t_i - O_i)^2, where t_i is the desired output and O_i is the computed output of the network, the network weights and thresholds are continually adjusted so that the error function E reaches a minimum.
When choosing the number k of hidden-layer nodes, the empirical formula k = sqrt(n + m) + a can be consulted, where n is the number of input-layer neurons, m is the number of output-layer neurons, and a is a constant between [1, 10]. In this application, n = 4, m = 1 and a = 2 are selected, and k is taken as 5.
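The empirical sizing rule k = sqrt(n + m) + a can be checked directly. This is an illustrative sketch, not part of the patent; rounding up to the next integer is an assumption, consistent with the text's choice of k = 5 for n = 4, m = 1, a = 2.

```python
import math

def hidden_layer_size(n_inputs, n_outputs, a):
    """Empirical rule from the description: k = sqrt(n + m) + a,
    rounded up to the next integer (rounding is an assumption)."""
    return math.ceil(math.sqrt(n_inputs + n_outputs) + a)

print(hidden_layer_size(4, 1, 2))  # 5, matching the value chosen in the text
```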
In this embodiment, when designing the inputs of the BP network, for example, the first-half-of-January storage usage of each year from 2013 to 2016 can form the first input sequence; the second-half-of-January storage usage of each year from 2013 to 2016 forms the second input sequence; ...; the first-half-of-December storage usage of each year from 2013 to 2016 forms the 23rd input sequence; and the second-half-of-December storage usage of each year from 2013 to 2016 forms the 24th input sequence. The storage usage of the first half of January 2017 forms the first output value, corresponding to the first input sequence; the storage usage of the second half of January 2017 forms the second output value, corresponding to the second input sequence; ...; the storage usage of the first half of December 2017 forms the 23rd output value, corresponding to the 23rd input sequence; and the storage usage of the second half of December 2017 forms the 24th output value, corresponding to the 24th input sequence. The back-propagation neural network to be trained is trained with the above 24 input sequences and 24 output values to obtain the back-propagation neural network.
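The construction of the 24 input/output pairs above (columns of the training matrix as inputs, the last row of the target matrix as outputs, per steps S121-S122) can be sketched as follows. The toy matrix values and the helper name `make_training_pairs` are illustrative assumptions; only 3 of the 24 half-month columns are shown.

```python
def make_training_pairs(target_matrix):
    """Columns of all rows but the last are the inputs; the last row
    holds the matching outputs, one per half-month column."""
    *history, outputs = target_matrix          # rows = years, cols = half-months
    inputs = list(zip(*history))               # one 4-value column per half-month
    return list(zip(inputs, outputs))

# rows: 2013..2017 storage usage (toy values), 3 half-month columns shown
matrix = [
    [70, 80, 71],   # 2013
    [72, 81, 70],   # 2014
    [75, 83, 74],   # 2015
    [78, 85, 76],   # 2016
    [80, 88, 79],   # 2017  <- the output row
]
pairs = make_training_pairs(matrix)
print(pairs[0])  # ((70, 72, 75, 78), 80): 4 yearly inputs, one 2017 output
```

With the full 5 × 24 target matrix this yields exactly the 24 (input sequence, output value) pairs described in the text.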
In one embodiment, as shown in Fig. 4, after step S121 the method further includes:
S1211: obtain each row vector of the target matrix row by row, and compare each vector value of each row vector with the adjacent previous vector value to evaluate the growth rate;
S1212: if the current growth rate obtained by comparing a vector value of a row vector with the adjacent previous vector value exceeds a preset growth-rate threshold, replace the corresponding vector value according to a preset value replacement policy, to obtain an updated target matrix.
In this embodiment, to guard against noise points among the vector values of the row vectors of the target matrix, each vector value of each row vector needs to be compared with the adjacent previous vector value to evaluate the growth rate. If, for example, the current growth rate obtained by comparing a vector value with the adjacent previous vector value exceeds 50%, the current vector value is treated as a noise point and needs to be replaced according to the preset value replacement policy (for example, replacing the current vector value with 1.25 times the adjacent previous vector value), so as to obtain the updated target matrix.
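A minimal sketch of the noise-point replacement in steps S1211-S1212, assuming the 50% growth threshold and the 1.25x replacement factor given as examples in the text. The function name and the choice to compare each value against its (possibly already replaced) predecessor are assumptions.

```python
def smooth_spikes(row, threshold=0.5, factor=1.25):
    """Replace any value whose growth over its predecessor exceeds
    `threshold` with `factor` times that predecessor."""
    out = list(row)
    for i in range(1, len(out)):
        prev = out[i - 1]
        if prev > 0 and (out[i] - prev) / prev > threshold:
            out[i] = factor * prev   # treat as a noise point and cap it
    return out

# 200 grows 150% over 80, so it is replaced by 1.25 * 80 = 100
print(smooth_spikes([70, 80, 200, 76]))  # [70, 80, 100.0, 76]
```

Applying this to every row vector yields the updated target matrix described above.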
S130: obtain a current input sequence according to the historical performance parameter set and a received time point to be predicted.
In one embodiment, as shown in Fig. 5, step S130 includes:
S131: obtain the time interval between the time point to be predicted and the time point corresponding to the last row vector of the target matrix, and divide the time interval by one year to obtain an interval number;
S132: if the interval number equals 1, remove the first row vector of the target matrix to obtain an adjusted matrix, and take the column vector of the adjusted matrix corresponding to the time point to be predicted as the current input sequence.
In this embodiment, if, for example, the time point to be predicted is the first half of July 2018, the current input sequence is formed from the vector values in the target matrix corresponding to the first half of July 2014, the first half of July 2015, the first half of July 2016 and the first half of July 2017. Here the time point to be predicted is the first half of July 2018, the last row of the target matrix contains the time point of the first half of July 2017, and the time interval between the first half of July 2018 and the first half of July 2017, divided by one year, equals 1; therefore the first row vector of the target matrix is removed to obtain the adjusted matrix, and the column vector of the adjusted matrix corresponding to the time point to be predicted (i.e. the column vector containing the vector value of the first half of July 2017) is taken as the current input sequence.
If the interval number is greater than 1, for example an interval number of 2 where the predicted value for the first half of July 2019 is needed, prediction must be performed in two steps. First, the predicted value for the first half of July 2018 is predicted; then that predicted value is used as the last vector value of the new current input sequence, with the vector value of the first half of July 2015 as its first vector value, the vector value of the first half of July 2016 as its second vector value, and the vector value of the first half of July 2017 as its third vector value. The current input sequence composed in this way is input into the back-propagation neural network to obtain the vector value for the first half of July 2019.
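The two-step case above generalizes to a simple recursive loop: each prediction becomes the newest value of the next input window while the oldest value is dropped. This is a sketch under the assumption that the trained BP network is available as a callable; the averaging `mean_model` is a stand-in for illustration, not the patent's network.

```python
def predict_recursive(history, model, steps):
    """Roll the 4-value window forward `steps` times; each prediction
    is appended and the oldest value dropped (the interval-number > 1 case)."""
    window = list(history)                 # oldest .. newest
    for _ in range(steps):
        next_value = model(window)
        window = window[1:] + [next_value]
    return window[-1]

# toy stand-in "model": average of the window
mean_model = lambda w: sum(w) / len(w)
# one step ahead (e.g. first half of July 2018), then two steps (2019)
print(predict_recursive([72, 75, 78, 80], mean_model, 1))  # 76.25
print(predict_recursive([72, 75, 78, 80], mean_model, 2))  # 77.3125
```

With `steps=2` the first prediction replaces the 2014 value in the window exactly as the two-step procedure in the text describes.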
S140: input the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In this embodiment, when, for example, the vector values corresponding to the first halves of July 2014, July 2015, July 2016 and July 2017 are known and the vector value for the first half of July 2018 is to be predicted, the current input sequence composed of the vector values in the training matrix corresponding to the first halves of July 2014, July 2015, July 2016 and July 2017 can be input into the back-propagation neural network to obtain the predicted value corresponding to the time point to be predicted. Prediction of storage usage is thus realized, which allows the maintenance personnel of the cloud server to dynamically deploy storage and CPU in a timely manner according to different target users.
This method builds a server resource prediction model from historical data and predicts the future usage of cloud server resources.
An embodiment of the present invention further provides a server resource prediction device, which is used to perform any embodiment of the aforementioned server resource prediction method. Specifically, referring to Fig. 6, Fig. 6 is a schematic block diagram of the server resource prediction device provided by an embodiment of the present invention. The server resource prediction device 100 can be configured in a server.
As shown in Fig. 6, the server resource prediction device 100 includes a historical set acquiring unit 110, a model training unit 120, a current sequence acquiring unit 130, and a predicted value acquiring unit 140.
The historical set acquiring unit 110 is configured to take a user selected from a user list as a target user and collect, at a preset collection period, the historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user; wherein the historical performance parameters include at least storage usage and CPU usage.
In this embodiment, when an enterprise deploys a cloud server, the cloud server can dynamically allocate a certain amount of storage and CPU to a team or enterprise according to the actual usage demand of the user (for example, for internal R&D testing). For example, if team A of the enterprise has applied for and used 100 GB of storage and 4 CPUs on the cloud server, the cloud server can monitor and collect the historical performance parameters consumed by team A's use of the server within the preset historical time period, to obtain the historical performance parameter set corresponding to the target user; wherein the historical performance parameters include at least storage usage and CPU usage.
Since the cloud server can monitor the server resources applied for by each user (i.e. storage usage and CPU usage) in real time or at regular intervals, the server resource usage of a designated user (i.e. the target user) within a designated time period can be obtained from the cloud server. For example, when predicting the performance parameters of team A's use of the cloud server in the current year, the historical performance parameters of team A over the previous 5 years can be collected from historical data; e.g. if the current year is 2018, the historical performance parameters of team A within the historical time period (e.g. 2013-2017) are collected with a preset data collection period of 15 days.
In one embodiment, as shown in Fig. 7, the historical set acquiring unit 110 includes:
a first set acquiring unit 111, configured to collect, at the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user;
a first grouping unit 112, configured to group the first historical performance parameter set by time, to obtain multiple storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set;
a second set acquiring unit 113, configured to collect, at the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user;
a second grouping unit 114, configured to group the second historical performance parameter set by time, to obtain multiple CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set;
a combining unit 115, configured to compose the historical performance parameter set corresponding to the target user from the storage usage sequence set and the CPU usage sequence set.
In this embodiment, suppose team A's memory usage (as an average value) for the first half of January 2013 is 70G with a CPU usage of 4, and its memory usage for the second half of January 2013 is 80G with a CPU usage of 6, and so on. The 2013 memory usage is then abbreviated as sequence 1, specifically [70 80 71 60 50 60 70 60 76 80 77 79 70 80 71 60 50 60 70 60 76 80 77 79], and the 2013 CPU usage as sequence 2, specifically [4 6 3 4 5 6 3.5 4 3.8 3.74 4.1 4.15 3.55 4.05 3.55 2.75 2.3 3.5 3.85 4 3.55 4.4 6.1 7]. Likewise, the memory usage and CPU usage for 2014-2017 can be obtained in the same manner: the 2014 memory usage is abbreviated as sequence 3, the 2014 CPU usage as sequence 4, the 2015 memory usage as sequence 5, the 2015 CPU usage as sequence 6, the 2016 memory usage as sequence 7, the 2016 CPU usage as sequence 8, the 2017 memory usage as sequence 9, and the 2017 CPU usage as sequence 10. Sequences 1, 3, 5, 7 and 9 constitute the first historical performance parameter set; sequences 2, 4, 6, 8 and 10 constitute the second historical performance parameter set.
By the above means, the historical performance parameter sets composed of sequences 1-10 have been obtained. Once the historical performance parameter sets corresponding to the target user are obtained, the prediction model can be trained with the historical performance parameter sets as the training set.
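The assembly of the first and second historical performance parameter sets described above can be sketched as follows. This is an illustrative Python sketch only; the function name, sample layout (one tuple per half-month collection period), and helper structure are assumptions, not part of the original disclosure.

```python
# Group per-collection-period samples into yearly usage sequences, forming
# the first (memory) and second (CPU) historical performance parameter sets.
def build_parameter_sets(samples):
    """samples: list of (year, memory_gb, cpu) tuples, ordered by the
    collection period (two samples per month -> 24 values per year)."""
    memory_by_year = {}   # first historical performance parameter set
    cpu_by_year = {}      # second historical performance parameter set
    for year, mem, cpu in samples:
        memory_by_year.setdefault(year, []).append(mem)
        cpu_by_year.setdefault(year, []).append(cpu)
    return memory_by_year, cpu_by_year

# Two illustrative samples for 2013 (first and second half of January).
samples = [(2013, 70, 4), (2013, 80, 6)]
mem_set, cpu_set = build_parameter_sets(samples)
print(mem_set[2013])  # [70, 80]
print(cpu_set[2013])  # [4, 6]
```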
Model training unit 120, configured to perform model training on a back-propagation neural network to be trained according to the historical performance parameter sets, to obtain a back-propagation neural network for predicting performance parameter values.
In this embodiment, using the data in the historical performance parameter sets as the training set, model training is performed on the back-propagation neural network to be trained, so that a prediction model for predicting the resource usage of the cloud server (i.e., the back-propagation neural network) is obtained.
In one embodiment, as shown in figure 8, model training unit 120 includes:
Objective matrix acquiring unit 121, configured to obtain, from the historical performance parameter sets, a target performance parameter set corresponding to a selected parameter, to obtain an objective matrix composed row by row of the sequences of the target performance parameter set;
Training unit 122, configured to remove the last row vector of the objective matrix to obtain a training matrix, take each column vector of the training matrix as an input of the back-propagation neural network to be trained, take the vector values in the last row of the objective matrix in one-to-one correspondence with the column vectors of the training matrix as the outputs of the back-propagation neural network to be trained, and train the back-propagation neural network to be trained to obtain the back-propagation neural network.
In this embodiment, the back-propagation neural network is abbreviated as a BP network (Back-Propagation Network). Through training on sample data, the network weights and thresholds are continuously corrected so that the error function declines along the negative gradient direction, approaching the expected output. It is a widely used neural network model, mainly applied to function approximation, pattern recognition and classification, data compression, and time-series prediction.
A BP network consists of an input layer, a hidden layer and an output layer, and the hidden layer may have one or more layers. For an m × k × n three-layer BP network model, the network adopts the S-type (sigmoid) transfer function f(x) = 1/(1 + e^(-x)), and through the back-propagated error function E = (1/2) Σ_i (t_i − O_i)² (where t_i is the expected output and O_i is the calculated output of the network), the network weights and thresholds are continuously adjusted so that the error function E reaches a minimum.
When choosing the number k of hidden-layer nodes, the empirical formula k = sqrt(n + m) + a can be referred to, where n is the number of input-layer neurons, m is the number of output-layer neurons, and a is a constant between 1 and 10. In this application, n = 4 and m = 1, a = 2 is selected, and k is taken as 5.
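The empirical hidden-layer formula quoted above can be verified with a short sketch; the helper name `hidden_nodes` is an assumption for illustration.

```python
import math

# Empirical hidden-layer size: k = sqrt(n + m) + a, rounded up to an
# integer node count. With n = 4, m = 1, a = 2 (the application's values),
# sqrt(5) + 2 ~= 4.24, so k = 5 as stated in the text.
def hidden_nodes(n, m, a):
    return math.ceil(math.sqrt(n + m) + a)

print(hidden_nodes(4, 1, 2))  # 5
```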
In this embodiment, when designing the inputs of the BP network, for example, the memory usage of the first half of January from 2013 to 2016 forms the first input sequence; the memory usage of the second half of January from 2013 to 2016 forms the second input sequence; ...; the memory usage of the first half of December from 2013 to 2016 forms the 23rd input sequence; and the memory usage of the second half of December from 2013 to 2016 forms the 24th input sequence. The memory usage of the first half of January 2017 forms the first output value, corresponding to the first input sequence; the memory usage of the second half of January 2017 forms the second output value, corresponding to the second input sequence; ...; the memory usage of the first half of December 2017 forms the 23rd output value, corresponding to the 23rd input sequence; and the memory usage of the second half of December 2017 forms the 24th output value, corresponding to the 24th input sequence. The back-propagation neural network to be trained is trained on these 24 groups of input sequences and 24 groups of output values to obtain the back-propagation neural network.
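The construction of the 24 input sequences and 24 matching output values described above can be sketched as follows; the helper name and the toy usage data are illustrative assumptions, not the patent's code.

```python
# Build one (input sequence, output value) pair per half-month slot:
# the inputs are the same slot's values in the earlier years, the output
# is that slot's value in the final year.
def build_training_pairs(usage_by_year, input_years, output_year):
    """usage_by_year: {year: list of 24 half-month usage values}.
    Returns (inputs, outputs) with one pair per half-month slot."""
    slots = len(usage_by_year[output_year])  # 24 in the example above
    inputs, outputs = [], []
    for slot in range(slots):
        inputs.append([usage_by_year[y][slot] for y in input_years])
        outputs.append(usage_by_year[output_year][slot])
    return inputs, outputs

# Toy data: each year has 24 half-month memory-usage values.
usage = {y: [70 + i for i in range(24)] for y in (2013, 2014, 2015, 2016, 2017)}
X, t = build_training_pairs(usage, (2013, 2014, 2015, 2016), 2017)
print(len(X), len(X[0]), len(t))  # 24 4 24
```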
In one embodiment, as shown in Figure 9, model training unit 120 further includes:
Growth rate calculating unit 1211, configured to obtain each row vector of the objective matrix row by row, and compare each vector value of each row vector with the adjacent previous vector value to calculate a value growth rate;
Objective matrix updating unit 1212, configured to, if the current value growth rate obtained by comparing a vector value of a row vector with the adjacent previous vector value exceeds a preset growth rate threshold, replace the corresponding vector value according to a preset value replacement policy to obtain an updated objective matrix.
In this embodiment, to prevent noise points from existing among the vector values of the row vectors in the objective matrix, each vector value of each row vector needs to be compared with the adjacent previous vector value to calculate the value growth rate. For example, if the current value growth rate obtained by comparing a vector value with the adjacent previous vector value exceeds 50%, the current vector value is regarded as a noise point and needs to be replaced according to the preset value replacement policy (for example, replacing the current vector value with 1.25 times the adjacent previous vector value), thereby obtaining the updated objective matrix.
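The noise-point screening just described might look like the following sketch. Only the 50% threshold and the 1.25× replacement factor come from the example above; the function name and list-based handling are assumptions.

```python
# Replace noise points in one row vector of the objective matrix: a value
# whose growth rate over the adjacent previous value exceeds the threshold
# is replaced by the previous value times the replacement factor.
def replace_noise_points(row, threshold=0.5, factor=1.25):
    cleaned = list(row)
    for i in range(1, len(cleaned)):
        prev = cleaned[i - 1]
        growth = (cleaned[i] - prev) / prev  # value growth rate
        if growth > threshold:
            cleaned[i] = prev * factor       # preset replacement policy
    return cleaned

print(replace_noise_points([70, 80, 200, 75]))
# 200 grows 150% over 80, so it is replaced by 80 * 1.25 = 100.0
```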
Current sequence acquiring unit 130, configured to correspondingly obtain a current input sequence according to the historical performance parameter sets and the received time point to be predicted.
In one embodiment, as shown in Figure 10, current sequence acquiring unit 130 includes:
Space-number computing unit 131, configured to obtain the time interval between the time point to be predicted and the time point corresponding to the last row vector of the objective matrix, and divide the time interval by one year to obtain a space-number;
Input sequence acquiring unit 132, configured to, if the space-number equals 1, remove the first row vector of the objective matrix to obtain an adjusted matrix, and take the column vector corresponding to the time point to be predicted in the adjusted matrix as the current input sequence.
In this embodiment, for example, if the time point to be predicted is the first half of July 2018, the vector values in the objective matrix corresponding to the first half of July 2014, the first half of July 2015, the first half of July 2016 and the first half of July 2017 form the current input sequence. Here the time point to be predicted is the first half of July 2018, the last row of the objective matrix contains the time point of the first half of July 2017, and the time interval between the first half of July 2018 and the first half of July 2017 divided by one year equals 1. Therefore the first row vector of the objective matrix is removed to obtain the adjusted matrix, and the column vector corresponding to the time point to be predicted in the adjusted matrix (i.e., the column vector containing the vector value for the time point of the first half of July 2017) is taken as the current input sequence.
If the space-number is greater than 1, for example 2, the predicted value for the time point of the first half of July 2019 is required. In this case the prediction is performed in two steps: first the predicted value for the time point of the first half of July 2018 is predicted; then the predicted value for the first half of July 2018 is taken as the vector value of the last row of the current input sequence, the vector value for the time point of the first half of July 2015 as the vector value of the first row, the vector value for the time point of the first half of July 2016 as the vector value of the second row, and the vector value for the time point of the first half of July 2017 as the vector value of the third row; the current input sequence so composed is input into the back-propagation neural network to obtain the vector value for the time point of the first half of July 2019.
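The two-step prediction for a space-number greater than 1 can be sketched as a sliding-window recursion, as follows. This is illustrative only: `predict` stands in for the trained back-propagation network, and the toy mean-based stand-in is an assumption.

```python
# Recursive prediction: each predicted value is appended to the window
# (dropping the oldest entry) and fed back in until the target time point
# is reached. `steps` is the space-number.
def predict_ahead(history, steps, predict):
    """history: the four most recent same-slot values (e.g. first half of
    July 2014-2017); predict: callable mapping a window to one value."""
    window = list(history)
    value = None
    for _ in range(steps):
        value = predict(window)
        window = window[1:] + [value]  # slide: drop oldest, append prediction
    return value

# Toy stand-in model: predict the mean of the input sequence.
mean = lambda seq: sum(seq) / len(seq)
print(predict_ahead([70, 72, 74, 76], 2, mean))  # 73.75
```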
Predicted value acquiring unit 140, configured to input the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In this embodiment, for example, given the known vector values corresponding to the first half of July 2014, the first half of July 2015, the first half of July 2016 and the first half of July 2017, when predicting the vector value corresponding to the first half of July 2018, the current input sequence composed of those vector values in the training matrix can be input into the back-propagation neural network to obtain the predicted value corresponding to the time point to be predicted. This realizes the prediction of memory usage, so that the maintenance personnel of the cloud server can dynamically deploy memory and CPU in a timely manner according to different target users.
This arrangement achieves establishing a server resource prediction model using historical data, so as to predict the future usage of cloud server resources.
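Under the formulation stated above (sigmoid hidden layer, squared-error function E = (1/2) Σ (t_i − O_i)², weights adjusted along the negative gradient), a minimal back-propagation network can be sketched in pure Python. Everything here, network size aside, is an illustrative assumption rather than the patent's implementation; the toy data is scaled usage, not real measurements.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBP:
    """4-5-1 BP network: sigmoid hidden layer, linear output, squared error."""
    def __init__(self, n_in=4, n_hidden=5, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, t):
        o = self.forward(x)
        err = o - t  # dE/do for E = 1/2 (t - o)^2
        # Hidden-layer deltas computed with the pre-update output weights.
        deltas = [err * w * h * (1.0 - h) for w, h in zip(self.w2, self.h)]
        for j, h in enumerate(self.h):      # output-layer update
            self.w2[j] -= self.lr * err * h
        self.b2 -= self.lr * err
        for j, d in enumerate(deltas):      # hidden-layer update
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * d * xi
            self.b1[j] -= self.lr * d

net = TinyBP()
# Toy training set: scaled four-year usage windows and their next-year targets.
data = [([0.70, 0.72, 0.74, 0.76], 0.73),
        ([0.50, 0.52, 0.54, 0.56], 0.53)]
for _ in range(2000):
    for x, t in data:
        net.train_step(x, t)
print(net.forward([0.70, 0.72, 0.74, 0.76]))  # converges toward 0.73
```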
The above server resource prediction apparatus may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in Figure 11.
Please refer to Figure 11, which is a schematic block diagram of a computer device provided by an embodiment of the present invention. The computer device 500 is a server, and the server may be an independent server or a server cluster composed of multiple servers.
Referring to Figure 11, the computer device 500 includes a processor 502, a memory and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When the computer program 5032 is executed, it can cause the processor 502 to execute the server resource prediction method.
The processor 502 is used to provide computing and control capability, supporting the operation of the entire computer device 500.
The internal memory 504 provides an environment for the running of the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, it can cause the processor 502 to execute the server resource prediction method.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art can understand that the structure shown in Figure 11 is only a block diagram of part of the structure relevant to the solution of the present invention, and does not constitute a limitation on the computer device 500 to which the solution of the present invention is applied; the specific computer device 500 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
The processor 502 is configured to run the computer program 5032 stored in the memory, so as to implement the following functions: taking a user selected in a user list as a target user, and obtaining, according to a preset collection period, historical performance parameters consumed by the target user's use of the cloud server within a preset historical time period, to obtain historical performance parameter sets corresponding to the target user, where the historical performance parameters include at least storage usage and CPU usage; performing model training on a back-propagation neural network to be trained according to the historical performance parameter sets, to obtain a back-propagation neural network for predicting performance parameter values; correspondingly obtaining a current input sequence according to the historical performance parameter sets and the received time point to be predicted; and inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In one embodiment, when executing the step of obtaining, according to the preset collection period, the historical performance parameters consumed by the target user's use of the cloud server within the preset historical time period, to obtain the historical performance parameter sets corresponding to the target user, the processor 502 performs the following operations: obtaining, according to the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user; grouping the first historical performance parameter set by time to obtain multiple storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set; obtaining, according to the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user; grouping the second historical performance parameter set by time to obtain multiple CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set; and composing the storage usage sequence set and the CPU usage sequence set into the historical performance parameter sets corresponding to the target user.
In one embodiment, when executing the step of performing model training on the back-propagation neural network to be trained according to the historical performance parameter sets, to obtain the back-propagation neural network for predicting performance parameter values, the processor 502 performs the following operations: obtaining, from the historical performance parameter sets, a target performance parameter set corresponding to a selected parameter, to obtain an objective matrix composed row by row of the sequences of the target performance parameter set; removing the last row vector of the objective matrix to obtain a training matrix, taking each column vector of the training matrix as an input of the back-propagation neural network to be trained, taking the vector values in the last row of the objective matrix in one-to-one correspondence with the column vectors of the training matrix as the outputs of the back-propagation neural network to be trained, and training the back-propagation neural network to be trained to obtain the back-propagation neural network.
In one embodiment, after executing the step of obtaining, from the historical performance parameter sets, the target performance parameter set corresponding to the selected parameter, to obtain the objective matrix composed row by row of the sequences of the target performance parameter set, the processor 502 performs the following operations: obtaining each row vector of the objective matrix row by row, and comparing each vector value of each row vector with the adjacent previous vector value to calculate a value growth rate; and, if the current value growth rate obtained by comparing a vector value of a row vector with the adjacent previous vector value exceeds a preset growth rate threshold, replacing the corresponding vector value according to a preset value replacement policy to obtain an updated objective matrix.
In one embodiment, when executing the step of correspondingly obtaining a current input sequence according to the historical performance parameter sets and the received time point to be predicted, the processor 502 performs the following operations: obtaining the time interval between the time point to be predicted and the time point corresponding to the last row vector of the objective matrix, and dividing the time interval by one year to obtain a space-number; and, if the space-number equals 1, removing the first row vector of the objective matrix to obtain an adjusted matrix, and taking the column vector corresponding to the time point to be predicted in the adjusted matrix as the current input sequence.
It will be understood by those skilled in the art that the embodiment of the computer device shown in Figure 11 does not constitute a limitation on the specific composition of the computer device; in other embodiments, the computer device may include more or fewer components than illustrated, or combine certain components, or have a different component arrangement. For example, in some embodiments, the computer device may include only a memory and a processor; in such embodiments, the structures and functions of the memory and the processor are consistent with the embodiment shown in Figure 11, and are not repeated here.
It should be appreciated that, in the embodiments of the present invention, the processor 502 may be a central processing unit (Central Processing Unit, CPU), and the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
In another embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium may be a non-volatile computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps are implemented: taking a user selected in a user list as a target user, and obtaining, according to a preset collection period, historical performance parameters consumed by the target user's use of the cloud server within a preset historical time period, to obtain historical performance parameter sets corresponding to the target user, where the historical performance parameters include at least storage usage and CPU usage; performing model training on a back-propagation neural network to be trained according to the historical performance parameter sets, to obtain a back-propagation neural network for predicting performance parameter values; correspondingly obtaining a current input sequence according to the historical performance parameter sets and the received time point to be predicted; and inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
In one embodiment, the obtaining, according to the preset collection period, the historical performance parameters consumed by the target user's use of the cloud server within the preset historical time period, to obtain the historical performance parameter sets corresponding to the target user, comprises: obtaining, according to the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user; grouping the first historical performance parameter set by time to obtain multiple storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set; obtaining, according to the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user; grouping the second historical performance parameter set by time to obtain multiple CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set; and composing the storage usage sequence set and the CPU usage sequence set into the historical performance parameter sets corresponding to the target user.
In one embodiment, the performing model training on the back-propagation neural network to be trained according to the historical performance parameter sets, to obtain the back-propagation neural network for predicting performance parameter values, comprises: obtaining, from the historical performance parameter sets, a target performance parameter set corresponding to a selected parameter, to obtain an objective matrix composed row by row of the sequences of the target performance parameter set; removing the last row vector of the objective matrix to obtain a training matrix, taking each column vector of the training matrix as an input of the back-propagation neural network to be trained, taking the vector values in the last row of the objective matrix in one-to-one correspondence with the column vectors of the training matrix as the outputs of the back-propagation neural network to be trained, and training the back-propagation neural network to be trained to obtain the back-propagation neural network.
In one embodiment, after the obtaining, from the historical performance parameter sets, the target performance parameter set corresponding to the selected parameter, to obtain the objective matrix composed row by row of the sequences of the target performance parameter set, the method further comprises: obtaining each row vector of the objective matrix row by row, and comparing each vector value of each row vector with the adjacent previous vector value to calculate a value growth rate; and, if the current value growth rate obtained by comparing a vector value of a row vector with the adjacent previous vector value exceeds a preset growth rate threshold, replacing the corresponding vector value according to a preset value replacement policy to obtain an updated objective matrix.
In one embodiment, the correspondingly obtaining a current input sequence according to the historical performance parameter sets and the received time point to be predicted comprises: obtaining the time interval between the time point to be predicted and the time point corresponding to the last row vector of the objective matrix, and dividing the time interval by one year to obtain a space-number; and, if the space-number equals 1, removing the first row vector of the objective matrix to obtain an adjusted matrix, and taking the column vector corresponding to the time point to be predicted in the adjusted matrix as the current input sequence.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described generally by function in the above description. Whether these functions are implemented in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical function division, and there may be other division manners in actual implementation, for example, units with the same function may be combined into one unit, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may also be electrical, mechanical or other forms of connection.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disc, and other media that can store program code.
The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A server resource prediction method, characterized by comprising:
taking a user selected in a user list as a target user, and obtaining, according to a preset collection period, historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain historical performance parameter sets corresponding to the target user; wherein the historical performance parameters include at least storage usage and CPU usage;
performing model training on a back-propagation neural network to be trained according to the historical performance parameter sets, to obtain a back-propagation neural network for predicting performance parameter values;
correspondingly obtaining a current input sequence according to the historical performance parameter sets and the received time point to be predicted; and
inputting the current input sequence into the back-propagation neural network to obtain a predicted value corresponding to the time point to be predicted.
2. The server resource prediction method according to claim 1, wherein the acquiring, at a preset collection period, historical performance parameters consumed by a target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user, comprises:
acquiring, at the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user;
grouping the first historical performance parameter set by time, to obtain a plurality of storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set;
acquiring, at the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user;
grouping the second historical performance parameter set by time, to obtain a plurality of CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set; and
forming, from the storage usage sequence set and the CPU usage sequence set, the historical performance parameter set corresponding to the target user.
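Claim 2's grouping of sampled usage values into time-bucketed sequences can be sketched as follows; the day-sized bucket, the six-hour collection period, and the function name `group_by_day` are illustrative assumptions, not part of the claim.

```python
from datetime import datetime, timedelta

def group_by_day(samples):
    """Group (timestamp, value) samples into per-day usage sequences.

    `samples` stands in for one collected parameter (storage or CPU
    usage); the day-sized grouping window is an assumed choice.
    """
    groups = {}
    for ts, value in samples:
        groups.setdefault(ts.date(), []).append(value)
    # Return the sequences ordered by day, forming the sequence set.
    return [groups[day] for day in sorted(groups)]

# Two days of samples collected every six hours (values are illustrative).
start = datetime(2019, 3, 15, 0, 0)
samples = [(start + timedelta(hours=6 * i), float(i)) for i in range(8)]
seqs = group_by_day(samples)   # two sequences of four samples each
```

The same grouping would be run once over the storage-usage samples and once over the CPU-usage samples to form the two sequence sets the claim combines.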
3. The server resource prediction method according to claim 2, wherein the performing model training on a to-be-trained back-propagation neural network according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values, comprises:
obtaining a target performance parameter set corresponding to a selected parameter in the historical performance parameter set, and forming a target matrix in which each sequence of the target performance parameter set is a row; and
removing the last row vector of the target matrix to obtain a training matrix, using each column vector of the training matrix as an input of the to-be-trained back-propagation neural network, using the vector values of the last row of the target matrix, in one-to-one correspondence with the column vectors of the training matrix, as the outputs of the to-be-trained back-propagation neural network, and training the to-be-trained back-propagation neural network to obtain the back-propagation neural network.
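A minimal sketch of claim 3's training scheme: the target matrix minus its last row supplies the column-vector inputs, the last row supplies the targets, and a tiny one-hidden-layer network is fitted by back-propagation. The matrix values, network size, learning rate, and iteration count are all illustrative assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target matrix: each row is one period's usage sequence (toy values).
M = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 3.0, 4.0, 5.0],
              [3.0, 4.0, 5.0, 6.0]])

# Per the claim: drop the last row to get the training matrix; its columns
# are the network inputs, and the last row's values are the targets.
X = M[:-1].T          # shape (4 samples, 2 features)
y = M[-1:].T          # shape (4 samples, 1 target)

# Tiny one-hidden-layer network trained by back-propagation
# (plain gradient descent on mean-squared error).
W1 = rng.normal(scale=0.1, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass, hidden layer
    pred = h @ W2 + b2                # linear output layer
    err = pred - y                    # dLoss/dpred for 0.5 * MSE
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)    # back-propagate through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.05 * g                 # in-place gradient step

pred = np.tanh(X @ W1 + b1) @ W2 + b2
```

After training, each new column vector of usage history can be fed through the same forward pass to produce the claimed predicted value.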
4. The server resource prediction method according to claim 3, wherein after the obtaining a target performance parameter set corresponding to a selected parameter in the historical performance parameter set and forming the target matrix in which each sequence of the target performance parameter set is a row, the method further comprises:
obtaining each row vector of the target matrix row by row, and comparing each vector value of each row vector with the adjacent preceding vector value to evaluate its growth rate; and
if there exists a current growth rate, obtained by comparing a vector value of a row vector with the adjacent preceding vector value, that exceeds a preset growth-rate threshold, replacing the corresponding vector value according to a preset value replacement policy, to obtain an updated target matrix.
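One way to realize claim 4's growth-rate check is to scan each row left to right and replace any value whose growth over its predecessor exceeds the threshold. Carrying the previous value forward is just one assumed replacement policy, since the claim leaves the policy preset; the threshold value below is likewise illustrative.

```python
import numpy as np

def cap_growth(matrix, threshold=0.5):
    """Replace values whose growth rate versus the adjacent preceding
    value in the same row exceeds `threshold`.

    Replacing with the previous value is an assumed policy; the claim
    only requires some preset replacement policy.
    """
    out = matrix.astype(float)        # astype copies; input stays intact
    for row in out:
        for j in range(1, len(row)):
            prev = row[j - 1]
            growth = (row[j] - prev) / prev if prev else float("inf")
            if growth > threshold:
                row[j] = prev         # assumed policy: carry previous value
    return out

M = np.array([[10.0, 11.0, 30.0, 12.0]])   # 30.0 is a spurious spike
M2 = cap_growth(M)                         # spike replaced by 11.0
```

Smoothing such spikes before training keeps one anomalous collection sample from dominating the fitted network.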
5. The server resource prediction method according to claim 2, wherein the obtaining a current input sequence according to the historical performance parameter set and the received time point to be predicted comprises:
obtaining the time interval between the time point to be predicted and the time point corresponding to the last row vector of the target matrix, and dividing the time interval by one year to obtain an interval count; and
if the interval count is equal to 1, removing the first row vector of the target matrix to obtain an adjusted matrix, and using the column vector of the adjusted matrix corresponding to the time point to be predicted as the current input sequence.
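Claim 5's interval test and input-sequence selection might look like the following; the 365-day year, the per-row date bookkeeping, and the way a column index is matched to the target time point are illustrative assumptions.

```python
import numpy as np
from datetime import date

def current_input(matrix, row_dates, target_date, col_of_target):
    """Select the current input sequence per the claim: if the target
    time point lies one year after the matrix's last row, drop the
    first row and take the column matching the time point.

    `row_dates` and `col_of_target` are assumed bookkeeping helpers.
    """
    interval_years = (target_date - row_dates[-1]).days // 365
    if interval_years == 1:
        adjusted = matrix[1:]               # remove the first row vector
        return adjusted[:, col_of_target]   # column for the target point
    raise ValueError("time point outside the supported one-year horizon")

# Toy target matrix: one row of usage per year.
M = np.array([[1, 2, 3, 4],
              [2, 3, 4, 5],
              [3, 4, 5, 6]])
dates = [date(2016, 1, 1), date(2017, 1, 1), date(2018, 1, 1)]
seq = current_input(M, dates, date(2019, 1, 1), col_of_target=2)
```

Dropping the oldest row keeps the input window the same length the network was trained on while sliding it one period forward.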
6. A server resource prediction apparatus, comprising:
a historical set acquiring unit, configured to take a user selected in a user list as a target user, and to acquire, at a preset collection period, historical performance parameters consumed by the target user's use of a cloud server within a preset historical time period, to obtain a historical performance parameter set corresponding to the target user, wherein the historical performance parameters include at least storage usage and CPU usage;
a model training unit, configured to perform model training on a to-be-trained back-propagation neural network according to the historical performance parameter set, to obtain a back-propagation neural network for predicting performance parameter values;
a current sequence acquiring unit, configured to obtain a current input sequence according to the historical performance parameter set and a received time point to be predicted; and
a predicted value acquiring unit, configured to input the current input sequence into the back-propagation neural network, to obtain a predicted value corresponding to the time point to be predicted.
7. The server resource prediction apparatus according to claim 6, wherein the historical set acquiring unit comprises:
a first set acquiring unit, configured to acquire, at the collection period, the storage usage consumed by the target user's use of the cloud server within the historical time period, to obtain a first historical performance parameter set corresponding to the target user;
a first grouping unit, configured to group the first historical performance parameter set by time, to obtain a plurality of storage usage sequences corresponding to the first historical performance parameter set, so as to form a storage usage sequence set;
a second set acquiring unit, configured to acquire, at the collection period, the CPU usage consumed by the target user's use of the cloud server within the historical time period, to obtain a second historical performance parameter set corresponding to the target user;
a second grouping unit, configured to group the second historical performance parameter set by time, to obtain a plurality of CPU usage sequences corresponding to the second historical performance parameter set, so as to form a CPU usage sequence set; and
a combining unit, configured to form, from the storage usage sequence set and the CPU usage sequence set, the historical performance parameter set corresponding to the target user.
8. The server resource prediction apparatus according to claim 7, wherein the model training unit comprises:
a target matrix acquiring unit, configured to obtain a target performance parameter set corresponding to a selected parameter in the historical performance parameter set, and to form a target matrix in which each sequence of the target performance parameter set is a row; and
a training unit, configured to remove the last row vector of the target matrix to obtain a training matrix, use each column vector of the training matrix as an input of the to-be-trained back-propagation neural network, use the vector values of the last row of the target matrix, in one-to-one correspondence with the column vectors of the training matrix, as the outputs of the to-be-trained back-propagation neural network, and train the to-be-trained back-propagation neural network to obtain the back-propagation neural network.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the server resource prediction method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to perform the server resource prediction method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910198343.6A CN110059858A (en) | 2019-03-15 | 2019-03-15 | Server resource prediction technique, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110059858A true CN110059858A (en) | 2019-07-26 |
Family
ID=67316996
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910198343.6A Pending CN110059858A (en) | 2019-03-15 | 2019-03-15 | Server resource prediction technique, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110059858A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080126881A1 (en) * | 2006-07-26 | 2008-05-29 | Tilmann Bruckhaus | Method and apparatus for using performance parameters to predict a computer system failure |
US20080222646A1 (en) * | 2007-03-06 | 2008-09-11 | Lev Sigal | Preemptive neural network database load balancer |
CN103678004A (en) * | 2013-12-19 | 2014-03-26 | 南京大学 | Host load prediction method based on unsupervised feature learning |
CN105373830A (en) * | 2015-12-11 | 2016-03-02 | 中国科学院上海高等研究院 | Prediction method and system for error back propagation neural network and server |
CN105550323A (en) * | 2015-12-15 | 2016-05-04 | 北京国电通网络技术有限公司 | Load balancing prediction method of distributed database, and predictive analyzer |
US20170351948A1 (en) * | 2016-06-01 | 2017-12-07 | Seoul National University R&Db Foundation | Apparatus and method for generating prediction model based on artificial neural network |
CN107608781A (en) * | 2016-07-11 | 2018-01-19 | 华为软件技术有限公司 | A kind of load predicting method, device and network element |
CN109284871A (en) * | 2018-09-30 | 2019-01-29 | 北京金山云网络技术有限公司 | Resource adjusting method, device and cloud platform |
WO2019019255A1 (en) * | 2017-07-25 | 2019-01-31 | 平安科技(深圳)有限公司 | Apparatus and method for establishing prediction model, program for establishing prediction model, and computer-readable storage medium |
2019-03-15: Application CN201910198343.6A filed (CN); publication CN110059858A; legal status: Pending
Non-Patent Citations (5)
Title |
---|
WANG JINA;YAN YONGMING;GUO JUN: "Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network", MATEC WEB OF CONFERENCES, vol. 65, pages 1 - 4 * |
WU CHANGWEI: "Research on Load Balancing Technology Based on BP Neural Network", Information Science and Technology, no. 2013, pages 139 - 26 *
WANG LEI: "Research on Reliability Time-Series Prediction Methods and Key Technologies for Service-Composition-Based 'Systems of Systems'", Basic Sciences; Information Science and Technology, no. 2017, pages 139 - 5 *
CHEN ZHIJIA; ZHU YUANCHANG; DI YANQIANG; FENG SHAOCHONG: "IaaS Cloud Resource Demand Prediction Method Based on an Improved Neural Network", Journal of Huazhong University of Science and Technology (Natural Science Edition), no. 01, pages 51 - 56 *
BAO YIDAN, WU YANPING, HE YONG: "Optimal Combination Prediction Method of BP Neural Network and Its Application", Journal of Agricultural Mechanization Research, no. 03, pages 166 - 168 *
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111598390B (en) * | 2019-10-16 | 2023-12-01 | 中国南方电网有限责任公司 | Method, device, equipment and readable storage medium for evaluating high availability of server |
CN111598390A (en) * | 2019-10-16 | 2020-08-28 | 中国南方电网有限责任公司 | Server high availability evaluation method, device, equipment and readable storage medium |
CN111125097A (en) * | 2019-11-29 | 2020-05-08 | 中盈优创资讯科技有限公司 | Report scheduling method and device |
CN111026626A (en) * | 2019-11-29 | 2020-04-17 | 中国建设银行股份有限公司 | CPU consumption estimation and estimation model training method and device |
CN111125097B (en) * | 2019-11-29 | 2024-03-15 | 中盈优创资讯科技有限公司 | Report scheduling method and device |
CN111277445A (en) * | 2020-02-17 | 2020-06-12 | 网宿科技股份有限公司 | Method and device for evaluating performance of online node server |
CN111277445B (en) * | 2020-02-17 | 2022-06-07 | 网宿科技股份有限公司 | Method and device for evaluating performance of online node server |
CN111694814A (en) * | 2020-05-27 | 2020-09-22 | 平安银行股份有限公司 | Batch expansion method and device for date partition table, computer equipment and storage medium |
CN112527470B (en) * | 2020-05-27 | 2023-05-26 | 上海有孚智数云创数字科技有限公司 | Model training method and device for predicting performance index and readable storage medium |
CN111694814B (en) * | 2020-05-27 | 2024-04-09 | 平安银行股份有限公司 | Batch expansion method and device for date partition table, computer equipment and storage medium |
CN112527470A (en) * | 2020-05-27 | 2021-03-19 | 上海有孚智数云创数字科技有限公司 | Model training method and device for predicting performance index and readable storage medium |
CN111625440A (en) * | 2020-06-04 | 2020-09-04 | 中国银行股份有限公司 | Method and device for predicting performance parameters |
CN111680835A (en) * | 2020-06-05 | 2020-09-18 | 广州汇量信息科技有限公司 | Risk prediction method and device, storage medium and electronic equipment |
CN111935025B (en) * | 2020-07-08 | 2023-10-17 | 腾讯科技(深圳)有限公司 | Control method, device, equipment and medium for TCP transmission performance |
CN111935025A (en) * | 2020-07-08 | 2020-11-13 | 腾讯科技(深圳)有限公司 | Control method, device, equipment and medium for TCP transmission performance |
CN112001116A (en) * | 2020-07-17 | 2020-11-27 | 新华三大数据技术有限公司 | Cloud resource capacity prediction method and device |
CN111985726B (en) * | 2020-08-31 | 2023-04-18 | 重庆紫光华山智安科技有限公司 | Resource quantity prediction method and device, electronic equipment and storage medium |
CN111985726A (en) * | 2020-08-31 | 2020-11-24 | 重庆紫光华山智安科技有限公司 | Resource quantity prediction method and device, electronic equipment and storage medium |
CN112182069B (en) * | 2020-09-30 | 2023-11-24 | 中国平安人寿保险股份有限公司 | Agent retention prediction method, agent retention prediction device, computer equipment and storage medium |
CN112182069A (en) * | 2020-09-30 | 2021-01-05 | 中国平安人寿保险股份有限公司 | Agent retention prediction method and device, computer equipment and storage medium |
WO2022110444A1 (en) * | 2020-11-30 | 2022-06-02 | 中国科学院深圳先进技术研究院 | Dynamic prediction method and apparatus for cloud native resources, computer device and storage medium |
CN112783740B (en) * | 2020-12-30 | 2022-11-18 | 科大国创云网科技有限公司 | Server performance prediction method and system based on time series characteristics |
CN112783740A (en) * | 2020-12-30 | 2021-05-11 | 科大国创云网科技有限公司 | Server performance prediction method and system based on time series characteristics |
WO2022142120A1 (en) * | 2020-12-31 | 2022-07-07 | 平安科技(深圳)有限公司 | Data detection method and apparatus based on artificial intelligence, and server and storage medium |
CN113271606B (en) * | 2021-04-21 | 2022-08-05 | 北京邮电大学 | Service scheduling method for ensuring stability of cloud native mobile network and electronic equipment |
CN113271606A (en) * | 2021-04-21 | 2021-08-17 | 北京邮电大学 | Service scheduling method for ensuring stability of cloud native mobile network and electronic equipment |
CN113422801A (en) * | 2021-05-13 | 2021-09-21 | 河南师范大学 | Edge network node content distribution method, system, device and computer equipment |
CN113422801B (en) * | 2021-05-13 | 2022-12-06 | 河南师范大学 | Edge network node content distribution method, system, device and computer equipment |
CN113254153B (en) * | 2021-05-20 | 2023-10-13 | 深圳市金蝶天燕云计算股份有限公司 | Method and device for processing flow task, computer equipment and storage medium |
CN113254153A (en) * | 2021-05-20 | 2021-08-13 | 深圳市金蝶天燕云计算股份有限公司 | Process task processing method and device, computer equipment and storage medium |
CN113268403B (en) * | 2021-05-25 | 2023-10-31 | 中国联合网络通信集团有限公司 | Time series analysis and prediction method, device, equipment and storage medium |
CN113268403A (en) * | 2021-05-25 | 2021-08-17 | 中国联合网络通信集团有限公司 | Time series analysis and prediction method, device, equipment and storage medium |
CN113642638A (en) * | 2021-08-12 | 2021-11-12 | 云知声智能科技股份有限公司 | Capacity adjustment method, model training method, device, equipment and storage medium |
CN115883392A (en) * | 2023-02-21 | 2023-03-31 | 浪潮通信信息系统有限公司 | Data perception method and device of computing power network, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059858A (en) | Server resource prediction technique, device, computer equipment and storage medium | |
He et al. | Data-driven patient scheduling in emergency departments: A hybrid robust-stochastic approach | |
CN103297626A (en) | Scheduling method and scheduling device | |
Hawkins et al. | A goal programming model for capital budgeting | |
CN105843679B (en) | Adaptive many-core resource regulating method | |
CN106779253A (en) | The term load forecasting for distribution and device of a kind of meter and photovoltaic | |
Mendoza et al. | A fuzzy multiple objective linear programming approach to forest planning under uncertainty | |
Yang et al. | An uncertain workforce planning problem with job satisfaction | |
CN108549276A (en) | A kind of method and system of intelligent interaction control water making device | |
CN109976901A (en) | A kind of resource regulating method, device, server and readable storage medium storing program for executing | |
CN103455509B (en) | A kind of method and system obtaining time window model parameter | |
Safiri et al. | Ladybug Beetle Optimization algorithm: application for real-world problems | |
Chang | A study of due-date assignment rules with constrained tightness in a dynamic job shop | |
CN111816291A (en) | Equipment maintenance method and device | |
CN109542585A (en) | A kind of Virtual Machine Worker load predicting method for supporting irregular time interval | |
CN110209656A (en) | Data processing method and device | |
CN110334517A (en) | The update method and device of credible strategy, credible and secure management platform | |
Kendall | Multiple objective planning for regional blood centers | |
CN111415261B (en) | Control method, system and device for dynamically updating flow control threshold of bank system | |
Mehra et al. | Adaptive load-balancing strategies for distributed systems | |
CN110378580A (en) | A kind of electric network fault multi-agent system preferentially diagnostic method and device | |
CN109918366A (en) | A kind of data safety processing method based on big data | |
Fukumoto et al. | Learning Algorithms with Regularization Criteria for Fuzzy Reasoning Model | |
Deep et al. | Ranking of alternatives in fuzzy environment using integral value | |
DeLaurentis et al. | Hospital stockpiling for influenza pandemics with pre-determined response levels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |