CN112565378A - Cloud native resource dynamic prediction method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112565378A
Authority
CN
China
Prior art keywords
data
resource
prediction
time sequence
performance index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011373082.6A
Other languages
Chinese (zh)
Inventor
Ye Kejiang (叶可江)
Chen Wenyan (陈文艳)
Xu Chengzhong (须成忠)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011373082.6A
Priority to PCT/CN2020/139679 (published as WO2022110444A1)
Publication of CN112565378A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H04L67/1074 Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the application belong to the field of information technology and relate to a cloud native resource dynamic prediction method. The method comprises: ranking the acquired resource data to be predicted against the performance index data by Pearson correlation coefficient to obtain the correlation relationship between them; defining a correlation threshold based on that relationship; taking the performance index data greater than or equal to the correlation threshold as performance index time series data; performing horizontal data expansion on the performance index time series data to obtain training data and test data; inputting the training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model; and inputting the test data into the time sequence neural network prediction model for prediction, thereby obtaining a resource prediction result. The application also provides a cloud native resource dynamic prediction device, a computer device, and a storage medium. The method and the device reduce prediction complexity and improve prediction accuracy.

Description

Cloud native resource dynamic prediction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of information technologies, and in particular, to a method and an apparatus for dynamically predicting cloud native resources, a computer device, and a storage medium.
Background
The rapid development of cloud native technology has made the number of users and the scale of data grow quickly, bringing serious problems and challenges to resource management in cloud native clusters. On one hand, user resource requests are frequent and diverse; existing resource prediction can accurately capture conventional periodic patterns but cannot accurately predict the occurrence of abrupt change points. On the other hand, mixed deployment of real-time online services and offline jobs improves cluster performance to a certain extent, but it also brings resource competition and performance degradation, further increasing the complexity of resource prediction. In addition, conventional resource prediction models often suffer a certain delay, which creates a performance barrier for real-time dynamic resource allocation and management. Therefore, how to predict resources in a mixed-deployment cloud native cluster accurately and in real time, so as to dynamically allocate reasonable resources to the load, is a key problem of current research.
As a new cloud computing mode, cloud native has the characteristics of high scalability, on-demand access, and light weight, and more and more enterprises and individuals choose cloud native platforms to provide services. Because upper-layer cloud native applications are complex, heterogeneous, and dynamic, the requirements on resource management keep rising, so mixed deployment of different application types is widely used on cloud native platforms to effectively improve the performance of cluster resource management.
However, current resource prediction methods are mainly based on either linear regression or machine learning. The first class achieves good accuracy on periodic patterns but usually cannot predict abrupt change points well, and it typically considers only the temporal autocorrelation of the predicted resource while ignoring the influence of other performance indexes on the resource to be predicted. The second class trains and cross-validates a machine learning model on historical data; it can take multi-dimensional resource data as input and capture temporal information through the long-term memory of a neural network, but it still has many shortcomings when predicting abrupt change points.
Disclosure of Invention
The embodiments of the application aim to provide a cloud native resource dynamic prediction method and device, a computer device, and a storage medium, so as to at least solve the problems of high prediction complexity and low abrupt-change-point prediction accuracy in conventional resource prediction methods.
In order to solve the above technical problem, an embodiment of the present application provides a method for dynamically predicting cloud native resources, which adopts the following technical solutions:
receiving a dynamic prediction request sent by a user terminal;
responding to the dynamic prediction request, reading a local database, and acquiring resource data to be predicted and performance index data of container load in the cloud native cluster;
performing relevance ranking on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevance relation between the resource data to be predicted and the performance index data;
defining a correlation threshold based on the correlation relationship;
taking the performance index data which is greater than or equal to the correlation threshold value as performance index time sequence data;
performing horizontal data expansion on the performance index time sequence data to obtain training data and test data;
inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model;
and inputting the test data into a time sequence neural network prediction model for prediction operation to obtain a resource prediction result.
Further, the method further comprises:
collecting historical resource data of container loads in the cloud native cluster based on a preset time interval;
and preprocessing the historical resource data to obtain resource data to be predicted.
Further, the step of preprocessing the historical resource data to obtain the resource data to be predicted includes:
deleting invalid or abnormal data in the historical resource data to obtain valid time sequence data;
and carrying out normalization processing on the effective time sequence data to obtain resource data to be predicted.
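The two preprocessing steps above can be sketched as follows; this is a hedged illustration only, since the patent does not specify how invalid or abnormal samples are detected (the `preprocess` name, the NaN check, and the 3-sigma outlier rule are all assumptions), followed by min-max normalization.

```python
import numpy as np

def preprocess(history: np.ndarray) -> np.ndarray:
    """history: 1-D array of raw resource samples; returns normalized valid data."""
    valid = history[~np.isnan(history)]              # delete invalid (NaN) samples
    mean, std = valid.mean(), valid.std()
    valid = valid[np.abs(valid - mean) <= 3 * std]   # drop abnormal outliers (3-sigma rule, assumed)
    lo, hi = valid.min(), valid.max()
    return (valid - lo) / (hi - lo)                  # min-max normalization to [0, 1]

raw = np.array([10.0, 12.0, 11.0, 13.0, 9.0, np.nan,
                10.0, 12.0, 11.0, 13.0, 9.0, 500.0])
print(preprocess(raw))
```

With the sample input, the NaN is dropped, the 500.0 spike falls outside three standard deviations and is removed, and the remaining series is scaled into [0, 1].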
Further, the method further comprises:
repeatedly executing the prediction operation process to obtain real-time prediction data;
and feeding back the real-time prediction data to the user terminal.
Further, the method further comprises:
and adding a preset full connection layer and an attention mechanism into the time sequence neural network basic model framework to obtain a time sequence neural network model.
In order to solve the above technical problem, an embodiment of the present application further provides a device for dynamically predicting cloud native resources, which adopts the following technical solutions:
the request receiving module is used for receiving a dynamic prediction request sent by a user terminal;
the request response module is used for responding to the dynamic prediction request, reading the local database and acquiring resource data to be predicted and performance index data of the container load in the cloud native cluster;
the relevancy sorting module is used for carrying out relevancy sorting on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevancy relation between the resource data to be predicted and the performance index data;
a threshold definition module for defining a correlation threshold based on the correlation relationship;
the time sequence data acquisition module is used for taking the performance index data which is greater than or equal to the correlation threshold as performance index time sequence data;
the data expansion module is used for performing transverse data expansion on the performance index time sequence data to obtain training data and test data;
the model training module is used for inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model;
and the data prediction module is used for inputting the test data into the time sequence neural network prediction model to perform prediction operation so as to obtain a resource prediction result.
Further, the apparatus further comprises:
the data acquisition module is used for acquiring historical resource data of container loads in the cloud native cluster based on a preset time interval;
and the preprocessing module is used for preprocessing the historical resource data to obtain the resource data to be predicted.
Further, the preprocessing module includes:
the data deleting unit is used for deleting invalid or abnormal data in the historical resource data to obtain valid time sequence data;
and the normalization processing unit is used for performing normalization processing on the effective time sequence data to obtain resource data to be predicted.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
the cloud native resource dynamic prediction method comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the cloud native resource dynamic prediction method when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the cloud native resource dynamic prediction method as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application provides a dynamic prediction method of cloud native resources, which comprises the following steps: receiving a dynamic prediction request sent by a user terminal; responding to the dynamic prediction request, reading a local database, and acquiring resource data to be predicted and performance index data of container load in the cloud native cluster; performing relevance ranking on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevance relation between the resource data to be predicted and the performance index data; defining a correlation threshold based on the correlation relationship; taking the performance index data which is greater than or equal to the correlation threshold value as performance index time sequence data; performing transverse data expansion on the performance index time sequence data to obtain training data and test data; inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model; and inputting the test data into a time sequence neural network prediction model for prediction operation to obtain a resource prediction result. 
The resource data to be predicted and the performance index data acquired from the local database are ranked by Pearson correlation coefficient to obtain their correlation relationship, a correlation threshold is defined, and performance index time series data are obtained based on that threshold. Pruning and information extraction are then performed on the performance index time series data through horizontal data expansion to obtain training data and test data, which preserves the effective information of the data while reducing the input. A time sequence neural network prediction model capable of capturing long-term dependent time series information is trained on the training data, and the test data are then fed to it to obtain a resource prediction result with high accuracy. By reducing the input to the time sequence neural network model, the computational complexity is effectively reduced, lowering prediction complexity and improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is an exemplary schematic diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for dynamic prediction of cloud native resources according to the present application;
FIG. 3 is a flow diagram of data pre-processing according to the cloud native resource dynamic prediction method of the present application;
FIG. 4 is a flowchart of one embodiment of step S302 of FIG. 3;
FIG. 5 is a schematic diagram of an embodiment of a cloud native resource dynamic prediction apparatus according to the present application;
FIG. 6 is a schematic diagram of the data preprocessing of a cloud native resource dynamic prediction apparatus according to the present application;
FIG. 7 is a schematic diagram of one embodiment of the pre-processing module of FIG. 6;
FIG. 8 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Example one
Referring to fig. 1 and fig. 2, a flowchart of an embodiment of a method for cloud native resource dynamic prediction according to an embodiment of the present application is shown, and for convenience of description, only a part related to the present application is shown.
In step S1, a dynamic prediction request transmitted from the user terminal is received.
In this embodiment, the dynamic prediction request is an operation request sent by a user who wants to understand cloud native cluster resource characteristics in depth, so as to select a suitable resource prediction model, optimize it for a specific scenario, and obtain a valuable decision basis for dynamic resource allocation.
In step S2, in response to the dynamic prediction request, the local database is read, and the resource data to be predicted and the performance index data of the container load in the cloud native cluster are obtained.
In this embodiment, the resource data to be predicted are time series data obtained by processing historical resource data of container loads in the cloud native cluster according to a preset processing mode; the processing may be reasonable compression, extraction, format conversion, and the like, without specific limitation here. This reduces the size of the input data without losing long-term dependence information and speeds up the subsequent training of the time sequence neural network prediction model, thereby enabling real-time, dynamic allocation of resources, reducing the lag of resource allocation, and improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In this embodiment, the performance index data may be application-layer performance indexes, such as CPU utilization, memory utilization, disk I/O size, and network bandwidth, as well as micro-architecture-layer indexes such as IPC (instructions per cycle), branch prediction, and cache miss; these intuitively reflect cluster performance.
In this embodiment, because the resource utilization of different applications is dynamic and complex, the local database is queried based on the dynamic prediction request for the resource data to be predicted, that is, the time series obtained by preprocessing the historical container-load resource data, together with the performance index data that intuitively reflect cluster performance. The subsequent steps can then reduce the input size without losing long-term dependence information, accelerating the training of the time sequence neural network prediction model and thus enabling real-time, dynamic resource allocation with less lag, which improves the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In step S3, the resource data to be predicted and the performance index data are subjected to relevancy sorting based on the pearson correlation coefficient, so as to obtain a correlation relationship between the resource data to be predicted and the performance index data.
In this embodiment, the pearson correlation coefficient is a coefficient that can measure the correlation strength of the resource data to be predicted and the performance index data.
Wherein the Pearson correlation coefficient is expressed as:

\rho_{X,Y} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

where the resource data to be predicted is denoted r, X denotes the time series of r (with samples x_i), Y denotes the time series of another performance index (with samples y_i), and n denotes the length of the time series.
In this embodiment, the correlation relationship refers to the correlation strength between different performance index data and the resource data r to be predicted.
In this embodiment, because mixed deployment of different loads makes them compete for limited resources at the same time, and the degree of competition is closely related to the load type, the correlation coefficient between the resource data to be predicted and each performance index is calculated based on the Pearson correlation coefficient. The coefficients are then sorted in a preset order; the sorted coefficients express the strength of correlation between each performance index and the resource data r to be predicted. The input data can subsequently be pruned based on this correlation strength, preserving the effective information of the data while reducing the input and improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
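The ranking in step S3 can be sketched as below, assuming the standard Pearson formula; the helper names (`pearson`, `rank_indices`) and the sample metric series are illustrative, not from the patent.

```python
import numpy as np

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Standard sample Pearson correlation coefficient of two equal-length series."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def rank_indices(r: np.ndarray, indices: dict) -> list:
    """Return (name, |coefficient|) pairs sorted from strongest to weakest correlation with r."""
    scores = {name: abs(pearson(r, series)) for name, series in indices.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

t = np.arange(100, dtype=float)
cpu = np.sin(t / 5.0)                     # resource to be predicted (illustrative)
ranking = rank_indices(cpu, {
    "memory":  cpu * 0.8 + 0.1,           # linearly related to cpu, so |rho| = 1
    "disk":    np.cos(t / 5.0),           # partially related
    "network": np.random.default_rng(0).normal(size=100),  # unrelated noise
})
print(ranking)
```

The sorted list is exactly the "correlation strength relation" the step describes: indexes at the top survive the later threshold cut, the rest are pruned.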
In step S4, a correlation threshold is defined based on the correlation relationship.
In this embodiment, the correlation threshold is an index for extracting strongly correlated time series data. The threshold Cmax is customized based on the strength of the correlations; with it, the input data can be pruned further, so that the effective information of the data is retained while the input is reduced, improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In step S5, the performance index data that is greater than or equal to the correlation threshold value is taken as the performance index time series data.
In this embodiment, based on the correlation threshold customized from the strength of the correlations, performance index data below the threshold are deleted, and performance index data greater than or equal to Cmax are retained as performance index time series data. The input data are thereby pruned, retaining the effective information while reducing the input and improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In step S6, the performance index timing sequence data is subjected to horizontal data expansion to obtain training data and test data.
In this embodiment, the performance index time series data are expanded horizontally. Suppose the resource to be predicted is cpu and the performance indexes whose correlation value exceeds Cmax are cpu, memory, and disk. At time t, the input matrix is arr = [cpu_t, memory_t, disk_t]; after expansion it becomes arr' = [cpu_{t-2}, cpu_{t-1}, cpu_t, memory_{t-2}, memory_{t-1}, memory_t, disk_{t-2}, disk_{t-1}, disk_t].
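The horizontal expansion above can be sketched as a lag-window construction; the `expand` helper and the fixed two-step lag are illustrative (the patent gives one example with lags t-2 and t-1 but does not fix an API).

```python
import numpy as np

def expand(series: dict, lags: int = 2) -> np.ndarray:
    """series: {name: 1-D array}; returns one row per time step t >= lags,
    where each metric contributes its values at t-lags, ..., t-1, t."""
    names = list(series)
    length = len(series[names[0]])
    rows = []
    for t in range(lags, length):
        row = []
        for name in names:
            row.extend(series[name][t - lags : t + 1])  # e.g. [x_{t-2}, x_{t-1}, x_t]
        rows.append(row)
    return np.array(rows)

data = {
    "cpu":    np.array([1.0, 2.0, 3.0, 4.0]),
    "memory": np.array([5.0, 6.0, 7.0, 8.0]),
    "disk":   np.array([9.0, 10.0, 11.0, 12.0]),
}
print(expand(data))
```

Each row matches the arr' form in the text: three lagged samples per selected metric, concatenated in metric order.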
In this embodiment, the training data and the test data are data sets obtained by horizontal data expansion of the performance index time series data and subsequent division according to a preset ratio: the training data are used to train the time sequence neural network model and the test data are used for resource prediction. This prunes the input data, retaining the effective information while reducing the input and improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In step S7, the training data is input into the constructed time-series neural network model for training, so as to obtain a trained time-series neural network prediction model.
In this embodiment, the time sequence neural network prediction model is obtained by optimizing a temporal convolutional network (TCN). By adopting dilated convolution and an attention mechanism, a wider receptive field can be obtained with less data and long-term dependent time series information can be captured, so that long-term dependence information is effectively retained and the accuracy of cloud native resource prediction is improved to a certain extent.
In this embodiment, the training data are input into the constructed time sequence neural network model for training. After continuous iterative optimization, this yields a time sequence neural network prediction model that, through dilated convolution and the attention mechanism, achieves a wider receptive field with less data and captures long-term dependent time series information, effectively retaining long-term dependence information and improving the accuracy of cloud native resource prediction to a certain extent.
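Since the patent does not disclose the exact network, the following is only a numpy sketch of the dilated causal convolution idea underlying TCNs, showing why stacking layers with dilations 1, 2, 4, ... gives an exponentially growing receptive field from little data; the function names are illustrative.

```python
import numpy as np

def causal_dilated_conv(x: np.ndarray, w: np.ndarray, dilation: int) -> np.ndarray:
    """y[t] = sum_k w[k] * x[t - k*dilation], with zero padding on the left
    so the output never looks into the future (causality)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[i] * xp[pad + t - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel: int, layers: int) -> int:
    """Receptive field of a stack whose dilations are 1, 2, 4, ..., 2^(layers-1)."""
    return 1 + (kernel - 1) * sum(2 ** i for i in range(layers))

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)  # y[t] = x[t] + x[t-2]
print(y)
print(receptive_field(kernel=2, layers=4))
```

With kernel size 2 and four layers, the receptive field is already 16 time steps, which is how a TCN captures long-term dependence with few parameters; the attention and fully connected layers mentioned in the text would sit on top of such a stack.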
In step S8, the test data is input into the time-series neural network prediction model for prediction operation, and a resource prediction result is obtained.
In this embodiment, after training of the time sequence neural network prediction model is finished, the test data for resource prediction are input directly into the trained model to predict cloud native resources, yielding a resource prediction result that includes the predicted future resource utilization.
The application provides a dynamic prediction method of cloud native resources, which comprises the following steps: receiving a dynamic prediction request sent by a user terminal; responding to the dynamic prediction request, reading a local database, and acquiring resource data to be predicted and performance index data of container load in the cloud native cluster; performing relevance ranking on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevance relation between the resource data to be predicted and the performance index data; defining a correlation threshold based on the correlation relationship; taking the performance index data which is greater than or equal to the correlation threshold value as performance index time sequence data; performing transverse data expansion on the performance index time sequence data to obtain training data and test data; inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model; and inputting the test data into a time sequence neural network prediction model for prediction operation to obtain a resource prediction result. 
Relevancy sorting is performed on the resource data to be predicted and the performance index data acquired from the local database based on the Pearson correlation coefficient, so as to obtain the relevancy relation between the two and thereby define a relevancy threshold; performance index time sequence data is then acquired based on the relevancy threshold. Pruning and information extraction are carried out on the performance index time sequence data based on horizontal data expansion to obtain training data and test data, so that the effective information of the data is retained while the input data is reduced. A time sequence neural network prediction model capable of capturing long-term-dependent time sequence data information is then trained on the training data, and the test data is used as the input of the prediction model to obtain a resource prediction result with high prediction accuracy. By reducing the input data of the time sequence neural network model, the computational complexity, and hence the prediction complexity, is effectively reduced, and the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
Continuing to refer to fig. 3, a flowchart of data preprocessing of a cloud native resource dynamic prediction method provided in an embodiment of the present application is shown, and for convenience of description, only a part related to the present application is shown.
In some optional implementation manners of the first embodiment, before responding to the dynamic prediction request, reading the local database, and acquiring the resource data to be predicted and the performance index data of the container load in the cloud native cluster in step S2, the method further includes: step S301 and step S302.
In step S301, historical resource data of container loads in the cloud native cluster is collected based on a preset time interval.
In step S302, the historical resource data is preprocessed to obtain resource data to be predicted.
In this embodiment, the historical resource data may specifically include attribute values such as CPU utilization rate, memory utilization rate, disk IO size, and network bandwidth, sampled once every 60 s.
In this embodiment, the historical resource data is preprocessed, specifically, the resource data to be predicted may be obtained by deleting invalid and abnormal data.
In this embodiment, historical resource data of the container loads in the cloud native cluster (for example, data including attribute values such as CPU utilization, memory utilization, disk IO size, and network bandwidth) is collected at a preset time interval, for example once every 60 s. The historical resource data is then preprocessed by deleting invalid and abnormal data to obtain the resource data to be predicted. By reasonably compressing, extracting, and format-converting the collected historical resource data, the size of the input data is reduced without losing long-term dependence information, so that the subsequent training of the time sequence neural network model is accelerated. Real-time and dynamic allocation of resources is thereby realized and the hysteresis of resource allocation is reduced, improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
Continuing to refer to fig. 4, a flowchart of a specific implementation of step S302 in the first embodiment of the present application is shown, and for convenience of description, only the portions related to the present application are shown.
In some optional implementation manners of the first embodiment, the step S302 of preprocessing the historical resource data to obtain resource data to be predicted includes: step S401 and step S402.
In step S401, invalid or abnormal data in the history resource data is deleted, and valid time series data is obtained.
In step S402, the effective time series data is normalized to obtain the resource data to be predicted.
In this embodiment, in order to effectively improve the performance of cloud native cluster resource management, the prediction time is shortened by pruning the input data, so that cloud native resources are reasonably allocated and the over-sale of resources and the generation of resource fragments are reduced. To implement this pruning, invalid or abnormal data in the historical resource data is deleted, and the remaining valid time series data is then normalized to obtain the resource data to be predicted, realizing the reduction and effective extraction of the valid time series data.
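The preprocessing in steps S401 and S402 can be sketched as follows. The valid value range used to flag abnormal utilization samples and the min-max normalization rule are illustrative assumptions, since the embodiment does not fix a concrete criterion:

```python
import numpy as np

def preprocess(history, low=0.0, high=100.0):
    """Delete invalid/abnormal samples from a 1-D utilisation series,
    then min-max normalise the remaining valid time series data."""
    arr = np.asarray(history, dtype=float)
    # Drop invalid entries (NaN) and abnormal values outside the
    # plausible range for a utilisation percentage (assumed [low, high]).
    valid = arr[~np.isnan(arr) & (arr >= low) & (arr <= high)]
    # Min-max normalisation to [0, 1]; guard against a constant series.
    span = valid.max() - valid.min()
    if span == 0:
        return np.zeros_like(valid)
    return (valid - valid.min()) / span

cpu_history = [35.0, 37.5, float("nan"), 40.0, 250.0, 42.5]  # 250.0 is abnormal
print(preprocess(cpu_history))  # four valid samples scaled to [0, 1]
```

The normalized series is what later feeds the time sequence neural network model, so the deletion happens before scaling to keep outliers from distorting the value range.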
In some optional implementations of the first embodiment, after step S8, the method further includes:
repeatedly executing the prediction operation process to obtain real-time prediction data;
and feeding back the real-time prediction data to the user terminal.
In this embodiment, the real-time prediction data is prediction data including a predicted future resource utilization rate, which is obtained by performing dynamic online real-time prediction on resources based on a time-series neural network prediction model.
In this embodiment, in order to implement real-time and dynamic allocation of resources and reduce the hysteresis of resource allocation, the embodiment repeatedly performs the prediction operation process of inputting the test data into the time sequence neural network prediction model to obtain the real-time prediction data including the predicted future resource utilization rate, so as to meet the real-time requirement of resource allocation.
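As an illustration of repeatedly executing the prediction operation, the loop below is a hypothetical sketch: `sample_source` and `predict` are stand-ins for the monitoring feed and the trained time sequence neural network prediction model, neither of which is specified in this form by the embodiment:

```python
from collections import deque

def run_realtime_prediction(sample_source, predict, window=32, rounds=3):
    """Repeatedly slide the latest monitored samples into the prediction
    model and collect real-time predictions for feedback to the terminal."""
    recent = deque(maxlen=window)   # rolling window of recent utilisation
    results = []
    for _ in range(rounds):
        recent.append(sample_source())          # latest monitored sample
        results.append(predict(list(recent)))   # predicted future utilisation
    return results

# Toy stand-ins: an incrementing sampler and a mean-based "model".
ticks = iter(range(100))
preds = run_realtime_prediction(lambda: next(ticks) % 10,
                                lambda w: sum(w) / len(w))
print(preds)
```

Because each round appends only the newest sample before predicting again, the loop can keep running online, which is what reduces the hysteresis of resource allocation.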
In some optional implementations of the first embodiment, before the step S7, the method further includes:
and adding a preset full connection layer and an attention mechanism into the time sequence neural network basic model framework to obtain a time sequence neural network model.
In the present embodiment, the basic model of the time sequence neural network is a Temporal Convolutional Network (TCN).
In this embodiment, a preset fully connected layer and an attention mechanism are added to the basic time sequence neural network model framework; specifically, one fully connected layer and an attention mechanism are added on the basis of the TCN, so that the time sequence neural network model can capture long-term-dependent information from a small amount of data during training. Resources can therefore be allocated reasonably and dynamically, and the performance of cloud native resource management is improved to a certain extent.
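The following is a minimal numerical sketch of the architecture described above: stacked dilated causal convolutions followed by an attention mechanism and one fully connected layer. The kernel weights, the dilations 1 and 2, and the single-channel setup are illustrative assumptions; a practical TCN uses learned multi-channel convolutions:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Causal 1-D convolution with dilation: the output at time t only
    sees x[t], x[t-d], x[t-2d], ..., so no future information leaks in."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def attention_pool(h):
    """Softmax attention over time steps instead of a plain last-step readout."""
    scores = np.exp(h - h.max())
    alpha = scores / scores.sum()      # attention weights summing to 1
    return float((alpha * h).sum())    # context value

def tiny_tcn_predict(x, w_fc=1.0, b_fc=0.0):
    """Two stacked dilated convolutions (kernel size 2, dilations 1 and 2,
    receptive field 4) -> attention pooling -> one fully connected neuron."""
    h = dilated_causal_conv(x, w=[0.5, 0.5], dilation=1)
    h = dilated_causal_conv(h, w=[0.5, 0.5], dilation=2)
    return w_fc * attention_pool(h) + b_fc

x = np.linspace(0.0, 1.0, 16)  # toy normalised utilisation series
print(round(tiny_tcn_predict(x), 4))
```

With kernel size 2 and dilations doubled per layer, the receptive field grows exponentially with depth, which is how a wider receptive field is obtained from less data than an equivalent plain convolution stack.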
In summary, the present application provides a dynamic prediction method for cloud native resources, including: receiving a dynamic prediction request sent by a user terminal; responding to the dynamic prediction request, reading a local database, and acquiring resource data to be predicted and performance index data of container load in the cloud native cluster; performing relevance ranking on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevance relation between the resource data to be predicted and the performance index data; defining a correlation threshold based on the correlation relationship; taking the performance index data which is greater than or equal to the correlation threshold value as performance index time sequence data; performing transverse data expansion on the performance index time sequence data to obtain training data and test data; inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model; and inputting the test data into a time sequence neural network prediction model for prediction operation to obtain a resource prediction result. 
Historical resource data of the container loads in the cloud native cluster is acquired at a preset time interval, and the acquired historical resource data is preprocessed by deletion, normalization, and the like to obtain the resource data to be predicted. Relevancy sorting is then performed on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain the relevancy relation between the two and thereby define a relevancy threshold, and performance index time sequence data is acquired based on the relevancy threshold. Pruning and information extraction are carried out on the performance index time sequence data based on horizontal data expansion to obtain training data and test data, so that the effective information of the data is retained while the input data is reduced. A time sequence neural network model with an added fully connected layer and attention mechanism is then trained on the training data to obtain a time sequence neural network prediction model capable of capturing long-term-dependent time sequence data information, and the test data is used as the input of the prediction model to obtain a resource prediction result with high prediction accuracy. Finally, the prediction operation process is repeatedly executed, and the obtained real-time prediction data is fed back to the user terminal. By reducing the input data of the time sequence neural network model, the computational complexity, and hence the prediction complexity, is effectively reduced, and the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not bound to a strict order and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turns or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
Example two
With further reference to fig. 5, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a cloud native resource dynamic prediction apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the cloud native resource dynamic prediction apparatus 100 according to the present embodiment includes: the system comprises a request receiving module 101, a request responding module 102, a relevancy ranking module 103, a threshold value defining module 104, a time sequence data acquiring module 105, a data expanding module 106, a model training module 107 and a data predicting module 108. Wherein:
a request receiving module 101, configured to receive a dynamic prediction request sent by a user terminal;
in this embodiment, the dynamic prediction request is an operation request that is sent by a user in order to deeply understand the cloud native cluster resource characteristics, so as to select a suitable resource prediction model and optimize in combination with a specific scenario, and provide a valuable decision basis for dynamic resource allocation.
The request response module 102 is configured to respond to the dynamic prediction request, read the local database, and obtain resource data to be predicted and performance index data of a container load in the cloud native cluster;
In this embodiment, the resource data to be predicted is time sequence data obtained by processing the historical resource data of the container loads in the cloud native cluster in a preset processing mode. The processing mode may specifically be reasonable compression, extraction, format conversion, and the like, and is not specifically limited here. In this way, the size of the input data can be reduced without losing long-term dependence information, which helps accelerate the subsequent training of the time sequence neural network prediction model, realizes real-time and dynamic allocation of resources, reduces the hysteresis of resource allocation, and improves the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
In this embodiment, the performance index data may specifically be performance indexes of the application layer, such as CPU utilization, memory utilization, disk IO size, and network bandwidth, as well as performance indexes of the micro-architecture layer such as IPC (instructions per cycle), branch prediction, and cache miss, all of which intuitively reflect cluster performance.
In this embodiment, because the resource utilization rates of different applications are dynamic and complex, the present embodiment obtains from the local database, based on the dynamic prediction request, the time sequence data derived from the historical resource data of the container loads in the cloud native cluster by a preset processing mode, namely the resource data to be predicted, together with the performance index data that intuitively reflects cluster performance. On this basis, the size of the input data can subsequently be reduced without losing long-term dependence information, which helps accelerate the training of the time sequence neural network prediction model, realizes real-time and dynamic allocation of resources, and reduces the hysteresis of resource allocation, thereby improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
The relevancy sorting module 103 is configured to perform relevancy sorting on the resource data to be predicted and the performance index data based on the pearson correlation coefficient to obtain a relevancy relationship between the resource data to be predicted and the performance index data;
in this embodiment, the pearson correlation coefficient is a coefficient that can measure the correlation strength of the resource data to be predicted and the performance index data.
Wherein the Pearson correlation coefficient is expressed as:

$$\rho_{X,Y}=\frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)\left(Y_{i}-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}}\,\sqrt{\sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}}}$$

where $\bar{X}$ and $\bar{Y}$ denote the means of the two series.
Here the resource data to be predicted is denoted r, X represents the time sequence data of r, Y represents the time sequence data of another piece of performance index data, and n represents the length of the time sequence data.
In this embodiment, the correlation relationship refers to the correlation strength between different performance index data and the resource data r to be predicted.
In this embodiment, because different loads are deployed in a mixed manner, resource competition arises when resources are limited at the same moment, and the degree of competition is closely related to the load type. The present embodiment therefore calculates, based on the Pearson correlation coefficient, the correlation coefficient between the resource data to be predicted and each piece of performance index data, and sorts these coefficients in a preset sorting mode. The sorted correlation coefficients express the correlation strength between the different performance index data and the resource data r to be predicted, so that the input data can subsequently be pruned based on this correlation strength relationship. The effective information of the data is thus retained while the input data is reduced, and the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
A threshold definition module 104, configured to define a correlation threshold based on the correlation relationship;
In this embodiment, the correlation threshold is an index for extracting time series data with strong correlation. The threshold Cmax, namely the correlation threshold, is customized based on the strength relationship of the correlations, so that the input data can be further pruned while its effective information is retained, improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
A time series data obtaining module 105, configured to use performance index data that is greater than or equal to the correlation threshold as performance index time series data;
In this embodiment, based on the correlation threshold customized from the strength relationship of the correlations, the performance index data whose correlation is smaller than the threshold is deleted, and the performance index data whose correlation is greater than or equal to Cmax is retained as the performance index time series data. The input data can then be further pruned based on the performance index time series data, so that the effective information of the data is retained while the input data is reduced, improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
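The correlation ranking and threshold filtering performed by modules 103 to 105 can be sketched as follows. Ranking by absolute correlation value and the toy series are assumptions for illustration only:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

def select_indicators(target, indicators, cmax):
    """Rank performance indicators by |correlation| with the target resource
    series, then keep only those at or above the threshold Cmax."""
    ranked = sorted(((name, pearson(target, series))
                     for name, series in indicators.items()),
                    key=lambda kv: abs(kv[1]), reverse=True)
    kept = [name for name, c in ranked if abs(c) >= cmax]
    return ranked, kept

cpu = [10, 20, 30, 40, 50]
metrics = {"memory": [12, 19, 33, 41, 48],   # tracks cpu closely
           "disk":   [5, 5, 6, 5, 5]}        # nearly unrelated
ranked, kept = select_indicators(cpu, metrics, cmax=0.8)
print(kept)  # only the strongly correlated indicator(s) survive
```

Only the series in `kept` become the performance index time series data that feeds the later horizontal data expansion.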
The data expansion module 106 is configured to perform horizontal data expansion on the performance index time sequence data to obtain training data and test data;
In this embodiment, the performance index time series data is subjected to horizontal data expansion. Specifically, assume the performance index to be predicted is cpu and the performance indexes whose correlation values are greater than Cmax are cpu, memory, and disk. At time t, the data input matrix is arr = [cpu_t, memory_t, disk_t], and the expanded input matrix is arr = [cpu_{t-2}, cpu_{t-1}, cpu_t, memory_{t-2}, memory_{t-1}, memory_t, disk_{t-2}, disk_{t-1}, disk_t].
In this embodiment, the training data and the test data are data sets obtained by performing horizontal data expansion on the performance index time series data and then dividing the expanded set according to a preset proportion, yielding the training data used for training the time sequence neural network model and the test data used for resource prediction. Pruning of the input data is thereby achieved, and the effective information of the data is retained while the input data is reduced, so that the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
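A sketch of the horizontal data expansion and the subsequent division into training and test data. The lag of 2 matches the cpu/memory/disk example above, while the 0.7 split ratio and chronological split are assumptions, since the exact proportion is not reproduced here:

```python
import numpy as np

def expand_horizontally(series_dict, order, lag=2):
    """Horizontal data expansion: at each time t, concatenate the current
    and the previous `lag` values of every retained indicator, e.g.
    [cpu_{t-2}, cpu_{t-1}, cpu_t, memory_{t-2}, ..., disk_t]."""
    rows = []
    length = len(next(iter(series_dict.values())))
    for t in range(lag, length):
        row = []
        for name in order:
            row.extend(series_dict[name][t - lag:t + 1])
        rows.append(row)
    return np.array(rows)

def split_train_test(matrix, train_ratio=0.7):
    """Chronological split of the expanded matrix into train/test parts."""
    cut = int(len(matrix) * train_ratio)
    return matrix[:cut], matrix[cut:]

data = {"cpu": [1, 2, 3, 4, 5],
        "memory": [6, 7, 8, 9, 10],
        "disk": [11, 12, 13, 14, 15]}
X = expand_horizontally(data, order=["cpu", "memory", "disk"])
train, test = split_train_test(X)
print(X.shape)  # (3, 9): three expanded rows of 3 indicators x 3 lags
```

Each expanded row packs three time steps of every indicator side by side, which is how the model sees short history without growing the sequence length.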
The model training module 107 is used for inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model;
In this embodiment, the time sequence neural network prediction model is obtained by optimization based on a Temporal Convolutional Network (TCN). By adopting dilated convolution and an attention mechanism, a wider receptive field can be obtained with less data and long-term-dependent time sequence data information can be captured, so that long-term dependence information is effectively retained and the accuracy of cloud native resource prediction is effectively improved to a certain extent.
In this embodiment, the training data suitable for training the time sequence neural network model is input into the constructed time sequence neural network model for training. After continuous iterative optimization, a time sequence neural network prediction model is obtained which, by adopting dilated convolution and an attention mechanism, can achieve a wider receptive field with less data and capture long-term-dependent time sequence data information, so that long-term dependence information is effectively retained and the accuracy of cloud native resource prediction is effectively improved to a certain extent.
And the data prediction module 108 is configured to input the test data into the time sequence neural network prediction model to perform prediction operation, so as to obtain a resource prediction result.
In this embodiment, after the training of the time sequence neural network prediction model is finished, the test data capable of being used for resource prediction is directly input into the trained time sequence neural network prediction model to predict the cloud native resources, so that the resource prediction result including the predicted future resource utilization rate can be obtained.
The application provides a cloud native resource dynamic prediction apparatus. Relevancy sorting is performed on the resource data to be predicted and the performance index data acquired from the local database based on the Pearson correlation coefficient, so as to obtain the relevancy relation between the two and thereby define a relevancy threshold; performance index time sequence data is then acquired based on the relevancy threshold. Pruning and information extraction are carried out on the performance index time sequence data based on horizontal data expansion to obtain training data and test data, so that the effective information of the data is retained while the input data is reduced. A time sequence neural network prediction model capable of capturing long-term-dependent time sequence data information is then trained on the training data, and the test data is used as the input of the prediction model to obtain a resource prediction result with high prediction accuracy. By reducing the input data of the time sequence neural network model, the computational complexity, and hence the prediction complexity, is effectively reduced, and the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
Continuing to refer to fig. 6, a schematic structural diagram of data preprocessing of a cloud native resource dynamic prediction apparatus provided in an embodiment of the present application is shown, and for convenience of description, only a part related to the present application is shown.
In some optional implementations of the second embodiment, the apparatus further includes: a data acquisition module 601 and a preprocessing module 602.
The data acquisition module 601 is used for acquiring historical resource data of container loads in the cloud native cluster based on a preset time interval;
the preprocessing module 602 is configured to preprocess the historical resource data to obtain resource data to be predicted.
In this embodiment, the historical resource data may specifically include attribute values such as CPU utilization rate, memory utilization rate, disk IO size, and network bandwidth, sampled once every 60 s.
In this embodiment, the historical resource data is preprocessed, specifically, the resource data to be predicted may be obtained by deleting invalid and abnormal data.
In this embodiment, historical resource data of the container loads in the cloud native cluster (for example, data including attribute values such as CPU utilization, memory utilization, disk IO size, and network bandwidth) is collected at a preset time interval, for example once every 60 s. The historical resource data is then preprocessed by deleting invalid and abnormal data to obtain the resource data to be predicted. By reasonably compressing, extracting, and format-converting the collected historical resource data, the size of the input data is reduced without losing long-term dependence information, so that the subsequent training of the time sequence neural network model is accelerated. Real-time and dynamic allocation of resources is thereby realized and the hysteresis of resource allocation is reduced, improving the accuracy and efficiency of cloud native cluster resource prediction to a certain extent.
With continued reference to fig. 7, a flowchart of a specific implementation of the preprocessing module 602 in fig. 6 according to an embodiment of the present application is shown, and for convenience of description, only the relevant portions of the present application are shown.
In some optional implementations of the second embodiment, the preprocessing module 602 includes: a data deleting unit 701 and a normalization processing unit 702.
A data deleting unit 701, configured to delete invalid or abnormal data in the historical resource data to obtain valid time sequence data;
and the normalization processing unit 702 is configured to perform normalization processing on the effective time sequence data to obtain resource data to be predicted.
In this embodiment, in order to effectively improve the performance of cloud native cluster resource management, the prediction time is shortened by pruning the input data, so that cloud native resources are reasonably allocated and the over-sale of resources and the generation of resource fragments are reduced. To implement this pruning, invalid or abnormal data in the historical resource data is deleted, and the remaining valid time series data is then normalized to obtain the resource data to be predicted, realizing the reduction and effective extraction of the valid time series data.
In some optional implementations of the second embodiment, the apparatus further includes: a real-time prediction module and a data feedback module.
The real-time prediction module is used for repeatedly executing the prediction operation process to obtain real-time prediction data;
and the data feedback module is used for feeding back the real-time prediction data to the user terminal.
In this embodiment, the real-time prediction data is prediction data including a predicted future resource utilization rate, which is obtained by performing dynamic online real-time prediction on resources based on a time-series neural network prediction model.
In this embodiment, in order to implement real-time and dynamic allocation of resources and reduce the hysteresis of resource allocation, the embodiment repeatedly performs the prediction operation process of inputting the test data into the time sequence neural network prediction model to obtain the real-time prediction data including the predicted future resource utilization rate, so as to meet the real-time requirement of resource allocation.
In some optional implementations of the second embodiment, the apparatus further includes a module configured to add a preset fully connected layer and an attention mechanism into the basic time sequence neural network model framework to obtain a time sequence neural network model.
In the present embodiment, the basic model of the time sequence neural network is a Temporal Convolutional Network (TCN).
In this embodiment, a preset fully connected layer and an attention mechanism are added to the basic time sequence neural network model framework; specifically, one fully connected layer and an attention mechanism are added on the basis of the TCN, so that the time sequence neural network model can capture long-term-dependent information from a small amount of data during training. Resources can therefore be allocated reasonably and dynamically, and the performance of cloud native resource management is improved to a certain extent.
To sum up, the present application provides a dynamic prediction device for cloud native resources, including: the request receiving module is used for receiving a dynamic prediction request sent by a user terminal; the request response module is used for responding to the dynamic prediction request, reading the local database and acquiring resource data to be predicted and performance index data of the container load in the cloud native cluster; the relevancy sorting module is used for carrying out relevancy sorting on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain a relevancy relation between the resource data to be predicted and the performance index data; a threshold definition module for defining a correlation threshold based on the correlation relationship; the time sequence data acquisition module is used for taking the performance index data which is greater than or equal to the correlation threshold as performance index time sequence data; the data expansion module is used for performing transverse data expansion on the performance index time sequence data to obtain training data and test data; the model training module is used for inputting training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model; and the data prediction module is used for inputting the test data into the time sequence neural network prediction model to perform prediction operation so as to obtain a resource prediction result. 
Historical resource data of the container loads in the cloud native cluster is acquired at a preset time interval, and the acquired historical resource data is preprocessed by deletion, normalization, and the like to obtain the resource data to be predicted. Relevancy sorting is then performed on the resource data to be predicted and the performance index data based on the Pearson correlation coefficient to obtain the relevancy relation between the two and thereby define a relevancy threshold, and performance index time sequence data is acquired based on the relevancy threshold. Pruning and information extraction are carried out on the performance index time sequence data based on horizontal data expansion to obtain training data and test data, so that the effective information of the data is retained while the input data is reduced. A time sequence neural network model with an added fully connected layer and attention mechanism is then trained on the training data to obtain a time sequence neural network prediction model capable of capturing long-term-dependent time sequence data information, and the test data is used as the input of the prediction model to obtain a resource prediction result with high prediction accuracy. Finally, the prediction operation process is repeatedly executed, and the obtained real-time prediction data is fed back to the user terminal. By reducing the input data of the time sequence neural network model, the computational complexity, and hence the prediction complexity, is effectively reduced, and the accuracy and efficiency of cloud native cluster resource prediction are improved to a certain extent.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 8, fig. 8 is a block diagram of the basic structure of the computer device according to this embodiment.
The computer device 8 comprises a memory 81, a processor 82, and a network interface 83, communicatively connected to each other via a system bus. It is noted that only a computer device 8 having components 81-83 is shown, but it should be understood that not all of the shown components need be implemented; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palmtop computer, a cloud server, or other computing device. The computer device can interact with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice control device, or the like.
The memory 81 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 81 may be an internal storage unit of the computer device 8, such as a hard disk or memory of the computer device 8. In other embodiments, the memory 81 may also be an external storage device of the computer device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device 8. Of course, the memory 81 may also comprise both an internal storage unit of the computer device 8 and an external storage device thereof. In this embodiment, the memory 81 is generally used for storing the operating system installed in the computer device 8 and various types of application software, such as the program code of the dynamic prediction method for cloud native resources. Further, the memory 81 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 82 may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 82 is typically used to control the overall operation of the computer device 8. In this embodiment, the processor 82 is configured to execute the program code stored in the memory 81 or to process data, for example, to execute the program code of the cloud native resource dynamic prediction method.
The network interface 83 may comprise a wireless network interface or a wired network interface, and the network interface 83 is generally used for establishing communication connections between the computer device 8 and other electronic devices.
The present application further provides another embodiment, which is to provide a computer readable storage medium storing a cloud-native-resource dynamic prediction program, which is executable by at least one processor to cause the at least one processor to perform the steps of the cloud-native-resource dynamic prediction method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative and not restrictive of the invention, and that the appended drawings illustrate preferred embodiments without limiting its scope. This application is capable of embodiment in many different forms; the embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the described technical solutions may be modified, or some of their features replaced by equivalents, without departing from the application. All equivalent structures made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A dynamic prediction method for cloud native resources is characterized by comprising the following steps:
receiving a dynamic prediction request sent by a user terminal;
responding to the dynamic prediction request, reading a local database, and acquiring resource data to be predicted and performance index data of container load in the cloud native cluster;
carrying out relevancy sorting on the resource data to be predicted and the performance index data based on a Pearson correlation coefficient to obtain a relevancy relation between the resource data to be predicted and the performance index data;
defining a relevance threshold based on the relevance relationship;
taking the performance index data whose correlation is greater than or equal to the correlation threshold as performance index time sequence data;
performing transverse data expansion on the performance index time sequence data to obtain training data and test data;
inputting the training data into a constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model;
and inputting the test data into the time sequence neural network prediction model for prediction operation to obtain a resource prediction result.
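The "transverse data expansion" in claim 1 is not defined in code form in the patent; one common reading is a sliding-window expansion of each metric series into fixed-length input/label pairs, split chronologically into training and test sets. A minimal sketch under that assumption (window size and split ratio are hypothetical):

```python
def sliding_windows(series, window, horizon=1):
    """Expand one metric's time series into (input_window, label) pairs,
    where the label is the value `horizon` steps after the window ends."""
    return [
        (series[i:i + window], series[i + window + horizon - 1])
        for i in range(len(series) - window - horizon + 1)
    ]

def chronological_split(pairs, train_ratio=0.8):
    """Split window/label pairs into training and test sets without shuffling,
    so the test set is strictly later in time than the training set."""
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

# Hypothetical CPU-usage series sampled at the preset interval.
cpu_usage = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
pairs = sliding_windows(cpu_usage, window=3)
train, test = chronological_split(pairs)
```

Keeping the split chronological matters for time sequence data: shuffling before splitting would leak future values into training.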
2. The method according to claim 1, wherein before the step of reading a local database and obtaining resource data to be predicted and performance index data of a container load in the cloud native cluster in response to the dynamic prediction request, the method further comprises:
collecting historical resource data of container loads in the cloud native cluster based on a preset time interval;
and preprocessing the historical resource data to obtain the resource data to be predicted.
3. The method according to claim 2, wherein the step of preprocessing the historical resource data to obtain the resource data to be predicted comprises:
deleting invalid or abnormal data in the historical resource data to obtain valid time sequence data;
and normalizing the effective time sequence data to obtain the resource data to be predicted.
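Claims 2-3 describe collecting historical data, deleting invalid or abnormal entries, and normalizing what remains. A minimal sketch of that preprocessing follows; min-max normalization is assumed, since the patent does not name the normalization scheme, and the invalidity test (None, NaN, or negative readings) is hypothetical.

```python
def preprocess(raw):
    """Delete invalid samples (None, NaN, or negative readings), then
    min-max normalise the valid time sequence data to [0, 1]."""
    valid = [v for v in raw if v is not None and v == v and v >= 0]
    lo, hi = min(valid), max(valid)
    if hi == lo:                       # constant series: map everything to 0
        return [0.0 for _ in valid]
    return [(v - lo) / (hi - lo) for v in valid]

# Hypothetical raw readings with a missing value, a NaN, and a negative outlier.
raw = [3.0, None, 7.0, float("nan"), 5.0, -1.0]
clean = preprocess(raw)
```

The `v == v` test filters NaN (NaN compares unequal to itself), so the listcomp never attempts `None >= 0` or keeps undefined values.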
4. The method according to claim 1, wherein after the step of inputting the test data into the time sequence neural network prediction model for prediction operation to obtain a resource prediction result, the method further comprises:
repeatedly executing the prediction operation process to obtain real-time prediction data;
and feeding back the real-time prediction data to the user terminal.
5. The method according to claim 1, wherein before the step of inputting the training data into the constructed time-series neural network model for training to obtain the trained time-series neural network prediction model, the method further comprises:
and adding a preset full connection layer and an attention mechanism to a time sequence neural network basic model framework to obtain the time sequence neural network model.
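Claim 5 adds a fully connected layer and an attention mechanism to the base time sequence model, but the patent does not specify the attention form. One common choice is dot-product attention pooling over the hidden states of the time steps, followed by a dense output neuron; a dependency-free sketch under that assumption (all vectors and parameters hypothetical):

```python
import math

def softmax(scores):
    m = max(scores)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Dot-product attention: score each time step's hidden vector against a
    query, softmax the scores, and return the weighted sum of hidden states,
    letting the model emphasise the time steps most relevant to prediction."""
    scores = [sum(h * q for h, q in zip(state, query)) for state in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    pooled = [sum(w * s[d] for w, s in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights

def dense(vector, weights, bias):
    """A single fully connected output neuron producing the scalar forecast."""
    return sum(v * w for v, w in zip(vector, weights)) + bias

# Two time steps with 2-dimensional hidden states; the query favours step 0.
hidden = [[1.0, 0.0], [0.0, 1.0]]
pooled, weights = attention_pool(hidden, query=[1.0, 0.0])
prediction = dense(pooled, weights=[0.5, 0.5], bias=0.0)
```

In a trained model the query and dense parameters would be learned; here they are fixed only to make the mechanics visible.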
6. A cloud native resource dynamic prediction apparatus, comprising:
the request receiving module is used for receiving a dynamic prediction request sent by a user terminal;
the request response module is used for responding to the dynamic prediction request, reading a local database and acquiring resource data to be predicted and performance index data of a container load in the cloud native cluster;
the relevancy sorting module is used for carrying out relevancy sorting on the resource data to be predicted and the performance index data based on a Pearson correlation coefficient to obtain a relevancy relation between the resource data to be predicted and the performance index data;
a threshold definition module for defining a correlation threshold based on the correlation relationship;
the time sequence data acquisition module is used for taking the performance index data whose correlation is greater than or equal to the correlation threshold as performance index time sequence data;
the data expansion module is used for performing transverse data expansion on the performance index time sequence data to obtain training data and test data;
the model training module is used for inputting the training data into the constructed time sequence neural network model for training to obtain a trained time sequence neural network prediction model;
and the data prediction module is used for inputting the test data into the time sequence neural network prediction model to perform prediction operation so as to obtain a resource prediction result.
7. The cloud native resource dynamic prediction apparatus according to claim 6, wherein the apparatus further comprises:
the data acquisition module is used for acquiring historical resource data of container loads in the cloud native cluster based on a preset time interval;
and the preprocessing module is used for preprocessing the historical resource data to obtain the resource data to be predicted.
8. The cloud native resource dynamic prediction device of claim 7, wherein the pre-processing module comprises:
the data deleting unit is used for deleting invalid or abnormal data in the historical resource data to obtain valid time sequence data;
and the normalization processing unit is used for performing normalization processing on the effective time sequence data to obtain the resource data to be predicted.
9. A computer device, comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the steps of the cloud native resource dynamic prediction method of any one of claims 1 to 5.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for dynamic prediction of cloud native resources according to any one of claims 1 to 5.
CN202011373082.6A 2020-11-30 2020-11-30 Cloud native resource dynamic prediction method and device, computer equipment and storage medium Pending CN112565378A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011373082.6A CN112565378A (en) 2020-11-30 2020-11-30 Cloud native resource dynamic prediction method and device, computer equipment and storage medium
PCT/CN2020/139679 WO2022110444A1 (en) 2020-11-30 2020-12-25 Dynamic prediction method and apparatus for cloud native resources, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011373082.6A CN112565378A (en) 2020-11-30 2020-11-30 Cloud native resource dynamic prediction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112565378A (en) 2021-03-26

Family

ID=75046109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011373082.6A Pending CN112565378A (en) 2020-11-30 2020-11-30 Cloud native resource dynamic prediction method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112565378A (en)
WO (1) WO2022110444A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283171A (en) * 2021-05-27 2021-08-20 上海交通大学 Industrial platform resource optimal allocation device and method
CN113408221A (en) * 2021-07-06 2021-09-17 太仓比泰科自动化设备有限公司 Probe service life prediction method, system, device and storage medium
CN115514711A (en) * 2022-09-21 2022-12-23 浙江齐安信息科技有限公司 Data transmission system, method and medium based on Internet of things and big data
CN115600111A * 2022-11-07 2023-01-13 Ningbo Geely Automobile Research and Development Co., Ltd. Resource prediction model training method, cloud resource prediction method and device
CN117170995A (en) * 2023-11-02 2023-12-05 中国科学院深圳先进技术研究院 Performance index-based interference anomaly detection method, device, equipment and medium

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
US20220374515A1 (en) * 2021-04-23 2022-11-24 Ut-Battelle, Llc Universally applicable signal-based controller area network (can) intrusion detection system
US11811676B2 (en) * 2022-03-30 2023-11-07 International Business Machines Corporation Proactive auto-scaling
CN116055497A (en) * 2023-01-18 2023-05-02 紫光云技术有限公司 Method for realizing load balancing LB multi-activity oversized cluster
CN117077802B (en) * 2023-06-15 2024-07-02 深圳计算科学研究院 Sequencing prediction method and device for time sequence data
CN118055133B (en) * 2024-01-29 2024-08-27 百融至信(北京)科技有限公司 Resource request method, device, equipment and storage medium
CN118555216B (en) * 2024-07-26 2024-09-20 北京邮电大学 Multi-granularity sampling-based multi-dimensional resource joint prediction method and system for computing power network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105260794A (en) * 2015-10-12 2016-01-20 上海交通大学 Load predicting method of cloud data center
CN109714395A (en) * 2018-12-10 2019-05-03 平安科技(深圳)有限公司 Cloud platform resource uses prediction technique and terminal device
CN109858611A (en) * 2019-01-11 2019-06-07 平安科技(深圳)有限公司 Neural network compression method and relevant device based on channel attention mechanism
CN110838075A (en) * 2019-05-20 2020-02-25 全球能源互联网研究院有限公司 Training and predicting method and device for prediction model of transient stability of power grid system

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
EP3304544A1 (en) * 2015-05-26 2018-04-11 Katholieke Universiteit Leuven Speech recognition system and method using an adaptive incremental learning approach
CN106205126B (en) * 2016-08-12 2019-01-15 北京航空航天大学 Large-scale Traffic Network congestion prediction technique and device based on convolutional neural networks
CN110059858A (en) * 2019-03-15 2019-07-26 深圳壹账通智能科技有限公司 Server resource prediction technique, device, computer equipment and storage medium
CN110751326B (en) * 2019-10-17 2022-10-28 江苏远致能源科技有限公司 Photovoltaic day-ahead power prediction method and device and storage medium


Cited By (6)

Publication number Priority date Publication date Assignee Title
CN113283171A (en) * 2021-05-27 2021-08-20 上海交通大学 Industrial platform resource optimal allocation device and method
CN113408221A (en) * 2021-07-06 2021-09-17 太仓比泰科自动化设备有限公司 Probe service life prediction method, system, device and storage medium
CN115514711A (en) * 2022-09-21 2022-12-23 浙江齐安信息科技有限公司 Data transmission system, method and medium based on Internet of things and big data
CN115600111A (en) * 2022-11-07 2023-01-13 宁波吉利汽车研究开发有限公司(Cn) Resource prediction model training method, cloud resource prediction method and device
CN117170995A (en) * 2023-11-02 2023-12-05 中国科学院深圳先进技术研究院 Performance index-based interference anomaly detection method, device, equipment and medium
CN117170995B (en) * 2023-11-02 2024-05-17 中国科学院深圳先进技术研究院 Performance index-based interference anomaly detection method, device, equipment and medium

Also Published As

Publication number Publication date
WO2022110444A1 (en) 2022-06-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326
