CN112988371A - Machine room resource prediction method and device based on large-scale distributed operation and maintenance system - Google Patents

Machine room resource prediction method and device based on large-scale distributed operation and maintenance system

Info

Publication number
CN112988371A
CN112988371A (application number CN201911285148.3A)
Authority
CN
China
Prior art keywords
machine room
resource
utilization rate
computer
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911285148.3A
Other languages
Chinese (zh)
Inventor
刘宇
张小虎
严永峰
陈清阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN201911285148.3A priority Critical patent/CN112988371A/en
Publication of CN112988371A publication Critical patent/CN112988371A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention provides a machine room resource prediction method and device based on a large-scale distributed operation and maintenance system. The method comprises the following steps: building a machine room graph with the machine room servers as nodes, the sharing relationships between machine room servers as edges, and each server's own configuration as attributes; computing a vector representation for all nodes of the machine room graph; and predicting the future resource utilization rate of each machine room server from a time series composed of each node vector, each server's own configuration, and its resource utilization rate, so as to give early warning of each server's peak usage periods. The method and device can accurately predict the future resource utilization of the machine room, making it easier for the large-scale distributed operation and maintenance system to subsequently allocate corresponding resources to the machine room, thereby improving operation and maintenance efficiency.

Description

Machine room resource prediction method and device based on large-scale distributed operation and maintenance system
Technical Field
The invention relates to the technical field of operation and maintenance, in particular to a machine room resource prediction method and device based on a large-scale distributed operation and maintenance system.
Background
With the arrival of the big data era, artificial intelligence and cloud computing technologies have matured and drawn wide attention, traditional operation and maintenance systems can no longer meet growing demands, and intelligent operation and maintenance has become a hot topic. At present, more and more operation and maintenance systems are evolving into widely distributed, large-scale systems. Efficiently improving the operation and maintenance efficiency of such large-scale distributed systems, reducing operation and maintenance costs, and keeping services running stably is a task that cannot be ignored.
However, the existing large-scale distributed operation and maintenance system lacks a machine room resource prediction and allocation mechanism, which hinders further improvement of operation and maintenance efficiency.
Disclosure of Invention
In view of the above disadvantages of the prior art, an object of the present invention is to provide a machine room resource prediction method and device based on a large-scale distributed operation and maintenance system, so as to add a machine room resource prediction and allocation mechanism to the large-scale distributed operation and maintenance system, thereby improving operation and maintenance efficiency.
In order to achieve the above and other related objects, the present invention provides a machine room resource prediction method based on a large-scale distributed operation and maintenance system, including: building a machine room graph with the machine room servers as nodes, the sharing relationships between machine room servers as edges, and each server's own configuration as attributes; computing a vector representation for all nodes of the machine room graph; and predicting the future resource utilization rate of each machine room server from a time series composed of each node vector, each server's own configuration, and its resource utilization rate, so as to give early warning of each server's peak usage periods.
In an embodiment of the present invention, one implementation of the vector representation of all nodes of the machine room graph includes: performing random walks on the machine room graph with the DeepWalk algorithm; training on the walk sequences with a Skip-gram model; and obtaining an embedded representation of each node through multiple iterations.
In an embodiment of the present invention, the training before predicting the future resource utilization is implemented using a long short-term memory (LSTM) network; the input features of the LSTM network include: each node vector, each machine room server's own configuration, and its resource utilization rate.
In an embodiment of the present invention, the method further includes: and providing a resource allocation scheme based on the predicted resource utilization rate trend of each machine room server.
In order to achieve the above and other related objects, the present invention provides a machine room resource prediction apparatus based on a large-scale distributed operation and maintenance system, including: the machine room network construction module, configured to build a machine room graph with each machine room server as a node, the sharing relationships between machine room servers as edges, and each machine room server's own configuration as attributes; the machine room network graph embedding module, configured to compute a vector representation for all nodes of the machine room graph; and the time series prediction module, configured to predict the future resource utilization rate of each machine room server from a time series composed of each node vector, each server's own configuration, and its resource utilization rate, so as to give early warning of each server's peak usage periods.
In an embodiment of the present invention, one implementation of the vector representation of all nodes of the machine room graph includes: performing random walks on the machine room graph with the DeepWalk algorithm; training on the walk sequences with a Skip-gram model; and obtaining an embedded representation of each node through multiple iterations.
In an embodiment of the present invention, the training before the time series prediction module predicts the future resource utilization is implemented using a long short-term memory (LSTM) network; the input features of the LSTM network include: each node vector, each machine room server's own configuration, and its resource utilization rate.
In an embodiment of the present invention, the apparatus further includes: and the resource scheduling and distributing module is used for providing a resource distribution scheme based on the predicted resource utilization rate trend of each machine room server.
In order to achieve the above and other related objects, the present invention provides a computer-readable storage medium storing a computer program which, when loaded and executed by a processor, implements the machine room resource prediction method based on a large-scale distributed operation and maintenance system.
To achieve the above and other related objects, the present invention provides an electronic device, comprising: a processor and a memory; wherein the memory is for storing a computer program, and the processor is for loading and executing the computer program so that the electronic device performs the machine room resource prediction method based on a large-scale distributed operation and maintenance system.
As described above, the machine room resource prediction method and device based on a large-scale distributed operation and maintenance system first construct a machine room graph and an embedded representation of that graph to obtain an embedding vector for each machine room node; then the future resource utilization rate of each machine room server is predicted by combining the node's own attributes with the time series information of its resource utilization rate, and early warnings are issued for peak resource usage periods, making it easier for the large-scale distributed operation and maintenance system to subsequently allocate corresponding resources to the machine room and improving operation and maintenance efficiency.
Drawings
Fig. 1 is a schematic flow chart illustrating a machine room resource prediction method based on a large-scale distributed operation and maintenance system according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating the construction of a machine room graph and its vector representation according to an embodiment of the invention.
Fig. 3 is a schematic block diagram of a machine room resource prediction apparatus based on a large-scale distributed operation and maintenance system according to an embodiment of the present invention.
Fig. 4 is a schematic view illustrating a specific scenario of a machine room resource prediction apparatus based on a large-scale distributed operation and maintenance system according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to fig. 1, the present embodiment provides a machine room resource prediction method based on a large-scale distributed operation and maintenance system, including the following steps:
s11: building a machine room graph by taking all machine room servers as nodes, taking the shared relation of all machine room servers as edges and taking the self configuration of all machine room servers as attributes;
specifically, in the step, the machine room server is used as a node of the graph, the opportunity that the machine room server is shared by the same person is used as an edge, and the configuration of the machine room server is used as an attribute, so that the machine room graph is constructed.
For example, a machine room graph is defined as G = (V, E, A), where V is the set of all machine room servers, E is the set of all relationships between machine room servers, and A is the set of node attributes; in this patent, a node attribute is, for example, the utilization rate of a server. If two machine room servers are commonly used by the same person, then e = (v1, v2) = 1; otherwise, it is 0. Here, v1 and v2 are elements of V, and e is an element of E. In the example of fig. 2, there are 5 machine room servers, i.e., V contains five machine room nodes v1 to v5, and e1 = (v1, v2) = 1, e2 = (v1, v3) = 1, e3 = (v4, v2) = 1, e4 = (v3, v5) = 1, e5 = (v4, v5) = 0.
By constructing the machine room graph, potential relationships and information among the machine room servers can be mined to generate graph vectors; establishing common relationships between machine room nodes captures their internal connections, so that the nodes carry relational information.
It should be noted that, because some machine room servers often have similar working states and the same workloads, resource utilization peaks tend to occur together on such similar machine room nodes; constructing the machine room graph therefore helps improve the accuracy of machine room resource prediction in the subsequent steps.
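The graph construction of step S11 can be sketched in a few lines. This is our illustration of the Fig. 2 example, not code from the patent; the attribute values (cores, utilization) are hypothetical placeholders for the "self configuration" attributes.

```python
# Minimal sketch (not from the patent) of building the machine room graph
# G = (V, E, A): nodes are machine room servers, an edge exists when two
# servers are used by the same person, and attributes hold each server's
# own configuration. Edge list follows the Fig. 2 example (e5 = 0 is absent).

def build_machine_room_graph(servers, shared_pairs, attributes):
    """Return the graph as an adjacency dict plus a node-attribute dict."""
    adj = {s: set() for s in servers}
    for a, b in shared_pairs:  # each pair is an edge e = (v_i, v_j) = 1
        adj[a].add(b)
        adj[b].add(a)
    return adj, attributes

servers = ["v1", "v2", "v3", "v4", "v5"]
shared_pairs = [("v1", "v2"), ("v1", "v3"), ("v4", "v2"), ("v3", "v5")]
attrs = {s: {"cpu_cores": 16, "utilization": 0.4} for s in servers}  # hypothetical

graph, attrs = build_machine_room_graph(servers, shared_pairs, attrs)
```

A production system would likely use a graph library, but a plain adjacency dict is sufficient input for the random-walk step that follows.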
S12: performing vector representation on all nodes of the machine room graph;
specifically, all machine room nodes of the machine room graph are subjected to vector representation, namely, a mapping f is constructed, namely V → RdWhere d represents the dimension of the vector, preferably, d is 128 dimensions. In detail, in the step, a Deepwalk algorithm is adopted to randomly walk in a machine room diagram, a Skip-gram model is used for training a walking sequence, and the embedded representation of each machine room node is obtained through multiple iterations. Referring to fig. 2, in the example of fig. 2, each row of squares represents a vector, and each square represents a component of the vector. The vector is an embedding vector, and is an implicit expression of 'server relation' information, and each component has no separate actual meaning. Since each component is of a different magnitude, a different color is used. The actual vector is similar to (0.56, -1.02, -0.27, 0.08).
S13: and predicting the future resource utilization rate of each machine room server through a time sequence consisting of each node vector, the self configuration of each machine room server and the resource utilization rate so as to early warn in the peak use time period of each machine room server.
Specifically, the future resource utilization rate is predicted from a time series composed of the machine room node vector, the node's own attributes, and its resource utilization rate; training uses a deep-learning long short-term memory network (LSTM), fed with the machine room node embedding vector, the node's own attributes, and the time series information. By adopting this state-of-the-art deep-learning time series prediction method, predicting the future resource utilization rate yields the future utilization trend, from which the peak usage periods can be identified accurately and early warnings issued.
In detail, in this step, the machine room node vector is additionally added as an input feature of the LSTM, so that the resource utilization prediction can exploit the relational information between machine room servers; that is, the input features of the LSTM include the machine room server attributes, the machine room server resource utilization rate, and the machine room server embedding vector.
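The feature assembly just described can be sketched as follows. This is an illustration under our own assumptions: the dimensions and values are made up (the patent fixes only d = 128 for the embedding, which we shrink here for readability), and the final LSTM call is merely indicated in a comment.

```python
# Sketch of building the LSTM input sequence: at each timestep the feature
# vector concatenates the static node embedding, the static server
# configuration, and that timestep's (time-varying) resource utilization.

def build_lstm_inputs(embedding, config, utilization_series):
    """Return one feature vector per timestep: [embedding | config | u_t]."""
    return [list(embedding) + list(config) + [u] for u in utilization_series]

node_embedding = [0.56, -1.02, -0.27, 0.08]   # d = 128 in practice
server_config = [16.0, 64.0]                  # e.g. cores, RAM GB (hypothetical)
utilization = [0.31, 0.35, 0.52, 0.78, 0.64]  # one reading per timestep

seq = build_lstm_inputs(node_embedding, server_config, utilization)
# seq is a T x (4 + 2 + 1) sequence; it could then feed, for example,
# torch.nn.LSTM(input_size=7, hidden_size=...) after wrapping in a tensor.
```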
Further, in an embodiment, after the step S13, the method further includes the steps of: and providing a resource allocation scheme based on the predicted resource utilization rate trend of each machine room server.
Specifically, because the future resource utilization trend of the machine room servers is predicted in advance, early warnings can be issued for machine room servers that are about to reach a peak, so that a reasonable and efficient resource allocation scheme, such as balancing peak traffic, can be provided for peak-period resource allocation, improving the operation and maintenance management and control capability of the large-scale distributed operation and maintenance system.
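One simple way to turn a predicted utilization series into peak-period warnings is thresholding, sketched below. This is our illustration, not the patent's allocation algorithm; the threshold value is an assumption.

```python
# Minimal sketch: flag contiguous timesteps whose predicted utilization
# exceeds a threshold, yielding the peak periods for which the operation
# and maintenance system should pre-allocate resources.

def peak_periods(predicted, threshold=0.8):
    """Return (start, end) index pairs of contiguous above-threshold runs."""
    periods, start = [], None
    for t, u in enumerate(predicted):
        if u >= threshold and start is None:
            start = t
        elif u < threshold and start is not None:
            periods.append((start, t - 1))
            start = None
    if start is not None:
        periods.append((start, len(predicted) - 1))
    return periods

predicted = [0.55, 0.82, 0.91, 0.76, 0.85, 0.88, 0.60]
print(peak_periods(predicted))  # [(1, 2), (4, 5)]
```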
All or part of the steps of the above method embodiments may be performed by hardware associated with a computer program. Based on this understanding, the present invention also provides a computer program product comprising one or more computer instructions. The computer instructions may be stored in a computer-readable storage medium. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Referring to fig. 3, the present embodiment provides a machine room resource prediction apparatus 30 based on a large-scale distributed operation and maintenance system, which is installed in an electronic device as a piece of software to execute the machine room resource prediction method based on the large-scale distributed operation and maintenance system described in the foregoing method embodiment during running. Since the technical principle of the embodiment of the system is similar to that of the embodiment of the method, repeated description of the same technical details is omitted.
The machine room resource prediction apparatus 30 based on the large-scale distributed operation and maintenance system of this embodiment specifically includes: the computer room network construction module 31, the computer room network map embedding module 32, and the time series prediction module 33, and further, the prediction apparatus 30 further includes a resource scheduling allocation module 34.
The machine room network building module 31 is configured to execute the step S11 described in the foregoing method embodiment, the machine room network map embedding module 32 is configured to execute the step S12 described in the foregoing method embodiment, the time sequence prediction module 33 is configured to execute the step S13 described in the foregoing method embodiment, and the resource scheduling and allocating module 34 is configured to provide a resource allocation scheme based on the predicted resource utilization rate trend of each of the machine room servers.
In detail, the machine room network construction module 31 defines the machine room nodes, edges and attributes and builds the machine room graph to meet the input requirements of the machine room network graph embedding module 32; the machine room network graph embedding module 32 produces an embedded representation of the machine room graph through graph embedding technology, extracting a representation vector of the graph (which contains the relational information between machine room nodes) used as input to the time series prediction module 33; the time series prediction module 33 predicts the future development trend of machine room node resources from the existing time series information and graph embedding vectors, obtaining the future resource utilization rate and its trend, in particular predicting peak periods and issuing early warnings; the resource scheduling and allocation module 34 extracts the peak periods from the resource utilization trend computed by the time series prediction module 33 and generates a corresponding resource scheduling and allocation scheme.
Those skilled in the art should understand that the division into modules in the embodiment of fig. 3 is only a logical division; in actual implementation the modules can be fully or partially integrated into one or more physical entities. The modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the time series prediction module 33 may be a separate processing element, may be integrated in a chip, or may be stored in a memory in the form of program code whose functions are invoked and executed by a processing element. The other modules are implemented similarly. The processing element described herein may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
In the example shown in fig. 4, the machine room network of the data layer is constructed by the machine room network construction module 31, the network embedding model of the model layer is executed by the machine room network map embedding module 32 to obtain the embedding vector of the result layer, the time sequence prediction module 33 obtains the future machine room resource utilization rate of the result layer based on the machine room resource time sequence of the data layer by using the time sequence model of the model layer, and the resource scheduling and allocating module 34 outputs the automatic resource allocation scheme at the application layer based on the future machine room resource utilization rate.
Referring to fig. 5, the present embodiment provides an electronic device 50, and the electronic device 50 may be a desktop computer, a laptop computer, or the like. In detail, the electronic device 50 comprises at least, connected by a bus 51: a memory 52 and a processor 53, wherein the memory 52 is used for storing computer programs, and the processor 53 is used for executing the computer programs stored in the memory 52 to execute all or part of the steps of the foregoing method embodiments.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In summary, the machine room resource prediction method and device based on a large-scale distributed operation and maintenance system of the present invention extract graph embedding vectors and compute the latent information in graph nodes using graph embedding technology; a deep-learning time series prediction method extracts the time series information and fuses it with the graph feature information to jointly predict the traffic trend and peak periods in a future time window, and a reliable and reasonable resource allocation scheme is provided according to the future development trend. The invention thus effectively overcomes various defects of the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A machine room resource prediction method based on a large-scale distributed operation and maintenance system is characterized by comprising the following steps:
building a machine room graph by taking all machine room servers as nodes, taking the shared relation of all machine room servers as edges and taking the self configuration of all machine room servers as attributes;
performing vector representation on all nodes of the machine room graph;
and predicting the future resource utilization rate of each machine room server through a time sequence consisting of each node vector, the self configuration of each machine room server and the resource utilization rate so as to early warn in the peak use time period of each machine room server.
2. The method of claim 1, wherein the vector representation of all nodes of the machine room graph is implemented by:
performing random walks on the machine room graph with the DeepWalk algorithm;
training on the walk sequences with a Skip-gram model; and
obtaining an embedded representation of each node through multiple iterations.
3. The method of claim 1, wherein the training before predicting the future resource utilization is implemented using a long short-term memory (LSTM) network; the input features of the LSTM network comprise: each node vector, each machine room server's own configuration, and its resource utilization rate.
4. The method of claim 1, further comprising: and providing a resource allocation scheme based on the predicted resource utilization rate trend of each machine room server.
5. A computer room resource prediction device based on a large-scale distributed operation and maintenance system is characterized by comprising:
the machine room network construction module, configured to build a machine room graph with each machine room server as a node, the sharing relationships between machine room servers as edges, and each machine room server's own configuration as attributes;
the computer room network graph embedding module is used for performing vector representation on all nodes of the computer room graph;
and the time sequence prediction module is used for predicting the future resource utilization rate of each machine room server through a time sequence formed by each node vector, the self configuration of each machine room server and the resource utilization rate so as to early warn in the peak use time period of each machine room server.
6. The apparatus of claim 5, wherein the vector representation of all nodes of the machine room graph is implemented by:
performing random walks on the machine room graph with the DeepWalk algorithm;
training on the walk sequences with a Skip-gram model; and
obtaining an embedded representation of each node through multiple iterations.
7. The apparatus of claim 5, wherein the training before the time series prediction module predicts the future resource utilization is implemented using a long short-term memory (LSTM) network; the input features of the LSTM network comprise: each node vector, each machine room server's own configuration, and its resource utilization rate.
8. The apparatus of claim 5, further comprising: and the resource scheduling and distributing module is used for providing a resource distribution scheme based on the predicted resource utilization rate trend of each machine room server.
9. A computer-readable storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the computer-readable storage medium implements the method for predicting resources of a machine room based on a large-scale distributed operation and maintenance system according to any one of claims 1 to 4.
10. An electronic device, comprising: a processor and a memory; wherein:
the memory is used for storing a computer program;
the processor is used for loading and executing the computer program to enable the electronic device to execute the computer room resource prediction method based on the large-scale distributed operation and maintenance system according to any one of claims 1 to 4.
CN201911285148.3A 2019-12-13 2019-12-13 Machine room resource prediction method and device based on large-scale distributed operation and maintenance system Pending CN112988371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911285148.3A CN112988371A (en) 2019-12-13 2019-12-13 Machine room resource prediction method and device based on large-scale distributed operation and maintenance system


Publications (1)

Publication Number Publication Date
CN112988371A true CN112988371A (en) 2021-06-18

Family

ID=76342146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285148.3A Pending CN112988371A (en) 2019-12-13 2019-12-13 Machine room resource prediction method and device based on large-scale distributed operation and maintenance system

Country Status (1)

Country Link
CN (1) CN112988371A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063327A (en) * 2010-12-15 2011-05-18 中国科学院深圳先进技术研究院 Application service scheduling method with power consumption consciousness for data center
CN104932944A (en) * 2015-06-15 2015-09-23 浙江金大科技有限公司 Cloud computing resource service combination method based on weighted bipartite graph
US20180097744A1 (en) * 2016-10-05 2018-04-05 Futurewei Technologies, Inc. Cloud Resource Provisioning for Large-Scale Big Data Platform
CN107895038A (en) * 2017-11-29 2018-04-10 四川无声信息技术有限公司 A kind of link prediction relation recommends method and device
KR20180099238A (en) * 2017-02-28 2018-09-05 한국인터넷진흥원 Method for predicting cyber incident and Apparatus thereof
CN109032914A (en) * 2018-09-06 2018-12-18 掌阅科技股份有限公司 Resource occupation data predication method, electronic equipment, storage medium
US20190163744A1 (en) * 2016-07-27 2019-05-30 Epistema Ltd. Computerized environment for human expert analysts
US20190213099A1 (en) * 2018-01-05 2019-07-11 NEC Laboratories Europe GmbH Methods and systems for machine-learning-based resource prediction for resource allocation and anomaly detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
人工智能 [Artificial Intelligence]: "图嵌入概述 [An Overview of Graph Embedding]", pages 1 - 4, Retrieved from the Internet <URL:《https://blog.csdn.net/weixin_42137700/article/details/103021961》> *

Similar Documents

Publication Publication Date Title
CN110301128B (en) Learning-based resource management data center cloud architecture implementation method
CN108009016B (en) Resource load balancing control method and cluster scheduler
CN108549583B (en) Big data processing method and device, server and readable storage medium
US9122676B2 (en) License reconciliation with multiple license types and restrictions
JP2018533795A (en) Stream based accelerator processing of calculation graph
US20140026147A1 (en) Varying a characteristic of a job profile relating to map and reduce tasks according to a data size
US9483393B1 (en) Discovering optimized experience configurations for a software application
US10541936B1 (en) Method and system for distributed analysis
CN103677958A (en) Virtualization cluster resource scheduling method and device
US20180176148A1 (en) Method of dynamic resource allocation for public clouds
US20190081907A1 (en) Systems and methods for computing infrastructure resource allocation
CN108205469B (en) MapReduce-based resource allocation method and server
WO2018086467A1 (en) Method, apparatus and system for allocating resources of application clusters under cloud environment
EP3238102A1 (en) Techniques to generate a graph model for cloud infrastructure elements
US20170026305A1 (en) System to place virtual machines onto servers based upon backup runtime constraints
Deng et al. A data and task co-scheduling algorithm for scientific cloud workflows
US20220091763A1 (en) Storage capacity forecasting for storage systems in an active tier of a storage environment
JP2017117449A (en) Data flow programming of computing apparatus with vector estimation-based graph partitioning
CN108833592A (en) Cloud host schedules device optimization method, device, equipment and storage medium
CN113378498A (en) Task allocation method and device
Hammer et al. An inhomogeneous hidden Markov model for efficient virtual machine placement in cloud computing environments
CN106201655B (en) Virtual machine distribution method and virtual machine distribution system
US10387578B1 (en) Utilization limiting for nested object queries
CN116737373A (en) Load balancing method, device, computer equipment and storage medium
CN116647560A (en) Method, device, equipment and medium for coordinated optimization control of Internet of things computer clusters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination