CN112667403B - Scheduling method and device of server and electronic equipment - Google Patents
Scheduling method and device of server and electronic equipment

- Publication number: CN112667403B
- Application number: CN202011625084.XA
- Authority: CN (China)
- Prior art keywords: server, scheduling, data, data center, deployed
- Prior art date: 2020-12-31
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Power Sources (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a server scheduling method, a server scheduling device and an electronic device, and relates to the field of artificial intelligence, in particular deep learning and machine learning. The method includes: acquiring historical operation data of deployed servers; acquiring attribute information of a data center and the reserved space of its cabinet positions; generating a scheduling policy for servers based on the historical operation data, the attribute information and the reserved space; and performing server scheduling based on the scheduling policy. By comprehensively considering the historical operation data of the deployed servers, the attribute information of the existing data center and the reserved cabinet space, an optimal scheduling policy can be provided and reasonable, reliable server scheduling performed. This effectively improves the cabinet load rate and power density, improves the operating energy efficiency of the data center, avoids under-use or over-use of resources, and improves the effectiveness and reliability of the scheduling process.
Description
Technical Field
The present disclosure relates to the technical field of computers, and in particular to the field of artificial intelligence, such as deep learning and machine learning.
Background
Servers are the main power-consuming equipment of a data center, accounting for more than 70% of its total electricity consumption. In the related art, the overall rack loading rate and the load rate of data center cabinets remain at a medium-to-low level owing to stability, safety and other concerns. This wastes cabinet resources and the corresponding power capacity, and prevents the refrigeration equipment from operating at optimal efficiency.
However, server scheduling methods in the related art are imperfect, and the scheduling process suffers from very low security. How to configure server resources reasonably and ensure that the corresponding power and refrigeration equipment operate at better efficiency has therefore become an important research direction.
Disclosure of Invention
The disclosure provides a server scheduling method and device and electronic equipment.
According to an aspect of the present disclosure, there is provided a scheduling method of a server, including:
acquiring historical operation data of deployed servers;
acquiring attribute information of a data center and a reserved space of a cabinet position;
generating a scheduling policy for a server based on the historical operation data, the attribute information and the reserved space; and
performing server scheduling based on the scheduling policy.
According to another aspect of the present disclosure, there is provided a scheduling apparatus of a server, including:
a first acquisition module, configured to acquire historical operation data of deployed servers;
a second acquisition module, configured to acquire attribute information of a data center and a reserved space of a cabinet position;
a generation module, configured to generate a scheduling policy for a server based on the historical operation data, the attribute information and the reserved space; and
a scheduling module, configured to perform server scheduling based on the scheduling policy.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of scheduling a server according to the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the scheduling method of the server according to the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the scheduling method of a server according to the first aspect of the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a server scheduling overall process;
FIG. 5 is a block diagram of a scheduling apparatus of a server for implementing a scheduling method of the server of an embodiment of the present disclosure;
FIG. 6 is a block diagram of a scheduling apparatus of a server for implementing a scheduling method of the server of an embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device used to implement a server scheduling method or server scheduling apparatus according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The technical field to which the aspects of the present disclosure relate is briefly described below:
Computer technology is widely used and can be broadly divided into computer system technology, computer device technology, computer component technology and computer assembly technology. It covers the basic principles of computation and their application to the design of arithmetic units, instruction sets, central processing units (CPUs), pipelining, storage systems, buses, and input/output.
AI (Artificial Intelligence) is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning), and involves techniques at both the hardware and software level. Artificial intelligence software technologies generally include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technologies, and the like.
ML (Machine Learning) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. It is the core of artificial intelligence and the fundamental way to make computers intelligent.
DL (Deep Learning) is a new research direction in the field of machine learning; it was introduced to bring machine learning closer to its original goal, artificial intelligence. Deep learning learns the inherent laws and representation levels of sample data, and the information obtained during such learning helps in interpreting data such as text, images and sound. Its ultimate goal is to give machines human-like analytical learning ability, enabling them to recognize text, image and sound data. Deep learning is a complex machine learning algorithm whose results in speech and image recognition far exceed those of earlier techniques.
The following describes a server scheduling method, a server scheduling device and an electronic device according to embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the execution body of the server scheduling method in the embodiment of the present disclosure is a server scheduling apparatus, which may specifically be a hardware device or software in a hardware device; the hardware device may be, for example, a terminal device or a server. As shown in fig. 1, the scheduling method of the server according to the present embodiment includes the following steps:
s101, acquiring historical operation data of the deployed server.
The historical operation data may include, but is not limited to, data such as the rated power consumption and real-time power consumption of resources such as data center servers, infrastructure and cabinet positions.
It should be noted that the present disclosure does not limit the specific manner of acquiring the historical operation data of the deployed servers, which may be chosen according to the actual situation. For example, a platform may be built in-house to store the historical operation data; alternatively, the historical operation data may be stored using an existing resource management platform, such as a DCIM (Data Center Infrastructure Management) platform.
For example, if the deployed servers are numbered 1 to 7000, i.e., 7000 servers in total, the historical operation data of these 7000 servers can be acquired.
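By way of illustration only, the following sketch shows what this acquisition step could look like in code. The `HistoricalRecord` fields, the `fetch_historical_data` helper and the assumption that raw records are exported from a DCIM-style platform are all introduced for the example and are not prescribed by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class HistoricalRecord:
    """One sample of operating data for a deployed server (hypothetical fields)."""
    server_id: int
    cabinet_position: str      # e.g. "room-1/cabinet-037"
    rated_power_w: float       # rated power consumption
    realtime_power_w: float    # measured real-time power consumption


def fetch_historical_data(dcim_records: List[dict]) -> Dict[int, List[HistoricalRecord]]:
    """Group raw records exported from a DCIM-like platform by server id."""
    history: Dict[int, List[HistoricalRecord]] = {}
    for raw in dcim_records:
        record = HistoricalRecord(
            server_id=raw["server_id"],
            cabinet_position=raw["cabinet_position"],
            rated_power_w=raw["rated_power_w"],
            realtime_power_w=raw["realtime_power_w"],
        )
        history.setdefault(record.server_id, []).append(record)
    return history
```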
S102, acquiring attribute information of a data center and a reserved space of a cabinet position.
The attribute information of the data center may include, but is not limited to, infrastructure redundancy configuration information such as a power capacity threshold and a refrigeration capacity threshold.
S103, generating a scheduling strategy for the server based on the historical operation data, the attribute information and the reserved space.
In the related art, generally only the server itself is monitored, and scheduling relies on the monitoring result as its sole basis; that is, the related-art scheduling method cannot take the data center power and refrigeration infrastructure into account. As a result, servers often cannot be scheduled according to an optimal scheduling policy.
Therefore, in the present method, a more reasonable and reliable scheduling policy for the server can be generated from the historical operation data, combined with the attribute information and the reserved space, based on a preset scheduling policy generation rule.
The scheduling policy generation rule may be set according to an actual situation.
S104, server scheduling is carried out based on the scheduling policy.
For example, if the scheduling policy is to add 100 servers to cabinet positions 1-50 of the first machine room and cabinet positions 1-50 of the second machine room, then based on this policy the 100 newly added servers may be placed in cabinet positions 1-50 of the first and second machine rooms.
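Taken together, steps S101–S104 amount to the outline below. This is a minimal sketch only: `generate_policy` and `apply_policy` are hypothetical callables standing in for the policy generation rule of S103 and the scheduling action of S104.

```python
def schedule_servers(history, dc_attributes, reserved_space,
                     generate_policy, apply_policy):
    """Outline of S101-S104: gather the inputs, derive a policy, then act on it.

    history        - historical operation data of deployed servers (S101)
    dc_attributes  - data center attribute info, e.g. power/cooling capacity thresholds (S102)
    reserved_space - reserved cabinet positions per machine room (S102)
    """
    policy = generate_policy(history, dc_attributes, reserved_space)  # S103
    apply_policy(policy)                                              # S104
    return policy
```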
According to the server scheduling method of the embodiment of the present disclosure, historical operation data of deployed servers is acquired, attribute information of the data center and the reserved space of cabinet positions are acquired, a scheduling policy for the servers is generated based on the historical operation data, the attribute information and the reserved space, and server scheduling is then performed based on that policy. By comprehensively considering the historical operation data of the deployed servers, the attribute information of the existing data center and the reserved cabinet space, an optimal scheduling policy can be provided and reasonable, reliable server scheduling performed. This effectively improves the cabinet load rate and power density, improves the operating energy efficiency of the data center, avoids under-use or over-use of resources, and improves the effectiveness and reliability of the scheduling process.
In practical applications, the capacity status of the data center may be different, for example, a capacity surplus status, an overload status, and the like. Thus, in the present disclosure, a real-time scheduling policy for a server may be generated based on the acquired historical operating data, attribute information, and reserved space.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As a possible implementation manner, as shown in fig. 2, on the basis of the foregoing embodiment, the method specifically includes the following steps:
S201, acquiring historical operation data of the deployed server.
S202, acquiring attribute information of a data center and a reserved space of a cabinet position.
Steps S201 to S202 are the same as steps S101 to S102, and will not be described here again.
The specific procedure in step S103 includes the following steps S203 to S205.
S203, acquiring energy consumption data of the deployed server based on the historical operation data.
It should be noted that, in practice, after a deployed server has been running for some time, its actual power consumption often differs from its rated power consumption and may exhibit certain temporal characteristics, such as long-term stability, irregular fluctuation, or fluctuation with time of day and service usage. Deployed servers are therefore one of the important factors affecting the rack loading rate and cabinet load rate of the data center.
Thus, in the embodiment of the present disclosure, the energy consumption data of the deployed server may be obtained based on the historical operation data.
S204, acquiring the capacity state of the data center based on the attribute information and the energy consumption data of the deployed server.
The capacity state of the data center may include: excess capacity, overload, etc.
It should be noted that, in the present disclosure, a specific manner of acquiring the capacity state of the data center based on the attribute information and the energy consumption data of the deployed server is not limited, and may be selected according to actual situations.
As one possible implementation, the current capacity of the data center may be obtained based on the attribute information and the energy consumption data of the deployed servers. This value can then be compared with a first preset capacity threshold and a second preset capacity threshold: if it is greater than the first preset capacity threshold, the capacity state of the data center is identified as overloaded; if it is smaller than the second preset capacity threshold, the capacity state is identified as capacity surplus. The first preset capacity threshold and the second preset capacity threshold can be set according to the actual situation.
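A sketch of this two-threshold classification is given below. It assumes, purely for illustration, that the quantity compared against the thresholds is the data center's current power draw derived from the deployed servers' energy consumption data; the disclosure does not fix the exact quantity or the threshold values.

```python
from enum import Enum


class CapacityState(Enum):
    OVERLOADED = "overloaded"
    SURPLUS = "capacity_surplus"
    NORMAL = "normal"


def classify_capacity(current_power_w: float,
                      first_threshold_w: float,
                      second_threshold_w: float) -> CapacityState:
    """Compare the current value against the two preset thresholds."""
    if current_power_w > first_threshold_w:
        return CapacityState.OVERLOADED   # above the first (upper) threshold
    if current_power_w < second_threshold_w:
        return CapacityState.SURPLUS      # below the second (lower) threshold
    return CapacityState.NORMAL           # otherwise no rescheduling is triggered
```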
Further, in the present disclosure, after the capacity state of the data center is acquired, alarm information matched with the capacity state may be generated based on that state and sent to the associated device.
As a possible implementation, if the capacity state of the data center is overloaded, first alarm information may be generated and sent to the associated device; if the capacity state is capacity surplus, second alarm information may be generated and sent to the associated device.
The first alarm information and the second alarm information can be set according to the actual situation. For example, the first alarm information may be set to "capacity overloaded, please adjust the scheduling policy in time", and the second alarm information may be set to "capacity surplus, please make reasonable use of the data center capacity".
The associated device may be a mobile terminal of the relevant operation and maintenance manager, such as a smartphone, a display device of the dispatching center on which the operation and maintenance manager can view the information, or another associated device.
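One possible way of matching alarm text to the capacity state, following the two example messages above, is sketched here; the message strings, the state labels and the `send_to_device` callable are placeholders rather than part of the disclosure.

```python
from typing import Callable, Optional


def build_alarm(capacity_state: str) -> Optional[str]:
    """Return alarm text matched to the capacity state, or None if no alarm is needed."""
    if capacity_state == "overloaded":
        return "Capacity overloaded, please adjust the scheduling policy in time"
    if capacity_state == "capacity_surplus":
        return "Capacity surplus, please make reasonable use of the data center capacity"
    return None


def notify(capacity_state: str, send_to_device: Callable[[str], None]) -> None:
    message = build_alarm(capacity_state)
    if message is not None:
        send_to_device(message)  # e.g. the O&M manager's phone or the dispatch center display
```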
S205, in response to the capacity state of the data center being capacity surplus, determining the number and positions of servers to be deployed based on the reserved space and the deployment positions of the deployed servers, so as to generate a provisioning scheduling policy for adding servers.
The cases in which the capacity state of the data center is capacity surplus and overloaded, respectively, are explained below.
Optionally, in response to the capacity state of the data center being capacity surplus, the identification information of the deployed servers may be acquired, and the data center to which each deployed server belongs and its corresponding deployment position may be identified based on that identification information.
Further, the number and positions of the servers to be deployed may be determined based on the reserved space and the deployment positions of the deployed servers, to generate a provisioning scheduling policy for adding servers.
For example, suppose the reserved space consists of cabinet positions 1-200 in each of the first, second, third and fourth machine rooms, and the deployed servers occupy cabinet positions 1-200 of the first, second and third machine rooms. The number of servers to be deployed can then be determined to be 200, at cabinet positions 1-200 of the fourth machine room, and a provisioning scheduling policy for adding these servers may be generated accordingly.
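The example above is essentially a set difference between reserved and occupied cabinet positions. A minimal sketch under that assumption (the data shapes are illustrative, not part of the disclosure):

```python
from typing import Dict, List, Set, Tuple


def provisioning_policy(reserved: Dict[str, Set[int]],
                        occupied: Dict[str, Set[int]]) -> Tuple[int, Dict[str, List[int]]]:
    """Free positions = reserved cabinet positions minus those already occupied.

    Returns the number of servers that can be added and the free positions per machine room.
    """
    free: Dict[str, List[int]] = {}
    for room, positions in reserved.items():
        remaining = sorted(positions - occupied.get(room, set()))
        if remaining:
            free[room] = remaining
    return sum(len(p) for p in free.values()), free


# Mirrors the example: rooms 1-3 fully occupied, room 4 entirely free -> 200 servers in room 4.
reserved = {f"room-{i}": set(range(1, 201)) for i in range(1, 5)}
occupied = {f"room-{i}": set(range(1, 201)) for i in range(1, 4)}
count, free = provisioning_policy(reserved, occupied)
assert count == 200 and list(free) == ["room-4"]
```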
If the capacity state of the data center is overloaded, a migration scheduling policy for migrating the deployed servers may be generated in response.
As a possible implementation manner, as shown in fig. 3, on the basis of the foregoing embodiment, a specific process of generating a migration scheduling policy for migrating a deployed server in the foregoing step includes the following steps:
S301, acquiring the capacity states of candidate data centers, and determining a target data center with capacity surplus based on the capacity states of the candidate data centers.
In this case, the target data center may be selected from the plurality of data centers with excess capacity according to a preset target data center selection policy.
The target data center selection policy may be set according to actual situations, for example, the target data center selection policy may be set to take the data center with the largest excess capacity as the target data center; for another example, a target data center selection policy may be set to randomly select any data center with excess capacity as the target data center.
For example, suppose there are three candidate data centers in total, a, b and c, and the capacity states of candidate data centers a and b are capacity surplus. In this case, either data center a or data center b may be determined as the target data center.
When attempting to obtain the deployment location of the deployed server, the identification information of the deployed server may be obtained, and based on the identification information, the data center to which the deployed server belongs and the corresponding deployment location may be identified.
For example, if the identification information of 50 deployed servers is 1-50, then based on this identification information the data centers to which they belong can be identified as the first data center and the second data center, with corresponding deployment positions at cabinet positions 1-30 of the first machine room and cabinet positions 1-20 of the second machine room, respectively.
S302, determining the allowed number and allowed deployment positions of servers that can be deployed in the target data center, based on the reserved space corresponding to the target data center and the deployment positions of the deployed servers.
In the embodiment of the present disclosure, after the target data center is determined, the reserved space corresponding to the target data center and the deployment positions of its deployed servers can be acquired, and the allowed number and allowed deployment positions of servers that can be deployed in the target data center are then determined on that basis.
For example, if the reserved space corresponding to the target data center is cabinet positions 1-200 of machine room C and the deployed servers occupy cabinet positions 1-50 of machine room C, it may be determined that the allowed number of servers that can be deployed in the target data center is 150 and the allowed deployment positions are cabinet positions 51-200 of machine room C.
S303, generating a migration scheduling policy according to the number of target servers and the target positions, wherein the migration scheduling policy includes the allowed number and the allowed deployment positions.
In the embodiment of the present disclosure, after the allowed number and allowed deployment positions of servers that can be deployed in the target data center are determined, a migration scheduling policy including the allowed number and the allowed deployment positions can be generated according to the number of target servers and the target positions.
For example, if it is determined that the allowed number of servers that can be deployed in the target data center is 150 and the allowed deployment positions are cabinet positions 51-200 of machine room C, the following migration scheduling policy may be generated: 150 servers are allowed to be migrated, with deployment positions at cabinet positions 51-200 of machine room C.
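Steps S301–S303 can be sketched as follows. The candidate data shape, the rule of taking any surplus candidate as the target, and the idea of capping the migration at the allowed number are assumptions made only to illustrate the flow.

```python
from typing import Dict, List, Optional


def build_migration_policy(candidates: Dict[str, dict],
                           servers_to_migrate: int) -> Optional[dict]:
    """S301-S303: pick a capacity-surplus target, compute its free slots, build the policy.

    Each candidate entry is assumed to look like:
      {"state": "capacity_surplus",
       "reserved": {"room-C": {1, ..., 200}},
       "occupied": {"room-C": {1, ..., 50}}}
    """
    # S301: keep only candidates whose capacity state is capacity surplus.
    surplus = {name: c for name, c in candidates.items() if c["state"] == "capacity_surplus"}
    if not surplus:
        return None
    target_name = next(iter(surplus))  # assumption: any surplus candidate may be chosen

    # S302: allowed positions = reserved positions not yet occupied in the target.
    target = surplus[target_name]
    allowed: Dict[str, List[int]] = {}
    for room, positions in target["reserved"].items():
        free = sorted(positions - target["occupied"].get(room, set()))
        if free:
            allowed[room] = free
    allowed_number = sum(len(p) for p in allowed.values())

    # S303: the policy records how many servers may be migrated and to which positions.
    return {
        "target_data_center": target_name,
        "allowed_number": min(servers_to_migrate, allowed_number),
        "allowed_positions": allowed,
    }
```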
S206, generating a scheduling strategy for the server based on the historical operation data, the attribute information and the reserved space.
The step S206 is the same as the step S103, and will not be described here again.
S207, server scheduling is conducted based on the scheduling policy.
In the embodiment of the present disclosure, when scheduling servers based on the scheduling policy, a scheduling inquiry may first be issued, and server scheduling based on the scheduling policy is then performed in response to a confirmation indication for the inquiry.
Further, in response to the scheduling being completed, a recording work order may be generated and sent to the associated device.
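A sketch of this confirm-then-execute flow; the callables are placeholders for whatever interaction and ticketing mechanism is actually used, and are not specified by the disclosure.

```python
from typing import Callable


def execute_with_confirmation(policy: dict,
                              ask_confirmation: Callable[[dict], bool],
                              apply_policy: Callable[[dict], None],
                              send_work_order: Callable[[dict], None]) -> bool:
    """Issue a scheduling inquiry, schedule only after confirmation, then record a work order."""
    if not ask_confirmation(policy):   # scheduling inquiry
        return False
    apply_policy(policy)               # server scheduling based on the policy
    send_work_order({"policy": policy, "status": "completed"})  # recording work order
    return True
```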
According to the server scheduling method described above, matching scheduling policies can be generated for different capacity states of the data center; for example, if the capacity state of the data center is overloaded, a migration scheduling policy for migrating the deployed servers may be generated. In this way, problems such as server migration, server addition, overload and capacity surplus can be handled effectively without coordinating multiple platforms or resorting to manual operation, which reduces labor and time costs, raises the degree of intelligence, and further improves the effectiveness and reliability of the scheduling process.
In summary, as shown in fig. 4, in the server scheduling method provided by the present disclosure, the historical operation data, the attribute information and the reserved space can be analyzed by the resource management platform to generate an optimal scheduling policy for the servers, and server scheduling is performed based on that policy. After the scheduling is completed, the work order may be sent to the associated device, and the change is then completed through offline operation.
It should be noted that the operation and maintenance manager can receive, on the associated device, alarm information matched with the capacity state of the data center, so as to learn its real-time capacity state in time. The generated scheduling policy can also be reviewed, further ensuring its reliability.
According to the server scheduling method of the embodiment of the present disclosure, an optimal scheduling policy can be provided by comprehensively considering the historical operation data of the deployed servers, the attribute information of the existing data center and the reserved cabinet space, so that reasonable and reliable server scheduling is performed. This effectively improves the cabinet load rate and power density, improves the operating energy efficiency of the data center, and avoids under-use or over-use of resources. Furthermore, problems such as server migration, server addition, overload and capacity surplus can be handled effectively without coordinating multiple platforms or resorting to manual operation, which reduces labor and time costs, raises the degree of intelligence, and improves the effectiveness and reliability of the scheduling process.
Corresponding to the server scheduling methods provided in the foregoing embodiments, an embodiment of the present disclosure further provides a server scheduling apparatus. Since the server scheduling apparatus provided in this embodiment corresponds to the server scheduling methods provided in the foregoing embodiments, the implementation of the method is also applicable to the apparatus and will not be described in detail here.
Fig. 5 is a schematic structural diagram of a scheduling apparatus of a server according to an embodiment of the present disclosure.
As shown in fig. 5, the scheduling apparatus 500 of the server includes: a first acquisition module 510, a second acquisition module 520, a generation module 530, and a scheduling module 540. Wherein,
a first acquisition module 510, configured to acquire historical operation data of deployed servers;
a second acquisition module 520, configured to acquire attribute information of the data center and the reserved space of cabinet positions;
a generation module 530, configured to generate a scheduling policy for the servers based on the historical operation data, the attribute information and the reserved space; and
a scheduling module 540, configured to perform server scheduling based on the scheduling policy.
Fig. 6 is a schematic structural diagram of a scheduling apparatus of a server according to an embodiment of the present disclosure.
As shown in fig. 6, the scheduling apparatus 600 of the server includes: a first acquisition module 610, a second acquisition module 620, a generation module 630, and a scheduling module 640.
Wherein the generating module 630 includes:
A first obtaining sub-module 631, configured to obtain energy consumption data of the deployed server based on the historical operation data;
A second obtaining sub-module 632, configured to obtain a capacity state of the data center based on the attribute information and the energy consumption data of the deployed server;
A first generating sub-module 633, configured to, in response to the capacity state of the data center being capacity surplus, determine the number and positions of servers to be deployed based on the reserved space and the deployment positions of the deployed servers, so as to generate a provisioning scheduling policy for adding servers.
A second generation sub-module 634 is configured to generate a migration scheduling policy for migrating the deployed server in response to the capacity status of the data center being overloaded.
Wherein the second generating sub-module 634 includes:
A first determining unit 6341, configured to acquire the capacity states of candidate data centers and determine a target data center with capacity surplus based on those capacity states;
a second determining unit 6342, configured to determine, based on the reserved space corresponding to the target data center and the deployment positions of the deployed servers, the allowed number and allowed deployment positions of servers that can be deployed in the target data center;
a first generating unit 6343, configured to generate the migration scheduling policy according to the number of target servers and the target positions, wherein the migration scheduling policy includes the allowed number and the allowed deployment positions.
Wherein, the generating module 630 further includes:
and an identifying sub-module 635, configured to obtain identification information of the deployed server, and identify, based on the identification information, the data center to which the deployed server belongs and the corresponding deployment location.
Wherein, the scheduling module 640 includes:
The scheduling sub-module 641 is configured to issue a scheduling query, and respond to the query to determine an indication, and perform server scheduling based on the scheduling policy.
Wherein, the second generating sub-module 634 further comprises:
And a second generating unit 6344, configured to generate, based on the capacity state, alarm information that matches the capacity state, and send the alarm information to the associated device.
Wherein, the scheduling module 640 includes:
And the sending submodule 642 is used for generating a recording work order after the scheduling is completed and sending the recording work order to the associated equipment.
It should be noted that the first acquisition module 610 and the second acquisition module 620 have the same functions and structures as the first acquisition module 510 and the second acquisition module 520.
According to the server scheduling apparatus of the embodiment of the present disclosure, historical operation data of deployed servers is acquired, attribute information of the data center and the reserved space of cabinet positions are acquired, a scheduling policy for the servers is generated based on the historical operation data, the attribute information and the reserved space, and server scheduling is then performed based on that policy. By comprehensively considering the historical operation data of the deployed servers, the attribute information of the existing data center and the reserved cabinet space, an optimal scheduling policy can be provided and reasonable, reliable server scheduling performed. This effectively improves the cabinet load rate and power density, improves the operating energy efficiency of the data center, avoids under-use or over-use of resources, and improves the effectiveness and reliability of the scheduling process.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a scheduling method of a server. For example, in some embodiments, the scheduling method of the server may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the server scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the scheduling method of the server by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable server scheduling apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service expansibility found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server combined with a blockchain.
According to an embodiment of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements a scheduling method of a server according to the first aspect of the present disclosure.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (17)
1. A server scheduling method, comprising:
acquiring historical operation data of deployed servers;
Acquiring attribute information of a data center and a reserved space of a cabinet position, wherein the attribute information of the data center comprises an electric power capacity threshold value and a refrigeration capacity threshold value;
Generating a scheduling policy for a server based on the historical operating data, the attribute information and the reserved space;
server scheduling is carried out based on the scheduling strategy;
Wherein the generating a scheduling policy for a server based on the historical operating data, the attribute information, and the reserved space includes:
acquiring energy consumption data of the deployed server based on the historical operation data;
acquiring the capacity state of the data center based on the attribute information and the energy consumption data of the deployed server;
and in response to the capacity state of the data center being capacity surplus, determining the number and positions of servers to be deployed based on the reserved space and the deployment positions of the deployed servers, so as to generate a provisioning scheduling policy for adding servers.
2. The scheduling method of a server according to claim 1, further comprising:
And generating a migration scheduling policy for migrating the deployed server in response to the capacity state of the data center being overloaded.
3. The server scheduling method of claim 2, wherein the generating a migration scheduling policy for migrating the deployed server comprises:
acquiring capacity states of candidate data centers, and determining a target data center with capacity surplus based on the capacity states of the candidate data centers;
determining the allowed number and allowed deployment positions of servers deployable in the target data center, based on the reserved space corresponding to the target data center and the deployment positions of the deployed servers; and
generating the migration scheduling policy according to the number of target servers and the target positions, wherein the migration scheduling policy comprises the allowed number and the allowed deployment positions.
4. A scheduling method of a server according to claim 2 or 3, further comprising:
acquiring identification information of the deployed servers, and identifying, based on the identification information, the data center to which the deployed servers belong and the corresponding deployment positions.
5. The server scheduling method according to claim 1, wherein the server scheduling based on the scheduling policy includes:
issuing a scheduling inquiry, and in response to a confirmation indication for the inquiry, performing server scheduling based on the scheduling policy.
6. The scheduling method of a server according to claim 1, wherein after the acquiring the capacity state of the data center, further comprising:
And generating alarm information matched with the capacity state based on the capacity state and sending the alarm information to the associated equipment.
7. The server scheduling method according to claim 1, wherein the server scheduling based on the scheduling policy includes:
and generating a recording work order after the dispatching is finished, and sending the recording work order to the associated equipment.
8. A server scheduling apparatus comprising:
The first acquisition module is used for acquiring historical operation data of the deployed server;
The second acquisition module is used for acquiring attribute information of the data center and a reserved space of the cabinet position, wherein the attribute information of the data center comprises an electric power capacity threshold value and a refrigeration capacity threshold value;
the generation module is used for generating a scheduling strategy for the server based on the historical operation data, the attribute information and the reserved space;
the scheduling module is used for scheduling the server based on the scheduling strategy;
wherein, the generating module includes:
the first acquisition sub-module is used for acquiring the energy consumption data of the deployed server based on the historical operation data;
the second acquisition sub-module is used for acquiring the capacity state of the data center based on the attribute information and the energy consumption data of the deployed server;
and the first generation sub-module is configured to, in response to the capacity state of the data center being capacity surplus, determine the number and positions of servers to be deployed based on the reserved space and the deployment positions of the deployed servers, so as to generate a provisioning scheduling policy for adding servers.
9. The server scheduling apparatus of claim 8, wherein the generating module further comprises:
And the second generation sub-module is used for generating a migration scheduling strategy for migrating the deployed server in response to the overload of the capacity state of the data center.
10. The scheduling apparatus of claim 9, wherein the second generating sub-module comprises:
a first determining unit, configured to acquire capacity states of candidate data centers and determine a target data center with capacity surplus based on the capacity states of the candidate data centers;
a second determining unit, configured to determine the allowed number and allowed deployment positions of servers deployable in the target data center, based on the reserved space corresponding to the target data center and the deployment positions of the deployed servers; and
a first generating unit, configured to generate the migration scheduling policy according to the number of target servers and the target positions, wherein the migration scheduling policy comprises the allowed number and the allowed deployment positions.
11. The scheduling apparatus of the server according to claim 9 or 10, wherein the generating module further comprises:
The identification sub-module is used for acquiring the identification information of the deployed server and identifying the data center and the corresponding deployment position based on the identification information.
12. The scheduling apparatus of claim 8, wherein the scheduling module comprises:
And the scheduling sub-module is used for sending scheduling inquiry, responding to the inquiry determining indication, and scheduling the server based on the scheduling strategy.
13. The server scheduling apparatus of claim 8, wherein the second acquisition sub-module further comprises:
And the second generation unit is used for generating alarm information matched with the capacity state based on the capacity state and sending the alarm information to the associated equipment.
14. The scheduling apparatus of claim 8, wherein the scheduling module comprises:
and the sending submodule is used for generating a recording work order after the dispatching is completed and sending the recording work order to the associated equipment.
15. An electronic device comprising a processor and a memory;
Wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for implementing the scheduling method of the server according to any one of claims 1 to 7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the scheduling method of the server according to any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements a scheduling method of a server according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011625084.XA CN112667403B (en) | 2020-12-31 | 2020-12-31 | Scheduling method and device of server and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011625084.XA CN112667403B (en) | 2020-12-31 | 2020-12-31 | Scheduling method and device of server and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112667403A CN112667403A (en) | 2021-04-16 |
CN112667403B true CN112667403B (en) | 2024-06-18 |
Family
ID=75412303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011625084.XA Active CN112667403B (en) | 2020-12-31 | 2020-12-31 | Scheduling method and device of server and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112667403B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283715A (en) * | 2021-05-10 | 2021-08-20 | 建信金融科技有限责任公司 | Method, device, equipment and storage medium for determining placement of server |
CN113467944B (en) * | 2021-06-30 | 2022-04-01 | 西南大学 | Resource deployment device and method for complex software system |
CN114064282B (en) * | 2021-11-23 | 2023-07-25 | 北京百度网讯科技有限公司 | Resource mining method and device and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109471722A (en) * | 2018-10-19 | 2019-03-15 | 北京金山云网络技术有限公司 | Physical machine management method, device and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104423531A (en) * | 2013-09-05 | 2015-03-18 | 中兴通讯股份有限公司 | Data center energy consumption scheduling method and data center energy consumption scheduling device |
WO2017024005A1 (en) * | 2015-08-03 | 2017-02-09 | Convida Wireless, Llc | Mobile core network service exposure for the user equipment |
CN110597599A (en) * | 2019-09-16 | 2019-12-20 | 电子科技大学广东电子信息工程研究院 | Virtual machine migration method and system |
- 2020-12-31: CN application CN202011625084.XA filed (patent CN112667403B, status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109471722A (en) * | 2018-10-19 | 2019-03-15 | 北京金山云网络技术有限公司 | Physical machine management method, device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN112667403A (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112667403B (en) | Scheduling method and device of server and electronic equipment | |
US20170300359A1 (en) | Policy based workload scaler | |
CN111880914A (en) | Resource scheduling method, resource scheduling apparatus, electronic device, and storage medium | |
CN113590329A (en) | Resource processing method and device | |
EP4180956A1 (en) | Virtual-machine cold migration method and apparatus, electronic device and storage medium | |
CN113704058B (en) | Service model monitoring method and device and electronic equipment | |
CN113742457B (en) | Response processing method, device, electronic equipment and storage medium | |
CN113656239A (en) | Monitoring method and device for middleware and computer program product | |
CN116594563A (en) | Distributed storage capacity expansion method and device, electronic equipment and storage medium | |
US12007965B2 (en) | Method, device and storage medium for deduplicating entity nodes in graph database | |
CN116204310A (en) | Resource scheduling method, device, equipment and storage medium | |
CN115912355A (en) | Method, device, equipment and medium for dividing power supply area of transformer substation | |
CN115129565A (en) | Log data processing method, device, system, equipment and medium | |
CN116909757B (en) | Cluster management control system, method, electronic device and storage medium | |
CN115933839A (en) | Server noise reduction method, device, equipment and storage medium | |
CN113342463B (en) | Capacity adjustment method, device, equipment and medium of computer program module | |
CN114064282B (en) | Resource mining method and device and electronic equipment | |
CN113360258B (en) | Data processing method, device, electronic equipment and storage medium | |
CN113550893B (en) | Equipment detection method and device, electronic equipment and storage medium | |
CN112291292B (en) | Data storage method, device, equipment and medium | |
CN117611139A (en) | Method and device for determining equipment operation and maintenance strategy, electronic equipment and storage medium | |
CN115686862A (en) | Capacity data processing method, device, equipment and storage medium | |
CN116414999A (en) | Knowledge graph-based management method and device, electronic equipment and storage medium | |
CN118520092A (en) | Complaint event processing method and device, electronic equipment and storage medium | |
CN115718608A (en) | Parameter updating method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |