CN108519917B - Resource pool allocation method and device - Google Patents

Resource pool allocation method and device

Info

Publication number
CN108519917B
CN108519917B CN201810158890.7A CN201810158890A CN108519917B
Authority
CN
China
Prior art keywords
resource
task
resource pool
hardware resources
logic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810158890.7A
Other languages
Chinese (zh)
Other versions
CN108519917A (en)
Inventor
孙发强
黄道超
张鸿
刘欣然
朱春鸽
李正民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Computer Network and Information Security Management Center
Original Assignee
National Computer Network and Information Security Management Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Computer Network and Information Security Management Center
Priority to CN201810158890.7A priority Critical patent/CN108519917B/en
Publication of CN108519917A publication Critical patent/CN108519917A/en
Application granted granted Critical
Publication of CN108519917B publication Critical patent/CN108519917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a resource pool allocation method and device. The method comprises: dividing hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and dispatching each task to the logical resource pool of the corresponding type according to the type of the task, so that the task runs on hardware resources in that pool. The method and device first partition the physical hardware resources into resource pools, each pool is assigned a type when the pools are created, and tasks are allocated to the matching pools according to their types. The characteristics of both the resource sharing platform and the tasks are thus taken into account, the two are matched in time and space, and the resource utilization of the resource sharing platform is effectively improved.

Description

Resource pool allocation method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a resource pool allocation method and apparatus.
Background
A typical resource sharing platform is a data center, such as Amazon Cloud, Google Cloud, or Alibaba Cloud. Each data center has thousands of servers of multiple brands or series, and these servers are interconnected to form a resource sharing platform.
In the big data era, millions of tasks often run simultaneously on a resource sharing platform. Because resources are limited, the platform uses resource over-selling to avoid waste; fig. 1 illustrates how over-selling works. Resource over-selling means that, on a single server, the sum of the resource amounts allocated to tasks is larger than the capacity of the server. Since the resources actually consumed by a running task are, for much of the time, far less than its allocation, the server can still satisfy the tasks' resource requirements during execution even though the sum of allocations exceeds its capacity. With over-selling, the number of tasks executing simultaneously on a server is greater than without it.
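By way of illustration only (the patent contains no code), the following minimal Python sketch shows the over-selling arithmetic described above: the sum of allocations on a server may exceed its raw capacity as long as it stays within the capacity multiplied by an over-sell ratio. All numbers, field names and the function name are assumptions made for the example.

```python
# Illustrative over-selling admission check; the 64 GB capacity and the
# 1.5 over-sell ratio are assumed values, not taken from the patent.

def can_admit(allocations_gb, new_task_gb, capacity_gb, oversell_ratio):
    """Admit a new task if total allocations stay within capacity * ratio,
    even though that total may exceed the server's raw capacity."""
    return sum(allocations_gb) + new_task_gb <= capacity_gb * oversell_ratio

# A 64 GB server over-sold at 1.5x may carry up to 96 GB of allocations,
# because running tasks rarely consume their full allocation at once.
allocations = [32, 24, 16]  # 72 GB already allocated on a 64 GB server
print(can_admit(allocations, 20, capacity_gb=64, oversell_ratio=1.5))  # True  (92 <= 96)
print(can_admit(allocations, 40, capacity_gb=64, oversell_ratio=1.5))  # False (112 > 96)
```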
In the prior art, different tasks have different resource requirements, different resources (for example, different server brands or series) can affect the same task very differently, and even the same task may need different resources at different times. The resource over-selling mode avoids resource waste to some extent, but it cannot allocate resources according to the requirements of individual tasks and therefore cannot effectively improve resource utilization.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a resource pool allocation method and device, so as to solve the problem of low resource utilization rate in the prior art.
In order to solve the above technical problem, the invention adopts the following technical scheme:
the invention provides a resource pool allocation method, which comprises the following steps: dividing the hardware resources into different types of logic resource pools according to the resource information of the hardware resources; and issuing the task to a logic resource pool of a corresponding type according to the type of the task, and running on hardware resources in the logic resource pool.
The method for dividing the hardware resources into different types of logic resource pools according to the resource information of the hardware resources comprises the following steps: setting a plurality of over-sale resource pools with different over-sale ratios; and dividing the hardware resources into different over-sale resource pools according to the performance information of the hardware resources.
The method for dividing the hardware resources into different types of logic resource pools according to the resource information of the hardware resources comprises the following steps: in each over-sale resource pool, dividing the hardware resources into different sub-resource pools in the over-sale resource pool according to the configuration information and the performance information of the hardware resources.
After the task is issued to the logic resource pool of the corresponding type, the method further comprises the following steps: monitoring tasks in the logical resource pool; and when the task meets a preset migration condition, migrating the task running on the hardware resource to the hardware resource meeting a preset target condition in other logic resource pools.
Wherein the types include: supermarket, offline, online, interactive, compute intensive, access intensive, high input/output.
The invention also provides a resource pool allocation device, which comprises: a dividing module, configured to divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and a dispatching module, configured to dispatch a task to the logical resource pool of the corresponding type according to the type of the task, so that the task runs on hardware resources in that logical resource pool.
Wherein the dividing module is configured to: set a plurality of over-sell resource pools with different over-sell ratios; and divide the hardware resources into the different over-sell resource pools according to the performance information of the hardware resources.
Wherein the dividing module is further configured to: within each over-sell resource pool, divide the hardware resources into different sub-resource pools according to the configuration information and performance information of the hardware resources.
Wherein the device further comprises a migration module, configured to monitor the tasks in a logical resource pool after tasks have been dispatched to the logical resource pools of the corresponding types, and, when a task meets a preset migration condition, to migrate the task from the hardware resource it is running on to a hardware resource in another logical resource pool that meets a preset target condition.
Wherein the types include: over-sell, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
The invention has the following beneficial effects:
according to the method, the physical hardware resources are divided into the resource pools, each resource pool is provided with one type when the resource pools are divided, and the tasks are distributed to the corresponding resource pools according to the types when the tasks are distributed, so that the characteristics of the resource sharing platform and the tasks are sensed, the two are combined in time and space, and the resource utilization rate of the resource sharing platform is effectively improved.
Drawings
FIG. 1 is a diagram illustrating the working principle of resource over-selling in the prior art;
FIG. 2 is a flowchart of a resource pool allocation method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of the hardware resource partitioning step according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating the partitioning of hardware resources according to a second embodiment of the present invention;
FIG. 5 is a diagram illustrating the partitioning of hardware resources according to a second embodiment of the present invention;
fig. 6 is a structural diagram of a resource pool allocation apparatus according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Embodiment 1
The embodiment provides a resource pool allocation method. Fig. 2 is a flowchart of a resource pool allocation method according to a first embodiment of the present invention.
Step S210: according to the resource information of the hardware resources, divide the hardware resources into logical resource pools of different types.
The resource information includes, but is not limited to: CPU brand, CPU speed, CPU utilization, memory capacity, memory utilization, and network bandwidth. Resource information that changes as tasks run is treated as performance information; resource information that does not change as tasks run is treated as configuration information.
The types include, but are not limited to: over-sell, offline, online, interactive, compute-intensive, access-intensive, and high I/O (Input/Output).
Specifically, the partitioning rule may be set according to specific requirements. Hardware resources with the same resource information may be placed in the same type of logical resource pool; for example, hardware resources with a memory capacity of 1 TB may be placed in an access-intensive logical resource pool. Hardware resources meeting a preset type condition may also be placed in the logical resource pool corresponding to that condition; for example, a speed threshold may be set and hardware resources whose CPU speed exceeds the threshold placed in a compute-intensive logical resource pool.
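As a hedged illustration of step S210 (not part of the patent text), the sketch below groups hardware resources into typed logical pools using preset conditions such as a memory-capacity rule and a CPU-speed threshold. The thresholds, dictionary field names and pool names are assumptions chosen only to mirror the examples above.

```python
# Sketch of partitioning hardware resources into typed logical pools.
# Thresholds and field names are illustrative assumptions.

CPU_SPEED_THRESHOLD_GHZ = 3.0
MEMORY_THRESHOLD_GB = 1024  # e.g. 1 TB of memory -> access-intensive pool

def classify(resource):
    """Return the logical pool type for one hardware resource."""
    if resource["memory_gb"] >= MEMORY_THRESHOLD_GB:
        return "access-intensive"
    if resource["cpu_speed_ghz"] > CPU_SPEED_THRESHOLD_GHZ:
        return "compute-intensive"
    return "high-io" if resource.get("ssd") else "offline"

def build_pools(resources):
    """Group resources into pools keyed by their type."""
    pools = {}
    for r in resources:
        pools.setdefault(classify(r), []).append(r["name"])
    return pools

servers = [
    {"name": "server-1", "cpu_speed_ghz": 3.6, "memory_gb": 256, "ssd": False},
    {"name": "server-2", "cpu_speed_ghz": 2.4, "memory_gb": 1024, "ssd": False},
    {"name": "server-3", "cpu_speed_ghz": 2.4, "memory_gb": 128, "ssd": True},
]
print(build_pools(servers))
# {'compute-intensive': ['server-1'], 'access-intensive': ['server-2'], 'high-io': ['server-3']}
```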
Step S220: according to the type of the task, dispatch the task to the logical resource pool of the corresponding type, where it runs on the hardware resources in that pool.
The type of the task corresponds to the type of the logical resource pool.
In this embodiment, after a task is dispatched to the logical resource pool of the corresponding type, the tasks in that logical resource pool are monitored; when a task meets a preset migration condition, the task is migrated (transferred) from the hardware resource it is running on to a hardware resource in another logical resource pool that meets a preset target condition.
The migration condition may be set according to specific requirements. For example, the migration condition may be that the performance of the task is below a preset threshold and the hardware resources in the logical resource pool where the task is located cannot meet the task's performance requirement.
The target condition may likewise be set according to specific requirements; for example, the target condition may be that the hardware resources in the target logical resource pool can raise the task's performance to or above the preset threshold.
In other words, the migration condition means that the task's performance is below the preset threshold and the logical resource pool where the task is located cannot raise it above that threshold; the target condition means that migrating the task to the resources of the target logical resource pool can raise its performance above the threshold.
For example, if the floating-point throughput (performance) of a compute-intensive task is below 1 GFLOPS (the preset threshold) and the CPUs of its logical resource pool (e.g., Pentium CPUs) are busy (average CPU utilization of 90%), the task is judged to meet the preset migration condition and may be migrated to a higher-performance logical resource pool, such as a Xeon server cluster in which the CPU utilization of the hardware resources is below 60% (the target condition).
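For illustration only, the sketch below encodes the migration check from the example above: a task is a migration candidate when its measured performance falls below the threshold while its pool's CPUs are busy, and the target is a pool whose average CPU utilization is below 60%. The threshold values mirror the figures in the text (1 GFLOPS, 90%, 60%); the data structures and names are assumptions.

```python
# Sketch of the migration condition and target condition from the example.
# Pool names and the monitoring data structures are illustrative assumptions.

PERF_THRESHOLD_GFLOPS = 1.0   # preset performance threshold
BUSY_CPU_UTILIZATION = 0.9    # "busy" source pool
TARGET_CPU_UTILIZATION = 0.6  # target pool must be below this

def should_migrate(task_gflops, pool_avg_cpu_util):
    """Migration condition: task underperforms and its pool's CPUs are busy."""
    return (task_gflops < PERF_THRESHOLD_GFLOPS
            and pool_avg_cpu_util >= BUSY_CPU_UTILIZATION)

def pick_target_pool(pools):
    """Target condition: a pool whose average CPU utilization is below 60%."""
    candidates = [p for p in pools if p["avg_cpu_util"] < TARGET_CPU_UTILIZATION]
    return min(candidates, key=lambda p: p["avg_cpu_util"]) if candidates else None

pools = [{"name": "pentium-pool", "avg_cpu_util": 0.92},
         {"name": "xeon-pool", "avg_cpu_util": 0.45}]
if should_migrate(task_gflops=0.7, pool_avg_cpu_util=0.92):
    print("migrate to", pick_target_pool(pools)["name"])  # migrate to xeon-pool
```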
In this embodiment, before a task is dispatched to the logical resource pool of the corresponding type, a virtual machine or a container may be set up on the hardware resources of the logical resource pool; after the task is dispatched to a hardware resource, it runs in that virtual machine or container.
In this embodiment, a logical resource pool contains a plurality of hardware resources; before a task is dispatched to the logical resource pool of the corresponding type, the hardware resource that will run the task may be selected within the pool by a load-balancing algorithm, and the task is then dispatched to the selected hardware resource, as sketched below.
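The patent does not name a particular load-balancing algorithm, so the following is only a minimal "least-loaded host that fits" sketch of selecting a hardware resource inside the chosen logical resource pool before dispatching the task; the field names and figures are assumptions.

```python
# Sketch of picking a host inside a logical resource pool by load balancing.
# Host fields (free_cpu, free_mem_gb) are illustrative assumptions.

def pick_host(pool_hosts, task_cpu_demand, task_mem_gb):
    """Pick the least-loaded host in the pool that can still fit the task."""
    feasible = [h for h in pool_hosts
                if h["free_cpu"] >= task_cpu_demand and h["free_mem_gb"] >= task_mem_gb]
    if not feasible:
        return None
    # Prefer the host with the most free CPU (a simple least-loaded rule).
    return max(feasible, key=lambda h: h["free_cpu"])

hosts = [{"name": "server-1", "free_cpu": 2.0, "free_mem_gb": 8},
         {"name": "server-2", "free_cpu": 6.0, "free_mem_gb": 32}]
print(pick_host(hosts, task_cpu_demand=1.5, task_mem_gb=16)["name"])  # server-2
```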
In this embodiment, the physical hardware resources are partitioned into resource pools, each pool is assigned a type when the pools are created, and tasks are allocated to the matching pools according to their types. The characteristics of both the resource sharing platform and the tasks are thus taken into account, the two are matched in time and space, and the resource utilization of the resource sharing platform is effectively improved.
Embodiment 2
The present embodiment describes the division of hardware resources. In this embodiment, logical resource pools of different levels are set, so as to obtain logical resource pools of finer granularity.
Fig. 3 is a flowchart of the hardware resource partitioning step according to the second embodiment of the present invention.
Step S310: set a plurality of over-sell resource pools with different over-sell ratios.
Because hardware resources differ in performance, the over-sell ratios they can sustain also differ. To further improve resource utilization, a plurality of over-sell resource pools are set, and an over-sell ratio is assigned to each of them.
Step S320: divide the hardware resources into the different over-sell resource pools according to their performance information.
For example, hardware resources with good performance may be placed in an over-sell resource pool with a large over-sell ratio, and hardware resources with poor performance in an over-sell resource pool with a small over-sell ratio.
Step S330: within each over-sell resource pool, divide the hardware resources into different sub-resource pools according to their configuration information and performance information.
A sub-resource pool is a finer-grained logical resource pool carved out of an over-sell resource pool (the parent resource pool). Each sub-resource pool has a corresponding type.
The types of the sub-resource pools include, but are not limited to: offline, online, interactive, compute-intensive, access-intensive, and high I/O.
In this embodiment, one or more parent resource pools and one or more sub-resource pools may be set as needed.
In this embodiment, each logical resource pool (over-sell resource pool or sub-resource pool) corresponds to one or more sets of servers (hardware resources), and the same server may belong to more than one logical resource pool.
As shown in fig. 4 and 5, an over-sell area 1 resource pool, an over-sell area 2 resource pool, and an over-sell area 3 resource pool are set in the resource sharing platform. Taking over-sell area 1 as an example, a compute-intensive resource pool, an access-intensive resource pool, and a high I/O resource pool are set within it; servers 1 to 3 are assigned to the compute-intensive resource pool and also to the access-intensive resource pool (i.e., the hardware resources of the compute-intensive pool overlap with those of the access-intensive pool), and 4 SSD (Solid State Drive) servers are assigned to the high I/O resource pool.
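To make the two-level layout of fig. 4 and 5 concrete, the sketch below models over-sell parent pools that each contain typed sub-resource pools, with the same servers appearing in more than one sub-pool. The over-sell ratios and server names are assumptions used only to mirror the example; they are not specified by the patent.

```python
# Sketch of the hierarchical pool structure: over-sell areas (parent pools)
# holding typed sub-resource pools whose server sets may overlap.
# Ratios and server names are illustrative assumptions.

platform = {
    "oversell-area-1": {
        "oversell_ratio": 1.5,
        "sub_pools": {
            "compute-intensive": ["server-1", "server-2", "server-3"],
            "access-intensive":  ["server-1", "server-2", "server-3"],  # same servers, overlapping pools
            "high-io":           ["ssd-1", "ssd-2", "ssd-3", "ssd-4"],
        },
    },
    "oversell-area-2": {"oversell_ratio": 1.2, "sub_pools": {}},
    "oversell-area-3": {"oversell_ratio": 2.0, "sub_pools": {}},
}

def servers_for(area, pool_type):
    """Look up the servers of a typed sub-pool inside an over-sell area."""
    return platform[area]["sub_pools"].get(pool_type, [])

print(servers_for("oversell-area-1", "access-intensive"))
# ['server-1', 'server-2', 'server-3']
```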
The hierarchical logical partitioning of resource pools in this embodiment allows each resource pool to handle tasks in a more targeted way. Allocating different tasks to different resource pools improves task processing performance, and allocating tasks with complementary resource requirements to the same server increases the concurrency of task processing, reduces resource conflicts, and further improves resource utilization.
In this embodiment, when a task is allocated, the logical resource pool corresponding to its type is located according to the type of the task. Since every task can be executed under over-selling, any task can run in any over-sell resource pool.
An online task has high response-performance requirements, and its resources are occupied all the time;
an offline task consumes a large amount of resources and has no special performance requirement, but must be completed before a specified time point;
an interactive task has high response-performance requirements, but its resources are occupied only while a user is using it;
a compute-intensive task has high requirements on computing performance;
an access-intensive task has high requirements on memory capacity;
a high-I/O task performs a large amount of interface (input/output) access.
Because different kinds of tasks occupy resources at different times, this embodiment identifies the type of each task and dispatches it to the logical resource pool of the corresponding type for processing, which both speeds up processing and improves resource utilization.
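As a purely illustrative sketch (the mapping below is an assumption, not a table from the patent), type-based dispatch reduces to looking up the pool whose type matches the task's type and enqueueing the task there:

```python
# Sketch of dispatching a task to the logical resource pool matching its type.
# The task-to-pool mapping and the queue representation are assumptions.

TASK_TYPE_TO_POOL = {
    "online": "online-pool",
    "offline": "offline-pool",
    "interactive": "interactive-pool",
    "compute-intensive": "compute-intensive-pool",
    "access-intensive": "access-intensive-pool",
    "high-io": "high-io-pool",
}

def dispatch(task, pools):
    """Route the task to the pool whose type matches the task's type."""
    pool_name = TASK_TYPE_TO_POOL[task["type"]]
    pools.setdefault(pool_name, []).append(task["id"])
    return pool_name

pools = {}
print(dispatch({"id": "job-42", "type": "compute-intensive"}, pools))  # compute-intensive-pool
```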
When the performance of the hardware resources running a task is insufficient, the task may be migrated between sub-resource pools or directly between parent resource pools, as shown in fig. 4 and 5.
Embodiment 3
The embodiment provides a resource pool allocation device. Fig. 6 is a structural diagram of a resource pool allocation apparatus according to a third embodiment of the present invention.
The resource pool allocation device comprises:
the partitioning module 610 is configured to partition the hardware resources into different types of logical resource pools according to the resource information of the hardware resources.
And the issuing module 620 is configured to issue the task to a logic resource pool of a corresponding type according to the type of the task, and run on a hardware resource in the logic resource pool.
Optionally, the dividing module 610 is configured to: setting a plurality of over-sale resource pools with different over-sale ratios; and dividing the hardware resources into different over-sale resource pools according to the performance information of the hardware resources.
Optionally, the dividing module 610 is further configured to: in each over-sale resource pool, dividing the hardware resources into different sub-resource pools in the over-sale resource pool according to the configuration information and the performance information of the hardware resources.
Optionally, the apparatus further comprises a migration module (not shown in the figures); the migration module is used for monitoring the tasks in the logic resource pool after the tasks are issued to the logic resource pools of the corresponding types; and when the task meets a preset migration condition, migrating the task running on the hardware resource to the hardware resource meeting a preset target condition in other logic resource pools.
Optionally, the types include: supermarket, offline, online, interactive, compute intensive, access intensive, high input/output.
The functions of the apparatus in this embodiment have already been described in the method embodiments shown in fig. 2 to fig. 5, so that reference may be made to the related descriptions in the foregoing embodiments for details in the description of this embodiment, which are not repeated herein.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, and the scope of the invention should not be limited to the embodiments described above.

Claims (2)

1. A resource pool allocation method is characterized by comprising the following steps:
dividing hardware resources into logical resource pools of different preset types according to the resource information of the hardware resources, wherein the resource information includes: CPU brand, CPU speed, CPU utilization, memory capacity, memory utilization and network bandwidth, and the types of the logical resource pools include: over-sell, offline, online, interactive, compute-intensive, access-intensive and high input/output;
dispatching a task to the logical resource pool of the corresponding type according to the type of the task, so that the task runs on hardware resources in the logical resource pool;
wherein the type of the task corresponds to the type of the logical resource pool; monitoring the tasks in the logical resource pool; and the hardware resources corresponding to different logical resource pools may overlap;
when the task meets a preset migration condition, migrating the task from the hardware resource it is running on to a hardware resource in another logical resource pool that meets a preset target condition, wherein the migration condition is that the performance of the task is below a preset threshold and the hardware resources in the logical resource pool where the task is located cannot meet the task's performance requirement;
wherein dividing the hardware resources into logical resource pools of different types according to the resource information of the hardware resources comprises:
setting a plurality of over-sell resource pools with different over-sell ratios;
dividing the hardware resources into the different over-sell resource pools according to the performance information of the hardware resources;
and further comprises:
within each over-sell resource pool, dividing the hardware resources into different sub-resource pools according to the configuration information and performance information of the hardware resources;
and when the task is migrated, migrating it between sub-resource pools within the same over-sell resource pool or between different over-sell resource pools.
2. An apparatus for allocating resource pools, comprising:
a dividing module, configured to divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources, wherein the resource information includes: CPU brand, CPU speed, CPU utilization, memory capacity, memory utilization and network bandwidth, and the types of the logical resource pools include: over-sell, offline, online, interactive, compute-intensive, access-intensive and high input/output;
a dispatching module, configured to dispatch a task to the logical resource pool of the corresponding preset type according to the type of the task, so that the task runs on hardware resources in the logical resource pool, wherein the type of the task corresponds to the type of the logical resource pool, and the hardware resources corresponding to different logical resource pools may overlap;
a migration module, configured to monitor the tasks in a logical resource pool after tasks have been dispatched to the logical resource pools of the corresponding types, and, when a task meets a preset migration condition, to migrate the task from the hardware resource it is running on to a hardware resource in another logical resource pool that meets a preset target condition, wherein the migration condition is that the performance of the task is below a preset threshold and the hardware resources in the logical resource pool where the task is located cannot meet the task's performance requirement;
wherein the dividing module is configured to:
set a plurality of over-sell resource pools with different over-sell ratios;
divide the hardware resources into the different over-sell resource pools according to the performance information of the hardware resources;
and the dividing module is further configured to:
within each over-sell resource pool, divide the hardware resources into different sub-resource pools according to the configuration information and performance information of the hardware resources;
and when a task is migrated, migrate it between sub-resource pools within the same over-sell resource pool or between different over-sell resource pools.
CN201810158890.7A 2018-02-24 2018-02-24 Resource pool allocation method and device Active CN108519917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810158890.7A CN108519917B (en) 2018-02-24 2018-02-24 Resource pool allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810158890.7A CN108519917B (en) 2018-02-24 2018-02-24 Resource pool allocation method and device

Publications (2)

Publication Number Publication Date
CN108519917A CN108519917A (en) 2018-09-11
CN108519917B true CN108519917B (en) 2023-04-07

Family

ID=63433301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810158890.7A Active CN108519917B (en) 2018-02-24 2018-02-24 Resource pool allocation method and device

Country Status (1)

Country Link
CN (1) CN108519917B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110928649A (en) * 2018-09-19 2020-03-27 北京国双科技有限公司 Resource scheduling method and device
CN109471727B (en) * 2018-10-29 2021-01-22 北京金山云网络技术有限公司 Task processing method, device and system
CN109558245A (en) * 2018-12-03 2019-04-02 群蜂信息技术(上海)有限公司 A kind of method for processing business based on microserver framework, device and server
CN109634888A (en) * 2018-12-12 2019-04-16 浪潮(北京)电子信息产业有限公司 A kind of FC interface card exchange resource identification processing method and associated component
CN111144830A (en) * 2019-11-20 2020-05-12 上海泛云信息科技有限公司 Enterprise-level computing resource management method, system and computer equipment
CN112948067A (en) * 2019-12-11 2021-06-11 北京金山云网络技术有限公司 Service scheduling method and device, electronic equipment and storage medium
CN112965806B (en) * 2021-03-26 2023-08-04 北京汇钧科技有限公司 Method and device for determining resources
CN113535405A (en) * 2021-07-30 2021-10-22 上海壁仞智能科技有限公司 Cloud service system and operation method thereof
CN113553195A (en) * 2021-09-22 2021-10-26 苏州浪潮智能科技有限公司 Memory pool resource sharing method, device, equipment and readable medium
CN114356586B (en) * 2022-03-17 2022-09-02 飞腾信息技术有限公司 Processor and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102958166A (en) * 2011-08-29 2013-03-06 华为技术有限公司 Resource allocation method and resource management platform
WO2016176231A1 (en) * 2015-04-29 2016-11-03 Microsoft Technology Licensing, Llc Optimal allocation of dynamic cloud computing platform resources
CN107368336A (en) * 2017-07-25 2017-11-21 郑州云海信息技术有限公司 A kind of cloud data center deployed with devices and the method and apparatus of management

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8489797B2 (en) * 2009-09-30 2013-07-16 International Business Machines Corporation Hardware resource arbiter for logical partitions
CN105320559B (en) * 2014-07-30 2019-02-19 中国移动通信集团广东有限公司 A kind of dispatching method and device of cloud computing system
CN107305505A (en) * 2016-04-20 2017-10-31 中兴通讯股份有限公司 The operation method and virtual platform of virtual platform

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102958166A (en) * 2011-08-29 2013-03-06 华为技术有限公司 Resource allocation method and resource management platform
WO2016176231A1 (en) * 2015-04-29 2016-11-03 Microsoft Technology Licensing, Llc Optimal allocation of dynamic cloud computing platform resources
CN107368336A (en) * 2017-07-25 2017-11-21 郑州云海信息技术有限公司 A kind of cloud data center deployed with devices and the method and apparatus of management

Also Published As

Publication number Publication date
CN108519917A (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN108519917B (en) Resource pool allocation method and device
US10728091B2 (en) Topology-aware provisioning of hardware accelerator resources in a distributed environment
CN107025205B (en) Method and equipment for training model in distributed system
US20070169127A1 (en) Method, system and computer program product for optimizing allocation of resources on partitions of a data processing system
JP6241300B2 (en) Job scheduling apparatus, job scheduling method, and job scheduling program
CN104008013A (en) Core resource allocation method and apparatus and multi-core system
CN110221920B (en) Deployment method, device, storage medium and system
CN104598316A (en) Storage resource distribution method and device
US11544113B2 (en) Task scheduling for machine-learning workloads
US20230037293A1 (en) Systems and methods of hybrid centralized distributive scheduling on shared physical hosts
AU2018303662B2 (en) Scalable statistics and analytics mechanisms in cloud networking
US10599436B2 (en) Data processing method and apparatus, and system
CN108256182B (en) Layout method of dynamically reconfigurable FPGA
WO2016202153A1 (en) Gpu resource allocation method and system
CN114625500A (en) Method and application for scheduling micro-service application based on topology perception in cloud environment
US20230367654A1 (en) Automatic node fungibility between compute and infrastructure nodes in edge zones
CN115658311A (en) Resource scheduling method, device, equipment and medium
CN115705247A (en) Process running method and related equipment
CN106447755A (en) Animation rendering system
WO2017133421A1 (en) Method and device for sharing resources among multiple tenants
CN109558214B (en) Host machine resource management method and device in heterogeneous environment and storage medium
CN114281516A (en) Resource allocation method and device based on NUMA attribute
CN112416538A (en) Multilayer architecture and management method of distributed resource management framework
CN112988367A (en) Resource allocation method and device, computer equipment and readable storage medium
US11886926B1 (en) Migrating workloads between computing platforms according to resource utilization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant