CN108519917A - Resource pool allocation method and device - Google Patents
Resource pool allocation method and device
- Publication number
- CN108519917A CN108519917A CN201810158890.7A CN201810158890A CN108519917A CN 108519917 A CN108519917 A CN 108519917A CN 201810158890 A CN201810158890 A CN 201810158890A CN 108519917 A CN108519917 A CN 108519917A
- Authority
- CN
- China
- Prior art keywords
- resource
- task
- hardware
- pond
- oversold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5017—Task decomposition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a resource pool allocation method and device. The method includes: dividing hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and, according to the type of a task, dispatching the task to the logical resource pool of the corresponding type to run on the hardware resources in that pool. The invention first partitions physical hardware resources into resource pools, each pool having a single type; when a task is distributed, it is assigned to the resource pool matching its type. The allocation thus perceives the characteristics of both the resource-sharing platform and the tasks, combining the two in time and space and effectively improving the resource utilization of the resource-sharing platform.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a resource pool allocation method and device.
Background technology
Typical resource-sharing platforms are data centers, such as Amazon Cloud, Google Cloud, and Alibaba Cloud. Each data center holds thousands of servers of multiple brands or series, and these interconnected servers form a resource-sharing platform.
In the big-data era, a resource-sharing platform usually runs on the order of a million tasks at the same time. With resources being limited, a resource-sharing platform avoids wasting resources by overselling them; Fig. 1 illustrates the operating principle of resource overselling. Resource overselling means that, on a single server, the total amount of resources that the platform allocates to tasks exceeds the capacity of the server. Because for much of the time a running task occupies far less than its allocated resources, the server can still satisfy the tasks' resource demands during execution even though the sum of their allocations exceeds the server's capacity. With overselling, a server therefore executes more tasks simultaneously than it would without it.
In the prior art, different tasks have different resource demands, and different resources (for example, different server brands or series) can affect the same task very differently; indeed, even the same task may have different resource demands at different times. Although resource overselling avoids wasting resources to some extent, it cannot allocate resources according to the demands of the tasks, and therefore cannot effectively improve resource utilization.
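The admission rule behind overselling described above can be sketched as follows (an illustrative sketch only; the function name, capacity units, and the 1.5x ratio are assumptions, not values taken from the patent):

```python
def fits_with_oversold(allocations, capacity, oversold_ratio):
    """Return True if the nominal task allocations fit on a server
    whose capacity is stretched by the given oversold ratio."""
    return sum(allocations) <= capacity * oversold_ratio

# A 100-unit server with a 1.5x oversold ratio accepts 130 units of
# nominal allocations, because tasks rarely use their full share.
print(fits_with_oversold([50, 40, 40], 100, 1.5))  # True
print(fits_with_oversold([50, 40, 40], 100, 1.0))  # False
```

The second call shows the non-oversold case: the same three tasks would be rejected if allocations had to fit within raw capacity.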
Invention content
The technical problem to be solved by the present invention is to provide a resource pool allocation method and device, so as to solve the problem of low resource utilization in the prior art.
In order to solve the above technical problem, the present invention adopts the following technical solutions:
The present invention provides a resource pool allocation method, including: dividing hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and, according to the type of a task, dispatching the task to the logical resource pool of the corresponding type to run on the hardware resources in that pool.
Dividing the hardware resources into logical resource pools of different types according to their resource information includes: setting up multiple oversold resource pools with different oversold ratios; and dividing the hardware resources into the different oversold resource pools according to their performance information.
Dividing the hardware resources into logical resource pools of different types according to their resource information further includes: within each oversold resource pool, dividing the hardware resources into different child resource pools of that oversold resource pool according to their configuration information and performance information.
After the task is dispatched to the logical resource pool of the corresponding type, the method further includes: monitoring the tasks in the logical resource pool; and, when a task meets a preset migration condition, migrating the task running on the hardware resource to a hardware resource in another logical resource pool that meets a preset target condition.
The types include: oversold, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
The present invention also provides a resource pool allocation device, including: a division module, configured to divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and a dispatch module, configured to dispatch a task, according to its type, to the logical resource pool of the corresponding type to run on the hardware resources in that pool.
The division module is configured to: set up multiple oversold resource pools with different oversold ratios; and divide the hardware resources into the different oversold resource pools according to their performance information.
The division module is further configured to: within each oversold resource pool, divide the hardware resources into different child resource pools of that oversold resource pool according to their configuration information and performance information.
The device further includes a migration module. The migration module is configured to: after the task is dispatched to the logical resource pool of the corresponding type, monitor the tasks in the logical resource pool; and, when a task meets a preset migration condition, migrate the task running on the hardware resource to a hardware resource in another logical resource pool that meets a preset target condition.
The types include: oversold, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
The present invention has the following beneficial effects:
The present invention first partitions physical hardware resources into resource pools, each pool having a single type. When a task is distributed, it is assigned to the resource pool matching its type. The allocation thus perceives the characteristics of both the resource-sharing platform and the tasks, combining the two in time and space and effectively improving the resource utilization of the resource-sharing platform.
Description of the drawings
Fig. 1 is a schematic diagram of the operating principle of resource overselling in the prior art;
Fig. 2 is a flow chart of the resource pool allocation method according to the first embodiment of the present invention;
Fig. 3 is a flow chart of the steps for dividing hardware resources according to the second embodiment of the present invention;
Fig. 4 is a schematic diagram of the division of hardware resources according to the second embodiment of the present invention;
Fig. 5 is a schematic diagram of the division of hardware resources according to the second embodiment of the present invention;
Fig. 6 is a structural diagram of the resource pool allocation device according to the third embodiment of the present invention.
Detailed description of the embodiments
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
Embodiment one
This embodiment provides a resource pool allocation method. Fig. 2 is a flow chart of the resource pool allocation method according to the first embodiment of the present invention.
Step S210: divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources.
The resource information includes, but is not limited to: CPU brand, CPU speed, CPU utilization, memory size, memory utilization, and network bandwidth. Resource information that changes as tasks run can serve as performance information; resource information that does not change as tasks run can serve as configuration information.
The types include, but are not limited to: oversold, offline, online, interactive, compute-intensive, access-intensive, and high I/O (input/output).
Specifically, the division scheme can be configured according to specific requirements. Hardware resources with the same resource information can be divided into the same class of logical resource pool; for example, hardware resources with a memory capacity of 1 TB are divided into the access-intensive logical resource pool. Hardware resources meeting a preset type condition can be divided into the logical resource pool corresponding to that condition; for example, a speed threshold is set, and hardware resources whose CPU speed exceeds the threshold are divided into the compute-intensive logical resource pool.
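The threshold-based division just described can be sketched as follows (the threshold values, pool names, and field names are hypothetical illustrations, not part of the disclosure):

```python
# Hypothetical thresholds for illustration only.
SPEED_THRESHOLD_GHZ = 3.0
MEMORY_THRESHOLD_GB = 1024  # 1 TB

def classify(resource_info):
    """Map one server's resource information to a logical pool type."""
    if resource_info["cpu_speed_ghz"] > SPEED_THRESHOLD_GHZ:
        return "compute-intensive"
    if resource_info["memory_gb"] >= MEMORY_THRESHOLD_GB:
        return "access-intensive"
    return "high-io"

servers = [
    {"name": "server1", "cpu_speed_ghz": 3.5, "memory_gb": 256},
    {"name": "server2", "cpu_speed_ghz": 2.2, "memory_gb": 1024},
]
pools = {}
for s in servers:
    pools.setdefault(classify(s), []).append(s["name"])
print(pools)  # {'compute-intensive': ['server1'], 'access-intensive': ['server2']}
```

Each server lands in exactly one pool here; as Embodiment two notes, the pools of a real deployment may also overlap on the same servers.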
Step S220: according to the type of a task, dispatch the task to the logical resource pool of the corresponding type, where it runs on the hardware resources in that pool.
The type of the task corresponds to the type of the logical resource pool.
In this embodiment, after the task is dispatched to the logical resource pool of the corresponding type, the tasks in the logical resource pool are monitored; when a task meets a preset migration condition, the task running on the hardware resource is migrated (transferred) to a hardware resource in another logical resource pool that meets a preset target condition.
The migration condition can be configured according to specific requirements. For example, the migration condition is that the performance of a task is below a preset threshold and the hardware resources in the task's current logical resource pool cannot satisfy the task's performance requirement.
The target condition can likewise be configured according to specific requirements; for example, the target condition is that the hardware resources in a logical resource pool can raise the task's performance to the preset threshold or above.
In other words, the migration condition means that the task's performance is below the preset threshold and the logical resource pool where the task resides cannot raise it above that threshold; the target condition means that the resources in the target logical resource pool can raise the migrated task's performance above the threshold.
For example: if the floating-point throughput (performance) of a CPU-intensive task is below 1 GFLOPS (the preset threshold), and the logical resource pool where it runs (for example, Pentium CPUs) is busy (average CPU utilization of 90%), the task is judged to meet the preset migration condition, and it can be migrated to a higher-performance logical resource pool whose hardware resources have a CPU utilization below 60% (the target condition), such as a Xeon server cluster.
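The Pentium-to-Xeon example above can be sketched as the following checks (the thresholds, pool names, and field names are assumptions mirroring the example, not values fixed by the patent):

```python
# Illustrative migration check; 1 GFLOPS and 60% come from the
# worked example above, 90% marks a saturated pool.
PERF_THRESHOLD_GFLOPS = 1.0
TARGET_MAX_CPU_UTIL = 0.60

def needs_migration(task, current_pool):
    """Migration condition: task underperforms and its pool is saturated."""
    return (task["gflops"] < PERF_THRESHOLD_GFLOPS
            and current_pool["avg_cpu_util"] >= 0.90)

def pick_target(pools):
    """Target condition: the first pool idle enough to restore performance."""
    for pool in pools:
        if pool["avg_cpu_util"] < TARGET_MAX_CPU_UTIL:
            return pool["name"]
    return None

task = {"gflops": 0.7}
current = {"name": "pentium-pool", "avg_cpu_util": 0.90}
others = [{"name": "xeon-pool", "avg_cpu_util": 0.35}]
if needs_migration(task, current):
    print("migrate to", pick_target(others))  # migrate to xeon-pool
```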
In this embodiment, before the task is dispatched to the logical resource pool of the corresponding type, a virtual machine or container can be set up on the hardware resources of the logical resource pool, so that after the task is dispatched and reaches the hardware resource it runs inside the virtual machine or container; when the task needs to be migrated, only the virtual machine or container running the task needs to be migrated.
In this embodiment, a logical resource pool contains multiple hardware resources. Before the task is dispatched to the logical resource pool of the corresponding type, a hardware resource able to run the task can be determined within the pool according to a load-balancing algorithm, and the task is then dispatched to that hardware resource.
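The patent does not specify a particular load-balancing algorithm; a minimal least-loaded selection within a pool, under assumed field names, might look like this:

```python
def pick_server(pool, task_demand):
    """Choose the least-loaded server that still has room for the task."""
    candidates = [s for s in pool if s["free"] >= task_demand]
    if not candidates:
        return None  # no server in this pool can run the task
    return min(candidates, key=lambda s: s["load"])["name"]

pool = [
    {"name": "server1", "free": 8, "load": 0.7},
    {"name": "server2", "free": 16, "load": 0.3},
]
print(pick_server(pool, task_demand=10))  # server2
```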
This embodiment first partitions physical hardware resources into resource pools, each pool having a single type. When a task is distributed, it is assigned to the resource pool matching its type. The allocation thus perceives the characteristics of both the resource-sharing platform and the tasks, combining the two in time and space and effectively improving the resource utilization of the resource-sharing platform.
Embodiment two
This embodiment describes the division of hardware resources. In this embodiment, logical resource pools at different levels are set up, yielding finer-grained logical resource pools.
Fig. 3 is a flow chart of the steps for dividing hardware resources according to the second embodiment of the present invention.
Step S310: set up multiple oversold resource pools with different oversold ratios.
Because hardware resources differ in performance, the oversold ratio they can bear also differs. To further improve resource utilization, multiple oversold resource pools are set up, and an oversold ratio is configured for each oversold resource pool.
Step S320: divide the hardware resources into the different oversold resource pools according to their performance information. For example, high-performance hardware resources are divided into an oversold resource pool with a large oversold ratio, and low-performance hardware resources into an oversold resource pool with a small oversold ratio.
Step S330: within each oversold resource pool, divide the hardware resources into the different child resource pools of that oversold resource pool according to their configuration information and performance information.
A child resource pool is a finer-grained logical resource pool divided within an oversold resource pool (the parent resource pool). Each child resource pool has its corresponding type.
The types of child resource pools include, but are not limited to: offline, online, interactive, compute-intensive, access-intensive, and high I/O.
In this embodiment, one or more parent resource pools and one or more child resource pools can be set up as required.
In this embodiment, each logical resource pool (oversold resource pool or child resource pool) corresponds to one or more sets of servers (hardware resources), and different logical resource pools may correspond to the same servers.
As shown in Fig. 4 and Fig. 5, the resource-sharing platform is provided with an oversold-zone-1 resource pool, an oversold-zone-2 resource pool, and an oversold-zone-3 resource pool. Taking oversold zone 1 as an example, a compute-intensive resource pool, an access-intensive resource pool, and a high-I/O resource pool are set up within it. Servers 1 to 3 are divided into the compute-intensive resource pool and also into the access-intensive resource pool, i.e., the hardware resources of the compute-intensive resource pool overlap those of the access-intensive resource pool; four SSD (solid-state drive) servers are divided into the high-I/O resource pool.
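The two-level hierarchy of Fig. 4 and Fig. 5 can be sketched as nested mappings (the oversold ratios and server names are illustrative assumptions; the overlap of servers 1-3 between two child pools follows the example above):

```python
# Parent pools (oversold zones) each carry an assumed oversold ratio
# and a set of typed child pools mapping to server sets.
platform = {
    "oversold-zone-1": {
        "oversold_ratio": 1.5,
        "children": {
            "compute-intensive": ["server1", "server2", "server3"],
            "access-intensive": ["server1", "server2", "server3"],  # overlaps
            "high-io": ["ssd1", "ssd2", "ssd3", "ssd4"],
        },
    },
    "oversold-zone-2": {"oversold_ratio": 1.2, "children": {}},
    "oversold-zone-3": {"oversold_ratio": 1.0, "children": {}},
}

def servers_for(zone, task_type):
    """Resolve a (parent pool, child pool) pair to its server set."""
    return platform[zone]["children"].get(task_type, [])

print(servers_for("oversold-zone-1", "high-io"))  # ['ssd1', 'ssd2', 'ssd3', 'ssd4']
```

The same server appearing under two child pools is what lets tasks with complementary demands share hardware, as the next paragraph explains.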
The hierarchical logical partitioning of resource pools in this embodiment makes task processing by resource pools more targeted: different tasks are assigned to different resource pools, which improves task-processing performance, and tasks with complementary resource demands are assigned to the same server, which improves the concurrency of task processing, reduces resource contention, and in turn improves resource utilization.
In this embodiment, when a task is distributed, the logical resource pool corresponding to the task's type is located according to that type.
In this embodiment, because tasks can execute in resource-oversold mode, any task can run in any oversold resource pool.
An online task requires high response performance and occupies resources at all times;
an offline task consumes a large amount of resources and has no performance requirement, but must complete before a specified point in time;
an interactive task requires high response performance, but occupies resources only when users use it;
a compute-intensive task requires high computational performance from resources;
an access-intensive task requires a large resource memory capacity;
a high-I/O task makes a high volume of accesses to interfaces.
Because the different types of tasks occupy resources at different times, this embodiment, after identifying a task's type, dispatches the task to the logical resource pool of the corresponding type for processing, which can greatly improve processing speed and raise resource utilization.
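The type-based dispatch step can be sketched as a simple lookup (the pool names and the fallback are hypothetical; the fallback reflects the earlier statement that any task may run in any oversold resource pool):

```python
# Hypothetical mapping from task type to child pool, mirroring the
# task types listed above.
TYPE_TO_POOL = {
    "online": "online-pool",
    "offline": "offline-pool",
    "interactive": "interactive-pool",
    "compute-intensive": "compute-pool",
    "access-intensive": "memory-pool",
    "high-io": "io-pool",
}

def dispatch(task):
    """Route a task to the pool matching its type; fall back to a
    generic oversold pool for unrecognized types."""
    return TYPE_TO_POOL.get(task["type"], "oversold-pool")

print(dispatch({"id": 1, "type": "compute-intensive"}))  # compute-pool
print(dispatch({"id": 2, "type": "unknown"}))            # oversold-pool
```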
When the performance of the hardware resource running a task is insufficient, the task can migrate between child resource pools, or directly between parent resource pools, as shown in Fig. 4 and Fig. 5.
Embodiment three
This embodiment provides a resource pool allocation device. Fig. 6 is a structural diagram of the resource pool allocation device according to the third embodiment of the present invention.
The resource pool allocation device includes:
a division module 610, configured to divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources; and
a dispatch module 620, configured to dispatch a task, according to its type, to the logical resource pool of the corresponding type to run on the hardware resources in that pool.
Optionally, the division module 610 is configured to: set up multiple oversold resource pools with different oversold ratios; and divide the hardware resources into the different oversold resource pools according to their performance information.
Optionally, the division module 610 is further configured to: within each oversold resource pool, divide the hardware resources into the different child resource pools of that oversold resource pool according to their configuration information and performance information.
Optionally, the device further includes a migration module (not shown in the figure). The migration module is configured to: after the task is dispatched to the logical resource pool of the corresponding type, monitor the tasks in the logical resource pool; and, when a task meets a preset migration condition, migrate the task running on the hardware resource to a hardware resource in another logical resource pool that meets a preset target condition.
Optionally, the types include: oversold, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
The functions of the device described in this embodiment have been described in the method embodiments shown in Fig. 2 to Fig. 5; for details not elaborated in the description of this embodiment, refer to the relevant descriptions in the preceding embodiments, which are not repeated here.
Although the preferred embodiments of the present invention have been disclosed for purposes of example, those skilled in the art will recognize that various improvements, additions, and substitutions are also possible; therefore, the scope of the present invention should not be limited to the above embodiments.
Claims (10)
1. A resource pool allocation method, characterized by comprising:
dividing hardware resources into logical resource pools of different types according to the resource information of the hardware resources;
according to the type of a task, dispatching the task to the logical resource pool of the corresponding type to run on the hardware resources in the logical resource pool.
2. The method according to claim 1, characterized in that dividing the hardware resources into logical resource pools of different types according to the resource information of the hardware resources comprises:
setting up multiple oversold resource pools with different oversold ratios;
dividing the hardware resources into the different oversold resource pools according to the performance information of the hardware resources.
3. The method according to claim 2, characterized in that dividing the hardware resources into logical resource pools of different types according to the resource information of the hardware resources comprises:
within each oversold resource pool, dividing the hardware resources into different child resource pools of the oversold resource pool according to the configuration information and performance information of the hardware resources.
4. The method according to claim 1, characterized in that, after dispatching the task to the logical resource pool of the corresponding type, the method further comprises:
monitoring the tasks in the logical resource pool;
when a task meets a preset migration condition, migrating the task running on the hardware resource to a hardware resource in another logical resource pool that meets a preset target condition.
5. The method according to any one of claims 1-4, characterized in that the type comprises: oversold, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
6. A resource pool allocation device, characterized by comprising:
a division module, configured to divide hardware resources into logical resource pools of different types according to the resource information of the hardware resources;
a dispatch module, configured to dispatch a task, according to its type, to the logical resource pool of the corresponding type to run on the hardware resources in the logical resource pool.
7. The device according to claim 6, characterized in that the division module is configured to:
set up multiple oversold resource pools with different oversold ratios;
divide the hardware resources into the different oversold resource pools according to the performance information of the hardware resources.
8. The device according to claim 7, characterized in that the division module is further configured to:
within each oversold resource pool, divide the hardware resources into different child resource pools of the oversold resource pool according to the configuration information and performance information of the hardware resources.
9. The device according to claim 6, characterized in that the device further comprises a migration module;
the migration module is configured to, after the task is dispatched to the logical resource pool of the corresponding type, monitor the tasks in the logical resource pool; and, when a task meets a preset migration condition, migrate the task running on the hardware resource to a hardware resource in another logical resource pool that meets a preset target condition.
10. The device according to any one of claims 6-9, characterized in that the type comprises: oversold, offline, online, interactive, compute-intensive, access-intensive, and high input/output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810158890.7A CN108519917B (en) | 2018-02-24 | 2018-02-24 | Resource pool allocation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810158890.7A CN108519917B (en) | 2018-02-24 | 2018-02-24 | Resource pool allocation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108519917A true CN108519917A (en) | 2018-09-11 |
CN108519917B CN108519917B (en) | 2023-04-07 |
Family
ID=63433301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810158890.7A Active CN108519917B (en) | 2018-02-24 | 2018-02-24 | Resource pool allocation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108519917B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109471727A (en) * | 2018-10-29 | 2019-03-15 | 北京金山云网络技术有限公司 | A kind of task processing method, apparatus and system |
CN109558245A (en) * | 2018-12-03 | 2019-04-02 | 群蜂信息技术(上海)有限公司 | A kind of method for processing business based on microserver framework, device and server |
CN109634888A (en) * | 2018-12-12 | 2019-04-16 | 浪潮(北京)电子信息产业有限公司 | A kind of FC interface card exchange resource identification processing method and associated component |
CN110928649A (en) * | 2018-09-19 | 2020-03-27 | 北京国双科技有限公司 | Resource scheduling method and device |
CN111144830A (en) * | 2019-11-20 | 2020-05-12 | 上海泛云信息科技有限公司 | Enterprise-level computing resource management method, system and computer equipment |
CN112948067A (en) * | 2019-12-11 | 2021-06-11 | 北京金山云网络技术有限公司 | Service scheduling method and device, electronic equipment and storage medium |
CN112965806A (en) * | 2021-03-26 | 2021-06-15 | 北京汇钧科技有限公司 | Method and apparatus for determining resources |
CN113535405A (en) * | 2021-07-30 | 2021-10-22 | 上海壁仞智能科技有限公司 | Cloud service system and operation method thereof |
CN113553195A (en) * | 2021-09-22 | 2021-10-26 | 苏州浪潮智能科技有限公司 | Memory pool resource sharing method, device, equipment and readable medium |
CN114356586A (en) * | 2022-03-17 | 2022-04-15 | 飞腾信息技术有限公司 | Processor and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102958166A (en) * | 2011-08-29 | 2013-03-06 | 华为技术有限公司 | Resource allocation method and resource management platform |
US8489797B2 (en) * | 2009-09-30 | 2013-07-16 | International Business Machines Corporation | Hardware resource arbiter for logical partitions |
CN105320559A (en) * | 2014-07-30 | 2016-02-10 | 中国移动通信集团广东有限公司 | Scheduling method and device of cloud computing system |
WO2016176231A1 (en) * | 2015-04-29 | 2016-11-03 | Microsoft Technology Licensing, Llc | Optimal allocation of dynamic cloud computing platform resources |
CN107305505A (en) * | 2016-04-20 | 2017-10-31 | 中兴通讯股份有限公司 | The operation method and virtual platform of virtual platform |
CN107368336A (en) * | 2017-07-25 | 2017-11-21 | 郑州云海信息技术有限公司 | A kind of cloud data center deployed with devices and the method and apparatus of management |
-
2018
- 2018-02-24 CN CN201810158890.7A patent/CN108519917B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8489797B2 (en) * | 2009-09-30 | 2013-07-16 | International Business Machines Corporation | Hardware resource arbiter for logical partitions |
CN102958166A (en) * | 2011-08-29 | 2013-03-06 | Huawei Technologies Co., Ltd. | Resource allocation method and resource management platform |
CN105320559A (en) * | 2014-07-30 | 2016-02-10 | China Mobile Group Guangdong Co., Ltd. | Scheduling method and device for a cloud computing system |
WO2016176231A1 (en) * | 2015-04-29 | 2016-11-03 | Microsoft Technology Licensing, Llc | Optimal allocation of dynamic cloud computing platform resources |
CN107305505A (en) * | 2016-04-20 | 2017-10-31 | ZTE Corporation | Operation method of a virtualization platform, and virtualization platform |
CN107368336A (en) * | 2017-07-25 | 2017-11-21 | Zhengzhou Yunhai Information Technology Co., Ltd. | Method and apparatus for cloud data center device deployment and management |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110928649A (en) * | 2018-09-19 | 2020-03-27 | Beijing Gridsum Technology Co., Ltd. | Resource scheduling method and device |
CN109471727A (en) * | 2018-10-29 | 2019-03-15 | Beijing Kingsoft Cloud Network Technology Co., Ltd. | Task processing method, apparatus and system |
CN109558245A (en) * | 2018-12-03 | 2019-04-02 | Qunfeng Information Technology (Shanghai) Co., Ltd. | Microservice-architecture-based service processing method, device and server |
CN109634888A (en) * | 2018-12-12 | 2019-04-16 | Inspur (Beijing) Electronic Information Industry Co., Ltd. | FC interface card switch resource identification processing method and related components |
CN111144830A (en) * | 2019-11-20 | 2020-05-12 | Shanghai Fanyun Information Technology Co., Ltd. | Enterprise-level computing resource management method, system and computer equipment |
CN112948067A (en) * | 2019-12-11 | 2021-06-11 | Beijing Kingsoft Cloud Network Technology Co., Ltd. | Service scheduling method and device, electronic device and storage medium |
CN112965806A (en) * | 2021-03-26 | 2021-06-15 | Beijing Huijun Technology Co., Ltd. | Method and apparatus for determining resources |
WO2022199204A1 (en) * | 2021-03-26 | 2022-09-29 | Beijing Huijun Technology Co., Ltd. | Method and apparatus for determining resources |
CN112965806B (en) * | 2021-03-26 | 2023-08-04 | Beijing Huijun Technology Co., Ltd. | Method and apparatus for determining resources |
CN113535405A (en) * | 2021-07-30 | 2021-10-22 | Shanghai Biren Intelligent Technology Co., Ltd. | Cloud service system and operation method thereof |
CN113553195A (en) * | 2021-09-22 | 2021-10-26 | Suzhou Inspur Intelligent Technology Co., Ltd. | Memory pool resource sharing method, apparatus, device and readable medium |
CN114356586A (en) * | 2022-03-17 | 2022-04-15 | Phytium Technology Co., Ltd. | Processor and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN108519917B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108519917A (en) | Resource pool allocation method and device | |
US10728091B2 (en) | Topology-aware provisioning of hardware accelerator resources in a distributed environment | |
US8949847B2 (en) | Apparatus and method for managing resources in cluster computing environment | |
CN107688492B (en) | Resource control method and device and cluster resource management system | |
US20150120923A1 (en) | Optimization Of Resource Utilization In A Collection of Devices | |
CN112269641B (en) | Scheduling method, scheduling device, electronic equipment and storage medium | |
CN103942098A (en) | System and method for task processing | |
CN110221920B (en) | Deployment method, device, storage medium and system | |
US20140359126A1 (en) | Workload partitioning among heterogeneous processing nodes | |
CN107450855B (en) | Model-variable data distribution method and system for distributed storage | |
CN104598316B (en) | Storage resource allocation method and device | |
WO2021018183A1 (en) | Resource allocation method and resource offloading method | |
CN104008013A (en) | Core resource allocation method and apparatus and multi-core system | |
CN108334396A (en) | Data processing method and device, and resource group creation method and device | |
CN103441918A (en) | Self-organizing cluster server system and self-organizing method thereof | |
US20120233313A1 (en) | Shared scaling server system | |
CN105491150A (en) | Load balance processing method based on time sequence and system | |
CN112463395A (en) | Resource allocation method, device, equipment and readable storage medium | |
CN107343023A (en) | Resource allocation method, device and electronic device in a Mesos-managed cluster | |
CN114356543A (en) | Kubernetes-based multi-tenant machine learning task resource scheduling method | |
CN111418187A (en) | Scalable statistics and analysis mechanism in cloud networks | |
CN110309229A (en) | Data processing method of a distributed system, and distributed system | |
CN107169138B (en) | Data distribution method for distributed memory database query engine | |
CN115705247A (en) | Process running method and related equipment | |
CN104809026A (en) | Method for borrowing CPU computing resources from a remote node |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||