CN109688222B - Shared computing resource scheduling method, shared computing system, server and storage medium - Google Patents
- Publication number: CN109688222B (application CN201811601521.7A)
- Authority
- CN
- China
- Prior art keywords
- shared computing
- node
- shared
- task
- computing node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a method for scheduling shared computing resources, comprising the following steps: acquiring a shared computing task to be executed; acquiring a list of all candidate shared computing nodes; selecting, from the shared computing node list, a shared computing node that matches the shared computing task; and issuing the shared computing task to the matched shared computing node. The invention also provides a shared computing system, a server, and a storage medium. The invention can select suitable shared computing nodes according to the user's resource requirements and respond to node fluctuations in real time with corresponding scheduling adjustments.
Description
Technical Field
The present invention relates to the field of shared computing technologies, and in particular, to a method for scheduling shared computing resources, a shared computing system, a server, and a storage medium.
Background
At present, many enterprises need large amounts of bandwidth, disk, and CPU resources to provide stable, high-speed services to users distributed across different regions and network environments, while resources such as bandwidth and storage in home environments sit largely idle. By treating intelligent hardware deployed in users' homes as home nodes, a shared computing system can be built that makes full use of these idle resources and greatly reduces enterprises' service costs. Home nodes have the following characteristics: 1. They are numerous, perhaps numbering in the hundreds of thousands, millions, or even orders of magnitude more. 2. A home node is less stable than a server node. 3. Nodes are interconnected over the public network, and their IP addresses change dynamically. 4. The physical resources of a single node are limited and fluctuate in real time.
In this model, flexible and efficient management of the resources aggregated from the intelligent hardware is the core challenge: different service programs must be deployed quickly and placed under resource management and security control, while the resource usage of each node is scheduled in real time according to the services, so that the nodes' physical resources are utilized to the greatest extent. For the more than one million nodes deployed in home network environments, there is currently no mature industry solution for abstracting virtualized computing, storage, and network resources.
Disclosure of Invention
In view of the above, the present invention provides a scheduling method of shared computing resources, a shared computing system, a server and a storage medium, so as to solve at least one of the above technical problems.
First, in order to achieve the above object, the present invention provides a method for scheduling shared computing resources, the method comprising:
acquiring a shared computing task to be executed;
acquiring a list of all candidate shared computing nodes;
selecting a shared computing node from the shared computing node list that matches the shared computing task;
and issuing the shared computing task to the shared computing node matched with the shared computing task.
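The four steps above can be sketched in Python; the `Node` and `Task` types and the bandwidth-only matching rule are illustrative assumptions, not the patent's concrete implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    node_id: str
    available_bandwidth: int  # Mbps currently free on the node (assumed unit)

@dataclass
class Task:
    required_bandwidth: int  # Mbps the task needs (assumed unit)

def schedule(task: Task, candidates: List[Node]) -> List[Node]:
    """Step 3: keep only candidates whose free bandwidth covers the demand.

    Step 2 corresponds to the caller supplying the candidate list; step 4
    (issuing the task) is represented here simply by returning the matches.
    """
    return [n for n in candidates if n.available_bandwidth >= task.required_bandwidth]

# Steps 1-4 end to end: a 100 Mbps task matched against three candidate nodes.
nodes = [Node("a", 50), Node("b", 200), Node("c", 120)]
matched = schedule(Task(required_bandwidth=100), nodes)
```

A fuller implementation would weigh storage and computing demands as well, as the optional clauses below describe.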
Optionally, the shared computing node list includes an ID of each shared computing node and available resource data;
the shared computing task includes a requirement of a shared computing resource that needs to be configured;
the selecting, from the list of shared computing nodes, a shared computing node that matches the shared computing task comprises:
and selecting, from the shared computing node list, the shared computing node that matches the shared computing task according to the demand for the shared computing resources to be configured and the available resource data of each shared computing node.
Optionally, the demand for shared computing resources comprises: at least one of bandwidth requirements, storage space requirements, and computing resource requirements.
Optionally, the available resource data in the shared computing node list is computed from the real-time node state and task state uploaded by each shared computing node, together with data generated when the node executes tasks.
Optionally, the selecting, according to the demand of the shared computing resource that needs to be configured and the available resource data of each shared computing node, a shared computing node that matches the shared computing task from the shared computing node list includes:
acquiring available resource data of each shared computing node in the shared computing node list;
selecting, from the shared computing node list, the shared computing nodes whose available resource data reaches a preset value, to generate an available node list;
and scoring each shared computing node in the available node list according to preset indexes, and using a bin-packing algorithm to split the demand for the shared computing resources to be configured across the shared computing nodes whose scores exceed a preset threshold, to obtain a final matching node list.
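The patent does not fix a particular packing algorithm; the following greedy first-fit split is one minimal Python stand-in for the score-then-split step, with the tuple layout and thresholds chosen purely for illustration:

```python
def split_demand(demand, nodes, score_threshold):
    """Split a resource demand across nodes whose score exceeds the threshold.

    nodes: list of (node_id, score, available_resource) tuples.
    Greedily fills the largest remaining nodes first -- a simple first-fit
    heuristic standing in for the patent's unspecified bin-packing step.
    """
    eligible = sorted(
        (n for n in nodes if n[1] > score_threshold),
        key=lambda n: n[2],
        reverse=True,  # largest available capacity first
    )
    allocation, remaining = {}, demand
    for node_id, _score, available in eligible:
        if remaining <= 0:
            break
        take = min(available, remaining)
        allocation[node_id] = take
        remaining -= take
    if remaining > 0:
        raise RuntimeError("eligible nodes cannot cover the demand")
    return allocation  # node_id -> share of the demand

# 150 units of demand; node "b" is filtered out by its low score.
allocation = split_demand(150, [("a", 0.9, 100), ("b", 0.4, 500), ("c", 0.8, 80)], 0.5)
```

A production scheduler would weigh resource cost and multiple resource dimensions simultaneously, as the description elsewhere indicates.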
Optionally, the selecting, according to the demand of the shared computing resource that needs to be configured and the available resource data of each shared computing node, a shared computing node that matches the shared computing task from the shared computing node list further includes:
periodically acquiring the current available resource data of the selected shared computing nodes;
and judging whether nodes need to be added or deleted according to the demand for the shared computing resources to be configured and the current available resource data of the shared computing nodes.
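The periodic add-or-delete decision can be sketched as a simple aggregate-capacity check; the 0.9/1.2 hysteresis ratios below are illustrative assumptions, not values from the patent:

```python
def rebalance(demand, current_available, low_ratio=0.9, high_ratio=1.2):
    """Decide whether the selected node set must grow, shrink, or stay.

    current_available maps node_id -> resource currently usable on that node,
    as refreshed by the periodic telemetry described above.
    """
    total = sum(current_available.values())
    if total < demand * low_ratio:
        return "add"      # aggregate capacity has sagged below the demand
    if total > demand * high_ratio:
        return "remove"   # capacity overshoot; nodes can be released
    return "keep"

decision = rebalance(100, {"a": 40, "b": 45})  # 85 < 90, so grow the set
```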
Optionally, the preset index includes regional resource allowance and historical stability.
Optionally, the acquiring the shared computing task to be executed includes: acquiring a Docker image generated from the shared computing task to be executed.
Optionally, the issuing the shared computing task to the shared computing node matched with the shared computing task includes: issuing the Docker image corresponding to the shared computing task to the shared computing node matched with the shared computing task.
In addition, to achieve the above object, the present invention further provides a server, which includes a memory and a processor, wherein the memory stores a scheduler of shared computing resources that can be executed on the processor, and the scheduler of shared computing resources implements the above scheduling method of shared computing resources when executed by the processor.
Further, to achieve the above object, the present invention also provides a shared computing system, including:
the task management unit is used for receiving the shared computing task to be executed from the client and dispatching the shared computing task to the scheduling service unit;
the scheduling service unit is used for acquiring the shared computing tasks from the task management unit, acquiring the list of all candidate shared computing nodes according to the states and historical data of the shared computing nodes provided by the node management unit and the data warehouse, and selecting the shared computing nodes matched with the shared computing tasks from the shared computing node list;
and the deployment service unit is used for issuing the shared computing task to the shared computing node which is selected by the scheduling service unit and matched with the shared computing task.
Further, to achieve the above object, the present invention also provides a storage medium storing a scheduler of shared computing resources, the scheduler of shared computing resources being executable by at least one processor to make the at least one processor execute the scheduling method of shared computing resources as described above.
The scheduling method for shared computing resources, the shared computing system, the server, and the storage medium provided by the invention can uniformly manage a Docker cluster consisting of millions of shared computing nodes, allocate shared computing nodes matching a shared computing task according to the resources the task requires, and reschedule nodes at any time as node states change, thereby keeping the total amount of resources stable.
Drawings
FIG. 1 is a block diagram of a shared computing system according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a dispatch server according to a second embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for scheduling shared computing resources according to a third embodiment of the present invention;
FIG. 4 is a detailed flowchart of S24 in FIG. 3.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that descriptions involving "first", "second", etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, but only insofar as a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, the combination should be deemed not to exist and falls outside the protection scope of the present invention.
First embodiment
Referring to fig. 1, a first embodiment of the invention provides a shared computing system. The shared computing system is an IaaS (Infrastructure as a Service) system built on distributed node resources. Its core function is to select suitable nodes according to users' resource requirements, perform lightweight virtualization on them to carry the users' program logic, and make corresponding scheduling adjustments in real time in response to fluctuations in the nodes' network location, bandwidth, storage, and the like.
In this embodiment, the shared computing system 1 includes a server 10 and shared computing nodes 19. The server 10 includes a task management unit 11, a scheduling service unit 12, a node management unit 13, a data warehouse 14, a deployment service unit 15, and an image repository 17. The shared computing system 1 communicates with the client 2 over a network and is configured to allocate corresponding shared computing nodes 19 to execute a shared computing task initiated by the client 2.
The client 2 is configured to let the user select the specification and capacity of the required resources and the program logic to be executed, automatically generate a Docker (application container engine) image from the program logic, and encapsulate the selected resource requirements into a standardized shared computing task. In this embodiment, a user may, at the client 2, select the specification and capacity of the required resources (e.g., amount of bandwidth, amount of storage, etc.) through a management console, a CLI (Command-Line Interface) tool, API (Application Programming Interface) calls, or other means, and select the program logic to be executed (which may be implemented in various languages); after processing by a debugging platform and a cross-compiling platform, the program logic is automatically built into a Docker image. For example, the resource requirement might be 100 Gbps of bandwidth and 10 PB of storage, with the executed logic code being hello. Meanwhile, the user of the client 2 can also start, stop, add, delete, and otherwise control the program logic. The client 2 encapsulates the selected resource requirements into a standardized task and delivers it to the task management unit 11. The program logic selected by the user at the client 2 is encapsulated into a standardized Docker image, which shields differences in programming language and execution environment, and is then submitted to the image repository 17.
The task management unit 11 is configured to dispatch each task to the scheduling service unit 12 after receiving it from the client 2. In this embodiment, the task management unit 11 arranges the received tasks into a plurality of parallel pipelines according to their priorities and degrees of association, and the scheduling service unit 12 fetches tasks from the pipelines in order.
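The pipeline arrangement could look like the following sketch; sorting on a numeric priority and round-robin placement are illustrative simplifications (the patent's grouping by association degree is omitted):

```python
def build_pipelines(tasks, n_pipelines=2):
    """Arrange prioritized tasks into parallel pipelines.

    tasks: list of (priority, task_id) pairs, lower number = higher priority.
    Tasks are ordered by priority and then dealt round-robin across the
    pipelines, from which a scheduler can fetch in order.
    """
    ordered = sorted(tasks)  # by priority, then task_id
    pipelines = [[] for _ in range(n_pipelines)]
    for i, (_priority, task_id) in enumerate(ordered):
        pipelines[i % n_pipelines].append(task_id)
    return pipelines

pipelines = build_pipelines([(2, "t2"), (1, "t1"), (3, "t3")])
```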
The scheduling service unit 12 is configured to obtain a task from the task management unit 11 and select the shared computing nodes 19 matching the shared computing task according to the state and historical data of each shared computing node 19 provided by the node management unit 13 and the data warehouse 14. The scheduling service unit 12 relies on the real-time state of all nodes obtained from the node management unit 13 and on the historical data of nodes and tasks (e.g., the historical stability of the nodes) obtained from the data warehouse 14. For example, the scheduling service unit 12 first obtains a list of all current candidate shared computing nodes, where the list includes the ID and available resource data of each shared computing node 19; the available resource data can be computed from the real-time node state and task state uploaded by each shared computing node 19 and from data generated when tasks are executed on the node. The scheduling service unit 12 then splits the resource requirements of the task and selects an available node list of nodes that reach preset values for region, ISP (Internet Service Provider), NAT (Network Address Translation) type, bandwidth, storage space, computing resources, and the like. Finally, it scores each shared computing node 19 in the available node list according to preset indexes such as regional resource margin and historical stability, and, following the principle of maximizing resource utilization, uses a bin-packing algorithm to split the demand for the shared computing resources required by the task across the shared computing nodes 19 whose scores exceed a preset threshold according to resource cost, yielding a final matching node list.
In addition, after the selected shared computing node 19 uploads the node real-time status and the task status (so as to obtain the current available resource data), the scheduling service unit 12 is further configured to determine whether to add or delete a node.
The node management unit 13 is configured to receive the real-time node status and the task status uploaded by each shared computing node 19 and provide the real-time node status and the task status to the scheduling service unit 12 for scheduling.
The data warehouse 14 is used for receiving data generated when tasks are executed and uploaded by each shared computing node 19 and providing the data to the scheduling service unit 12 for scheduling.
The deployment service unit 15 is configured to issue the deployed task to the shared computing node 19 selected by the scheduling service unit 12.
The image repository 17 is configured to receive a Docker image generated by the client 2 and provide the Docker image to the shared compute node 19.
The shared computing node 19 is configured to receive and execute a task deployed by the deployment service unit 15: it downloads the corresponding Docker image from the image repository 17, starts an image instance, and uploads the real-time node state, the task state, and data generated on the node. In this embodiment, the shared computing node 19 downloads the Docker image from the image repository 17; in other embodiments, it may obtain Docker images already downloaded by other shared computing nodes 19 through P2P transmission between nodes. After downloading a Docker image, a node may likewise transmit it to other shared computing nodes through P2P.
Further, the shared computing system 1 further includes:
and the signaling gateway 16 is configured to issue the tasks deployed by the deployment service unit 15 to the corresponding shared computing nodes 19, receive the node real-time states and the task states uploaded by the shared computing nodes 19, and send the node real-time states and the task states to the node management unit 13.
And the data gateway 18 is used for transmitting the Docker image to the shared computing node 19, receiving data generated in the execution process of the Docker instance uploaded by the shared computing node 19, and uploading the data to the data warehouse 14.
The transmission of the signaling and the data is dynamically accelerated by adopting a Content Delivery Network (CDN).
Further, the shared computing node 19 includes a local signaling proxy 190, a local data proxy 192, and a Docker manager 194. These components, deployed on each shared computing node 19, virtually partition and manage the node's resources and collect the node and task states and the data generated on the node in real time.
The local signaling proxy 190 receives signaling (e.g., deployed tasks) from the signaling gateway 16, parses it, passes it to the Docker manager 194, and uploads the real-time node state and task state to the signaling gateway 16. The Docker manager 194 downloads the Docker image according to the task received by the local signaling proxy 190 and loads and starts an image instance. The local data proxy 192 receives the Docker image downloaded from the image repository 17 via the data gateway 18, or obtains it from another shared computing node 19 through P2P transmission, and uploads data generated during execution of the Docker instance, such as results, logs, and core dumps; this data may subsequently serve as historical data of the node for reference when the scheduling service unit 12 schedules. After some shared computing nodes 19 have downloaded the Docker image, the local data proxy 192 may diffuse it via P2P to reduce the download bandwidth pressure on the data gateway 18.
The shared computing system 1 provided in this embodiment can implement lightweight virtualization on resource-limited home intelligent hardware using Docker, uniformly manage a Docker cluster formed by millions of public network nodes, and provide cluster management and fault tolerance across provinces and operators. The transmission of signaling and data is dynamically accelerated by a CDN network, and Docker images are diffused and distributed via P2P, which improves distribution efficiency and saves server bandwidth. The Docker image instances carried by the shared computing nodes 19 run in a public network environment, where a node's NAT type, operator, and region may change dynamically; the scheduling service unit 12 can continually add or remove nodes through the bin-packing algorithm to keep the total amount of resources stable.
Second embodiment
Referring to fig. 2, a server 10 is provided according to a second embodiment of the present invention.
The server 10 includes: memory 21, processor 23, network interface 25, and communication bus 27. The network interface 25 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others. A communication bus 27 is used to enable connection communication between these components.
Memory 21 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, and the like. In some embodiments, the memory 21 may be an internal storage unit of the server 10, such as a hard disk of the server 10. In other embodiments, the memory 21 may be an external storage unit of the server 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the server 10.
The memory 21 may be used for storing application software installed in the server 10 and various data, such as program codes of the scheduler 20 sharing computing resources and related data generated during the operation thereof.
The processor 23 may be, in some embodiments, a central processing unit, microprocessor or other data processing chip that executes program code stored in the memory 21 or processes data.
Fig. 2 shows only the server 10 with components 21-27 and a scheduler 20 sharing computing resources, but it should be understood that fig. 2 does not show all of the components of the server 10, and more or fewer components may be implemented instead.
In the embodiment of the server 10 shown in fig. 2, the memory 21 as a computer storage medium stores the program code of the scheduler 20 for sharing computing resources, and when the processor 23 executes the program code of the scheduler 20 for sharing computing resources, the following method is implemented:
(1) Acquire the shared computing task to be executed.
(2) Acquire a list of all candidate shared computing nodes.
(3) Select, from the shared computing node list, a shared computing node 19 that matches the shared computing task.
(4) Issue the shared computing task to the shared computing node 19 matched with the shared computing task.
For a detailed description of the above method, please refer to the following third embodiment, which is not repeated herein.
Third embodiment
Referring to fig. 3, a third embodiment of the present invention provides a method for scheduling shared computing resources, which is applied to the server 10. In this embodiment, the execution order of the steps in the flowchart shown in fig. 3 may be changed and some steps may be omitted according to different requirements. The method comprises the following steps:
S20, acquiring the shared computing task to be executed.
In this embodiment, the shared computing task includes the demand for the shared computing resources that need to be configured. The demand for the shared computing resources includes at least one of a bandwidth demand, a storage space demand, and a computing resource demand. After the user selects the specification and capacity of the required resources and the program logic to be executed at the client 2, the client 2 automatically generates a Docker image from the program logic and encapsulates the selected resource requirements into a standardized task. The client 2 then delivers the task to the task management unit 11 and delivers the Docker image to the image repository 17. The task management unit 11 arranges the received tasks into a plurality of parallel pipelines according to their priorities and degrees of association, and the scheduling service unit 12 fetches tasks from the pipelines in order.
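The standardized task wrapper might be modeled as below; the field names, units, and image name are assumptions for illustration, since the patent fixes no schema:

```python
from dataclasses import dataclass

@dataclass
class SharedComputingTask:
    """A task = a Docker image reference plus the resource demand to configure."""
    image: str              # Docker image built from the user's program logic
    bandwidth_mbps: int = 0
    storage_gb: int = 0
    cpu_cores: int = 0

    def demand(self):
        """The resource demand used by the scheduler to match nodes."""
        return {"bandwidth": self.bandwidth_mbps,
                "storage": self.storage_gb,
                "cpu": self.cpu_cores}

# Hypothetical image reference, purely for illustration.
task = SharedComputingTask(image="registry.example/hello:latest",
                           bandwidth_mbps=100, storage_gb=10)
```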
S22, acquiring a list of all candidate shared computing nodes.
In this embodiment, the shared computing node list includes the ID and available resource data of each shared computing node 19; the available resource data may be computed from the real-time node state and task state uploaded by each shared computing node 19, together with data generated when tasks are executed on the node. The node management unit 13 receives the real-time node state and task state uploaded by each shared computing node 19 and provides them to the scheduling service unit 12 for scheduling. The data warehouse 14 receives the generated data uploaded by each shared computing node 19 and likewise provides it to the scheduling service unit 12 for scheduling. The scheduling service unit 12 relies on the real-time state of all nodes obtained from the node management unit 13 and on the historical data of nodes and tasks (e.g., the historical stability of the nodes) obtained from the data warehouse 14.
S24, select the shared computing node 19 matching the shared computing task from the list of shared computing nodes.
The scheduling service unit 12 selects the shared computing nodes 19 matching the shared computing task from the shared computing node list according to the demand for the shared computing resources to be configured and the available resource data of each shared computing node 19. For example, the scheduling service unit 12 first obtains the list of all current candidate shared computing nodes; it then splits the resource requirements of the task and selects an available node list of nodes that reach preset values for region, ISP, NAT type, bandwidth, storage space, computing resources, and the like; finally, it scores each shared computing node 19 in the available node list according to preset indexes such as regional resource margin and historical stability, and, following the principle of maximizing resource utilization, uses a bin-packing algorithm to split the demand for the shared computing resources required by the task across the shared computing nodes 19 whose scores exceed a preset threshold according to resource cost, selecting a final matching node list. In addition, after the selected shared computing nodes 19 upload their real-time node state and task state (from which the current available resource data is obtained), the scheduling service unit 12 is further configured to determine whether to add or delete nodes.
Fig. 4 is a schematic diagram of the detailed process of S24, which comprises the following steps:
s240, acquiring the available resource data of each shared computing node 19 in the shared computing node list.
S242, select the shared computing node 19 whose available resource data reaches the preset value from the shared computing node list, and generate an available node list.
And S244, scoring each shared computing node 19 in the available node list according to a preset index, and adopting a packing algorithm to split the demand of the shared computing resource required to be configured by the task to the shared computing node 19 with the score value exceeding a preset threshold value, so as to obtain a final matching node list.
And S246, periodically acquiring the current available resource data of the selected shared computing node 19.
S248, determining whether node addition or deletion is needed according to the demand of the shared computing resource and the current available resource data of the shared computing node 19. For example, when the on-line and off-line status of a node changes, the NAT type or operator changes, the disk storage changes, the task load changes, and the like occur, the node may need to be added or deleted.
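Steps S246 and S248 amount to a periodic rebalancing loop: compare the task's resource demand with the sum of the matched nodes' current available resources and decide whether nodes must be added or can be released. A minimal sketch, in which the 10% headroom margin is an assumption rather than a value from the patent:

```python
def rebalance(demand, matched_nodes, headroom=0.10):
    """Compare the task's resource demand with the matched nodes' current
    available resources and decide whether nodes must be added or deleted."""
    capacity = sum(n["available"] for n in matched_nodes)
    target = demand * (1 + headroom)       # keep a safety margin above demand
    if capacity < demand:
        return "add", demand - capacity    # shortfall: recruit more nodes
    if capacity > target:
        return "delete", capacity - target # surplus: release excess nodes
    return "keep", 0
```

In the scheme of the patent, the same check would fire whenever a node's uploaded status shows an online/offline, NAT, operator, storage, or load change.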
S26, send the shared computing task to the shared computing nodes 19 that match it.
After selecting the shared computing nodes 19, the scheduling service unit 12 distributes the task acquired from the task management unit 11 among the selected shared computing nodes 19, and the deployment service unit 15 then issues each portion of the task to the corresponding shared computing node 19.
The shared computing node 19 receives and executes the issued task: it downloads the corresponding Docker image from the image warehouse 17, starts an image instance, and uploads the real-time node status, the task status, and the data generated on the node.
The scheduling method for shared computing resources provided by this embodiment can perform lightweight virtualization, in a Docker manner, on resource-limited home intelligent hardware, and can uniformly manage a Docker cluster formed by millions of public-network nodes, with cluster-management and fault-tolerance capabilities spanning provinces and operators. Because the Docker image instances carried by the shared computing nodes 19 run in a public network environment, a node's NAT type, operator, and region may change dynamically; the scheduling service unit 12 therefore continually adds or removes nodes through the bin-packing algorithm to keep the total amount of resources stable.
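The claims below describe the distribution path for the Docker image: it is issued over CDN dynamic acceleration, and once any matched node finishes downloading it, that node's local data agent diffuses the image to the remaining matched nodes through P2P. A toy transfer plan illustrating the idea, where the node names and the single-chain peer selection are hypothetical simplifications:

```python
def plan_distribution(nodes):
    """Return (node, source) pairs: the first node pulls the image over
    the CDN; each later node pulls it via P2P from an already-seeded peer."""
    plan, seeded = [], []
    for node in nodes:
        # First download uses CDN dynamic acceleration; afterwards the
        # node's local data agent can seed the image to other peers.
        source = "cdn" if not seeded else seeded[-1]
        plan.append((node, source))
        seeded.append(node)
    return plan
```

This offloads all but the first transfer from the CDN onto the nodes themselves, which matters when the same image must reach many public-network nodes at once.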
Fourth embodiment
The present invention also provides another embodiment: a computer-readable storage medium storing a scheduler 20 of shared computing resources, the scheduler 20 being executable by at least one processor so that the at least one processor executes the scheduling method for shared computing resources described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone; in many cases, however, the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) that includes instructions for enabling a client (such as a mobile phone, a computer, an electronic device, an air conditioner, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (8)
1. A method for scheduling shared computing resources, the method comprising:
acquiring, from an image warehouse, a Docker image generated according to a shared computing task to be executed;
acquiring a list of all candidate shared computing nodes;
selecting shared computing nodes matching the shared computing task from the shared computing node list, comprising: selecting, from the shared computing node list, shared computing nodes whose available resource data reach a preset value to generate an available node list; scoring each shared computing node in the available node list according to preset indexes; splitting, with a bin-packing algorithm, the demand of the shared computing resources to be configured for the shared computing task among the shared computing nodes whose scores exceed a preset threshold, to obtain a matched node list; regularly acquiring the current available resource data of the shared computing nodes in the matched node list; judging, according to the demand of the shared computing resources and the available resource data, whether node addition or deletion operations need to be executed on the shared computing nodes in the matched node list; and executing the addition or deletion operations when it is judged that they need to be executed;
transmitting the Docker image to the matched shared computing nodes in a CDN dynamic acceleration mode, and receiving the real-time node status, the task status, and the data generated on the node that each matched shared computing node returns after downloading the Docker image and loading and starting an image instance, wherein: after any matched shared computing node finishes downloading the Docker image, its local data agent diffuses the downloaded Docker image to the other matched shared computing nodes through P2P.
2. The method for scheduling of shared computing resources of claim 1, wherein the shared computing node list includes an ID and available resource data of each shared computing node.
3. The method for scheduling of shared computing resources according to claim 1 or 2, wherein the demand for the shared computing resources comprises: at least one of bandwidth requirements, storage space requirements, and computing resource requirements.
4. The method according to claim 1 or 2, wherein the available resource data in the shared computing node list are calculated according to the real-time node status, the task status, and the data generated during task execution, as uploaded by each shared computing node.
5. The method according to claim 1, wherein the preset indexes include regional resource margin and historical stability.
6. A server, comprising a memory and a processor, the memory having stored thereon a scheduler of shared computing resources operable on the processor, wherein the scheduler of shared computing resources implements the method of any one of claims 1-5 when executed by the processor.
7. A shared computing system, the system comprising:
the system comprises a task management unit, a scheduling service unit, a deployment service unit, a node management unit, and a data warehouse, wherein the task management unit is configured to receive, from an image warehouse, a Docker image generated by a client according to a shared computing task to be executed, and to dispatch the Docker image to the scheduling service unit;
the scheduling service unit is configured to obtain the Docker image from the task management unit, obtain a list of all candidate shared computing nodes according to the states and history data of the shared computing nodes provided by the node management unit and the data warehouse, and select shared computing nodes matching the shared computing task from the shared computing node list, including: selecting, from the shared computing node list, shared computing nodes whose available resource data reach a preset value to generate an available node list; scoring each shared computing node in the available node list according to preset indexes; splitting, with a bin-packing algorithm, the demand of the shared computing resources to be configured for the shared computing task among the shared computing nodes whose scores exceed a preset threshold, to obtain a matched node list; regularly acquiring the current available resource data of the shared computing nodes in the matched node list; judging, according to the demand of the shared computing resources and the available resource data, whether node addition or deletion operations need to be executed on the shared computing nodes in the matched node list; and executing the addition or deletion operations when it is judged that they need to be executed;
the deployment service unit is configured to issue the Docker image to the matched shared computing nodes in a CDN dynamic acceleration mode;
the node management unit is configured to receive the real-time node status, the task status, and the data generated on the node that each matched shared computing node returns after downloading the Docker image and loading and starting an image instance;
the data warehouse is configured to receive the data generated on each shared computing node;
wherein: after any matched shared computing node finishes downloading the Docker image, its local data agent diffuses the downloaded Docker image to the other matched shared computing nodes through P2P.
8. A storage medium storing a scheduler of shared computing resources, the scheduler of shared computing resources being executable by at least one processor to cause the at least one processor to perform the method of scheduling of shared computing resources according to any of claims 1-5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811601521.7A CN109688222B (en) | 2018-12-26 | 2018-12-26 | Shared computing resource scheduling method, shared computing system, server and storage medium |
PCT/CN2019/092458 WO2020133967A1 (en) | 2018-12-26 | 2019-06-24 | Method for scheduling shared computing resources, shared computing system, server, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811601521.7A CN109688222B (en) | 2018-12-26 | 2018-12-26 | Shared computing resource scheduling method, shared computing system, server and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109688222A CN109688222A (en) | 2019-04-26 |
CN109688222B true CN109688222B (en) | 2020-12-25 |
Family
ID=66189634
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811601521.7A Active CN109688222B (en) | 2018-12-26 | 2018-12-26 | Shared computing resource scheduling method, shared computing system, server and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109688222B (en) |
WO (1) | WO2020133967A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109688222B (en) * | 2018-12-26 | 2020-12-25 | 深圳市网心科技有限公司 | Shared computing resource scheduling method, shared computing system, server and storage medium |
CN110381159B (en) * | 2019-07-26 | 2022-02-01 | 中国联合网络通信集团有限公司 | Task processing method and system |
CN110661646B (en) * | 2019-08-06 | 2020-08-04 | 上海孚典智能科技有限公司 | Computing service management technology for high-availability Internet of things |
CN112394944B (en) * | 2019-08-13 | 2024-06-25 | 阿里巴巴集团控股有限公司 | Distributed development method, device, storage medium and computer equipment |
CN110649958B (en) * | 2019-09-05 | 2022-07-26 | 北京百度网讯科技有限公司 | Method, apparatus, device and medium for processing satellite data |
CN110677464A (en) * | 2019-09-09 | 2020-01-10 | 深圳市网心科技有限公司 | Edge node device, content distribution system, method, computer device, and medium |
CN112702306B (en) * | 2019-10-23 | 2023-05-09 | 中国移动通信有限公司研究院 | Method, device, equipment and storage medium for intelligent service sharing |
CN111126895A (en) * | 2019-11-18 | 2020-05-08 | 青岛海信网络科技股份有限公司 | Management warehouse and scheduling method for scheduling intelligent analysis algorithm in complex scene |
CN111949394B (en) * | 2020-07-16 | 2024-07-16 | 广州玖的数码科技有限公司 | Method, system and storage medium for sharing computing power resource |
CN112068954B (en) * | 2020-08-18 | 2024-08-16 | 弥伦工业产品设计(上海)有限公司 | Method and system for scheduling network computing resources |
CN112015521B (en) * | 2020-09-30 | 2024-06-07 | 北京百度网讯科技有限公司 | Configuration method and device of reasoning service, electronic equipment and storage medium |
CN112199193A (en) * | 2020-09-30 | 2021-01-08 | 北京达佳互联信息技术有限公司 | Resource scheduling method and device, electronic equipment and storage medium |
CN112540836B (en) * | 2020-12-11 | 2024-05-31 | 光大兴陇信托有限责任公司 | Service scheduling management method and system |
CN112738174B (en) * | 2020-12-23 | 2022-11-25 | 中国人民解放军63921部队 | Cross-region multi-task data transmission method and system for private network |
CN112799742B (en) * | 2021-02-09 | 2024-02-13 | 上海海事大学 | Machine learning practical training system and method based on micro-service |
US12106082B2 (en) | 2021-05-20 | 2024-10-01 | International Business Machines Corporation | Generative experiments for application deployment in 5G networks |
CN115766430A (en) * | 2022-11-21 | 2023-03-07 | 中电云数智科技有限公司 | Method for deploying cluster service based on hatched middleware instance |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102917077A (en) * | 2012-11-20 | 2013-02-06 | 无锡城市云计算中心有限公司 | Resource allocation method in cloud computing system |
CN102938790A (en) * | 2012-11-20 | 2013-02-20 | 无锡城市云计算中心有限公司 | Resource allocation method of cloud computing system |
CN105791447A (en) * | 2016-05-20 | 2016-07-20 | 北京邮电大学 | Method and device for dispatching cloud resource orienting to video service |
CN106371889A (en) * | 2016-08-22 | 2017-02-01 | 浪潮(北京)电子信息产业有限公司 | Method and device for realizing high-performance cluster system for scheduling mirror images |
CN106919445A (en) * | 2015-12-28 | 2017-07-04 | 华为技术有限公司 | A kind of method and apparatus of the container of Parallel Scheduling in the cluster |
CN107239329A (en) * | 2016-03-29 | 2017-10-10 | 西门子公司 | Unified resource dispatching method and system under cloud environment |
CN107566443A (en) * | 2017-07-12 | 2018-01-09 | 郑州云海信息技术有限公司 | A kind of distributed resource scheduling method |
CN107733977A (en) * | 2017-08-31 | 2018-02-23 | 北京百度网讯科技有限公司 | A kind of cluster management method and device based on Docker |
CN108563500A (en) * | 2018-05-08 | 2018-09-21 | 深圳市零度智控科技有限公司 | Method for scheduling task, cloud platform based on cloud platform and computer storage media |
CN108628674A (en) * | 2018-05-11 | 2018-10-09 | 深圳市零度智控科技有限公司 | Method for scheduling task, cloud platform based on cloud platform and computer storage media |
CN109062658A (en) * | 2018-06-29 | 2018-12-21 | 优刻得科技股份有限公司 | Realize dispatching method, device, medium, equipment and the system of computing resource serviceization |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9268613B2 (en) * | 2010-12-20 | 2016-02-23 | Microsoft Technology Licensing, Llc | Scheduling and management in a personal datacenter |
CN104506600A (en) * | 2014-12-16 | 2015-04-08 | 苏州海博智能系统有限公司 | Computation resource sharing method, device and system as well as client side and server |
US10375115B2 (en) * | 2016-07-27 | 2019-08-06 | International Business Machines Corporation | Compliance configuration management |
CN107819802B (en) * | 2016-09-13 | 2021-02-26 | 华为技术有限公司 | Mirror image obtaining method in node cluster, node equipment and server |
WO2018067047A1 (en) * | 2016-10-05 | 2018-04-12 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and module for assigning task to server entity |
CN107105029B (en) * | 2017-04-18 | 2018-03-20 | 北京友普信息技术有限公司 | A kind of CDN dynamic contents accelerated method and system based on Docker technologies |
CN107844376A (en) * | 2017-11-21 | 2018-03-27 | 北京星河星云信息技术有限公司 | Resource allocation method, computing system, medium and the server of computing system |
CN109067890B (en) * | 2018-08-20 | 2021-06-29 | 广东电网有限责任公司 | CDN node edge computing system based on docker container |
CN109688222B (en) * | 2018-12-26 | 2020-12-25 | 深圳市网心科技有限公司 | Shared computing resource scheduling method, shared computing system, server and storage medium |
2018
- 2018-12-26 CN CN201811601521.7A patent/CN109688222B/en active Active

2019
- 2019-06-24 WO PCT/CN2019/092458 patent/WO2020133967A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109688222A (en) | 2019-04-26 |
WO2020133967A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109688222B (en) | Shared computing resource scheduling method, shared computing system, server and storage medium | |
CN108737270B (en) | Resource management method and device for server cluster | |
US10725752B1 (en) | Dependency handling in an on-demand network code execution system | |
US10564946B1 (en) | Dependency handling in an on-demand network code execution system | |
CN106844137B (en) | Server monitoring method and device | |
US10061613B1 (en) | Idempotent task execution in on-demand network code execution systems | |
US11010188B1 (en) | Simulated data object storage using on-demand computation of data objects | |
CN108566290B (en) | Service configuration management method, system, storage medium and server | |
CN109617996B (en) | File uploading and downloading method, server and computer readable storage medium | |
CN108173774B (en) | Client upgrading method and system | |
CN106302632B (en) | Downloading method of basic mirror image and management node | |
CN106371889B (en) | Method and device for realizing high-performance cluster system of scheduling mirror image | |
US10721260B1 (en) | Distributed execution of a network vulnerability scan | |
CN111897550B (en) | Mirror image preloading method, device and storage medium | |
CN111221550B (en) | Rule updating method and device for streaming computing and streaming computing system | |
KR101033813B1 (en) | Cloud computing network system and file distrubuting method of the same | |
CN114153581A (en) | Data processing method, data processing device, computer equipment and storage medium | |
CN113434230A (en) | Jump control method and device for H5 page, storage medium and electronic device | |
US11647103B1 (en) | Compression-as-a-service for data transmissions | |
CN114281263A (en) | Storage resource processing method, system and equipment of container cluster management system | |
CN109951551B (en) | Container mirror image management system and method | |
US11144359B1 (en) | Managing sandbox reuse in an on-demand code execution system | |
CN116028196A (en) | Data processing method, device and storage medium | |
CN112988062B (en) | Metadata reading limiting method and device, electronic equipment and medium | |
CN106657195B (en) | Task processing method and relay device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||