CN113641448A - Edge computing container allocation and layer download ordering architecture and method thereof - Google Patents
Edge computing container allocation and layer download ordering architecture and method thereof
- Publication number
- CN113641448A (application CN202110808605.3A)
- Authority
- CN
- China
- Prior art keywords
- container
- layer
- download
- allocation
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention discloses an edge computing container allocation and layer download ordering architecture and a method thereof. The architecture comprises UEs, edge nodes, a scheduler, and a container registry. The method comprises the following steps: image layer grouping, container allocation, and image layer download ordering; the container allocation uses a layer-aware container allocation algorithm, and the image layer download ordering uses a greedy image layer ordering algorithm. The invention effectively reduces the total start-up time of containers without requiring developers to modify the container system structure.
Description
Technical Field
The invention relates to the technical field of information, belongs to the fields of distributed systems in computer networks, edge computing, and machine learning, and relates to methods for resource scheduling and deep reinforcement learning in edge computing and distributed systems.
Background
For applications that have short execution times (e.g., handling periodic updates from Internet-of-Things sensors) or require fast response times in edge computing (e.g., robotic motion), container start-up delay is a serious problem. The start-up delay includes obtaining the container image from the remote registry (if it is not present locally) and installing it on the host. In edge computing, limited bandwidth makes the image download time long, and the longer the download time, the greater the start-up delay. The image download time takes up a large portion of the start-up time, because the image installation delay is low (about 1 second) and stable even across heterogeneous devices. Furthermore, due to limited storage resources, dynamic user movement, and the large number of container images, it is impossible to store all images on every edge node in advance. The long container start-up time therefore becomes a problem to be optimized.
There have been many recent efforts to reduce container start-up delay by acquiring image files on demand, extracting common portions of image files, or reorganizing images; these efforts require modification of the container or the entire system architecture. There is also related work that reduces container start-up time by making scheduling decisions based on the size of the image layers already present on the edge nodes.
Existing methods that modify the container file structure or the container system architecture to reduce start-up time weaken container isolation and stability, and require developers to spend significant time modifying the containers they have already published.
Methods that decide container scheduling directly from the size of the image layers already present on the edge nodes ignore the following issues. First, the scenario of joint scheduling of multiple containers, which is common in edge computing and where the goal is to optimize the total start-up time of multiple containers. Second, in the multi-container joint scheduling scenario, the download order of the image layers further affects the total start-up time. Finally, the heterogeneity of download bandwidth among edge nodes also affects the start-up time.
Disclosure of Invention
To overcome the above problems, we have devised an edge computing container allocation and layer download ordering architecture and a method thereof. The method first groups the layers shared by the same set of containers, reducing the scale of the optimization problem and speeding up the method. Second, considering layer sharing among containers and the size of the image layers already present on each edge node, it selects one container at a time and places it on a suitable edge node. Finally, it orders the image layer downloads of each edge node.
The method specifically comprises the following contents:
an edge computing container allocation and layer download ordering architecture and method thereof: the edge computing architecture comprises UEs, edge nodes, a scheduler, and a container registry. Each edge node is connected to a plurality of UEs via wireless communication and is provided with a download queue; the edge node downloads image layers from the container registry in the order given by its download queue. The scheduler collects information from the container registry and the edge nodes. The container registry is a container image repository or a set of repositories, and is deployed in the cloud.
Further, the edge computing container allocation and layer download ordering method mainly comprises the following steps: (1) image layer grouping, (2) container allocation, (3) image layer download ordering.
Image layer grouping: any two layers l_i, l_i′ that have the same relationship with every container are added to the same group, and the whole group of image layers is treated as a single image layer.
Container allocation: a container and an edge node are selected to implement the allocation, and the container allocation variable is determined by a layer-aware container allocation algorithm, which is specified as follows:
the downloading sequence of the mirror image layer is as follows: firstly, dividing a container layer into a plurality of sorted sets according to a side decomposition (side decomposition) method, then scheduling the container layer in the same set by taking the container as a unit, selecting the container with the minimum residual size for each round according to a greedy mirror image layer sorting algorithm, and downloading the residual mirror image layers. The greedy mirror image layer sequencing algorithm specifically comprises the following steps:
Further, in the edge computing container allocation and layer download ordering architecture and method thereof, the container scheduling process comprises:
(1) multiple UEs offload multiple tasks.
(2) The scheduler collects task information and edge node status.
(3) Based on the collected information, the scheduler makes container allocation and layer download ordering decisions.
(4) With these decisions and other a priori information, each edge node has a download queue and downloads the image layer according to the sequence in the download queue.
(5) Each container starts running after all the image layers belonging to it have been downloaded.
Further, in the edge computing container allocation and layer download ordering architecture and method thereof, the layer grouping and container allocation methods are implemented and run in the scheduler, and the layer ordering method can be implemented in the kubelet of each node.
Further, in the edge computing container allocation and layer download ordering architecture and method thereof, creating a container comprises the following steps:
First, the user initiates a pod creation request by calling the API through the interface.
Then, the scheduler creates a pod on the node selected according to the output of the layer grouping and container allocation methods, and returns the result.
Finally, the kubelet of each node manages the image layer downloads according to the results of the layer ordering method and creates the container in the given pod.
Further, in the layer-aware container allocation algorithm, a score composed of layer sharing and existing layer size is computed in each iteration to select a container-node pair.
Further, the image layer download ordering schedules the container layers within the same set container by container: as shown by the step j ← argmin_j p_j in the greedy image layer ordering algorithm, each round selects the container with the smallest remaining size and downloads its remaining image layers.
Drawings
Figure 1 is an architectural diagram of the edge calculation of the present invention,
figure 2 is a block diagram of the kubernets system for open source container management in an embodiment of the present invention,
FIG. 3 is a CDF graph of container start-up times for different scheduling algorithms under uniform distribution in an embodiment of the present invention,
FIG. 4 is a CDF graph of container start times for different scheduling algorithms under Zipf distribution in an embodiment of the present invention,
figure 5 is a comparison of the number of different maximum vessel runs in an embodiment of the invention,
figure 6 is a comparison of different maximum bandwidths in an embodiment of the invention,
figure 7 is a comparison of different maximum storage spaces in an embodiment of the invention,
figure 8 is a graph of the effect of layer grouping for different container counts on algorithm runtime in an embodiment of the present invention,
fig. 9 is a diagram showing the effect of layer packets with different numbers of edge nodes on the running time of the algorithm in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments will be described clearly and completely with reference to the accompanying drawings; the described embodiments are some, but not all, embodiments of the present invention. The following detailed description of the embodiments, presented in the figures, is not intended to limit the scope of the claimed invention, but is merely representative of selected embodiments. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention fall within the scope of the present invention.
To achieve this purpose, we first model the problem of container scheduling and image layer download ordering in edge computing and prove that it is NP-hard. Considering the heterogeneity of edge nodes and the fact that an image layer may be shared among multiple images, we design a layer-aware scheduling method and discuss its feasibility on the open-source container management system Kubernetes. Finally, we demonstrate the effectiveness of the method through simulation experiments.
The structure of the Kubernetes container management system is described here in more detail. A Kubernetes cluster consists of at least one master node and a plurality of worker nodes. The master node includes a highly available key-value database named etcd, an API server for exposing APIs, a scheduler for scheduling deployments, and a controller for managing the entire cluster. Each worker node consists of a number of pods and one management component, the kubelet. On a node, the kubelet handles tasks and manages pods. A pod is a collection of containers and is the core management unit of Kubernetes. For the detailed structure, refer to Fig. 2.
The scenario we consider is a heterogeneous edge computing environment comprising a plurality of User Equipments (UEs), a set of heterogeneous edge nodes E = {e_1, e_2, …, e_|E|}, a scheduler, and a container registry. The scheduler collects information about containers and edge nodes, and then determines the container allocation and layer download order. The container registry is a container image repository or a set of repositories. The container scheduling process comprises: (1) multiple UEs offload multiple tasks; (2) the scheduler collects task information and edge node status; (3) based on the collected information, the scheduler makes container allocation and layer download ordering decisions; (4) with these decisions and other a priori information, each edge node maintains a download queue and downloads image layers in the order given by the queue; (5) each container starts running after all the image layers belonging to it have been downloaded.
It is assumed that each UE offloads a task running in a particular container, so C = {c_1, c_2, …, c_|C|} represents both the containers and the tasks. The binary variable a_jk indicates whether container c_j is assigned to edge node e_k: if c_j is assigned to e_k then a_jk = 1, otherwise a_jk = 0. Each container c_j must be assigned to exactly one edge node, meaning the UE offloads task c_j to that node:

Σ_{e_k ∈ E} a_jk = 1, ∀ c_j ∈ C.
the set of layers consisting of all containers is denoted L ═ L1,l2,…,l|L|}. Layer liIs defined as pi. Relation variable rijForIndicating layer liAnd a container cjThe relationship between them. If layer liBelonging to a container cjOf containers, i.e. containers cjRequires a layer liThen binary variable rijE {0,1} is set to 1, otherwise to 0. Before scheduling rijAre known. Layers are shared by different images so the edge node only needs to download a layer once. Binary variable dikE {0,1} is defined as whether it is at edge node ekUpper and lower layers li. Edge node ekFinish downloading layer liIs defined as a layer liReady time tl ik. Container cjIs defined as tc j. Before downloading all layers belonging to its image, container cjCannot be at edge node ekStart running above, thus tc jGreater than its tier at its designated edge node ekReady time in (1):
edge nodes are computing devices deployed at access points that are connected with UEs through low-latency wireless communications. The storage, bandwidth and run container number limits of the edge node ek are defined as sk, bk and mk, respectively. The total size of the layers stored in the edge node ek cannot exceed its storage limit:
Only a limited number of containers can run simultaneously on an edge node:

Σ_{c_j ∈ C} a_jk ≤ m_k, ∀ e_k ∈ E.
it is assumed that each edge node has a download queue. The edge node downloads one layer at a time at the very front (left) of the queue. The layers in the download queue need to be ordered. The download priority of li layer and li' layer on edge node ek is defined as a binary systemVariable xkii 'e {0,1}. if xkii' is set to 1, then layer li should be downloaded before layer li ', otherwise, layer li' should be downloaded before layer li. In particular, xkii equals 1, ensuring that the ready time of each layer contains its own download time. Furthermore, the order of priority between the two layers needs to be relative:
considering both the queuing time and the downloading time, the ready time of li layer on edge node ek can be calculated by the following formula:
our goal is to optimize the sum of the start-up times for all containers. The optimization problem is expressed as:
The variable M in constraint (9) is set to a large positive number, e.g., |C|. Constraint (9) ensures that for each layer l_i, when any container c_j that requires it is assigned to edge node e_k (i.e., a_jk = 1), layer l_i is downloaded to e_k (i.e., d_ik = 1); otherwise d_ik equals 0 to minimize the download amount. By a reduction argument, the scheduling problem can be proved NP-hard even with only one edge node.
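The effect of constraint (9) can be checked procedurally: a layer must be downloaded to a node exactly when some container assigned there requires it. The following sketch is illustrative (the function is not from the patent; the variable names a, r, d mirror the text) and derives the minimal d_ik from given a_jk and r_ij.

```python
# Sketch of constraint (9): layer l_i must be on node e_k iff some container
# c_j assigned to e_k requires it.
def derive_downloads(a, r, containers, layers, nodes):
    """a[j][k]=1 if c_j on e_k; r[i][j]=1 if c_j needs l_i. Returns d[(i, k)]."""
    return {
        (i, k): int(any(a[j][k] and r[i][j] for j in containers))
        for i in layers for k in nodes
    }

a = {"c1": {"e1": 1, "e2": 0}, "c2": {"e1": 0, "e2": 1}}
r = {"l1": {"c1": 1, "c2": 1}, "l2": {"c1": 0, "c2": 1}}
d = derive_downloads(a, r, ["c1", "c2"], ["l1", "l2"], ["e1", "e2"])
# shared layer l1 is needed on both nodes; l2 only where c2 runs
```

Because d follows deterministically from a once the allocation is fixed, the remaining freedom is only the per-node download order.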
To solve this problem, we design a layer-aware heuristic scheduling method consisting of the following three parts:
1. Image layer grouping. Any two layers l_i, l_i′ that have the same relationship with every container are added to the same group, i.e., l_i, l_i′ satisfy:

r_ij = r_i′j, ∀ c_j ∈ C.
because the downloading completion time of the image needs to be after all layers of the image are ready, the problem scale can be effectively reduced after the image layers are grouped under the condition that the scheduling result is not influenced. Second, the mirror layer packet only needs to traverse all the relationship variables rijThat is, the time complexity is O (| L | | E |). In the two subsequent parts we consider the whole set of mirror layers as one mirror layer.
2. Container allocation. We select one container and one edge node at a time to perform the assignment, mainly considering two important factors:
(1) Layer sharing between containers. If containers are allocated to edge nodes without regard to layer sharing, different edge nodes tend to download duplicate layers, which consumes extra bandwidth. Downloading redundant layers also increases queuing delay, and thus the total start-up time of all containers.
(2) The size of the layers already present on each edge node. Assigning containers to edge nodes based only on layer sharing may cause workload imbalance among edge nodes: although it eliminates redundant layer downloads, most image layer downloads concentrate on a few edge nodes while other nodes stay idle, resulting in higher queuing delay. We therefore design a layer-aware container allocation algorithm (method 1 below) that takes both factors into account:
in method 1, the input is the set of bandwidths bk|ekE.g., E), and storing set sk|ekE.g., E), running container number limit { mk|ekE.g., E), layer size pi|lie.L and a relationship variable rij|li∈L,cjE.g. C). The output is a dispensing decision a for the containerjk. In particular, in each loop, the container and node pairs are selected by computing a score in line 10, which consists of layer sharing and existing layer size.
3. Image layer download ordering. Once the container allocation variables are determined, the binary variables d_ik follow from the allocation variables a_jk and equation (9), and the original problem decomposes into |E| independent image layer download ordering problems. We first transform each of these into the problem of minimizing the weighted total completion time of dependent tasks on a single machine. Sidney decomposition is an effective solution to this problem and achieves an approximation ratio of 2. We therefore design a greedy image layer ordering algorithm based on Sidney decomposition (method 2), which is specified as follows:
in method 2 the container layer is divided into sets. The precedence between sets is determined by Sidney decomplexion, and thus the precedence of container layers within different sets is also determined. We schedule the container layers in the same set in units of containers, as shown in method 2, line 15, and each round selects the container with the smallest size and downloads its remaining mirror layers.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
We implemented the scheduling method and the simulation environment in Python 3.6 on a desktop computer with an Intel Core i7-10750H 2.60 GHz CPU and 16 GB RAM. The experiments consider a realistic edge computing scenario with multiple edge nodes. By default, we set the bandwidth to 10 Mbps, the number of edge nodes to 15, the running-container limit to 50, the storage capacity limit to 20 GB, the total number of containers to 200, and the hyperparameter α to 0.5. Under the default setting, each node has sufficient capacity to store the layers and run the containers.
For data, we collected the latest versions of the 5000 most popular images from Docker Hub. To compare with other scheduling methods, we selected the 155 most commonly used images from the crawled dataset. After filtering out image layers of size 0, 810 image layers remain. The total size of the 155 images is 60 GB, and the total size of all layers is 30 GB. For each experiment, the containers were randomly selected from the 155 images according to a Zipf distribution or a uniform distribution; the shape factor of the Zipf distribution is 1.1 by default. Each experiment was repeated ten times.
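The Zipf workload sampling can be reproduced with a few lines of Python (the helper name and seed are illustrative, not from the patent): image ranks are weighted proportionally to 1/rank^s with shape factor s = 1.1, and containers are drawn with replacement.

```python
# Sketch of the workload generation: sample container images according to
# a Zipf distribution over image popularity ranks (shape factor 1.1).
import random

def zipf_sample(n_images, n_containers, shape=1.1, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    weights = [1.0 / (rank ** shape) for rank in range(1, n_images + 1)]
    return rng.choices(range(n_images), weights=weights, k=n_containers)

picks = zipf_sample(155, 200)
# low-rank (popular) images dominate the sample, as in the described setup
```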
We compare this method (LASA for short) with the following baseline methods:
(1) Random Scheduling (RS): containers are randomly assigned to edge nodes, and layers are ordered by assignment order.
(2) Layer-match Scheduling (LS): for each container, in random order, the edge node that locally stores the largest portion of its image layers is selected, and layers are ordered by container assignment order.
(3) Sidney-decomposition-based Scheduling (SDS): all containers are first ordered by Sidney decomposition, then each container in turn is assigned to a node until a set threshold is reached, after which the layers on each edge node are ordered by method 2.
(4) Kubernetes Scheduling (K8S): the scheduling policy used in Kubernetes, which schedules a container to an edge node that locally stores the required image, or otherwise to the edge node with the smallest download size.
Under the default setting, we compare the overall results of the different scheduling methods under the uniform and Zipf distributions, as shown in Fig. 3 and Fig. 4. The advantage of the present method (shown as LASA) is clear: it stably reduces the total start-up time for containers of different sizes under both distributions.
Meanwhile, the performance of the different methods in different heterogeneous environments is compared, specifically:
1. Comparison of the total container start-up time under different maximum numbers of running containers; the results are shown in Fig. 5.
2. Comparison of the total container start-up time under different maximum bandwidths; the results are shown in Fig. 6.
3. Comparison of the total container start-up time under different maximum storage spaces; the results are shown in Fig. 7.
The above experimental results show that the present method achieves a smaller total container start-up time than the baseline methods across the different heterogeneous environments.
In addition, we compare the running time of the method with and without layer grouping enabled, specifically:
1. The effect of layer grouping on the method running time for different container counts; the results are shown in Fig. 8.
2. The effect of layer grouping on the method running time for different numbers of edge nodes; the results are shown in Fig. 9.
The comparison shows that enabling layer grouping effectively reduces the method running time by 50% to 70%.
The specific implementation and the embodiments clearly show that the method has the following beneficial effects:
(1) The method accounts for image layer sharing between containers and reduces the total start-up time of containers through joint scheduling of multiple containers.
(2) The method considers joint allocation of multiple containers in a heterogeneous edge computing scenario, and further considers the download order of the image layers on each node after allocation; jointly considering container allocation and image layer download order maximizes the reduction in the total start-up time of multiple containers.
(3) The image layer grouping method effectively reduces the scale of the optimization problem without affecting the optimized total container start-up time.
(4) Using real container and image data collected from Docker Hub, the designed simulation experiments show that, compared with other scheduling methods, the method effectively reduces the total container start-up time without requiring developers to modify the container system structure.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structures or equivalent process transformations made using the contents of the specification and drawings, or applied directly or indirectly in other related systems, are included within the scope of the present invention.
Claims (7)
1. An edge computing container allocation and layer download ordering architecture and method thereof: the edge computing architecture comprises UEs, edge nodes, a scheduler, and a container registry; each edge node is connected to a plurality of UEs via wireless communication and is provided with a download queue, and the edge node downloads image layers from the container registry in the order given by its download queue; the scheduler collects information from the container registry and the edge nodes; the container registry is a container image repository or a set of repositories and is deployed in the cloud.
2. An edge computing container allocation and layer download ordering architecture and method thereof: the method mainly comprises the following steps: (1) image layer grouping, (2) container allocation, (3) image layer download ordering.
Image layer grouping: any two layers l_i, l_i′ that have the same relationship with every container are added to the same group, and the whole group of image layers is treated as a single image layer.
Container allocation: a container and an edge node are selected to implement the allocation, and the container allocation variable is determined by a layer-aware container allocation algorithm, whose input and output are:
inputting: { bk|ek∈E},{sk|ek∈E},{mk|ek∈E},{pi|li∈L},{rij|li∈L,cj∈C}
And (3) outputting: a isjk
Image layer download ordering: first, the container layers are divided into several ordered sets according to the Sidney decomposition method; then the container layers within the same set are scheduled container by container, each round selecting the container with the smallest remaining size according to the greedy image layer ordering algorithm and downloading its remaining image layers. The input of the greedy image layer ordering algorithm is:
inputting: { pi|li∈L},{rij|li∈L,cj∈C},bk,C
3. The edge computing container allocation and layer download ordering architecture and method thereof according to claim 1, wherein the container scheduling process comprises:
(1) multiple UEs offload multiple tasks.
(2) The scheduler collects task information and edge node status.
(3) Based on the collected information, the scheduler makes container allocation and layer download ordering decisions.
(4) With these decisions and other a priori information, each edge node has a download queue and downloads the image layer according to the sequence in the download queue.
(5) Each container starts running after all the image layers belonging to it have been downloaded.
4. The edge computing container allocation and layer download ordering architecture and method thereof according to claim 1, wherein the layer grouping and container allocation algorithms are implemented and run in the scheduler, and the layer ordering algorithm can be implemented in the kubelet of each node.
5. The edge computing container allocation and layer download ordering architecture and method thereof according to claim 1, wherein creating a container comprises the following steps:
(1) the user initiates a pod create request by calling the API through the interface,
(2) the scheduler creates a pod on the selected node and returns the result according to the output of the layer grouping and container warehouse allocation algorithm,
(3) the kubel for each node manages the mirror layer download according to the results of the layer ordering algorithm and creates a container store in a given pod.
7. The edge computing container allocation and layer download ordering architecture and method thereof according to claim 2, wherein said image-layer download ordering schedules the container layers within the same set container by container: as j ← argmin_j p_j in the greedy image-layer ordering algorithm shows, each round selects the container with the smallest remaining size p_j and downloads its remaining image layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110808605.3A CN113641448A (en) | 2021-07-16 | 2021-07-16 | Edge computing container allocation and layer download ordering architecture and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113641448A true CN113641448A (en) | 2021-11-12 |
Family
ID=78417622
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113641448A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114194770A (en) * | 2021-12-27 | 2022-03-18 | 锐视智能(济南)技术研究中心(有限合伙) | Feeding device for automatically orienting and sequencing rectangular bottles |
CN117850980A (en) * | 2023-12-25 | 2024-04-09 | 慧之安信息技术股份有限公司 | Container mirror image construction method and system based on cloud edge cooperation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9442760B2 (en) | Job scheduling using expected server performance information | |
CN107273185B (en) | Load balancing control method based on virtual machine | |
CN109788046B (en) | Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm | |
Abad et al. | Package-aware scheduling of faas functions | |
CN114138486A (en) | Containerized micro-service arranging method, system and medium for cloud edge heterogeneous environment | |
CN107291536B (en) | Application task flow scheduling method in cloud computing environment | |
CN111381950A (en) | Task scheduling method and system based on multiple copies for edge computing environment | |
CN110058924A (en) | A kind of container dispatching method of multiple-objection optimization | |
CN110166507B (en) | Multi-resource scheduling method and device | |
CN112416585A (en) | GPU resource management and intelligent scheduling method for deep learning | |
CN108182109A (en) | Workflow schedule and data distributing method under a kind of cloud environment | |
CN113641448A (en) | Edge computing container allocation and layer download ordering architecture and method thereof | |
Fu et al. | An optimal locality-aware task scheduling algorithm based on bipartite graph modelling for spark applications | |
CN115022311B (en) | Method and device for selecting micro-service container instance | |
Ijaz et al. | MOPT: list-based heuristic for scheduling workflows in cloud environment | |
CN110221920A (en) | Dispositions method, device, storage medium and system | |
CN112860383A (en) | Cluster resource scheduling method, device, equipment and storage medium | |
CN111159859B (en) | Cloud container cluster deployment method and system | |
CN107070965A (en) | A kind of Multi-workflow resource provision method virtualized under container resource | |
Garg et al. | Heuristic and reinforcement learning algorithms for dynamic service placement on mobile edge cloud | |
CN109062683B (en) | Method, apparatus and computer readable storage medium for host resource allocation | |
CN115361349B (en) | Resource using method and device | |
Filippini et al. | SPACE4AI-R: a runtime management tool for AI applications component placement and resource scaling in computing continua | |
Meddeber et al. | Tasks assignment for Grid computing | |
CN116010051A (en) | Federal learning multitasking scheduling method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||