CN113114715A - Scheduling method based on edge computing and edge device cluster - Google Patents

Scheduling method based on edge computing and edge device cluster

Info

Publication number
CN113114715A
CN113114715A (application CN202110205681.5A; granted publication CN113114715B)
Authority
CN
China
Prior art keywords
container
edge device
scheduling
edge
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110205681.5A
Other languages
Chinese (zh)
Other versions
CN113114715B (en)
Inventor
朱少武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN202110205681.5A priority Critical patent/CN113114715B/en
Publication of CN113114715A publication Critical patent/CN113114715A/en
Application granted granted Critical
Publication of CN113114715B publication Critical patent/CN113114715B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a scheduling method based on edge computing and an edge device cluster. The method comprises the following steps: receiving an orchestration task sent by a control center, and creating a container matched with the orchestration task; detecting device information of each edge device, and screening out a target edge device adapted to the container from the edge devices on the basis of the device information; and scheduling the container to the target edge device, and binding the container to the target edge device so as to establish a mapping relationship between the container and a corresponding cache in the target edge device. The technical solution provided by the application can improve the deployment efficiency of containers in edge devices.

Description

Scheduling method based on edge computing and edge device cluster
Technical Field
The invention relates to the technical field of the internet, and in particular to a scheduling method based on edge computing and an edge device cluster.
Background
In current edge computing scenarios, in order to maximize the utilization of edge device resources, container virtualization technology may be adopted to orchestrate a container meeting the user's requirements onto an edge device, so that services are provided through the container deployed in the edge device.
At present, the container orchestration and deployment process can be implemented in a Content Delivery Network (CDN) through a system architecture based on Kubernetes and KubeEdge. The conventional container orchestration and deployment process usually only considers whether the CPU resources and memory resources of an edge device are sufficient. However, in practical applications, deploying containers according to CPU and memory resources alone neither fully utilizes the resources of the edge device nor makes the container well compatible with it.
Disclosure of Invention
The application aims to provide a scheduling method based on edge computing and an edge device cluster, which can improve the deployment efficiency of a container in edge devices.
In order to achieve the above object, one aspect of the present application provides a scheduling method based on edge computing. The method is applied to an edge device cluster that manages a plurality of edge devices, and comprises: receiving an orchestration task sent by a control center, and creating a container matched with the orchestration task; detecting device information of each edge device, and screening out a target edge device adapted to the container from the edge devices on the basis of the device information; and scheduling the container to the target edge device, and binding the container to the target edge device so as to establish a mapping relationship between the container and a corresponding cache in the target edge device.
In order to achieve the above object, another aspect of the present application further provides an edge device cluster. The edge device cluster manages a plurality of edge devices, and further includes a service interface, a storage unit, a controller, and a scheduler, where: the service interface is used for receiving the orchestration tasks sent by the control center and writing the orchestration tasks into the storage unit; the controller is used for monitoring the orchestration tasks written into the storage unit and creating containers matched with the orchestration tasks; and the scheduler is configured to detect device information of each of the edge devices, screen out a target edge device adapted to the container from the edge devices based on the device information, and schedule the container to the target edge device so as to bind the container to the target edge device.
As can be seen from the above, in the technical solutions provided in one or more embodiments of the present application, the control center may receive a user's container deployment requirement and generate a corresponding orchestration task, which may be issued to the edge device cluster. The edge device cluster may create one or more containers that meet the user's expectations for the received orchestration task. Then, by analyzing the device information of each edge device, a target edge device adapted to the container can be screened out. After the container is scheduled to the target edge device, the container may be bound to the target edge device, and a mapping relationship between the container and the corresponding cache may be established. Since services in a CDN are usually cache-type services, the cache bound to a container may be considered first when the container is scheduled, so that the cache in the edge device can be reused. This not only improves the bandwidth utilization of the edge device, but also improves the deployment and service efficiency of the container, making the deployed container more compatible with CDN services.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a scheduling system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a scheduling method based on edge computing according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an edge device cluster in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the detailed description and the accompanying drawings. It should be apparent that the described embodiments are only some, and not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without inventive work based on the embodiments in the present application fall within the scope of protection of the present application.
The present application provides a scheduling method based on edge computing, which can be applied to the system architecture shown in FIG. 1. The system architecture can comprise a management platform, a control center, and an edge device cluster. The edge device cluster may be configured to manage a plurality of edge devices. A system image provided by a CDN vendor may be installed in an edge device, and after the system image runs, a client may be configured in the edge device. Through the client, the edge device can be automatically added to the edge device cluster matched with its current region and operator. In addition, the edge device can report its own device information to the control center through the client, and the control center can issue the device information to the corresponding edge device cluster, where it can serve as a basis for the subsequent container scheduling process.
The management platform may be a user-facing platform, and the user may select the region information and operator information for container deployment by logging in to the management platform. After receiving the user's selection, the management platform can exchange data with the control center to screen out the edge device cluster matching the region information and operator information entered by the user. Subsequently, a container for the user can be deployed in an edge device of the edge device cluster. After the edge device cluster is determined, the management platform may further let the user confirm the service parameter configuration information of the container, where the service parameter configuration information is used to define the specification of the container. The service parameter configuration information may include, for example, the CPU and memory resources required by the container, the container deployment region, the container's environment variables, the startup parameters of the image, health monitoring parameters, and so on. After being confirmed by the user, the service parameter configuration information can be transmitted to the control center by the management platform.
The control center can generate an orchestration task corresponding to the container based on the service parameter configuration information sent by the management platform, and issue the orchestration task to the corresponding edge device cluster, so as to deploy the container in the edge devices of that cluster.
In practical applications, a newly enabled edge device may be added to the corresponding edge device cluster according to the flow illustrated in FIG. 1. Specifically, the edge device may automatically initiate a registration request to the management platform through the installed client and report its own device information. By analyzing the device information, the management platform can determine the region of the edge device and the operators it supports, query the edge device cluster matching that region and operator, and send a device admission request to the control center of the edge device cluster. After confirming that the device admission request is legitimate, the control center can generate an orchestration task for the edge device and issue the orchestration task to the corresponding edge device cluster. After executing the orchestration task, the edge device cluster may feed back a completion notification to the control center. The control center may then feed back an admission result to the management platform for the device admission request, where the result may include the cluster identifier of the edge device cluster accepting the edge device. The management platform may provide the cluster identifier to the edge device. Subsequently, the edge device may establish a websocket connection with the edge device cluster pointed to by the cluster identifier, thereby joining the edge device cluster. In this way, the onboarding of the newly enabled edge device can be completed.
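The onboarding flow above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the cluster records, the `admit` callback, and all field names are hypothetical stand-ins for the management platform, control center, and cluster roles described in the text.

```python
# Hedged sketch of onboarding a newly enabled edge device: match region and
# operator to a cluster, have the control center admit the device, and hand
# the cluster identifier back to the device. All names are illustrative.

def onboard(device_info, clusters, control_center):
    # 1. Management platform: find the cluster matching region + operator.
    cluster = next(
        c for c in clusters
        if c["region"] == device_info["region"]
        and c["operator"] == device_info["operator"])
    # 2. Control center: validate the device admission request; on success
    #    the cluster takes over the device and its identifier is returned
    #    (the device would then open a websocket connection to it).
    if control_center["admit"](device_info):
        cluster["devices"].append(device_info["name"])
        return cluster["id"]
    return None

clusters = [
    {"id": "cl-east-isp1", "region": "east", "operator": "isp1",
     "devices": []},
]
control_center = {"admit": lambda info: True}

cluster_id = onboard(
    {"name": "edge-01", "region": "east", "operator": "isp1"},
    clusters, control_center)
```

In the real system the admission check and the orchestration task it triggers are asynchronous; the synchronous call here only illustrates the ordering of the steps.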
In the prior art, the bandwidth of a CDN node is essentially purchased from an operator, and the purchased edge node bandwidth resources have a corresponding upper limit and unit price. In existing Kubernetes, default scheduling basically considers only whether the CPU and memory have reached their upper limits, and rarely considers bandwidth cost, so bandwidth resources are not maximally utilized. That is, in the prior art, if the CPU and memory of a certain edge node have not reached their upper limits, containers are preferentially scheduled to that node. However, during container scheduling, the bandwidth of that edge node is continuously consumed while the bandwidth of other edge nodes may sit idle. Generally, the more bandwidth is occupied on an edge node, the higher its bandwidth unit price may be; concentrating bandwidth consumption on one node while other nodes are idle therefore increases the overall bandwidth cost. In some scenarios, the quality of service may also suffer if the scheduled edge node's bandwidth exceeds the line bandwidth. In view of this, in the present application, the bandwidth resources of each edge node may be taken into account during container scheduling, so as to use the bandwidth of all edge nodes as evenly as possible and thereby reduce the overall bandwidth cost. In addition, Kubernetes scheduling does not consider disk input/output (IO) resources; IO resources usually differ between edge devices, most CDN services are cache-type services, and IO resources affect the service bandwidth to a great extent.
In addition, Kubernetes scheduling is mostly stateless, but the CDN mainly provides cache-type services, so the historical cache state needs to be considered during scheduling in order to reuse the cache as much as possible, thereby improving bandwidth utilization efficiency.
In order to address the above problems, please refer to FIG. 2 and FIG. 3. An edge-computing-based scheduling method provided by an embodiment of the present application may be applied to the above edge device cluster, which may be a Kubernetes (k8s)-based cluster, and the method may include the following steps.
S1: and receiving the programming tasks sent by the control center, and creating containers matched with the programming tasks.
As shown in FIG. 2, the edge device cluster may include a service interface (apiserver), a storage unit (etcd), a controller (controller manager), a scheduler (scheduler), and a scheduling interface (cloudcore). The orchestration task issued by the control center (pontus) can be received by the service interface, which writes the received orchestration task into the storage unit. The controller may watch the storage unit, and when a newly written orchestration task is detected there, the controller may parse the orchestration task and create one or more containers matching it. The scheduler may manage the containers in the edge device cluster; when a newly created container is detected, since that container has not yet been scheduled, the scheduler may detect the device information of each edge device in the current edge device cluster and screen out a target edge device adapted to the container based on the detected device information.
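The apiserver/etcd/controller/scheduler hand-off described above can be sketched in miniature. This is a hedged illustration only, not the patent's code: the `Store` class stands in for the storage unit, and all function and field names are hypothetical.

```python
# Illustrative sketch of step S1: the service interface persists an
# orchestration task, the controller watches the store and creates the
# matching containers, and the scheduler sees them as pending.

class Store:
    """Stands in for the storage unit (etcd)."""
    def __init__(self):
        self.tasks = []        # orchestration tasks written by the apiserver
        self.containers = []   # containers created by the controller

def api_server_write(store, task):
    # Service interface: persist an orchestration task from the control center.
    store.tasks.append(task)

def controller_sync(store):
    # Controller: for each unprocessed task, create the requested containers
    # with no node assigned yet (i.e. not scheduled).
    for task in store.tasks:
        if not task.get("processed"):
            for i in range(task["replicas"]):
                store.containers.append(
                    {"name": f'{task["name"]}-{i}', "node": None})
            task["processed"] = True

def scheduler_pending(store):
    # Scheduler: containers without an assigned node still need scheduling.
    return [c for c in store.containers if c["node"] is None]

store = Store()
api_server_write(store, {"name": "cache-svc", "replicas": 2})
controller_sync(store)
pending = scheduler_pending(store)
```

In the real cluster the controller and scheduler react to watch events rather than polling; the explicit `controller_sync` call here only makes the ordering of the hand-off visible.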
S3: and detecting equipment information of each edge device, and screening out target edge devices which are suitable for the container from each edge device on the basis of the equipment information.
In practical applications, the device information of an edge device may include various parameters. For example, it may include the CPU and memory resources of the edge device, the device name, port information, hard disk information (hard disk volume information, available domain labels, etc.), taint information (taints) of the edge device, and so on. In addition, in the present application, in order to make the container better compatible with the edge device, the dial-up lines of the edge device, its bandwidth resources, and its device type may also be added to the device information as parameters to be considered. Subsequently, according to the actual scheduling requirement, one or more parameters in the device information can be selected for analysis, so as to determine the edge device most compatible with the container.
In one embodiment, the container is created based on the service parameter configuration information set by the user, so this information accurately describes the container's requirements. In order to find, among many edge devices, the one best adapted to the container, the service parameter configuration information of the container may be compared against the device information of each edge device, so as to determine the edge devices that meet it. In addition to the parameters described above, the service parameter configuration information of the container may include one or more of the following: the CPU and memory resources required by the container; the name of the host specified by the container; the host port the container applies for; the scheduling node specified by the container; the available domain label corresponding to the container; the tolerance field of the container; the affinity and mutual-exclusion relationships of the container; the amount of bandwidth requested by the container; and the service type corresponding to the container. In practical applications, whether an edge device can be a scheduling node of the container can be checked, parameter by parameter, from the following angles:
PodFitsResources (CPU and memory resources required by the container)
PodFitsResources checks the requests field of the container, in which the minimum CPU and memory resources required by the container are recorded. By comparing these values, it can be judged whether the CPU and memory resources of the edge device are sufficient.
PodFitsHost (name of the host specified by the container)
The parameter may be defined by the container's spec.nodeName. By identifying the parameter, it can be determined whether the name of the edge device matches the name of the host specified by the container.
PodFitsHostPort (host port applied for by the container)
The parameter may be defined by the container's spec.nodePort. By identifying the parameter, it can be determined whether the host port the container applies for conflicts with a port already used by the edge device.
PodMatchNodeSelector (scheduling node specified by the container)
The container may specify the scheduling node through a nodeSelector or node affinity. By identifying the parameter, it can be determined whether the edge device is the scheduling node specified by the container.
NoDiskConflict (hard disk conflict detection)
By identifying the parameter, it can be determined whether there is a conflict between the persistent volumes (Volumes) that multiple containers declare to mount.
MaxPDVolumeCountPredicate (upper-limit detection of persistent volumes)
By identifying the parameter, it can be determined whether a certain type of persistent Volume on an edge device has exceeded a certain number; if so, containers using this type of persistent Volume can no longer be scheduled to that edge device.
VolumeZonePredicate (available domain label of the container)
By identifying the parameter, it can be determined whether the available domain label of the Volume defined by the container matches the available domain label of the edge device.
PodToleratesNodeTaints (tolerance field of the container)
By identifying the parameter, it can be determined whether the tolerance field of the container matches the taints of the edge device. A container can be scheduled to an edge device only if its tolerance field indicates that it can tolerate the taints on that edge device.
NodeMemoryPressurePredicate (memory pressure detection)
By identifying the parameter, it can be determined whether the memory of the current edge device is sufficient; if not, the container cannot be scheduled to that edge device.
PodAffinityPredicate and PodAntiAffinityPredicate (affinity and mutual-exclusion relationships of the container)
By identifying the parameter, it can be determined whether the edge device contains other containers that have an affinity or mutual-exclusion relationship with the container.
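The predicate-style filtering above can be sketched as a chain of boolean checks over candidate devices. This is an illustrative sketch, not Kubernetes' or the patent's actual predicate code: it models three of the checks (resources, host port, taint toleration), and all field names are hypothetical.

```python
# Hedged sketch of predicate filtering: a device is a feasible scheduling
# node only if it passes every predicate for the container.

def pod_fits_resources(container, device):
    # Mirrors PodFitsResources: compare the container's requests against
    # the device's free CPU and memory.
    req = container["requests"]
    return (device["free_cpu"] >= req["cpu"]
            and device["free_mem"] >= req["mem"])

def pod_fits_host_port(container, device):
    # Mirrors PodFitsHostPort: the applied-for host port must not clash
    # with a port the device already uses.
    port = container.get("host_port")
    return port is None or port not in device["used_ports"]

def pod_tolerates_taints(container, device):
    # Mirrors PodToleratesNodeTaints: every taint must be tolerated.
    return all(t in container.get("tolerations", []) for t in device["taints"])

PREDICATES = [pod_fits_resources, pod_fits_host_port, pod_tolerates_taints]

def feasible_devices(container, devices):
    return [d for d in devices if all(p(container, d) for p in PREDICATES)]

container = {"requests": {"cpu": 2, "mem": 4}, "host_port": 8080,
             "tolerations": ["edge"]}
devices = [
    {"name": "dev-a", "free_cpu": 4, "free_mem": 8, "used_ports": set(),
     "taints": ["edge"]},
    {"name": "dev-b", "free_cpu": 1, "free_mem": 8, "used_ports": set(),
     "taints": []},                      # fails the resource check
    {"name": "dev-c", "free_cpu": 4, "free_mem": 8, "used_ports": {8080},
     "taints": []},                      # fails the host-port check
]
candidates = feasible_devices(container, devices)
```

Any device failing a single predicate is excluded before the scoring stage described later; only the surviving candidates are ranked.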
In addition to the above checks, the present application may also examine the dial-up lines, bandwidth resources, and device type of an edge device, specifically:
1. line dialing detection
According to the line-exclusive algorithm, containers of the same service type need to be distributed across different lines; if a line is already occupied by such a container, other containers cannot be scheduled to that line.
2. Bandwidth resource detection
According to the amount of bandwidth applied for by the container, bandwidth resources can be allocated in a stacked manner across the multiple dial-up lines of the same edge device. Specifically, stacking means that the bandwidth of a second line is not allocated until the bandwidth of the first line has been completely allocated. Once allocated, the bandwidth becomes pre-occupied by the container, and other containers cannot preempt it unless the container is deleted by the control center.
3. Device type detection
Device types may include architecture types, network types, storage types, etc. The supported service types differ according to the device type, and containers generally correspond to service types. Therefore, containers of different service types can be deployed differently according to the device type.
As can be seen from the above, when determining an edge device that meets the service parameter configuration information, the line information of the edge device may be checked to ensure that containers of the same service type are deployed on different lines; if the current line is occupied by one such container, other containers cannot be scheduled onto it. In addition, after the target edge device adapted to the container is determined, the corresponding pre-occupied bandwidth may be allocated to the container on the first line of the target edge device according to the amount of bandwidth the container applied for; this pre-occupied bandwidth cannot be taken by other containers, and the bandwidth of the second line is allocated only after the bandwidth of the first line has been completely allocated.
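The stacked (fill-first) bandwidth allocation just described can be sketched as follows. This is a hedged illustration under assumed data structures, not the patent's algorithm: `capacity` and `reserved` are hypothetical names for a line's bandwidth limit and its per-container pre-occupied reservations.

```python
# Illustrative sketch of stacked bandwidth allocation: within one device,
# a container's requested bandwidth is reserved on the first dial-up line
# with enough room before any later line is touched, and the reservation
# cannot be preempted by other containers.

def allocate_bandwidth(lines, container_id, requested):
    """lines: list of {"capacity": int, "reserved": {container_id: amount}}.
    Returns the line the reservation landed on, or None if no line fits."""
    for line in lines:
        used = sum(line["reserved"].values())
        if line["capacity"] - used >= requested:
            line["reserved"][container_id] = requested  # pre-occupied
            return line
    return None  # device cannot host this container's bandwidth request

def release_bandwidth(lines, container_id):
    # Only deletion of the container (by the control center) frees the
    # pre-occupied bandwidth.
    for line in lines:
        line["reserved"].pop(container_id, None)

lines = [{"capacity": 100, "reserved": {}},
         {"capacity": 100, "reserved": {}}]
allocate_bandwidth(lines, "c1", 60)               # lands on line 1
allocate_bandwidth(lines, "c2", 30)               # stacked onto line 1
line_for_c3 = allocate_bandwidth(lines, "c3", 50)  # spills to line 2
```

Note how "c3" only reaches the second line because the first line's remaining 10 units cannot satisfy its request, matching the fill-first rule in the text.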
In one embodiment, since many edge devices may meet the service parameter configuration information, a score may be calculated for each of these edge devices based on preset preference rules, and the edge device with the highest score may be taken as the target edge device adapted to the container. In practical applications, the preset preference rules may be implemented in various ways:
1. Calculate the proportion of idle CPU resources and idle memory resources in the edge device, then score based on the idle-resource proportion: the more idle resources, the higher the score.
2. Identify the CPU, memory, and hard disk resource utilization ratios in the edge device, then calculate the variance of these ratios and score based on the variance: the larger the variance, the lower the score.
Furthermore, when scoring in the above manner, the following rules may also be followed:
the greater the number of fields that satisfy the affinity rules of the edge device, the higher the score of the edge device;
the greater the number of fields that satisfy the taint tolerance rules, the higher the score of the edge device;
the greater the number of fields that satisfy the affinity rules of the container to be scheduled, the higher the score of the edge device;
if the image needed by the container to be scheduled is large and already exists on a certain edge device, the score of that edge device is higher;
each dial-up line of the edge device is scored based on the line type specified by the container to be scheduled: the more lines of a matching type, the higher the score.
In the above manner, a corresponding score value can be calculated for each edge device, and the edge device with the highest score value can be used as the target edge device adapted to the container.
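The two numbered preference rules above can be combined into a single scoring function as sketched below. This is an illustrative sketch only: the equal weighting of the idle-ratio and variance terms is an assumption, not something the text specifies, and all field names are hypothetical.

```python
# Hedged sketch of device scoring: higher average idle CPU/memory raises
# the score (rule 1), while higher variance across CPU/memory/disk
# utilization lowers it (rule 2), favoring balanced resource consumption.

def score_device(device):
    cpu_used = device["cpu_used"]    # utilization ratios in [0, 1]
    mem_used = device["mem_used"]
    disk_used = device["disk_used"]

    # Rule 1: average idle ratio of CPU and memory -> higher is better.
    idle = ((1 - cpu_used) + (1 - mem_used)) / 2

    # Rule 2: variance of the three utilization ratios -> lower is better.
    ratios = [cpu_used, mem_used, disk_used]
    mean = sum(ratios) / len(ratios)
    variance = sum((r - mean) ** 2 for r in ratios) / len(ratios)

    return idle - variance  # assumed combination; weights are illustrative

def pick_target(devices):
    # The highest-scoring device becomes the target edge device.
    return max(devices, key=score_device)

devices = [
    {"name": "dev-a", "cpu_used": 0.2, "mem_used": 0.2, "disk_used": 0.2},
    {"name": "dev-b", "cpu_used": 0.1, "mem_used": 0.9, "disk_used": 0.2},
]
target = pick_target(devices)
```

Here "dev-a" wins despite "dev-b" having a freer CPU, because "dev-b"'s lopsided memory usage inflates its variance penalty, which is exactly the balancing behavior rule 2 aims for.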
S5: and dispatching the container to the target edge device, and binding the container and the target edge device to establish a mapping relation between the container and a corresponding cache in the target edge device.
In this embodiment, after the scheduler screens out the target edge device, the container may be scheduled to the target edge device and bound to it. After obtaining the scheduling result, the service interface may update the scheduling result of the container in the storage unit, so that the resources used by the container are deducted in the edge device cluster, thereby avoiding repeated allocation of the same resources. In addition, the scheduling interface (cloudcore) may watch the scheduling result in the storage unit; after detecting that the scheduling result has been updated, it may parse the result and issue a container scheduling instruction to the target edge device the result points to. After receiving the container scheduling instruction, the target edge device may deploy the corresponding container locally and establish the mapping relationship between the container and the corresponding cache.
After the container deployment is completed, the target edge device may report the deployment result to the scheduling interface through a reporting interface (edgecore). After receiving the deployment result reported by the target edge device, the scheduling interface may update the deployment state of the container in the storage unit by calling the service interface (apiserver) based on the deployment result. In this way, the container scheduling and deployment process can be completed.
In the present application, it is considered that most services in a CDN are cache-type services; when scheduling a container, in order to improve bandwidth utilization and service quality, the historical cache state should be taken into account as far as possible, and existing caches should preferably be reused. For this purpose, once the container is successfully scheduled to a certain edge device, the container establishes a mapping relationship with that edge device and with its storage (hard disk and cache resources) and bandwidth resources. Subsequently, unless the mapping is released, the container will only run on and be scheduled to the edge device defined by the mapping, and will use the line, bandwidth, and storage resources defined by the mapping.
In one embodiment, if the communication between the edge device cluster and the control center is interrupted, after the communication is resumed, the container may still be scheduled to the target edge device according to the established mapping relationship, so that the container can continue to use the cache defined by the mapping relationship.
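The binding and re-scheduling behavior described in the two paragraphs above can be pictured as a small persisted record that pins the container to one device and its granted resources. The field names here are illustrative assumptions, not the application's actual schema.

```python
# Illustrative record of the container<->device binding described above.
# A real system would persist this in the cluster store; field names assumed.

def bind_container(container_id, device_id, line, bandwidth_mbps, disk):
    """Build the mapping that pins a container to one edge device and to
    the line, bandwidth, and storage it was granted at first scheduling."""
    return {
        "container": container_id,
        "device": device_id,           # the container only runs/schedules here
        "line": line,                  # dial-up line granted to the container
        "bandwidth_mbps": bandwidth_mbps,
        "disk": disk,                  # hard disk holding the container's cache
    }

def reschedule(binding):
    """After a control-plane interruption, the container goes back to the
    same device, so the existing cache can continue to be used."""
    return binding["device"]
```

Because the record survives a communication interruption between the cluster and the control center, re-scheduling after recovery is a lookup rather than a fresh placement decision.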
In the present application, when a container is bound to a target edge device, a line may be allocated to the container according to the dial-up lines and bandwidth resources of the target edge device, and part of the bandwidth on that line may be pre-occupied. In addition, according to a preset algorithm, an optimal hard disk may be selected for the container in the target edge device, the data of the container may be written to the optimal hard disk, and a binding relationship between the container and the optimal hard disk may be established, wherein the binding relationship may be persisted to the edge device cluster. Thus, when the container is created, the edge device cluster can bind the container's hard disk and network resources. Meanwhile, once the first scheduling succeeds, the binding relationship between the container and the edge device is persisted, ensuring that the container's state data is not lost, so that cache-type services can be better provided.
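As a sketch of the line allocation just described, and of the fill-the-first-line-before-the-second behavior the claims mention, bandwidth could be pre-occupied line by line. The data layout is assumed for illustration; the application does not fix one.

```python
# Sketch of line/bandwidth pre-allocation for a container, following the
# behavior described in the application: earlier lines are filled before
# later ones are used. Data layout is an illustrative assumption.

def allocate_line(lines, requested_mbps):
    """Reserve `requested_mbps` for a container on the first line with
    enough free bandwidth; return the line name, or None if no line fits.
    The reserved share is pre-occupied and unavailable to other containers."""
    for line in lines:
        free = line["capacity"] - line["reserved"]
        if free >= requested_mbps:
            line["reserved"] += requested_mbps
            return line["name"]
    return None
```

Under this scheme a second line only starts receiving containers once the first line can no longer satisfy a request, which matches the pre-occupation order the description calls for.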
As shown in fig. 3, an embodiment of the present application further provides an edge device cluster, where the edge device cluster includes a plurality of managed edge devices, and further includes a service interface, a storage unit, a controller, and a scheduler, wherein:
the service interface is used for receiving the orchestration task sent by the control center and writing the orchestration task into the storage unit;
the controller is used for monitoring the orchestration task written into the storage unit and creating a container matched with the orchestration task;
the scheduler is configured to detect device information of each of the edge devices, and screen out a target edge device adapted to the container from the edge devices based on the device information; and to schedule the container to the target edge device, so as to bind the container to the target edge device.
In one embodiment, the service interface is further configured to update the scheduling result of the container in the storage unit after the container is scheduled to the target edge device, so as to deduct the resource used by the container from the edge device cluster.
In an embodiment, the edge device cluster further includes a scheduling interface, where the scheduling interface is configured to monitor an updated scheduling result in the storage unit, and issue a container orchestration instruction to a target edge device to which the scheduling result points, so as to deploy the container in the target edge device, and establish a mapping relationship between the container and a corresponding cache in the target edge device.
In an embodiment, the scheduling interface is further configured to receive a deployment result reported by the target edge device, and call the service interface based on the deployment result to update the deployment state of the container in the storage unit.
As can be seen from the above, in the technical solutions provided in one or more embodiments of the present application, the control center may receive a container deployment requirement of a user and generate a corresponding orchestration task, which may then be issued to the edge device cluster. For the received orchestration task, the edge device cluster may create one or more containers that meet the user's expectations. Then, by analyzing the device information of each edge device, a target edge device adapted to the container can be screened out. After the container is scheduled to the target edge device, the container may be bound to the target edge device, and a mapping relationship between the container and the corresponding cache may be established. Since services in a CDN are usually cache-type services, the cache bound to a container can be preferentially considered when the container is scheduled, so that the cache in the edge device can be reused. This not only improves the bandwidth utilization of the edge device, but also improves the deployment efficiency and service efficiency of the container, making the deployed containers better suited to CDN services.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the embodiments of the edge device cluster, reference may be made to the introduction of the method embodiments described above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, hard disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an embodiment of the present application, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A scheduling method based on edge computing, applied to an edge device cluster, wherein the edge device cluster comprises a plurality of managed edge devices, and the method comprises the following steps:
receiving an orchestration task sent by a control center, and creating a container matched with the orchestration task;
detecting device information of each edge device, and screening out a target edge device adapted to the container from the edge devices based on the device information; and
scheduling the container to the target edge device, and binding the container to the target edge device to establish a mapping relationship between the container and a corresponding cache in the target edge device.
2. The method of claim 1, wherein the edge device cluster is matched to user-entered regional information and operator information.
3. The method of claim 1, wherein the device information is used to characterize at least one of: a dial-up line, bandwidth resources, and a device type of the edge device;
screening out a target edge device from each of the edge devices that is adapted to the container comprises:
identifying service parameter configuration information of the container, and comparing the device information of each edge device with the service parameter configuration information to determine edge devices that conform to the service parameter configuration information;
and calculating respective scoring values for the determined edge devices based on a preset preference rule, and taking the edge device with the highest scoring value as a target edge device adapted to the container.
4. The method of claim 3, wherein the service parameter configuration information is used to characterize at least one of:
CPU resources and memory resources required by the container; a host name specified by the container; a host port applied for by the container; a scheduling node specified by the container; an available-zone label corresponding to the container; a tolerance field of the container; affinity and mutual-exclusion relationships of the container; the amount of bandwidth applied for by the container; and a service type corresponding to the container.
5. The method according to claim 3 or 4, wherein when determining an edge device that conforms to the service parameter configuration information, the method further comprises:
detecting line information of the edge device to ensure that containers of the same service type are deployed on different lines, wherein if a current line is already occupied by one container, other containers cannot be scheduled to the current line.
6. The method of claim 1, further comprising:
allocating, according to the amount of bandwidth applied for by the container, corresponding pre-occupied bandwidth for the container on a first line of the target edge device, wherein the pre-occupied bandwidth cannot be occupied by other containers, and after the bandwidth of the first line is completely allocated, allocating bandwidth of a second line.
7. The method of claim 1, further comprising:
screening out an optimal hard disk for the container in the target edge device, writing the data of the container into the optimal hard disk, and establishing a binding relationship between the container and the optimal hard disk.
8. The method of claim 1, further comprising:
and if the communication between the edge device cluster and the control center is interrupted, after the communication is recovered, scheduling the container to the target edge device according to the established mapping relation, so that the container continues to use the cache defined by the mapping relation.
9. An edge device cluster, wherein the edge device cluster includes a plurality of managed edge devices, and further includes a service interface, a storage unit, a controller, and a scheduler, wherein:
the service interface is used for receiving the orchestration task sent by the control center and writing the orchestration task into the storage unit;
the controller is used for monitoring the orchestration task written into the storage unit and creating a container matched with the orchestration task;
the scheduler is configured to detect device information of each of the edge devices, and screen out a target edge device adapted to the container from the edge devices based on the device information; and to schedule the container to the target edge device, so as to bind the container to the target edge device.
10. The edge device cluster of claim 9, wherein the service interface is further configured to update the scheduling result of the container in the storage unit after the container is scheduled to the target edge device, so as to deduct resources used by the container from the edge device cluster.
11. The edge device cluster of claim 10, further comprising a scheduling interface, wherein the scheduling interface is configured to monitor the updated scheduling result in the storage unit, and issue a container orchestration instruction to a target edge device pointed by the scheduling result, so as to deploy the container in the target edge device, and establish a mapping relationship between the container and a corresponding cache in the target edge device.
12. The edge device cluster of claim 11, wherein the scheduling interface is further configured to receive a deployment result reported by the target edge device, and invoke the service interface based on the deployment result to update the deployment state of the container in the storage unit.
CN202110205681.5A 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster Active CN113114715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110205681.5A CN113114715B (en) 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110205681.5A CN113114715B (en) 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster

Publications (2)

Publication Number Publication Date
CN113114715A true CN113114715A (en) 2021-07-13
CN113114715B CN113114715B (en) 2024-01-23

Family

ID=76709378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110205681.5A Active CN113114715B (en) 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster

Country Status (1)

Country Link
CN (1) CN113114715B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113783797A (en) * 2021-09-13 2021-12-10 京东科技信息技术有限公司 Network flow control method, device, equipment and storage medium of cloud native container
CN113934545A (en) * 2021-12-17 2022-01-14 飞诺门阵(北京)科技有限公司 Video data scheduling method, system, electronic equipment and readable medium
CN114124948A (en) * 2021-09-19 2022-03-01 济南浪潮数据技术有限公司 High-availability method, device, equipment and readable medium for cloud component
CN114461382A (en) * 2021-12-27 2022-05-10 天翼云科技有限公司 Flexibly configurable computing power scheduling implementation method and device and storage medium
WO2023193609A1 (en) * 2022-04-06 2023-10-12 International Business Machines Corporation Selective privileged container augmentation
CN117850980A (en) * 2023-12-25 2024-04-09 慧之安信息技术股份有限公司 Container mirror image construction method and system based on cloud edge cooperation
WO2024104395A1 (en) * 2022-11-15 2024-05-23 杭州阿里云飞天信息技术有限公司 Communication service system, method for implementing communication service, device and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700961A (en) * 2016-02-29 2016-06-22 华为技术有限公司 Business container creation method and device
CN105933137A (en) * 2015-12-21 2016-09-07 中国银联股份有限公司 Resource management method, device and system
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 A kind of container dispositions method and framework based on openstack
CN109067890A (en) * 2018-08-20 2018-12-21 广东电网有限责任公司 A kind of CDN node edge calculations system based on docker container
CN109634522A (en) * 2018-12-10 2019-04-16 北京百悟科技有限公司 A kind of method, apparatus and computer storage medium of resource management
CN111381936A (en) * 2020-03-23 2020-07-07 中山大学 Method and system for allocating service container resources under distributed cloud system-cloud cluster architecture
CN111506391A (en) * 2020-03-31 2020-08-07 新华三大数据技术有限公司 Container deployment method and device
CN111522639A (en) * 2020-04-16 2020-08-11 南京邮电大学 Multidimensional resource scheduling method under Kubernetes cluster architecture system
CN111679891A (en) * 2020-08-14 2020-09-18 支付宝(杭州)信息技术有限公司 Container multiplexing method, device, equipment and storage medium
CN111694658A (en) * 2020-04-30 2020-09-22 北京城市网邻信息技术有限公司 CPU resource allocation method, device, electronic equipment and storage medium
CN111831450A (en) * 2020-07-20 2020-10-27 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for allocating server resources
US10841152B1 (en) * 2017-12-18 2020-11-17 Pivotal Software, Inc. On-demand cluster creation and management
CN112214280A (en) * 2020-09-16 2021-01-12 中国科学院计算技术研究所 Power system simulation cloud method and system

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933137A (en) * 2015-12-21 2016-09-07 中国银联股份有限公司 Resource management method, device and system
WO2017148239A1 (en) * 2016-02-29 2017-09-08 华为技术有限公司 Service container creation method and device
CN105700961A (en) * 2016-02-29 2016-06-22 华为技术有限公司 Business container creation method and device
US10841152B1 (en) * 2017-12-18 2020-11-17 Pivotal Software, Inc. On-demand cluster creation and management
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 A kind of container dispositions method and framework based on openstack
CN109067890A (en) * 2018-08-20 2018-12-21 广东电网有限责任公司 A kind of CDN node edge calculations system based on docker container
CN109634522A (en) * 2018-12-10 2019-04-16 北京百悟科技有限公司 A kind of method, apparatus and computer storage medium of resource management
CN111381936A (en) * 2020-03-23 2020-07-07 中山大学 Method and system for allocating service container resources under distributed cloud system-cloud cluster architecture
CN111506391A (en) * 2020-03-31 2020-08-07 新华三大数据技术有限公司 Container deployment method and device
CN111522639A (en) * 2020-04-16 2020-08-11 南京邮电大学 Multidimensional resource scheduling method under Kubernetes cluster architecture system
CN111694658A (en) * 2020-04-30 2020-09-22 北京城市网邻信息技术有限公司 CPU resource allocation method, device, electronic equipment and storage medium
CN111831450A (en) * 2020-07-20 2020-10-27 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for allocating server resources
CN111679891A (en) * 2020-08-14 2020-09-18 支付宝(杭州)信息技术有限公司 Container multiplexing method, device, equipment and storage medium
CN112214280A (en) * 2020-09-16 2021-01-12 中国科学院计算技术研究所 Power system simulation cloud method and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113783797A (en) * 2021-09-13 2021-12-10 京东科技信息技术有限公司 Network flow control method, device, equipment and storage medium of cloud native container
CN113783797B (en) * 2021-09-13 2023-11-07 京东科技信息技术有限公司 Network flow control method, device and equipment of cloud primary container and storage medium
CN114124948A (en) * 2021-09-19 2022-03-01 济南浪潮数据技术有限公司 High-availability method, device, equipment and readable medium for cloud component
CN113934545A (en) * 2021-12-17 2022-01-14 飞诺门阵(北京)科技有限公司 Video data scheduling method, system, electronic equipment and readable medium
CN114461382A (en) * 2021-12-27 2022-05-10 天翼云科技有限公司 Flexibly configurable computing power scheduling implementation method and device and storage medium
WO2023193609A1 (en) * 2022-04-06 2023-10-12 International Business Machines Corporation Selective privileged container augmentation
US11953972B2 (en) 2022-04-06 2024-04-09 International Business Machines Corporation Selective privileged container augmentation
WO2024104395A1 (en) * 2022-11-15 2024-05-23 杭州阿里云飞天信息技术有限公司 Communication service system, method for implementing communication service, device and storage medium
CN117850980A (en) * 2023-12-25 2024-04-09 慧之安信息技术股份有限公司 Container mirror image construction method and system based on cloud edge cooperation

Also Published As

Publication number Publication date
CN113114715B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN113114715B (en) Scheduling method based on edge calculation and edge equipment cluster
US10698712B2 (en) Methods and apparatus to manage virtual machines
US10033816B2 (en) Workflow service using state transfer
US20150186129A1 (en) Method and system for deploying a program module
US20150242200A1 (en) Re-configuration in cloud computing environments
CN112532675A (en) Method, device and medium for establishing network edge computing system
US20180113748A1 (en) Automated configuration of virtual infrastructure
US10733015B2 (en) Prioritizing applications for diagonal scaling in a distributed computing environment
CN112532669A (en) Network edge computing method, device and medium
CN112532758B (en) Method, device and medium for establishing network edge computing system
CN113641311A (en) Method and system for dynamically allocating container storage resources based on local disk
CN113867937A (en) Resource scheduling method and device for cloud computing platform and storage medium
CN111866045A (en) Information processing method and device, computer system and computer readable medium
CN114979286B (en) Access control method, device, equipment and computer storage medium for container service
CN111600771A (en) Network resource detection system and method
WO2023091215A1 (en) Mapping an application signature to designated cloud resources
CN115086166A (en) Computing system, container network configuration method, and storage medium
CN112738181B (en) Method, device and server for cluster external IP access
JP7501983B2 (en) Secure handling of unified message flows in multitenant containers
US10812407B2 (en) Automatic diagonal scaling of workloads in a distributed computing environment
US12113719B2 (en) Method for allocating resources of a network infrastructure
CN110290210B (en) Method and device for automatically allocating different interface flow proportions in interface calling system
CN113010263A (en) Method, system, equipment and storage medium for creating virtual machine in cloud platform
US9628401B2 (en) Software product instance placement
CN115658287A (en) Method, apparatus, medium, and program product for scheduling execution units

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant