CN113114715B - Scheduling method based on edge computing, and edge device cluster - Google Patents


Info

Publication number
CN113114715B
Authority
CN
China
Prior art keywords
container
edge
equipment
edge device
target edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110205681.5A
Other languages
Chinese (zh)
Other versions
CN113114715A (en)
Inventor
朱少武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN202110205681.5A priority Critical patent/CN113114715B/en
Publication of CN113114715A publication Critical patent/CN113114715A/en
Application granted granted Critical
Publication of CN113114715B publication Critical patent/CN113114715B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a scheduling method based on edge computing, and an edge device cluster. The method includes: receiving an orchestration task sent by a control center, and creating a container matching the orchestration task; detecting device information of each edge device and, based on that information, screening out a target edge device suited to the container; and scheduling the container to the target edge device and binding the container with the target edge device, so as to establish a mapping relationship between the container and the corresponding cache on the target edge device. With this technical scheme, the efficiency of deploying containers on edge devices can be improved.

Description

Scheduling method based on edge computing, and edge device cluster
Technical Field
The invention relates to the field of Internet technology, and in particular to a scheduling method based on edge computing and an edge device cluster.
Background
In current edge computing scenarios, to maximize the utilization of edge device resources, container virtualization technology may be employed to orchestrate containers that meet user expectations onto edge devices, thereby providing services through the containers deployed on those devices.
Currently, the above process of container orchestration and deployment can be implemented in a content delivery network (CDN) through a system architecture based on Kubernetes and KubeEdge. The conventional orchestration and deployment process usually considers only whether the CPU and memory resources of an edge device are sufficient. In practice, however, deploying a container based on CPU and memory resources alone cannot fully utilize the resources of the edge device, and the container is not well matched to the edge device.
Disclosure of Invention
The invention aims to provide a scheduling method based on edge computing, and an edge device cluster, which can improve the efficiency of deploying containers on edge devices.
In order to achieve the above object, one aspect of the present application provides a scheduling method based on edge computing. The method is applied to an edge device cluster that manages a plurality of edge devices, and includes: receiving an orchestration task sent by a control center, and creating a container matching the orchestration task; detecting device information of each edge device and, based on that information, screening out a target edge device suited to the container; and scheduling the container to the target edge device and binding the container with the target edge device, so as to establish a mapping relationship between the container and the corresponding cache on the target edge device.
To achieve the above object, another aspect of the present application further provides an edge device cluster. The cluster manages a plurality of edge devices and further includes a service interface, a storage unit, a controller, and a scheduler, where: the service interface is used to receive the orchestration task sent by the control center and write it into the storage unit; the controller is used to monitor orchestration tasks written into the storage unit and create containers matching them; and the scheduler is used to detect the device information of each edge device, screen out a target edge device suited to the container based on that information, and schedule the container to the target edge device so as to bind the container with the target edge device.
From the foregoing it can be seen that, according to the technical solutions provided in one or more embodiments of the present application, a control center may receive a user's container deployment requirements and generate a corresponding orchestration task, which may be issued to an edge device cluster. The edge device cluster may create one or more containers matching the received orchestration task. Then, by analyzing the device information of each edge device, a target edge device suited to the container can be screened out. After the container is scheduled to the target edge device, the container and the device can be bound, and a mapping relationship between the container and the corresponding cache can be established. Since services in a CDN are usually cache-type services, the cache bound to a container can be considered preferentially when scheduling it, so that the cache on the edge device can be reused. This improves the bandwidth utilization of the edge device, improves the deployment and service efficiency of the container, and makes the deployed container better matched to CDN services.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a scheduling system architecture in an embodiment of the present invention;
FIG. 2 is a flow chart of a scheduling method based on edge computation in an embodiment of the invention;
FIG. 3 is a schematic structural diagram of an edge device cluster in an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the specific embodiments of the present application and the corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The application provides a scheduling method based on edge computing, which can be applied to the system architecture shown in FIG. 1. The architecture may include a management platform, a control center, and an edge device cluster. The edge device cluster can manage a plurality of edge devices. A system image provided by the CDN vendor can be installed on an edge device, and after the image is running, a client is configured on the device. Through this client, the edge device can be automatically managed into the edge device cluster matching its region and operator. In addition, the edge device can report its device information to the control center through the client; the control center can forward that information to the corresponding edge device cluster, where it serves as a basis for the subsequent container scheduling process.
The management platform may be a user-facing platform; by logging into it, the user can select the region and operator information for container deployment. After receiving the user's selection, the management platform can, through data interaction with the control center, screen out the edge device cluster matching the region and operator information entered by the user. The user's container will subsequently be deployed on an edge device of that cluster. After the cluster is determined, the management platform may further have the user confirm the container's service parameter configuration information, which defines the container's specification and may include, for example, the CPU and memory resources required by the container, the region where the container is deployed, the container's environment variables, image startup parameters, health monitoring parameters, and so on. After the user confirms the service parameter configuration information, the management platform transmits it to the control center.
Based on the service parameter configuration information sent by the management platform, the control center can generate an orchestration task for the corresponding container and send it to the corresponding edge device cluster, so that the container is deployed on the cluster's edge devices.
In practical applications, a newly enabled edge device may be added to the corresponding edge device cluster according to the flow illustrated in FIG. 1. Specifically, the edge device can automatically initiate a registration request to the management platform through the installed client and report its own device information. The management platform analyzes this information to learn the edge device's region and supported operators, queries for an edge device cluster matching them, and then sends a device management request to the control center of that cluster. After confirming that the device management request is legitimate, the control center can generate an orchestration task for the edge device and send it to the corresponding edge device cluster. After the cluster performs the orchestration task, it feeds back a completion notification to the control center. The control center then returns a management result for the device management request to the management platform, where the result can include the cluster identifier of the edge device cluster that will host the edge device. The management platform provides this cluster identifier to the edge device, which can then establish a WebSocket connection with the cluster identified by it and thereby join the cluster. In this way, the onboarding of the newly enabled edge device is completed.
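The onboarding flow above can be sketched as a short exchange between the management platform and the control center. This is a hypothetical illustration only; all class, method, and field names (ManagementPlatform, handle_manage_request, CLUSTER_INDEX, the region/operator values) are invented for the sketch and not taken from the patent:

```python
class ControlCenter:
    """Stands in for the control center that validates management requests."""
    def __init__(self):
        self.clusters = {}  # cluster_id -> set of managed device names

    def handle_manage_request(self, cluster_id, device):
        # Assume the device management request has been validated as legitimate;
        # the orchestration task then adds the device to the target cluster.
        self.clusters.setdefault(cluster_id, set()).add(device)
        return cluster_id   # the management result carries the cluster identifier


class ManagementPlatform:
    # (region, operator) -> cluster identifier, per the patent's matching step
    CLUSTER_INDEX = {("east", "carrier-a"): "cluster-east-a"}

    def __init__(self, control_center):
        self.control_center = control_center

    def register(self, device, region, operator):
        # Look up the matching cluster, then forward a device management
        # request to that cluster's control center.
        cluster_id = self.CLUSTER_INDEX[(region, operator)]
        return self.control_center.handle_manage_request(cluster_id, device)


center = ControlCenter()
platform = ManagementPlatform(center)
cluster = platform.register("edge-node-1", "east", "carrier-a")
# The device would then open a WebSocket connection to the cluster
# identified by `cluster` to complete onboarding.
print(cluster)  # → cluster-east-a
```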
In the prior art, the bandwidth of CDN nodes is generally purchased from operators, and the purchased edge-node bandwidth has a corresponding upper limit and unit price. Default scheduling in existing Kubernetes basically considers only whether the CPU and memory have reached their upper limits and rarely considers bandwidth cost, so bandwidth resources are not used to the fullest. That is, in the prior art, if the CPU and memory of an edge node have not reached their limits, containers are preferentially scheduled to that node. During container scheduling, however, the bandwidth of that node is continuously consumed while the bandwidth of other nodes may sit idle. Generally, the more bandwidth a node consumes, the higher the corresponding unit price may be, so concentrating bandwidth usage on one edge node while the bandwidth of other nodes sits idle increases the overall bandwidth cost. In some scenarios, if the scheduled node's bandwidth exceeds the line's limit, the quality of service may suffer. In view of this, the present application uses the bandwidth resources of each edge node as a reference in container scheduling, so that the bandwidth of every node is utilized as much as possible and the overall bandwidth cost is reduced. In addition, Kubernetes scheduling does not consider disk Input/Output (IO) resources; IO resources usually differ between edge devices, most CDN services are cache-type services, and IO resources strongly influence the service bandwidth.
In addition, Kubernetes scheduling is mostly stateless, whereas CDN services are mainly cache-type services; the historical cache state needs to be considered during scheduling so that caches are reused as much as possible and bandwidth utilization efficiency is improved.
To address the above problems, and referring to FIGS. 2 and 3, the edge-computing-based scheduling method provided in one embodiment of the present application may be applied to the edge device cluster described above, where the cluster may be a Kubernetes (k8s) based cluster, and the method may include the following steps.
S1: receive the orchestration task sent by the control center, and create a container matching the orchestration task.
As shown in FIG. 2, the edge device cluster may include a service interface (apiserver), a storage unit (etcd), a controller (controller manager), a scheduler (scheduler), and a scheduling interface (cloudcore). The orchestration task issued by the control center (pontus) may be received by the service interface, which writes it into the storage unit. The controller watches the storage unit; when it detects a newly written orchestration task, it can parse the task and create one or more containers matching it. The scheduler manages containers in the edge device cluster: when a newly created, not-yet-scheduled container is detected, the scheduler can detect the device information of each edge device in the current cluster and, based on that information, screen out a target edge device suited to the container.
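The write-then-watch interaction between the service interface, the storage unit, and the controller can be sketched as follows. A minimal, hypothetical illustration: the Store class with its watcher-callback list merely imitates an etcd watch, and all names (Store, controller, replicas) are invented for the sketch:

```python
class Store:
    """Stands in for the storage unit (etcd) with a simple watch mechanism."""
    def __init__(self):
        self.tasks = []
        self.watchers = []          # callbacks notified on each write

    def write(self, task):
        # The service interface writes the orchestration task here.
        self.tasks.append(task)
        for callback in self.watchers:
            callback(task)          # notify listeners, like an etcd watch


def controller(containers):
    # The controller parses each orchestration task and creates one
    # container per requested replica; none are scheduled yet (node=None).
    def on_task(task):
        for i in range(task["replicas"]):
            containers.append({"task": task["name"], "index": i, "node": None})
    return on_task


store, containers = Store(), []
store.watchers.append(controller(containers))
store.write({"name": "demo-svc", "replicas": 2})  # apiserver writes the task
print(len(containers))  # → 2
```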
S3: detect the device information of each edge device and, based on that information, screen out a target edge device suited to the container.
In practical applications, the device information of an edge device may include various parameters. For example, it may include the edge device's CPU and memory resources, device name, port information, hard disk information (hard disk volume information, availability-zone labels, etc.), taint information (Taint), and so on. In addition, in the present application, to make containers better matched to edge devices, the device's dial-up lines, bandwidth resources, and device type may also be added to the device information as parameters to consider. Subsequently, according to the actual scheduling requirements, one or more parameters in the device information can be selected for analysis, so as to determine the edge device best suited to the container.
In one embodiment, containers are created based on the service parameter configuration information set by the user, so this information accurately describes a container's needs. To find edge devices well suited to the container among many candidates, the container's service parameter configuration information can be compared with the device information of each edge device, so as to determine which devices satisfy it. In addition to the parameters described above, the service parameter configuration information of the container may include one or more of the following: the CPU and memory resources required by the container; the host name specified by the container; the host port requested by the container; the scheduling node specified by the container; the availability-zone label corresponding to the container; the container's toleration field; the container's affinity and anti-affinity relationships; the amount of bandwidth requested by the container; and the service type corresponding to the container. In practice, each parameter can be checked, from several angles, to decide whether an edge device can act as a scheduling node for the container:
PodFitsResources (CPU and memory resources required by the container)
PodFitsResources checks the container's requests field, which records the minimum CPU and memory resources the container requires; by comparing this parameter, it can be determined whether the edge device's CPU and memory resources are sufficient.
PodFitsHost (host name specified by the container)
This parameter may be defined by the container's spec.nodeName; by checking it, it can be determined whether the edge device's name matches the host name specified by the container.
PodFitsHostPorts (host port requested by the container)
This parameter may be defined by the container's spec.nodePort; by checking it, it can be determined whether the host port requested by the container conflicts with a port already used by the edge device.
PodMatchNodeSelector (scheduling node specified by the container)
The container may specify a scheduling node through nodeSelector or nodeAffinity; by checking this parameter, it can be determined whether the edge device is the scheduling node specified by the container.
NoDiskConflict (hard disk conflict detection)
By checking this parameter, it can be determined whether the persistent volumes (Volumes) declared by multiple containers conflict when mounted.
MaxPDVolumeCountPredicate (upper-bound detection of persistent volumes)
By checking this parameter, it can be determined whether the number of persistent volumes of a certain type on an edge device has exceeded a limit; if so, containers using that type of persistent volume can no longer be scheduled to that edge device.
VolumeZonePredicate (availability-zone label of the container)
By checking this parameter, it can be determined whether the availability-zone label of the Volume defined by the container matches the availability-zone label of the edge device.
PodToleratesNodeTaints (toleration field of the container)
By checking this parameter, it can be determined whether the container's Toleration field matches the edge device's taints (Taints). If the container's toleration field indicates that a taint of an edge device can be tolerated, the container can be scheduled onto that device.
NodeMemoryPressurePredicate (memory pressure detection)
By checking this parameter, it can be determined whether the current edge device has sufficient memory; if not, the container cannot be scheduled to it.
PodAffinityPredicate and PodAntiAffinity (affinity and anti-affinity relationships of containers)
By checking these parameters, it can be determined whether the current edge device hosts other containers that have an affinity or anti-affinity relationship with the container.
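A few of the checks above can be combined into a predicate-style filter, in the manner of the Kubernetes scheduler's filtering phase. The sketch below is hypothetical: the dictionary field names (free_cpu, host_ports, tolerations, etc.) and the particular subset of checks are illustrative, not the patent's actual data model:

```python
def fits(container, device):
    """Return True if the device passes every predicate for this container."""
    checks = [
        # PodFitsResources: the requests field records the minimum CPU/memory
        container["requests"]["cpu"] <= device["free_cpu"]
        and container["requests"]["memory"] <= device["free_memory"],
        # PodFitsHostPorts: requested host ports must not already be in use
        not (set(container.get("host_ports", [])) & set(device["used_ports"])),
        # PodToleratesNodeTaints: every device taint must be tolerated
        all(t in container.get("tolerations", []) for t in device["taints"]),
    ]
    return all(checks)


device = {"free_cpu": 4, "free_memory": 8, "used_ports": {80}, "taints": ["edge"]}

ok = fits({"requests": {"cpu": 2, "memory": 4},
           "host_ports": [8080], "tolerations": ["edge"]}, device)
bad = fits({"requests": {"cpu": 2, "memory": 4},
            "host_ports": [80], "tolerations": ["edge"]}, device)  # port clash
print(ok, bad)  # → True False
```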
In addition to the above checks, the present application may also check the edge device's dial-up lines, bandwidth resources, and device type. Specifically:
1. Dial-up line detection
According to the line-exclusivity algorithm, containers of the same service type need to be distributed on different lines; once a line is occupied, no other such container can be scheduled onto that line.
2. Bandwidth resource detection
Depending on the amount of bandwidth requested by the container, bandwidth can be allocated in a stacked manner across multiple dial-up lines of the same edge device. Specifically, stacking means that only after the bandwidth of one line is fully allocated is the bandwidth of the next line allocated. Once allocated, the bandwidth becomes a preempted reservation: other containers cannot occupy it unless the container is deleted by the control center.
3. Device type detection
Device types may include architecture type, network type, storage type, and so on. The supported service types differ by device type, and a container generally corresponds to a service type. Containers of different service types can therefore be placed according to device type.
From the above, when determining the edge devices conforming to the service parameter configuration information, the line information of each edge device may be checked to ensure that containers of the same service type are deployed on different lines; if a line is already occupied by one container, no other container can be scheduled to it. In addition, after the target edge device matching the container is determined, a corresponding preempted bandwidth can be allocated to the container on a first line of the target edge device according to the amount of bandwidth the container requested; this preempted bandwidth cannot be occupied by other containers, and only after the first line's bandwidth is fully allocated is the second line's bandwidth allocated.
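The stacked, fill-one-line-first allocation described above can be sketched as follows. This is a hypothetical interpretation: the function name, the Mbps units, and the capacity/reserved record layout are invented for the illustration, and only the fill-then-spill order and the non-preemptible reservation come from the patent text:

```python
def allocate_bandwidth(lines, amount):
    """Allocate `amount` of bandwidth in a stacked manner.

    lines: ordered list of dicts with 'name', 'capacity', 'reserved' (Mbps).
    Returns the per-line grants; raises if the device cannot satisfy it.
    """
    grants = []
    for line in lines:
        free = line["capacity"] - line["reserved"]
        if free <= 0:
            continue                 # this line is fully allocated; move on
        take = min(free, amount)
        line["reserved"] += take     # preempted: other containers cannot reuse it
        grants.append((line["name"], take))
        amount -= take
        if amount == 0:
            return grants
    raise RuntimeError("insufficient bandwidth on this edge device")


lines = [{"name": "line-1", "capacity": 100, "reserved": 80},
         {"name": "line-2", "capacity": 100, "reserved": 0}]
# line-1 only has 20 Mbps free, so the remaining 30 spills onto line-2
print(allocate_bandwidth(lines, 50))  # → [('line-1', 20), ('line-2', 30)]
```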
In one embodiment, since many edge devices may conform to the service parameter configuration information, a score can be calculated for each of those devices based on preset preference rules, and the device with the highest score can be taken as the target edge device suited to the container. In practice, the preset preference rules can be implemented as follows:
1. Calculate the proportions of idle CPU and idle memory resources on the edge device, and score based on them: the more idle resources, the higher the score.
2. Identify the usage ratios of CPU, memory, and hard disk resources on the edge device, calculate the variance of these ratios, and score based on it: the larger the variance, the lower the score.
In addition, scoring may also follow these rules:
the more fields of the edge device that satisfy affinity rules, the higher its score;
the more fields that satisfy taint-toleration rules, the higher its score;
the more fields of the container to be scheduled that satisfy affinity rules, the higher the device's score;
if the image the container to be scheduled will use is large and already present on an edge device, that device's score is higher;
each dial-up line of the edge device is scored against the line type specified by the container to be scheduled: the more lines of the matching type, the higher the score.
In this way, a score can be calculated for each edge device, and the device with the highest score can be taken as the target edge device suited to the container.
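The first two preference rules can be combined into a single scoring function, as sketched below. This is a hypothetical illustration: the weighting (simple subtraction of the variance from the idle-resource sum) and the field names are invented; the patent only specifies the directions (more idle resources → higher score, larger variance → lower score):

```python
def score(device):
    """Score a candidate device: reward idle CPU/memory, penalise imbalance."""
    # Rule 1: more idle CPU and memory -> higher score
    idle = (1 - device["cpu_used"]) + (1 - device["mem_used"])
    # Rule 2: variance of CPU/memory/disk usage ratios -> lower score if large
    usages = [device["cpu_used"], device["mem_used"], device["disk_used"]]
    mean = sum(usages) / len(usages)
    variance = sum((u - mean) ** 2 for u in usages) / len(usages)
    return idle - variance


balanced = {"cpu_used": 0.3, "mem_used": 0.3, "disk_used": 0.3}
skewed   = {"cpu_used": 0.3, "mem_used": 0.3, "disk_used": 0.9}
# Same idle CPU/memory, but the skewed device's usage variance lowers its score
print(score(balanced) > score(skewed))  # → True
```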
S5: schedule the container to the target edge device and bind the container with the target edge device, so as to establish a mapping relationship between the container and the corresponding cache on the target edge device.
In this embodiment, after screening out the target edge device, the scheduler may schedule the container to it and bind the container with the device. Once the scheduling result is produced, the service interface may update it in the storage unit, so that the resources used by the container are deducted from the edge device cluster and repeated allocation is avoided. In addition, the scheduling interface (cloudcore) can monitor the scheduling result in the storage unit; upon detecting an update, it can parse the result and issue a container orchestration instruction to the target edge device the result identifies. After receiving the instruction, the target edge device deploys the corresponding container locally, with the mapping relationship between the container and the corresponding cache already established.
After deployment completes, the target edge device reports the result to the scheduling interface through a reporting interface (edgecore). On receiving the report, the scheduling interface updates the container's deployment state in the storage unit by calling the service interface (apiserver). The scheduling and deployment of the container is thus complete.
In this application, considering that most services in a CDN are cache-type services, the historical cache state should be taken into account as much as possible when scheduling containers, and existing caches should preferably be reused, in order to improve bandwidth utilization and service quality. It is with this objective that, once a container is successfully scheduled to an edge device, a mapping relationship is established between the container and the storage (hard disk resources, cache resources) and bandwidth resources of that edge device, as well as the edge device itself. Subsequently, unless the mapping is released, the container will only be scheduled to the edge device defined by the mapping, and will only use the line, bandwidth, and storage resources defined by the mapping.
In one embodiment, if the communication between the edge device cluster and the control center is interrupted, after the communication is resumed, the container may still be scheduled to the target edge device according to the established mapping relationship, so that the container may continue to use the cache defined by the mapping relationship.
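The persisted mapping and the way it pins later scheduling decisions, including rescheduling after a communication outage, can be sketched as below. The record shape and the fallback selection are assumptions for illustration.

```python
# Sketch of the persisted container-to-device mapping.
# The record fields are assumed; the patent binds a container to its
# device, hard disk, line, and bandwidth.

mappings = {}  # container -> {"device", "disk", "line", "bandwidth"}

def bind(container, device, disk, line, bandwidth):
    """Persist the mapping once the first scheduling succeeds."""
    mappings[container] = {"device": device, "disk": disk,
                           "line": line, "bandwidth": bandwidth}

def schedule(container, candidate_devices):
    """If a mapping exists, the container may only return to its bound
    device, so the cache it built there can be reused. Otherwise fall
    through to normal selection (stubbed here as 'first candidate')."""
    m = mappings.get(container)
    if m is not None:
        return m["device"] if m["device"] in candidate_devices else None
    return candidate_devices[0] if candidate_devices else None
```

Because the mapping survives in persistent storage, a control-plane disconnect does not change the outcome: once communication is restored, `schedule` resolves the container back to the same device and cache.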
In this application, when the container is bound with the target edge device, a line can be allocated to the container according to the dial-up lines and bandwidth resources of the target edge device, and part of the bandwidth on that line can be preempted. In addition, according to an averaging algorithm, an optimal hard disk can be selected for the container on the target edge device; the data of the container is written to that hard disk, and a binding relationship between the container and the hard disk is established, which can be persisted to the edge device cluster. Thus, when the container is created, the edge device cluster can bind the container's hard disk resources and network resources. Moreover, if the first scheduling succeeds, the binding relationship between the container and the edge device is persisted, so that the state data of the container is not lost and cache-type services can be better provided.
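The binding-time resource allocation on the target edge device can be sketched as two small routines: preempting bandwidth on a line, and picking a hard disk. Both rules here are assumptions. The fill-first-line-then-spill order follows the first/second-line description in the application, and "most free space wins" is one plausible reading of the averaging algorithm, which the text does not detail.

```python
# Sketch of binding-time allocation on the target edge device.
# Line-fill order and the disk-selection rule are assumed readings.

def allocate_line(lines, requested):
    """lines: ordered list of {"name": str, "free": float} (Mbps).
    Try the first line before spilling to the next; the granted
    bandwidth is preempted and unavailable to other containers."""
    for line in lines:
        if line["free"] >= requested:
            line["free"] -= requested  # preempted for this container
            return line["name"]
    return None  # no line can host the requested bandwidth

def pick_disk(disks):
    """disks: {name: free_bytes}. Assumed rule: the disk with the
    most free space is the 'optimal' disk for the container."""
    return max(disks, key=disks.get)
```

Persisting the resulting (line, disk) pair alongside the device mapping is what lets the container find its cache again on later schedules.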
As shown in fig. 3, an embodiment of the present application further provides an edge device cluster, where the edge device cluster includes a plurality of managed edge devices, and further includes a service interface, a storage unit, a controller, and a scheduler, where:
the service interface is used for receiving the orchestration task sent by the control center and writing the orchestration task into the storage unit;
the controller is used for monitoring the orchestration task written into the storage unit and creating a container matched with the orchestration task;
the scheduler is used for detecting the device information of each edge device and screening out, from the edge devices based on the device information, a target edge device adapted to the container; and dispatching the container to the target edge device to bind the container with the target edge device.
In one embodiment, the service interface is further configured to update the scheduling result of the container in the storage unit after the container is scheduled to the target edge device, so as to deduct the resources used by the container from the edge device cluster.
In one embodiment, the edge device cluster further includes a scheduling interface, where the scheduling interface is configured to monitor the scheduling result updated in the storage unit, and issue a container orchestration instruction to the target edge device pointed to by the scheduling result, so as to deploy the container in the target edge device and establish a mapping relationship between the container and the corresponding cache in the target edge device.
In one embodiment, the scheduling interface is further configured to receive a deployment result reported by the target edge device, and call the service interface based on the deployment result, so as to update the deployment state of the container in the storage unit.
From the foregoing, it can be seen that, according to the technical solutions provided in one or more embodiments of the present application, a control center may receive a user's container deployment requirement, generate a corresponding orchestration task, and issue that task to an edge device cluster. The edge device cluster may create, for the received orchestration task, one or more containers that match the user's expectations. Then, by analyzing the device information of each edge device, a target edge device adapted to the container can be screened out. After the container is dispatched to the target edge device, the container and the target edge device can be bound, and a mapping relationship between the container and the corresponding cache can be established. Since services in a CDN are usually cache-type services, the cache bound to a container can be considered preferentially when the container is scheduled, so that caches on the edge devices are reused. This improves the bandwidth utilization of the edge devices as well as the deployment and service efficiency of the containers, and makes the deployed containers better suited to CDN services.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the embodiments of the edge device cluster, reference may be made to the description of the foregoing method embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, hard disk memory, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes or other magnetic storage devices, or any other non-transmission medium which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely an embodiment of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (11)

1. A scheduling method based on edge computing, wherein the method is applied to an edge device cluster, and the edge device cluster includes a plurality of managed edge devices, the method comprising:
receiving an orchestration task sent by a control center, and creating a container matched with the orchestration task;
detecting device information of each edge device, and screening out, from the edge devices based on the device information, a target edge device adapted to the container;
and dispatching the container to the target edge device, and binding the container with the target edge device, so as to establish mapping relationships between the container and the hard disk resources, cache resources, and bandwidth resources of the target edge device, and between the container and the target edge device itself.
2. The method of claim 1, wherein the cluster of edge devices matches user-entered regional information and operator information.
3. The method of claim 1, wherein the device information is used to characterize at least one of a dial-up line, a bandwidth resource, and a device type of the edge device;
screening out, from the edge devices, a target edge device adapted to the container comprises:
identifying service parameter configuration information of the container, and comparing the device information of each edge device with the service parameter configuration information to determine the edge devices conforming to the service parameter configuration information;
and calculating a respective score for each determined edge device based on preset preference rules, and taking the edge device with the highest score as the target edge device adapted to the container.
4. A method according to claim 3, wherein the service parameter configuration information is used to characterize at least one of:
CPU resources and memory resources required by the container; the host name designated for the container; the host port applied for by the container; a scheduling node specified for the container; the available-domain label corresponding to the container; a toleration field of the container; affinity and mutual-exclusion relationships of the container; the amount of bandwidth applied for by the container; the service type corresponding to the container.
5. The method according to claim 3 or 4, wherein, when determining an edge device that conforms to the service parameter configuration information, the method further comprises:
detecting line information of the edge devices to ensure that containers of the same service type are deployed on different lines; if a current line is occupied by one container, no other container can be scheduled onto the current line.
6. The method according to claim 1, wherein the method further comprises:
and allocating, according to the amount of bandwidth applied for by the container, a corresponding preempted bandwidth for the container on a first line of the target edge device, wherein the preempted bandwidth cannot be occupied by other containers, and the bandwidth of a second line is allocated only after the bandwidth of the first line has been completely allocated.
7. The method according to claim 1, wherein the method further comprises:
and screening an optimal hard disk for the container in the target edge equipment, writing the data of the container into the optimal hard disk, and establishing a binding relation between the container and the optimal hard disk.
8. The method according to claim 1, wherein the method further comprises:
and if the communication between the edge device cluster and the control center is interrupted, after the communication is restored, the container is dispatched to the target edge device according to the established mapping relation, so that the container continues to use the cache defined by the mapping relation.
9. An edge device cluster, characterized by comprising a plurality of managed edge devices, and further comprising a service interface, a storage unit, a controller, a scheduler, and a scheduling interface, wherein:
the service interface is used for receiving the orchestration task sent by the control center and writing the orchestration task into the storage unit;
the controller is used for monitoring the orchestration task written into the storage unit and creating a container matched with the orchestration task;
the scheduler is used for detecting the device information of each edge device and screening out, from the edge devices based on the device information, a target edge device adapted to the container; and dispatching the container to the target edge device to bind the container with the target edge device;
the scheduling interface is used for monitoring the scheduling result updated in the storage unit, and issuing a container orchestration instruction to the target edge device pointed to by the scheduling result, so as to deploy the container in the target edge device and establish mapping relationships between the container and the hard disk resources, cache resources, and bandwidth resources of the target edge device, and between the container and the target edge device itself.
10. The edge device cluster of claim 9, wherein the service interface is further configured to update the scheduling result of the container in the storage unit after the container is scheduled to the target edge device to deduct resources used by the container from the edge device cluster.
11. The edge device cluster of claim 10, wherein the scheduling interface is further configured to receive a deployment result reported by the target edge device, and invoke the service interface based on the deployment result to update a deployment state of the container in the storage unit.
CN202110205681.5A 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster Active CN113114715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110205681.5A CN113114715B (en) 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster


Publications (2)

Publication Number Publication Date
CN113114715A (en) 2021-07-13
CN113114715B (en) 2024-01-23

Family

ID=76709378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110205681.5A Active CN113114715B (en) 2021-02-24 2021-02-24 Scheduling method based on edge calculation and edge equipment cluster

Country Status (1)

Country Link
CN (1) CN113114715B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113783797B (en) * 2021-09-13 2023-11-07 京东科技信息技术有限公司 Network flow control method, device and equipment of cloud primary container and storage medium
CN114124948A (en) * 2021-09-19 2022-03-01 济南浪潮数据技术有限公司 High-availability method, device, equipment and readable medium for cloud component
CN113934545A (en) * 2021-12-17 2022-01-14 飞诺门阵(北京)科技有限公司 Video data scheduling method, system, electronic equipment and readable medium
US11953972B2 (en) * 2022-04-06 2024-04-09 International Business Machines Corporation Selective privileged container augmentation
CN117850980A (en) * 2023-12-25 2024-04-09 慧之安信息技术股份有限公司 Container mirror image construction method and system based on cloud edge cooperation


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933137A (en) * 2015-12-21 2016-09-07 中国银联股份有限公司 Resource management method, device and system
WO2017148239A1 (en) * 2016-02-29 2017-09-08 华为技术有限公司 Service container creation method and device
CN105700961A (en) * 2016-02-29 2016-06-22 华为技术有限公司 Business container creation method and device
US10841152B1 (en) * 2017-12-18 2020-11-17 Pivotal Software, Inc. On-demand cluster creation and management
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 A kind of container dispositions method and framework based on openstack
CN109067890A (en) * 2018-08-20 2018-12-21 广东电网有限责任公司 A kind of CDN node edge calculations system based on docker container
CN109634522A (en) * 2018-12-10 2019-04-16 北京百悟科技有限公司 A kind of method, apparatus and computer storage medium of resource management
CN111381936A (en) * 2020-03-23 2020-07-07 中山大学 Method and system for allocating service container resources under distributed cloud system-cloud cluster architecture
CN111506391A (en) * 2020-03-31 2020-08-07 新华三大数据技术有限公司 Container deployment method and device
CN111522639A (en) * 2020-04-16 2020-08-11 南京邮电大学 Multidimensional resource scheduling method under Kubernetes cluster architecture system
CN111694658A (en) * 2020-04-30 2020-09-22 北京城市网邻信息技术有限公司 CPU resource allocation method, device, electronic equipment and storage medium
CN111831450A (en) * 2020-07-20 2020-10-27 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for allocating server resources
CN111679891A (en) * 2020-08-14 2020-09-18 支付宝(杭州)信息技术有限公司 Container multiplexing method, device, equipment and storage medium
CN112214280A (en) * 2020-09-16 2021-01-12 中国科学院计算技术研究所 Power system simulation cloud method and system

Also Published As

Publication number Publication date
CN113114715A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN113114715B (en) Scheduling method based on edge calculation and edge equipment cluster
EP3739845B1 (en) Borrowing data storage resources in a distributed file system
US10698712B2 (en) Methods and apparatus to manage virtual machines
CA3000422C (en) Workflow service using state transfer
CN112532675B (en) Method, device and medium for establishing network edge computing system
US10853196B2 (en) Prioritizing microservices on a container platform for a restore operation
CN110865881A (en) Resource scheduling method and device
US20090006063A1 (en) Tuning and optimizing distributed systems with declarative models
US20180113748A1 (en) Automated configuration of virtual infrastructure
CN112532674A (en) Method, device and medium for establishing network edge computing system
US20210203714A1 (en) System and method for identifying capabilities and limitations of an orchestration based application integration
CN111866045A (en) Information processing method and device, computer system and computer readable medium
EP3672203A1 (en) Distribution method for distributed data computing, device, server and storage medium
CN113867937A (en) Resource scheduling method and device for cloud computing platform and storage medium
CN110290210B (en) Method and device for automatically allocating different interface flow proportions in interface calling system
CN112532758B (en) Method, device and medium for establishing network edge computing system
CN113518002A (en) Monitoring method, device, equipment and storage medium based on server-free platform
CN110011850B (en) Management method and device for services in cloud computing system
CN110968406B (en) Method, device, storage medium and processor for processing task
US9628401B2 (en) Software product instance placement
CN111600771B (en) Network resource detection system and method
CN115686841A (en) Data processing and resource management method, device and system based on service grid
CN115048186A (en) Method and device for processing expansion and contraction of service container, storage medium and electronic equipment
CN114201284A (en) Timed task management method and system
CN112291287A (en) Cloud platform-based containerized application network flow control method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant