CN112732440A - Resource allocation method and device, cloud platform server and edge node equipment - Google Patents

Resource allocation method and device, cloud platform server and edge node equipment

Info

Publication number
CN112732440A
CN112732440A (application number CN202110017141.4A)
Authority
CN
China
Prior art keywords
edge node
container
manufacturer
resource configuration
hardware resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110017141.4A
Other languages
Chinese (zh)
Inventor
窦笠
邹勇
王庆辉
万博
徐佳祥
齐宏亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Tower Co Ltd
Original Assignee
China Tower Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Tower Co Ltd filed Critical China Tower Co Ltd
Priority to CN202110017141.4A priority Critical patent/CN112732440A/en
Publication of CN112732440A publication Critical patent/CN112732440A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a resource allocation method, a resource allocation device, a cloud platform server and an edge node device. The method comprises: obtaining service mirror images of a plurality of manufacturers and the service usage of an edge node device, wherein the service usage comprises the manufacturers associated with the edge node; sending the service mirror images of the manufacturers associated with the edge node among the plurality of manufacturers to the edge node device according to the service usage of the edge node; and sending a container hardware resource configuration indication to the edge node device according to the service usage of the edge node, wherein the indication is used for indicating the container hardware resource configuration of the edge node device for each manufacturer. The resource utilization rate of the edge node device can thereby be improved.

Description

Resource allocation method and device, cloud platform server and edge node equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a resource allocation method and apparatus, a cloud platform server, and an edge node device.
Background
P2P CDN (Peer-to-Peer Content Delivery Network) technology relies on widely distributed edge nodes. Currently, each major internet manufacturer develops its own edge node device product, which generally takes the form of an intelligent router or an intelligent box. The P2P CDN brings a better user experience through device deployment closer to the edge, but each manufacturer develops and deploys independently: the edge devices are small, the services are independent of one another and single-purpose, each node deploys the application of only one manufacturer, and the edge device client is usually pre-installed at the factory and cannot be dynamically adjusted according to actual service usage. If the manufacturer's service demand in the area is low, the resource utilization rate of the node device is low.
Disclosure of Invention
The embodiment of the application provides a resource allocation method, a resource allocation device, an electronic device and a readable storage medium, so as to solve the problem of resource waste.
In a first aspect, an embodiment of the present application provides a resource allocation method, which is executed by a cloud platform server, and includes:
acquiring service mirror images of a plurality of manufacturers and service use conditions of edge node equipment, wherein the service use conditions comprise manufacturers related to the edge nodes;
sending the service mirror images of the manufacturers associated with the edge node among the plurality of manufacturers to the edge node equipment according to the service use condition of the edge node;
and sending a container hardware resource configuration instruction to the edge node equipment according to the service use condition of the edge node, wherein the container hardware resource configuration instruction is used for indicating the container hardware resource configuration of the edge node equipment aiming at each manufacturer.
In a second aspect, an embodiment of the present application further provides a resource allocation method, which is executed by an edge node device, and includes:
receiving a service mirror image sent by a cloud platform server, wherein a manufacturer corresponding to the service mirror image is associated with the edge node equipment;
correspondingly creating a container based on the manufacturer corresponding to the service mirror image;
and receiving a container hardware resource configuration instruction sent by the cloud platform server, responding to the container hardware resource configuration instruction, and configuring container hardware resources of each manufacturer.
In a third aspect, an embodiment of the present application further provides a resource allocation apparatus, which is applied to a cloud platform server, and includes:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring service mirror images of a plurality of manufacturers and service use conditions of edge node equipment, and the service use conditions comprise manufacturers related to edge nodes;
a first sending module, configured to send, according to a service usage condition of the edge node, a service image of a manufacturer associated with the edge node among the plurality of manufacturers to the edge node device;
a second sending module, configured to send a container hardware resource configuration indication to the edge node device according to the service usage of the edge node, where the container hardware resource configuration indication is used to indicate the container hardware resource configuration of the edge node device for each vendor.
In a fourth aspect, an embodiment of the present application further provides a resource allocation apparatus, which is applied to an edge node device, and includes:
the first receiving module is used for receiving a service mirror image sent by a cloud platform server, and a manufacturer corresponding to the service mirror image is associated with the edge node equipment;
the creating module is used for correspondingly creating a container based on the manufacturer corresponding to the service mirror image;
and the second receiving module is used for receiving the container hardware resource configuration instruction sent by the cloud platform server and responding to the container hardware resource configuration instruction to configure the container hardware resources of each manufacturer.
In a fifth aspect, an embodiment of the present application further provides a cloud platform server, including: a memory, a processor, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the resource allocation method disclosed in the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides an edge node device, including: a memory, a processor, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the resource allocation method disclosed in the second aspect of the embodiments of the present application.
In this way, in the embodiment of the present application, service deployment of the edge node device is completed according to the service usage of the edge node device, and the edge node device may deploy services of multiple manufacturers according to the service usage and configure hardware resources of containers corresponding to the manufacturers, thereby achieving a technical effect of improving resource utilization of the edge node device.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a resource allocation method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another resource allocation method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a resource allocation system to which embodiments of the present application are applicable;
FIG. 4 is a schematic diagram of another resource allocation system to which embodiments of the present application are applicable;
fig. 5 is a schematic structural diagram of a resource allocation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a cloud platform server according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an edge node device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart of a resource allocation method provided in an embodiment of the present application. As shown in fig. 1, the method is executed by a cloud platform server and includes the following steps:
step 101, obtaining service mirror images of a plurality of manufacturers and service use conditions of edge node equipment, wherein the service use conditions comprise manufacturers related to the edge nodes.
There may be one or more edge node devices, for example in a P2P CDN scenario. When the cloud platform server performs service deployment for a single edge node device, the manufacturers associated with that edge node may be obtained; when it performs service deployment for multiple edge node devices, the manufacturers associated with each of the edge nodes may be obtained. One or more manufacturers may be associated with an edge node, and during actual service use the associated manufacturers may change dynamically, for example by adding or removing an associated manufacturer.
The service usage may be the past usage of the CDN (Content Delivery Network) service at the edge node, for example: the number of users of each manufacturer at the edge node, and the number and frequency of times the users of the edge node have used each manufacturer's service, and the like.
And step 102, according to the service use condition of the edge node equipment, sending the service mirror images of the manufacturers associated with the edge nodes in the plurality of manufacturers to the edge node equipment.
Step 103, sending a container hardware resource configuration instruction to the edge node device according to the service usage of the edge node, where the container hardware resource configuration instruction is used to instruct the edge node device to configure container hardware resources for each manufacturer.
The containers of the edge node device may correspond one-to-one to the manufacturers associated with the edge node. For example, if the edge node has users of manufacturer 1 and manufacturer 2, the edge node device may correspondingly create a container 1 and a container 2, corresponding to manufacturer 1 and manufacturer 2, respectively; the hardware resources of container 1 may be used for the storage content of manufacturer 1, and the hardware resources of container 2 may be used for the storage content of manufacturer 2.
In addition, after the configuration of the service mirror image and the container hardware resource is completed, the edge node device may directly receive corresponding storage contents sent by each manufacturer.
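As an illustration of steps 101 to 103, a minimal sketch of the cloud-platform-side logic is given below; the helper functions send_image and send_config_indication, the ServiceUsage record, and the proportional-split rule are assumptions introduced only for this sketch and are not prescribed by the present application.

```python
# Minimal sketch of steps 101-103 on the cloud platform server.
# send_image() and send_config_indication() are hypothetical transport
# helpers; the shape of the service-usage record is likewise assumed.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ServiceUsage:
    node_id: str
    # predicted traffic per associated manufacturer, e.g. {"manufacturer1": 2.0, "manufacturer2": 1.0}
    manufacturer_traffic: Dict[str, float]

def deploy_node(usage: ServiceUsage,
                service_images: Dict[str, bytes],
                send_image: Callable[[str, str, bytes], None],
                send_config_indication: Callable[[str, Dict[str, float]], None]) -> None:
    total = sum(usage.manufacturer_traffic.values()) or 1.0
    config: Dict[str, float] = {}
    for manufacturer, traffic in usage.manufacturer_traffic.items():
        # Step 102: only manufacturers associated with this node receive an image.
        send_image(usage.node_id, manufacturer, service_images[manufacturer])
        # Step 103: allocate container hardware in proportion to predicted traffic.
        config[manufacturer] = traffic / total
    send_config_indication(usage.node_id, config)
```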
In the embodiment of the application, service deployment of the edge node device is completed according to the service use condition of the edge node device, and the edge node device can deploy services of a plurality of manufacturers according to the service use condition and configure hardware resources of containers corresponding to the manufacturers, so that the technical effect of improving the resource utilization rate of the edge node device is achieved.
In addition, because the service mirror images of the manufacturers associated with the edge node among the plurality of manufacturers are sent to the edge node device, the edge node device can deploy the services of multiple manufacturers, and the services of the different manufacturers are isolated from one another by containers. This avoids each manufacturer deploying its own edge node devices in the same area and reduces the number of edge node devices that need to be deployed.
Optionally, after step 103, the method may further include the steps of:
acquiring the container resource utilization rate of edge node equipment aiming at a target manufacturer, wherein the target manufacturer is a manufacturer in a manufacturer associated with the edge node;
and if the utilization rate of the container resources meets a preset condition, sending a container hardware resource configuration change instruction aiming at the target manufacturer to the edge node equipment, wherein the container hardware resource configuration change instruction is used for indicating the edge node equipment to change the container hardware resource configuration aiming at the target manufacturer.
The container resource utilization rate may be a resource utilization rate obtained dynamically within a specified time, for example: the cloud platform server may monitor a service usage of the edge node, and if an actual resource utilization rate of a container allocated to the target manufacturer within a specified time is 10%, the hardware resource allocated to the container corresponding to the target manufacturer may be reduced.
The preset condition may be a range requirement for the container resource utilization rate, for example: and keeping the hardware resource configuration of the corresponding container with the container resource utilization rate of 40-70%, and adjusting the hardware resource configuration of the corresponding container which is lower than 40% or higher than 70%.
The indication of changing the hardware resource configuration of the container may include increasing or decreasing the hardware resource configuration of the container, for example: allocating 50% of hardware resources to the container corresponding to the target manufacturer, and if the actual utilization rate of the resources of the container corresponding to the target manufacturer is 20%, reducing the hardware resources allocated to the container corresponding to the target manufacturer to 20%, so that the actual utilization rate of the resources of the container corresponding to the target manufacturer can correspondingly reach 50%; if the actual resource utilization rate of the container corresponding to the target manufacturer is 80%, the hardware resource allocated to the container corresponding to the target manufacturer may be increased to 65%, and the actual resource utilization rate of the container corresponding to the target manufacturer may correspondingly reach 61.5%. Meanwhile, the increasing may include newly creating a container, and the decreasing may also include deleting a container corresponding to a certain manufacturer.
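To make the arithmetic of the above example explicit, the following sketch recomputes the two cases; the function name resized_allocation and the notion of a fixed target utilization are assumptions introduced only for illustration.

```python
# Illustrative reproduction of the resize arithmetic described above.
# The function name and the "target utilization" parameter are assumptions.

def resized_allocation(allocated: float, utilization: float, target: float) -> float:
    """allocated: fraction of node hardware currently given to the container.
    utilization: fraction of that allocation actually in use.
    Returns the allocation at which utilization would equal `target`."""
    used = allocated * utilization          # absolute share of node hardware in use
    return used / target

# Example 1 in the text: 50% allocated, 20% utilized -> shrink to 20%,
# after which utilization is 10% / 20% = 50%.
assert abs(resized_allocation(0.50, 0.20, 0.50) - 0.20) < 1e-9

# Example 2 in the text: 50% allocated, 80% utilized -> grow to about 65%,
# after which utilization is 40% / 65% ~= 61.5%.
assert abs(resized_allocation(0.50, 0.80, 0.615) - 0.65) < 1e-3
```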
In this embodiment, by obtaining the container resource utilization rate of the edge node device for the target manufacturer, and if the container resource utilization rate satisfies the preset condition, sending a container hardware resource configuration change instruction for the target manufacturer to the edge node device, the container hardware resource configuration of the edge node device can be dynamically adjusted according to the container resource utilization rate, and the resource utilization rate of the edge node device is further improved.
Optionally, if the utilization rate of the container resource meets a preset condition, sending a container hardware resource configuration change instruction for the target vendor to the edge node device may specifically include:
if the container resource utilization rate exceeds a first preset value, sending a container hardware resource configuration improvement instruction aiming at the target manufacturer to the edge node equipment;
and if the container resource utilization rate is lower than a second preset value, sending a container hardware resource configuration reduction indication aiming at the target manufacturer to the edge node equipment.
The first preset value and the second preset value may be empirical values, such as: for the deployments of different manufacturers and edge node devices, the first preset value and the second preset value can be set according to specific service contents.
The indication for reducing the container hardware resource configuration may include an indication for deleting the hardware resources allocated to a container, that is, deleting the container in the edge node device together with the service mirror image of the corresponding manufacturer. The redundant resources freed by the deletion may be allocated to other containers whose resource utilization rate is high, or may be directly reclaimed and allocated to the corresponding containers when the services of other manufacturers need them.
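One possible way to combine the first and second preset values with the deletion case described above is sketched below; the threshold values and the extra deletion cutoff are assumptions, since, as noted, the preset values are empirical and deployment-specific.

```python
# Illustrative decision logic for the change indication. The threshold
# values and the additional "delete" cutoff are assumptions for this sketch.

from enum import Enum

class ChangeIndication(Enum):
    KEEP = "keep current configuration"
    INCREASE = "increase container hardware resource configuration"
    REDUCE = "reduce container hardware resource configuration"
    DELETE = "delete container and its service mirror image, reclaim resources"

def decide(utilization: float,
           first_preset: float = 0.70,
           second_preset: float = 0.40,
           delete_below: float = 0.05) -> ChangeIndication:
    if utilization > first_preset:
        return ChangeIndication.INCREASE
    if utilization < delete_below:
        # Reducing may go as far as deleting the container and the
        # manufacturer's service mirror image; the freed resources can then
        # be reassigned to containers with high utilization.
        return ChangeIndication.DELETE
    if utilization < second_preset:
        return ChangeIndication.REDUCE
    return ChangeIndication.KEEP
```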
In this embodiment, by comparing the container resource utilization rate with the first preset value and the second preset value, the container hardware resource configuration of the edge node device is correspondingly increased or decreased: the hardware resource configuration of a manufacturer's container with a low resource utilization rate is decreased, and the redundant resources are allocated to a manufacturer's container with a high utilization rate. Resource sharing is thus achieved, the utilization rate of the edge node device is increased, and the device cost is reduced.
Referring to fig. 2, fig. 2 is a schematic flowchart of another resource allocation method provided in an embodiment of the present application. As shown in fig. 2, the method is executed by an edge node device and includes the following steps:
step 201, receiving a service image sent by a cloud platform server, where a manufacturer corresponding to the service image is associated with the edge node device.
And 202, correspondingly creating a container based on the manufacturer corresponding to the service mirror image.
Step 203, receiving a container hardware resource configuration instruction sent by the cloud platform server, and configuring container hardware resources of each manufacturer in response to the container hardware resource configuration instruction.
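As an illustration of steps 201 to 203, the following sketch uses the Docker SDK for Python as one possible container runtime; the present application does not name a specific runtime, so the choice of Docker and the mapping of the configuration indication onto memory and CPU limits are assumptions.

```python
# Sketch of the edge-node side of the method, assuming Docker as the
# container runtime (an assumption; the disclosure is runtime-agnostic).

import docker

client = docker.from_env()

def on_service_image(manufacturer: str, image_bytes: bytes) -> None:
    """Steps 201-202: load the received service mirror image and create
    one container for the corresponding manufacturer (not yet started)."""
    image = client.images.load(image_bytes)[0]
    client.containers.create(image.id, name=f"{manufacturer}-cdn")

def on_config_indication(config: dict) -> None:
    """Step 203: apply the per-manufacturer container hardware resource
    configuration (mapped here to memory and CPU limits) and start the
    containers."""
    for manufacturer, limits in config.items():
        container = client.containers.get(f"{manufacturer}-cdn")
        container.update(mem_limit=limits["mem_limit"],   # e.g. "512m"
                         cpu_quota=limits["cpu_quota"],    # e.g. 50000 (of a 100 ms period)
                         cpu_period=100_000)
        container.start()
```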
Optionally, after step 203, the method may further include the steps of:
sending the container resource utilization rate aiming at a target manufacturer to the cloud platform server, wherein the target manufacturer is a manufacturer in a manufacturer corresponding to the service mirror image;
and receiving a container hardware resource configuration change instruction aiming at a target manufacturer and sent by the cloud platform server, and responding to the container hardware resource configuration change instruction and changing the container hardware resource configuration corresponding to the target manufacturer.
Optionally, the receiving a container hardware resource configuration change instruction for a target manufacturer sent by the cloud platform server, and in response to the container hardware resource configuration change instruction, changing the container hardware resource configuration corresponding to the target manufacturer may specifically include:
receiving a container hardware resource configuration improvement instruction aiming at the target manufacturer and sent by the cloud platform server, and improving the container resource configuration corresponding to the target manufacturer;
and receiving a container hardware resource configuration reduction instruction aiming at the target manufacturer and sent by the cloud platform server, and reducing the resource configuration aiming at the container corresponding to the target manufacturer.
It should be noted that this embodiment is the edge-node-device counterpart of the foregoing method embodiment; therefore, reference may be made to the relevant descriptions in the foregoing method embodiment, and the same beneficial effects can be achieved. To avoid repetition, the description is not repeated here.
The various optional implementations described in the embodiments of the present application may be implemented in combination with each other or implemented separately without conflicting with each other, and the embodiments of the present application are not limited to this.
For ease of understanding, examples are illustrated below:
referring to fig. 3, fig. 3 is a schematic diagram of a resource allocation system applicable to the embodiment of the present application, as shown in fig. 3, including a factory side 301, a cloud platform 302, an edge node 303, and a user side 304, where the factory side includes a factory 1 scheduling node, a factory 2 scheduling node, and a factory 3 scheduling node, and the edge node side includes an edge node 1, an edge node 2, and an edge node 3.
The resource allocation method can comprise an edge cloud platform mirror image deployment phase and a container resource dynamic adjustment phase.
The edge cloud platform image deployment phase may include the following processes:
the manufacturer 1 scheduling node, the manufacturer 2 scheduling node and the manufacturer 3 scheduling node respectively send their service mirror images to the edge cloud platform;
the edge cloud platform determines the user usage of each manufacturer at each edge node. Taking fig. 3 as an example, edge node 1 has users of all 3 manufacturers, so the edge cloud platform issues the service mirror images of the 3 manufacturers to edge node 1; edge node 2 has users of manufacturer 1 and manufacturer 2, so the edge cloud platform issues the service mirror images of manufacturer 1 and manufacturer 2 to edge node 2; edge node 3 has users of manufacturer 2 and manufacturer 3, so the edge cloud platform issues the service mirror images of manufacturer 2 and manufacturer 3 to edge node 3;
while issuing the service mirror images, the edge cloud platform allocates the hardware resources of each edge node according to the predicted user service amount at that edge node. Again taking fig. 3 as an example: if the predicted service amounts of the manufacturers at edge node 1 are the same, the hardware of the edge node device is allocated evenly to the containers corresponding to the 3 manufacturers; at edge node 2, if the predicted service amount of manufacturer 1 is large and that of manufacturer 2 is small, the hardware of the edge node device is allocated to the containers corresponding to manufacturer 1 and manufacturer 2 on demand; at edge node 3, if the predicted service amount of manufacturer 2 is large and that of manufacturer 3 is small, the hardware of the edge node device is allocated to the containers corresponding to manufacturer 2 and manufacturer 3 on demand;
after the edge cloud platform mirror image deployment is completed, the service starts to be in a running state, and the scheduling node of each manufacturer is responsible for scheduling the storage content.
The service mirror image of each manufacturer corresponds to the storage content scheduled by that manufacturer's scheduling node. After the deployment of the service mirror images and the configuration of container resources in each edge node device are completed through the edge cloud platform, each manufacturer's scheduling node can distribute storage content to the edge node devices according to the mirror images deployed by the edge cloud platform. The edge node device may be an intelligent router, an intelligent box, or the like. The predicted service amount corresponds to the service usage in the method embodiment shown in fig. 1; reference may be made to the relevant description in that method embodiment, which is not repeated here.
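As a worked example of the deployment phase just described, the fig. 3 scenario can be written out as data; the numeric prediction values below are invented purely for illustration, since the text only states that the amounts are equal, larger or smaller.

```python
# Worked example of the mirror-image deployment phase for the fig. 3
# scenario. The numeric prediction values are invented for illustration.

predicted_service = {
    "edge_node_1": {"manufacturer1": 1.0, "manufacturer2": 1.0, "manufacturer3": 1.0},
    "edge_node_2": {"manufacturer1": 3.0, "manufacturer2": 1.0},
    "edge_node_3": {"manufacturer2": 3.0, "manufacturer3": 1.0},
}

for node, per_manufacturer in predicted_service.items():
    total = sum(per_manufacturer.values())
    shares = {m: round(v / total, 2) for m, v in per_manufacturer.items()}
    # Images of exactly these manufacturers are issued to the node, and the
    # node's hardware is split among the corresponding containers as below.
    print(node, "images:", sorted(per_manufacturer), "container shares:", shares)

# edge_node_1 -> three images, roughly 0.33 of the hardware per container
# edge_node_2 -> two images, shares {'manufacturer1': 0.75, 'manufacturer2': 0.25}
# edge_node_3 -> two images, shares {'manufacturer2': 0.75, 'manufacturer3': 0.25}
```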
The container resource dynamic adjustment phase may include the following processes:
in the service running stage, the edge cloud platform monitors the service usage of each edge node; the monitored content includes the utilization rate of the container resources allocated to each manufacturer in the mirror image deployment stage of the edge cloud platform, and the changes of the actual users;
the edge cloud platform dynamically adjusts the resources of each manufacturer according to the monitored content. As shown in fig. 4, if the users of manufacturer 2 at edge node 1 change and no longer use edge node 1, the edge cloud platform deletes the original CDN service mirror image of manufacturer 2 at edge node 1 and migrates the freed resources to other service manufacturers; a user of manufacturer 3 is newly added at edge node 2, so the edge cloud platform adds the CDN service mirror image of manufacturer 3 at edge node 2 and dynamically adjusts the service mirror image resource configurations of the original manufacturer 1 and manufacturer 2; at edge node 3, according to the actual service usage of manufacturer 2 and manufacturer 3, the container configuration of manufacturer 2 is increased and the container configuration of manufacturer 3 is decreased.
The specific algorithm of the container resource utilization rate is as follows:
R(i) = U(i) / S(i); if R(i) ≥ 80%, then S(i) = S(i) × 130%;
where R(i) represents the resource utilization rate of manufacturer i (i = 1, 2, 3) in an edge node, U(i) represents the resources currently used by manufacturer i in the edge node, and S(i) represents the resources allocated to the container corresponding to manufacturer i in the edge node. When the resource utilization rate of a manufacturer reaches or exceeds 80%, the resources allocated to that manufacturer's container are increased by 30%.
In the process of monitoring each edge node device by the edge cloud platform, the resource utilization rate of each container in each edge node device can be calculated, so that the dynamic adjustment of resource allocation of each container is realized.
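The adjustment rule above translates directly into code; in the following sketch the cap at the node's total capacity and the function name are assumptions added for illustration (the application does not state what happens when the 30% increase would exceed the available hardware).

```python
# Direct transcription of the adjustment rule R = U / S;
# if R >= 80%, S is increased by 30%. The capacity cap is an assumption.

def adjust_allocation(used: float, allocated: float, capacity: float) -> float:
    """used: resources currently used by the manufacturer (U).
    allocated: resources currently allocated to its container (S).
    capacity: total hardware resources of the edge node device.
    Returns the new allocation for the container."""
    utilization = used / allocated                 # R = U / S
    if utilization >= 0.80:
        return min(allocated * 1.30, capacity)     # S = S * 130%, capped at the node
    return allocated

# Example: a container allocated 40 units with 34 in use (R = 85%)
# grows to 52 units on a 128-unit node.
print(adjust_allocation(used=34, allocated=40, capacity=128))  # -> 52.0
```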
In the embodiment of the present application, in a shared P2P CDN scenario, an edge cloud platform is constructed to deploy service mirror images and to manage the edge node devices and the containers in them, and the edge node devices are shared by the services of multiple manufacturers, with resources deployed in the form of containers. The edge cloud platform can perform hotspot analysis according to the number of users and the usage of different manufacturers in each area, automatically issue the respective service mirror images to the edge node devices where user demand is higher, and dynamically adjust container resources: service mirror images with lower usage are deleted or their container resource configurations are reduced, and the redundant resources are allocated to the service applications with higher usage. Resource sharing is thus achieved, so that the utilization rate of the edge node devices is improved and the cost of the edge node devices is reduced.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a resource allocation apparatus according to an embodiment of the present application. As shown in fig. 5, the resource allocation apparatus 500 is applied to a cloud platform server and includes:
a first obtaining module 501, configured to obtain service mirrors of multiple manufacturers and service usage of edge node equipment, where the service usage includes manufacturers associated with the edge node;
a first sending module 502, configured to send, according to the service usage of the edge node, a service image of a manufacturer associated with the edge node in the plurality of manufacturers to the edge node device;
a second sending module 503, configured to send a container hardware resource configuration indication to the edge node device according to the service usage of the edge node, where the container hardware resource configuration indication is used to indicate the container hardware resource configuration of the edge node device for each vendor.
Optionally, as shown in fig. 6, the resource allocation apparatus 500 may further include:
a second obtaining module 504, configured to obtain a container resource utilization rate of the edge node device for a target manufacturer, where the target manufacturer is a manufacturer in a manufacturer associated with the edge node;
a third sending module 505, configured to send, to the edge node device, a container hardware resource configuration change instruction for the target vendor if the container resource utilization rate meets a preset condition, where the container hardware resource configuration change instruction is used to indicate that the edge node device changes the container hardware resource configuration for the target vendor.
Optionally, as shown in fig. 7, the third sending module 505 specifically includes:
a first sending unit 5051, configured to send a container hardware resource configuration improvement indication for the target vendor to the edge node device if the container resource utilization rate exceeds a first preset value;
a second sending unit 5052, configured to send a container hardware resource configuration reduction indication for the target vendor to the edge node device if the container resource utilization rate is lower than a second preset value.
The resource allocation apparatus 500 can implement each process of the method embodiment in fig. 1 in the embodiment of the present application, and achieve the same beneficial effects, and for avoiding repetition, the details are not described here again.
Referring to fig. 8, fig. 8 is a schematic structural diagram of another resource allocation apparatus according to an embodiment of the present application, and as shown in fig. 8, the resource allocation apparatus 800 is applied to an edge node device, and includes:
a first receiving module 801, configured to receive a service image sent by a cloud platform server, where a manufacturer corresponding to the service image is associated with the edge node device;
a creating module 802, configured to correspondingly create a container based on a vendor corresponding to the service mirror;
a second receiving module 803, configured to receive a container hardware resource configuration instruction sent by the cloud platform server, and configure, in response to the container hardware resource configuration instruction, container hardware resources of each manufacturer.
Optionally, as shown in fig. 9, the resource allocation apparatus 800 may further include:
a sending module 804, configured to send a container resource utilization rate for a target manufacturer to the cloud platform server, where the target manufacturer is a manufacturer in a manufacturer corresponding to the service mirror image;
a third receiving module 805, configured to receive a container hardware resource configuration change instruction, which is sent by the cloud platform server and is for a target manufacturer, and change, in response to the container hardware resource configuration change instruction, a container hardware resource configuration corresponding to the target manufacturer.
Optionally, as shown in fig. 10, the third receiving module 805 may specifically include:
a first receiving unit 8051, configured to receive a container hardware resource configuration improvement instruction, which is sent by the cloud platform server and is for the target manufacturer, and improve a container resource configuration corresponding to the target manufacturer;
a second receiving unit 8052, configured to receive a container hardware resource configuration reduction instruction, which is sent by the cloud platform server and is for the target manufacturer, and reduce the resource configuration of the container corresponding to the target manufacturer.
The resource allocation apparatus 800 can implement each process of the method embodiment in fig. 2 in the embodiment of the present application, and achieve the same beneficial effects, and for avoiding repetition, the details are not described here again.
Referring to fig. 11, an embodiment of the present application further provides a cloud platform server 1100, which includes a processor 1101, a memory 1102, and a program or instructions stored in the memory 1102 and executable on the processor 1101. When executed by the processor 1101, the program or instructions implement each process of the method embodiment in fig. 1 of the embodiments of the present application and can achieve the same technical effect, which is not repeated here to avoid repetition.
Referring to fig. 12, an embodiment of the present application further provides an edge node device 1200, which includes a processor 1201, a memory 1202, and a program or instructions stored in the memory 1202 and executable on the processor 1201. When executed by the processor 1201, the program or instructions implement each process of the method embodiment in fig. 2 of the embodiments of the present application and can achieve the same technical effect, which is not repeated here to avoid repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A resource allocation method executed by a cloud platform server is characterized by comprising the following steps:
acquiring service mirror images of a plurality of manufacturers and service use conditions of edge node equipment, wherein the service use conditions comprise manufacturers related to the edge nodes;
sending the service mirror images of the manufacturers associated with the edge node among the plurality of manufacturers to the edge node equipment according to the service use condition of the edge node;
and sending a container hardware resource configuration instruction to the edge node equipment according to the service use condition of the edge node, wherein the container hardware resource configuration instruction is used for indicating the container hardware resource configuration of the edge node equipment aiming at each manufacturer.
2. The method of claim 1, wherein after sending a container hardware resource configuration indication to the edge node device based on traffic usage of the edge node, the method further comprises:
acquiring the container resource utilization rate of edge node equipment aiming at a target manufacturer, wherein the target manufacturer is a manufacturer in a manufacturer associated with the edge node;
and if the utilization rate of the container resources meets a preset condition, sending a container hardware resource configuration change instruction aiming at the target manufacturer to the edge node equipment, wherein the container hardware resource configuration change instruction is used for indicating the edge node equipment to change the container hardware resource configuration aiming at the target manufacturer.
3. The method of claim 2, wherein the sending a container hardware resource configuration change indication for the target vendor to the edge node device if the container resource utilization satisfies a predetermined condition comprises:
if the container resource utilization rate exceeds a first preset value, sending a container hardware resource configuration improvement instruction aiming at the target manufacturer to the edge node equipment;
and if the container resource utilization rate is lower than a second preset value, sending a container hardware resource configuration reduction indication aiming at the target manufacturer to the edge node equipment.
4. A resource allocation method performed by an edge node device, comprising:
receiving a service mirror image sent by a cloud platform server, wherein a manufacturer corresponding to the service mirror image is associated with the edge node equipment;
correspondingly creating a container based on the manufacturer corresponding to the service mirror image;
and receiving a container hardware resource configuration instruction sent by the cloud platform server, responding to the container hardware resource configuration instruction, and configuring container hardware resources of each manufacturer.
5. The method of claim 4, wherein after receiving the container hardware resource configuration indication sent by the cloud platform server and configuring container hardware resources for vendors in response to the container hardware resource configuration indication, the method further comprises:
sending the container resource utilization rate aiming at a target manufacturer to the cloud platform server, wherein the target manufacturer is a manufacturer in a manufacturer corresponding to the service mirror image;
and receiving a container hardware resource configuration change instruction aiming at a target manufacturer and sent by the cloud platform server, and responding to the container hardware resource configuration change instruction and changing the container hardware resource configuration corresponding to the target manufacturer.
6. The method of claim 5, wherein the receiving a container hardware resource configuration change instruction for a target vendor sent by the cloud platform server, and in response to the container hardware resource configuration change instruction, changing a container hardware resource configuration corresponding to the target vendor comprises:
receiving a container hardware resource configuration improvement instruction aiming at the target manufacturer and sent by the cloud platform server, and improving the container resource configuration corresponding to the target manufacturer;
and receiving a container hardware resource configuration reduction instruction aiming at the target manufacturer and sent by the cloud platform server, and reducing the resource configuration aiming at the container corresponding to the target manufacturer.
7. A resource allocation device is applied to a cloud platform server and is characterized by comprising:
a first acquisition module, used for acquiring service mirror images of a plurality of manufacturers and service use conditions of edge node equipment, wherein the service use conditions comprise the manufacturers associated with the edge node;
a first sending module, configured to send, according to a service usage condition of the edge node, a service image of a manufacturer associated with the edge node among the plurality of manufacturers to the edge node device;
a second sending module, configured to send a container hardware resource configuration indication to the edge node device according to the service usage of the edge node, where the container hardware resource configuration indication is used to indicate the container hardware resource configuration of the edge node device for each vendor.
8. A resource allocation apparatus applied to an edge node device, characterized by comprising:
the first receiving module is used for receiving a service mirror image sent by a cloud platform server, and a manufacturer corresponding to the service mirror image is associated with the edge node equipment;
the creating module is used for correspondingly creating a container based on the manufacturer corresponding to the service mirror image;
and the second receiving module is used for receiving the container hardware resource configuration instruction sent by the cloud platform server and responding to the container hardware resource configuration instruction to configure the container hardware resources of each manufacturer.
9. A cloud platform server comprising a processor, a memory and a program or instructions stored on the memory and running on the processor, which when executed by the processor, implement the steps of the resource allocation method of any one of claims 1 to 3.
10. An edge node device comprising a processor, a memory, and a program or instructions stored on the memory and run on the processor, which program or instructions, when executed by the processor, implement the steps of the resource allocation method of any one of claims 4 to 6.
CN202110017141.4A 2021-01-07 2021-01-07 Resource allocation method and device, cloud platform server and edge node equipment Pending CN112732440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110017141.4A CN112732440A (en) 2021-01-07 2021-01-07 Resource allocation method and device, cloud platform server and edge node equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110017141.4A CN112732440A (en) 2021-01-07 2021-01-07 Resource allocation method and device, cloud platform server and edge node equipment

Publications (1)

Publication Number Publication Date
CN112732440A (en) 2021-04-30

Family

ID=75590921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110017141.4A Pending CN112732440A (en) 2021-01-07 2021-01-07 Resource allocation method and device, cloud platform server and edge node equipment

Country Status (1)

Country Link
CN (1) CN112732440A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113452586A (en) * 2021-06-11 2021-09-28 青岛海尔科技有限公司 Method and device for registering edge computing node and intelligent home system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323374A1 (en) * 2015-04-29 2016-11-03 Microsoft Technology Licensing, Llc Optimal Allocation of Dynamic Cloud Computing Platform Resources
CN109302483A (en) * 2018-10-17 2019-02-01 网宿科技股份有限公司 A kind of management method and system of application program
CN110166409A (en) * 2018-02-13 2019-08-23 华为技术有限公司 Equipment cut-in method, related platform and computer storage medium
CN110392094A (en) * 2019-06-03 2019-10-29 网宿科技股份有限公司 A kind of method and fusion CDN system of acquisition business datum
CN111405072A (en) * 2020-06-03 2020-07-10 杭州朗澈科技有限公司 Hybrid cloud optimization method based on cloud manufacturer cost scheduling
CN111796940A (en) * 2020-07-06 2020-10-20 中国铁塔股份有限公司 Resource allocation method and device and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323374A1 (en) * 2015-04-29 2016-11-03 Microsoft Technology Licensing, Llc Optimal Allocation of Dynamic Cloud Computing Platform Resources
CN110166409A (en) * 2018-02-13 2019-08-23 华为技术有限公司 Equipment cut-in method, related platform and computer storage medium
CN109302483A (en) * 2018-10-17 2019-02-01 网宿科技股份有限公司 A kind of management method and system of application program
CN110392094A (en) * 2019-06-03 2019-10-29 网宿科技股份有限公司 A kind of method and fusion CDN system of acquisition business datum
CN111405072A (en) * 2020-06-03 2020-07-10 杭州朗澈科技有限公司 Hybrid cloud optimization method based on cloud manufacturer cost scheduling
CN111796940A (en) * 2020-07-06 2020-10-20 中国铁塔股份有限公司 Resource allocation method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113452586A (en) * 2021-06-11 2021-09-28 青岛海尔科技有限公司 Method and device for registering edge computing node and intelligent home system
CN113452586B (en) * 2021-06-11 2023-04-07 青岛海尔科技有限公司 Method and device for registering edge computing node and intelligent home system

Similar Documents

Publication Publication Date Title
CN109743213B (en) Network slice processing method, equipment and system
EP3342138B1 (en) Systems and methods for distributing network resources to network service providers
CN108965485B (en) Container resource management method and device and cloud platform
CN111800442B (en) Network system, mirror image management method, device and storage medium
CN111770535B (en) Network configuration method, device and system based on intention
CN113301077B (en) Cloud computing service deployment and distribution method, system, equipment and storage medium
US10652360B2 (en) Access scheduling method and apparatus for terminal, and computer storage medium
CN106648900B (en) Supercomputing method and system based on smart television
US20150237508A1 (en) Method for generating wireless virtual network and wireless network control device
CN105307208A (en) Wireless network resource distribution method and device for mobile terminal and mobile terminal
CN112732440A (en) Resource allocation method and device, cloud platform server and edge node equipment
CN114786260A (en) Slice resource allocation method and device based on 5G network
CN107171976B (en) Method and device for realizing resource reservation
CN109302302B (en) Method, system and computer readable storage medium for scaling service network element
CN112332999B (en) Bandwidth allocation method, device, equipment and computer readable storage medium
CN105634990B (en) Based on the continuous method for obligating resource of time frequency spectrum, device and processor
CN112003790B (en) Distribution method of network traffic used by intelligent school
EP3890388B1 (en) Method for radio access network configuration, network management equipment, and storage medium
CN111510491A (en) Resource access method, cache server, storage medium and electronic device
CN112449301B (en) Broadcast method of positioning auxiliary information, positioning server and RAN node
CN116155829A (en) Network traffic processing method and device, medium and electronic equipment
CN102893561B (en) A kind of method of Service control, the network equipment, content server and system
CN111611084A (en) Streaming media service instance adjusting method and device and electronic equipment
CN113328868B (en) NFV management method, VNFM, MEC platform and storage medium
CN116193066A (en) Scheduling method and device for media forwarding service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, floors 1-3, building 14, North District, yard 9, dongran North Street, Haidian District, Beijing 100029

Applicant after: CHINA TOWER Co.,Ltd.

Address before: 100142 19th floor, 73 Fucheng Road, Haidian District, Beijing

Applicant before: CHINA TOWER Co.,Ltd.
