CN114490008A - Service scheduling method, device and system and edge computing node

Service scheduling method, device and system and edge computing node

Info

Publication number
CN114490008A
CN114490008A (application number CN202011162644.2A)
Authority
CN
China
Prior art keywords
edge computing
node
service
resource
computing node
Prior art date
Legal status
Pending
Application number
CN202011162644.2A
Other languages
Chinese (zh)
Inventor
吕航
李佳聪
贾冠一
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202011162644.2A
Publication of CN114490008A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 - Payment architectures, schemes or protocols
    • G06Q 20/38 - Payment protocols; Details thereof
    • G06Q 20/382 - Payment protocols; Details thereof insuring higher security of transaction
    • G06Q 20/3829 - Payment protocols; Details thereof insuring higher security of transaction involving key management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a service scheduling method, apparatus, and system, and an edge computing node, and relates to the field of information technology. The method includes: an edge computing node that has joined a blockchain receives a service request; when it cannot carry the service, it scans a resource blockchain ledger and determines the current available resources and node address location information of each edge computing node; it selects the edge computing node closest to itself that can meet the service bearing requirement as the service node; and it sends the service initiator a response redirecting the service request to the service node. Service scheduling is performed on the basis of blockchain technology, no service scheduling platform needs to be established, and decentralized, efficient global service scheduling of edge computing nodes can be realized.

Description

Service scheduling method, device and system and edge computing node
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to a method, an apparatus, a system, and an edge computing node for service scheduling.
Background
In application fields such as the industrial internet, the internet of things, the internet of vehicles, and high-definition video services, MEC (Multi-access Edge Computing) plays an increasingly important role as a new computing model. An edge computing platform is deployed at the edge of the network, provides low-latency, high-throughput services for various applications, and also serves as a data and service hub between the cloud and terminals.
Because of factors such as environmental constraints and construction and operation costs, edge computing nodes have limited computing capacity, storage capacity, and network throughput. When the traffic carried by a single edge computing node is too large, network congestion, packet loss, or a shortage of storage and computing resources can cause service failures, so service scheduling among edge computing nodes is necessary. In the traditional global service scheduling approach, a service scheduling platform is established to collect the load condition and service traffic of each edge node and perform scheduling according to an optimization algorithm. This approach has a high construction cost, increases the operation and maintenance burden, and concentrates computation on the central platform, which is likely to introduce computing delay and affect the timeliness of the overall service.
Disclosure of Invention
One technical problem to be solved by the present disclosure is to provide a service scheduling method, apparatus, and system, and an edge computing node, which can realize decentralized and efficient global service scheduling of edge computing nodes.
According to an aspect of the present disclosure, a service scheduling method is provided, including: an edge computing node that has joined a blockchain receives a service request and, when determining that it cannot carry the service, scans a resource blockchain ledger and determines the current available resources and node address location information of each edge computing node; selects the edge computing node closest to itself that can meet the service bearing requirement as a service node; and sends the service initiator a response redirecting the service request to the service node.
In some embodiments, the edge computing node serving as the master node in the blockchain generates a block after receiving the resource change records sent by the edge computing nodes and sends the block to each edge computing node; each edge computing node writes the received block into the resource blockchain ledger.
In some embodiments, each edge computing node scans the resource blockchain ledger and elects the master node based on the available resource value of each edge computing node.
In some embodiments, electing the master node includes: each edge computing node scans the resource blockchain ledger and extracts the resource data in the resource block model of each edge computing node, the resource data including one or more of computing resources, storage resources, User Plane Function (UPF) capacity resources, network resources, and Graphics Processing Unit (GPU) resources; calculating the consumption ratio of each type of resource; determining the weight of each resource in service bearing according to the consumption ratios; calculating the available resource value of each edge computing node according to the weights; and taking the edge computing node with the largest available resource value as the master node.
In some embodiments, the resource block model includes an edge computing node identifier, the address location information of the edge computing node, and the inventory resources and available resources of each type of resource.
In some embodiments, each edge computing node re-elects the master node after receiving a block.
In some embodiments, when an edge computing node determines that its available resources have changed, it generates a resource change record and sends the resource change record to the edge computing node serving as the master node.
In some embodiments, each edge computing node obtains the keys of the edge computing nodes, verifies the received block using the keys, and writes the block into the resource blockchain ledger after the verification passes.
According to another aspect of the present disclosure, a service scheduling apparatus is further provided, including: a service interface module configured to receive a service request; a resource sensing module configured to determine whether the node's resources can carry the service; and a service scheduling module configured to, when the service cannot be carried, scan the resource blockchain ledger, determine the current available resources and node address location information of each edge computing node, select the edge computing node closest to itself that can meet the service bearing requirement as a service node, and send the service initiator a response redirecting the service request to the service node.
In some embodiments, a controller is configured to generate a block after receiving the resource change records sent by the edge computing nodes and to send the block to each edge computing node; a ledger read-write module is configured to write the received block into the resource blockchain ledger.
In some embodiments, the controller is configured to scan the resource blockchain ledger and elect the master node based on the available resource value of each edge computing node.
In some embodiments, a key management module is configured to store the keys of the edge computing nodes, and a verification module is configured to obtain the keys of the edge computing nodes and verify the received block using the keys.
According to another aspect of the present disclosure, a service scheduling apparatus is further provided, including: a memory; and a processor coupled to the memory, the processor being configured to perform the service scheduling method described above based on instructions stored in the memory.
According to another aspect of the present disclosure, an edge computing node is further provided, which includes the service scheduling apparatus described above.
According to another aspect of the present disclosure, a service scheduling system is further provided, which includes a plurality of edge computing nodes as described above.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is further provided, on which computer program instructions are stored; when executed by a processor, the instructions implement the service scheduling method described above.
In the embodiments of the present disclosure, service scheduling is performed based on blockchain technology; no service scheduling platform needs to be established, and decentralized, efficient global service scheduling of edge computing nodes can be realized.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of some embodiments of the service scheduling method of the present disclosure.
Fig. 2 is a flowchart of another embodiment of the service scheduling method of the present disclosure.
Fig. 3 is a flowchart of another embodiment of the service scheduling method of the present disclosure.
Fig. 4 is a schematic diagram of a resource block model according to the present disclosure.
Fig. 5 is a schematic structural diagram of some embodiments of the service scheduling apparatus of the present disclosure.
Fig. 6 is a schematic structural diagram of another embodiment of a service scheduling apparatus according to the present disclosure.
Fig. 7 is a schematic structural diagram of another embodiment of a service scheduling apparatus according to the present disclosure.
Fig. 8 is a schematic structural diagram of some embodiments of the service scheduling system of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of some embodiments of the service scheduling method of the present disclosure.
In step 110, an edge computing node that has joined the blockchain receives a service request and, when determining that it cannot carry the service, scans the resource blockchain ledger and determines the current available resources and node address location information of each edge computing node.
Blockchain is a decentralized, distributed data management framework that is applied in many distributed system scenarios. Through its decentralized and tamper-resistant computation model, a blockchain can realize distributed trusted computation and transactions efficiently and at low cost, and it is now widely used in fields such as finance, logistics, industrial manufacturing, medical treatment, and the internet of things. In this embodiment, blockchain technology is used for service scheduling.
In some embodiments, the edge computing node serving as the master node in the blockchain generates a block after receiving the resource change records sent by the edge computing nodes and sends the block to each edge computing node, and each edge computing node writes the received block into the resource blockchain ledger. The resource blockchain ledger stores the node resource blockchain data, that is, the resource transactions (resource changes) of all edge computing nodes.
An edge computing node reads the blockchain data after receiving a service request; because every edge computing node stores the same information, the available resources of each edge computing node can be computed and the address location information of each edge computing node can be obtained.
In step 120, the edge computing node closest to itself that can meet the service bearing requirement is selected as the service node.
In some embodiments, according to geographic location and the principle of proximity, a node whose available resources meet the service requirement is selected as the substitute node, which allows the substitute node to respond quickly and reduces the delay of executing the service scheduling.
In step 130, a response redirecting the service request to the service node is sent to the service initiator.
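As a concrete illustration of steps 110-130, the following Python sketch shows the scheduling decision under simple assumptions: the ledger has already been scanned into a list of node states, distance is plain Euclidean distance over the recorded locations, and names such as NodeState, can_carry, and schedule are illustrative rather than taken from the disclosure.

```python
from dataclasses import dataclass
from math import dist
from typing import Dict, List, Optional, Tuple

@dataclass
class NodeState:
    mec_id: str                       # node identifier from the ledger
    location: Tuple[float, float]     # node address/location information
    available: Dict[str, float]       # e.g. {"C": cpu, "S": storage, "U": upf, "N": net, "G": gpu}

def can_carry(node: NodeState, demand: Dict[str, float]) -> bool:
    # A node can carry the service if every requested resource type is available in sufficient amount.
    return all(node.available.get(k, 0.0) >= v for k, v in demand.items())

def schedule(self_node: NodeState, demand: Dict[str, float],
             ledger_nodes: List[NodeState]) -> Optional[str]:
    """Return the MEC_ID of the service node to redirect to, or None to carry the service locally."""
    if can_carry(self_node, demand):
        return None                                       # step 110: local resources suffice
    candidates = [n for n in ledger_nodes
                  if n.mec_id != self_node.mec_id and can_carry(n, demand)]
    if not candidates:
        return None                                       # no substitute node satisfies the demand
    # Step 120: principle of proximity, pick the nearest node meeting the bearing requirement.
    nearest = min(candidates, key=lambda n: dist(n.location, self_node.location))
    return nearest.mec_id                                 # step 130: redirect target
```

In practice, the response of step 130 could be, for example, an HTTP redirect carrying the selected node's address back to the service initiator.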
In the above embodiment, service scheduling is performed based on blockchain technology without establishing a service scheduling platform, so decentralized and efficient global service scheduling of edge computing nodes can be realized.
Fig. 2 is a flowchart illustrating another embodiment of a service scheduling method according to the present disclosure.
In step 210, each edge computing node scans the resource blockchain ledger and elects a master node based on the available resource value of each edge computing node.
In some embodiments, the steps of electing the master node are shown in Fig. 3. In step 310, each edge computing node scans the resource blockchain ledger and extracts the resource data in the resource block model of each edge computing node, where the resource data includes one or more of computing resource C, storage resource S, UPF (User Plane Function) capacity resource U, network resource N, and GPU (Graphics Processing Unit) resource G.
In some embodiments, as shown in Fig. 4, the resource block model includes an edge computing node identifier MEC_ID, the address location information MEC_LOC of the edge computing node, and the inventory resources and available resources of each type of resource. The resource block model is highly extensible: resource types can be added to the statistics according to the characteristics and needs of the edge computing nodes, adapting to the continuously changing requirements and service characteristics of the edge computing nodes.
Inventory resources are the sum of used and available resources. An edge computing node is usually built as a virtualization platform (for example, an NFVI architecture), which can respond to service requirements conveniently and quickly and facilitates lifecycle management. After the edge computing platform is virtualized, the node's resources are distributed across virtual machines, containers, or dedicated infrastructure, so the node inventory resources are counted as the sum over all units. Here, V_i is the computing resource of a single virtual machine, container, or other unit, and C is the sum of the computing resources of the n units; P_i is the capacity resource of a single UPF, and U is the sum of the capacity resources of the q UPFs; D_i is the network resource of a single link, and N is the sum of the network resources of the t links; S_i is the storage resource of each storage unit, and S is the sum of the storage resources of the m storage units; W_i is the GPU resource of a single virtual machine, and G is the sum of the GPU resources of the p virtual machines; R is the set of inventory resources of the edge computing node.
ΔV_i is the available computing resource of a single virtual machine, container, or other unit, and ΔC is the sum of the available computing resources of the n units; ΔP_i is the available capacity resource of a single UPF, and ΔU is the sum of the available capacity resources of the q UPFs; ΔD_i is the available network resource of a single link, and ΔN is the sum of the available network resources of the t links; ΔS_i is the available storage resource of each storage unit, and ΔS is the sum of the available storage resources of the m storage units; ΔW_i is the available GPU resource of a single virtual machine, and ΔG is the sum of the available GPU resources of the p virtual machines; ΔR is the set of currently available resources of the edge computing node.
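The resource block model of Fig. 4 can be pictured roughly as the data structure below; MEC_ID, MEC_LOC, and the resource types C, S, U, N, G come from the description, while the field names and the extension method are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ResourceBlockModel:
    mec_id: str                                   # MEC_ID: edge computing node identifier
    mec_loc: Tuple[float, float]                  # MEC_LOC: address/location information
    # Inventory resources R: per-type sums over all units (VMs/containers, storage units, UPFs, links, GPUs)
    inventory: Dict[str, float] = field(
        default_factory=lambda: {"C": 0.0, "S": 0.0, "U": 0.0, "N": 0.0, "G": 0.0})
    # Currently available resources ΔR, kept per resource type as well
    available: Dict[str, float] = field(
        default_factory=lambda: {"C": 0.0, "S": 0.0, "U": 0.0, "N": 0.0, "G": 0.0})

    def add_resource_type(self, name: str, total: float, free: float) -> None:
        # The model is extensible: a new resource type can be added to the statistics as needs evolve.
        self.inventory[name] = total
        self.available[name] = free
```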
In step 320, consumption ratios of the various types of resources are calculated.
In some embodiments, the consumption ratio α1 of computing resource C, the consumption ratio α2 of storage resource S, the consumption ratio α3 of UPF capacity resource U, the consumption ratio α4 of network resource N, and the consumption ratio α5 of GPU resource G are calculated respectively.
(The formula images defining the consumption ratios α1 to α5 are not reproduced in this text.)
In these formulas, m is the number of records the edge computing node has put on the chain.
In step 330, the weight of each resource in the service bearer is determined according to the consumption ratio of each resource.
In some embodiments, the weight β1 of computing resource C in the service bearer, the weight β2 of storage resource S in the service bearer, the weight β3 of UPF capacity resource U in the service bearer, the weight β4 of network resource N in the service bearer, and the weight β5 of GPU resource G in the service bearer are calculated respectively.
(The formula images defining the weights β1 to β5 are not reproduced in this text.)
In step 340, the available resource value of each edge computing node is computed according to the weight of each resource in the service bearer.
The available resource value ΔR of each edge computing node is ΔR = β1×ΔC + β2×ΔS + β3×ΔU + β4×ΔN + β5×ΔG.
In step 350, the edge computing node with the largest available resource value is taken as the master node. The master node is the block-producing node and holds the accounting right; the other nodes are ordinary nodes.
In this step, electing the master node ensures that the blocks are produced by the node with the most abundant resources.
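A compact sketch of steps 320 through 350 is given below. Because the ratio and weight formulas appear only as images in the original text, the sketch assumes one plausible reading (consumption ratio = consumed share of inventory, weights normalized over the ratios); the actual formulas in the patent may differ, and the helper names are illustrative.

```python
from typing import Dict, List

RESOURCE_TYPES = ["C", "S", "U", "N", "G"]   # computing, storage, UPF capacity, network, GPU

def consumption_ratios(inventory: Dict[str, float], available: Dict[str, float]) -> Dict[str, float]:
    # Step 320 (assumed form): share of each inventory resource that is already consumed.
    return {k: (inventory[k] - available[k]) / inventory[k] if inventory[k] else 0.0
            for k in RESOURCE_TYPES}

def bearer_weights(ratios: Dict[str, float]) -> Dict[str, float]:
    # Step 330 (assumed form): weights proportional to consumption and normalized to sum to 1,
    # so that the scarcer resource types count more in the available resource value.
    total = sum(ratios.values()) or 1.0
    return {k: ratios[k] / total for k in RESOURCE_TYPES}

def available_resource_value(node: dict) -> float:
    # Step 340: ΔR = β1·ΔC + β2·ΔS + β3·ΔU + β4·ΔN + β5·ΔG
    beta = bearer_weights(consumption_ratios(node["inventory"], node["available"]))
    return sum(beta[k] * node["available"][k] for k in RESOURCE_TYPES)

def elect_master(ledger_nodes: List[dict]) -> str:
    # Step 350: the node with the largest available resource value becomes the master node.
    return max(ledger_nodes, key=available_resource_value)["mec_id"]
```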
In step 220, when an edge computing node determines that its available resources have changed, it generates a resource change record and sends the record to the edge computing node serving as the master node.
In some embodiments, when an edge computing node receives a new service or a service ends, its available resources change; after the edge computing node is expanded, its inventory resources and available resources also change. In either case, the edge computing node immediately generates a resource change record of the resource block model and sends the record to the current master node.
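A minimal sketch of how a resource change record could be assembled when step 220 fires; the payload fields, the keyed-hash stand-in for a signature, and the send_to_master transport are assumptions for illustration.

```python
import hashlib
import json
import time
from typing import Callable

def build_resource_change_record(model: dict, node_secret: str) -> dict:
    """Assemble a resource change record from the node's current resource block model."""
    payload = {
        "mec_id": model["mec_id"],
        "mec_loc": model["mec_loc"],
        "inventory": model["inventory"],
        "available": model["available"],
        "timestamp": time.time(),
    }
    # Keyed hash used here only as a placeholder for a real asymmetric signature.
    digest = hashlib.sha256((json.dumps(payload, sort_keys=True) + node_secret).encode()).hexdigest()
    return {"payload": payload, "signature": digest}

def on_resource_change(model: dict, node_secret: str, send_to_master: Callable[[dict], None]) -> None:
    # The record is sent to the current master node immediately after the change is detected.
    send_to_master(build_resource_change_record(model, node_secret))
```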
In step 230, the edge computing node serving as the master node generates a block after receiving the resource change records sent by the edge computing nodes and sends the block to each edge computing node.
Each block stores the resource change data of the edge computing nodes within a certain time period; each transaction is the resource data model of one edge computing node, and the blocks are chained in time order.
In some embodiments, the master node starts a timer after receiving the first resource change record; when the timer expires, it assembles all collected records into a new block and sends the new block to the other edge computing nodes.
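The master node's collection window of step 230 and the timer described above can be sketched as below, assuming an in-process queue of incoming records, a fixed (illustrative) block interval, and a caller-supplied broadcast function.

```python
import hashlib
import json
import queue
import time
from typing import Callable

BLOCK_INTERVAL = 10.0   # seconds; an assumed value, the disclosure does not fix the interval

def produce_block(record_queue: "queue.Queue[dict]", prev_hash: str,
                  broadcast: Callable[[dict], None]) -> str:
    """Collect resource change records; when the timer expires, seal them into a block and broadcast it."""
    records = [record_queue.get()]                    # timing starts at the first record received
    deadline = time.time() + BLOCK_INTERVAL
    while time.time() < deadline:
        try:
            records.append(record_queue.get(timeout=max(0.0, deadline - time.time())))
        except queue.Empty:
            break
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "transactions": records,                      # one resource data model per transaction
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True, default=str).encode()).hexdigest()
    broadcast(block)                                  # send the new block to the other edge nodes
    return block["hash"]
```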
In step 240, each edge computing node obtains the keys of the edge computing nodes, verifies the received block using the keys, and writes the block into the resource blockchain ledger after the verification passes.
In some embodiments, each edge computing node re-elects the master node after receiving a block: the available resource value of each edge computing node is recalculated, and the edge computing node with the largest available resource value becomes the master node, which ensures that the current master node is always the most resource-rich node.
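A sketch of the check before the ledger write in step 240 is shown below. For brevity it only verifies the block hash and the chain linkage; the key-based verification described in the disclosure would additionally check the producing node's signature with the public key held by the key management module.

```python
import hashlib
import json
from typing import List

def verify_and_append(block: dict, ledger: List[dict]) -> bool:
    """Verify a received block and, if it passes, write it into the resource blockchain ledger."""
    body = {k: v for k, v in block.items() if k != "hash"}
    recomputed = hashlib.sha256(
        json.dumps(body, sort_keys=True, default=str).encode()).hexdigest()
    if block.get("hash") != recomputed:
        return False                                  # integrity check failed
    if ledger and block.get("prev_hash") != ledger[-1]["hash"]:
        return False                                  # does not extend the current chain head
    # A full implementation would also verify the master node's signature here (verification module).
    ledger.append(block)                              # write into the resource blockchain ledger
    # After the write, the node re-elects the master from the updated ledger (see elect_master above).
    return True
```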
In step 250, an edge computing node receives a service request and, when determining that it cannot carry the service, scans the resource blockchain ledger and determines the current available resources and node address location information of each edge computing node.
In some embodiments, when an edge computing node receives a service request, it obtains the idle resources of the current node and determines whether the currently available resources can carry the service; if not, it scans the resource blockchain ledger and determines the current available resources and node address location information of each edge computing node.
In step 260, the edge computing node selects, according to the service requirement, the edge computing node closest to itself that can meet the service bearing requirement as the service node.
In step 270, the edge computing node sends the service initiator a response redirecting the service request to the service node.
In the above embodiment, the edge computing nodes elect a master node with the accounting right by computing their available resources; the master node collects the resource information reported by the edge computing nodes and periodically produces blocks onto the chain. After receiving a service request, an edge computing node whose own resources are insufficient scans the resource state of every edge computing node recorded on the chain, so that global service scheduling can be realized quickly according to the current resource status of the other edge computing nodes.
Fig. 5 is a schematic structural diagram of some embodiments of the service scheduling apparatus of the present disclosure. The service scheduling apparatus includes a service interface module 510, a resource sensing module 520, and a service scheduling module 530.
The service interface module 510 is configured to receive a service request.
In some embodiments, an edge service terminal initiates a service request, and the service interface module 510 receives the service request and invokes the resource sensing module 520.
The resource sensing module 520 is configured to determine whether the node's resources can carry the service.
In some embodiments, the resource sensing module 520 obtains the idle resources of the current node, determines whether the currently available resources can carry the service, and invokes the service scheduling module 530 if it determines that they cannot.
In some embodiments, the resource sensing module 520 is a probe module: when the available resources of the edge computing node change because a service is received or released, the resource sensing module 520 collects and senses the change in real time, counts the available resources of the node, assembles a resource block model, and sends it to the resource collection module of the controller.
The service scheduling module 530 is configured to, when the service cannot be carried, scan the resource blockchain ledger, determine the current available resources and node address location information of each edge computing node, select the edge computing node closest to itself that can meet the service bearing requirement as the service node, and send the service initiator a response redirecting the service request to the service node.
In the above embodiment, service scheduling is performed based on blockchain technology without establishing a service scheduling platform, so decentralized and efficient global service scheduling of edge computing nodes can be realized.
Fig. 6 is a schematic structural diagram of another embodiment of the service scheduling apparatus of the present disclosure. In addition to the service interface module 510, the resource sensing module 520, and the service scheduling module 530, the service scheduling apparatus includes a controller 610 and a ledger read-write module 620.
When deployed in the master node, the controller 610 is configured to generate a block after receiving the resource change records sent by the edge computing nodes and to send the block to each edge computing node.
In some embodiments, the controller 610 is further configured to scan the resource blockchain ledger and elect the master node based on the available resource value of each edge computing node. For example, it scans the resource blockchain ledger and extracts the resource data in the resource block model of each edge computing node, where the resource data includes one or more of computing resources, storage resources, UPF capacity resources, network resources, and GPU resources; calculates the consumption ratio of each type of resource; determines the weight of each resource in service bearing according to the consumption ratios; calculates the available resource value of each edge computing node according to the weights; and takes the edge computing node with the largest available resource value as the master node. The resource blockchain ledger 630 stores the node resource blockchain data, that is, the resource transactions of all edge computing nodes. Each block stores the resource change data of the edge computing nodes within a certain time period; each transaction is the resource data model of one edge computing node, and the blocks are chained in time order.
The controller 610 is the core functional module of the node blockchain and the key module of the whole system; it controls the operation of the entire system. In some embodiments, the controller 610 includes a blockchain interaction interface 611, a block production module 612, a blockchain control module 613, and a resource collection module 614.
When located in a non-master node, the blockchain interaction interface 611 is used to read blocks and to receive the blocks sent by the master node; when located in the master node, it is used to send blocks and serves as the port for receiving resource data.
When the current edge computing node is the master node, the block production module 612 receives the resource data reported by the other edge computing nodes and generates blocks at regular intervals.
The blockchain control module 613 is the core module of the controller. Its main functions are to coordinate the operation of the whole controller, to scan the resource blockchain ledger 630 when a new block enters the chain, to compute the available resource values of all nodes, and to take the edge computing node with the largest available resource value as the master node. The module also has a built-in timer that starts the block production module 612 to produce blocks when the node is the master node.
The resource collection module 614 receives the resource change data sent by the resource sensing module 520 and sends it to the master node through the blockchain interaction interface 611.
The ledger read-write module 620 writes received blocks into the resource blockchain ledger.
In some embodiments, the ledger read-write module 620 may also read out the block data.
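The division of work among the controller sub-modules 611-614 and the ledger read-write module can be pictured with the wiring sketch below; the class shape and method names are assumptions, with the election, verification, and broadcast behaviors injected as callables so the sketch stays self-contained.

```python
import threading
from typing import Callable, List

class Controller:
    """Coordinates the interaction interface (611), block production (612), control (613),
    and resource collection (614) modules of one edge computing node."""

    def __init__(self, node_id: str, verify_and_append: Callable[[dict, list], bool],
                 elect_master: Callable[[list], str], broadcast: Callable[[dict], None],
                 block_interval: float = 10.0):
        self.node_id = node_id
        self.is_master = False
        self.verify_and_append = verify_and_append   # ledger read-write + verification modules
        self.elect_master = elect_master             # control module: max available resource value
        self.broadcast = broadcast                   # interaction interface: send blocks
        self.block_interval = block_interval         # built-in timer interval of the control module
        self.pending_records: List[dict] = []        # resource data reported by other nodes

    def on_block_received(self, block: dict, ledger: list, nodes: list) -> None:
        # Interaction interface receives a block; the control module re-elects the master afterwards.
        if self.verify_and_append(block, ledger):
            self.is_master = (self.elect_master(nodes) == self.node_id)
            if self.is_master:
                threading.Timer(self.block_interval, self.produce_block).start()

    def on_local_resource_change(self, record: dict, send_to_master: Callable[[dict], None]) -> None:
        # Resource collection module forwards local resource changes toward the current master node.
        send_to_master(record)

    def on_record_reported(self, record: dict) -> None:
        # Block production module (master only) accumulates the reported resource data.
        if self.is_master:
            self.pending_records.append(record)

    def produce_block(self) -> None:
        # Timer callback on the master node: seal the collected records into a block and broadcast it.
        if self.is_master and self.pending_records:
            block = {"transactions": self.pending_records}   # simplified; see produce_block() above
            self.pending_records = []
            self.broadcast(block)
```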
In the above embodiment, the master node collects the resource information reported by the edge computing nodes and produces blocks onto the chain at regular intervals; after receiving a service request, each edge computing node can promptly determine a substitute node when its own resources are insufficient, which facilitates fast global scheduling of services.
In other embodiments of the present disclosure, the service scheduling apparatus further includes a key management module 640 and a verification module 650. The key management module 640 is configured to store the keys of the edge computing nodes, and the verification module 650 is configured to obtain the keys of the edge computing nodes and verify received blocks using the keys.
In some embodiments, the controller obtains block data from the chain, and the block data must pass the integrity check of the verification module before the block is written into the resource blockchain ledger. The key management module 640 stores the public and private keys of the local edge computing node and the public keys of the other edge computing nodes, which are used to generate and verify transaction data and block data. The verification module 650 checks the validity and integrity of the block data.
Fig. 7 is a schematic structural diagram of another embodiment of the service scheduling apparatus of the present disclosure. The service scheduling apparatus 700 includes a memory 710 and a processor 720. The memory 710 may be a magnetic disk, a flash memory, or any other non-volatile storage medium, and is used to store the instructions of the embodiments corresponding to Figs. 1-3. The processor 720, coupled to the memory 710, may be implemented as one or more integrated circuits, for example a microprocessor or microcontroller, and is configured to execute the instructions stored in the memory.
In some embodiments, the processor 720 is coupled to the memory 710 through a bus 730. The service scheduling apparatus 700 may further be connected to an external storage system 750 through a storage interface 740 in order to call external data, and may further be connected to a network or another computer system (not shown) through a network interface 760, which is not described in detail here.
In this embodiment, the memory stores the instructions and the processor executes them, thereby achieving decentralized and efficient global service scheduling of edge computing nodes.
Other embodiments of the present disclosure protect an edge computing node that includes the service scheduling apparatus of the foregoing embodiments.
Other embodiments of the present disclosure, as shown in Fig. 8, protect a service scheduling system that includes a plurality of edge computing nodes. The edge computing nodes form a decentralized distributed system, for example a private chain of node resource blocks. In this embodiment, low-cost and efficient service scheduling of edge computing nodes can be realized without establishing a service scheduling platform.
The disclosed scheme is general, universal, and cross-platform: it can be implemented on edge computing nodes of various platforms, can be deployed and run on physical devices or virtual machines, and therefore has broad application prospects.
In other embodiments, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the embodiments corresponding to fig. 1-3. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (16)

1. A service scheduling method, comprising:
receiving, by an edge computing node that has joined a blockchain, a service request, and, when determining that the service cannot be carried, scanning a resource blockchain ledger and determining the current available resources and node address location information of each edge computing node;
selecting the edge computing node closest to itself that can meet the service bearing requirement as a service node; and
sending, to a service initiator, a response redirecting the service request to the service node.
2. The service scheduling method according to claim 1, further comprising:
generating, by the edge computing node serving as the master node in the blockchain, a block after receiving the resource change records sent by the edge computing nodes, and sending the block to each edge computing node; and
writing, by each edge computing node, the received block into the resource blockchain ledger.
3. The service scheduling method according to claim 1 or 2, further comprising:
scanning, by each edge computing node, the resource blockchain ledger, and electing a master node based on the available resource value of each edge computing node.
4. The service scheduling method according to claim 3, wherein electing the master node comprises:
scanning, by each edge computing node, the resource blockchain ledger and extracting the resource data in the resource block model of each edge computing node, wherein the resource data comprises one or more of computing resources, storage resources, User Plane Function (UPF) capacity resources, network resources, and Graphics Processing Unit (GPU) resources;
calculating the consumption ratio of each type of resource;
determining the weight of each resource in service bearing according to the consumption ratios;
calculating the available resource value of each edge computing node according to the weights; and
taking the edge computing node with the largest available resource value as the master node.
5. The service scheduling method according to claim 4, wherein
the resource block model comprises an edge computing node identifier, the address location information of the edge computing node, and the inventory resources and available resources of each type of resource.
6. The service scheduling method according to claim 3, wherein
each edge computing node re-elects the master node after receiving the block.
7. The service scheduling method according to claim 2, wherein
each edge computing node, upon determining that its available resources have changed, generates a resource change record and sends the resource change record to the edge computing node serving as the master node.
8. The service scheduling method according to claim 2, wherein
each edge computing node obtains the keys of the edge computing nodes, verifies the received block using the keys, and writes the block into the resource blockchain ledger after the verification passes.
9. A service scheduling apparatus, comprising:
a service interface module configured to receive a service request;
a resource sensing module configured to determine whether the node's resources can carry the service; and
a service scheduling module configured to, when the service cannot be carried, scan a resource blockchain ledger, determine the current available resources and node address location information of each edge computing node, select the edge computing node closest to itself that can meet the service bearing requirement as a service node, and send the service initiator a response redirecting the service request to the service node.
10. The service scheduling apparatus according to claim 9, further comprising:
a controller configured to generate a block after receiving the resource change records sent by the edge computing nodes, and to send the block to each edge computing node; and
a ledger read-write module configured to write the received block into the resource blockchain ledger.
11. The service scheduling apparatus according to claim 9 or 10, further comprising:
a controller configured to scan the resource blockchain ledger and elect a master node based on the available resource value of each edge computing node.
12. The service scheduling apparatus according to claim 10, further comprising:
a key management module configured to store the keys of the edge computing nodes; and
a verification module configured to obtain the keys of the edge computing nodes and verify the received block using the keys.
13. A service scheduling apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to perform the service scheduling method of any one of claims 1 to 8 based on instructions stored in the memory.
14. An edge computing node, comprising:
the service scheduling apparatus of any one of claims 9 to 13.
15. A service scheduling system, comprising:
a plurality of edge computing nodes according to claim 14.
16. A non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the service scheduling method of any one of claims 1 to 8.
CN202011162644.2A 2020-10-27 2020-10-27 Service scheduling method, device and system and edge computing node Pending CN114490008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011162644.2A CN114490008A (en) 2020-10-27 2020-10-27 Service scheduling method, device and system and edge computing node

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011162644.2A CN114490008A (en) 2020-10-27 2020-10-27 Service scheduling method, device and system and edge computing node

Publications (1)

Publication Number Publication Date
CN114490008A true CN114490008A (en) 2022-05-13

Family

ID=81471475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011162644.2A Pending CN114490008A (en) 2020-10-27 2020-10-27 Service scheduling method, device and system and edge computing node

Country Status (1)

Country Link
CN (1) CN114490008A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086314A (en) * 2022-06-09 2022-09-20 中国银行股份有限公司 Interactive data processing method and related device

Similar Documents

Publication Publication Date Title
US10140142B2 (en) Grouping and placement of virtual machines based on similarity and correlation of functional relations
CN107547595B (en) Cloud resource scheduling system, method and device
CN106713396B (en) Server scheduling method and system
CN101957863A (en) Data parallel processing method, device and system
CN105516347A (en) Method and device for load balance allocation of streaming media server
CN110111095B (en) Payment transaction weight judging method and payment system
CN110363663B (en) Block chain-based data batch processing method, device, equipment and storage medium
CN111506434B (en) Task processing method and device and computer readable storage medium
CN111614769A (en) Intelligent behavior analysis engine system of deep learning technology and control method
CN111858050B (en) Server cluster hybrid deployment method, cluster management node and related system
CN111541756B (en) Block generation method, block generation device, node equipment and storage medium
CN112764920A (en) Edge application deployment method, device, equipment and storage medium
CN112905333A (en) Computing load scheduling method and device for distributed video intelligent analysis platform
WO2023131058A1 (en) System and method for scheduling resource service application in digital middle office of enterprise
CN104572298B (en) The resource regulating method and device of video cloud platform
CN114490008A (en) Service scheduling method, device and system and edge computing node
CN111651522B (en) Data synchronization method and device
CN112925611A (en) Distributed container scheduling method and system based on shared GPU
CN115840649A (en) Method and device for allocating partitioned capacity block type virtual resources, storage medium and terminal
CN112950171A (en) Bank business processing system and method
CN109818767B (en) Method and device for adjusting Redis cluster capacity and storage medium
CN108595367B (en) Server system based on computer cluster in local area network
CN112527473A (en) Distributed transaction processing method and device
CN114201306B (en) Multi-dimensional geographic space entity distribution method and system based on load balancing technology
CN113965900B (en) Method, device, computing equipment and storage medium for dynamically expanding flow resources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination