WO2023077791A1 - Method, system and apparatus for service processing - Google Patents

Method, system and apparatus for service processing

Info

Publication number
WO2023077791A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing
computing device
blockchain
node
resources
Prior art date
Application number
PCT/CN2022/096569
Other languages
English (en)
Chinese (zh)
Inventor
葛建壮
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023077791A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the technical field of the Internet, and in particular to a method, system and device for business processing.
  • Cloud computing uses centralized, large-capacity clusters to process massive amounts of information. Because all computation is handled centrally in the cloud, data collection and sharing are convenient, which has given rise to a series of big-data applications.
  • Although cloud computing offers almost unlimited computing power, using that computing power incurs substantial communication costs, which inevitably introduces delay.
  • Moreover, the centralized processing method means that the user's data must be uploaded to the cloud for processing, which makes the delay difficult to reduce.
  • The edge computing architecture was proposed to solve the problem of high delay in cloud computing: latency-sensitive tasks are processed near the user end, and services that previously could only be computed on a cloud computing platform are placed on edge devices.
  • Compared with cloud computing devices, however, the limited computing power of edge devices is their biggest bottleneck. When bursty traffic arrives, an edge device may be unable to meet business needs because its local computing power is insufficient; yet if the computing power of an edge device is provisioned according to peak traffic, the cost of the edge device becomes high.
  • the present application provides a method, system and device for business processing, which improve the efficiency of mutual borrowing of computing power between devices in different device clusters.
  • In a first aspect, the present application provides a task processing method, including: a first computing device receives, through a target function (function) in a serverless system, multiple task requests.
  • The multiple task requests belong to the same type of task; that is, they all need to be processed by calling the same target function.
  • The embodiment of the present application does not limit the type of the task request; for example, it may be a target detection task, a classification task, a semantic segmentation task, or the like.
  • The first computing device sends the target function to a second computing device, where the second computing device and the first computing device belong to different device clusters.
  • a device cluster described in the embodiment of the present application means that all devices included in the device cluster provide external services in a unified manner.
  • When the first computing device determines that its own idle computing resources are insufficient, it processes a first part of the task requests locally through the target function, and forwards a second part of the task requests to the second computing device, instructing the second computing device, which has idle computing resources, to process the second part of the task requests through the received target function. The multiple task requests include the first part of the task requests and the second part of the task requests.
  • the first computing device receives the processing result of the second part of the task request sent by the second computing device.
  • the multiple task requests can be executed by the target function.
  • After the first computing device provides the target function to the second computing device, the second computing device has the ability to process any of the multiple task requests, so that the second computing device can offload work from the first computing device.
  • The first computing device only needs to send the target function (and the task requests) to the second computing device, rather than the complete task processing program. Compared with providing the entire task processing program to the second computing device, the amount of data sent in the first aspect is small and the bandwidth occupation is low.
  • In this solution, serverless systems are deployed in different device clusters. With a serverless system, business developers only need to pay attention to business logic, that is, to the functions required for business execution, without considering server management, operating system management, resource allocation, scaling, and other matters. Therefore, after serverless systems are deployed in different device clusters, devices in different clusters can transfer the relevant functions to each other, and a device that has obtained a function can execute the tasks corresponding to that function.
  • In other words, with the serverless system deployed in all device clusters, a device in one device cluster can send the relevant functions to a device in another device cluster, instructing the device in the other cluster to share the business requests.
  • In the prior art, virtual resource image migration between devices is required to realize computing power borrowing across device clusters. In contrast, the solution provided in the first aspect only needs to transfer operators/functions between devices to realize such borrowing, which greatly reduces the time required to borrow computing power between devices and thus achieves the purpose of efficiently borrowing computing power across device clusters.
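The offload flow of the first aspect can be sketched in a few lines of Python. This is an illustrative model only (the class and function names are assumptions, not the patent's API): the first device keeps a first part of the requests, ships only the target function to a peer in another cluster, and forwards the second part.

```python
def target_function(task):
    # Stand-in for the business logic, e.g. a detection or classification task.
    return task * 2

class ComputingDevice:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # how many requests it can serve locally
        self.functions = {}           # functions received from other clusters

    def install_function(self, fn_name, fn):
        # Receiving the target function gives this device the ability to
        # process any task request of the corresponding type.
        self.functions[fn_name] = fn

    def process(self, fn_name, tasks):
        fn = self.functions[fn_name]
        return [fn(t) for t in tasks]

def handle_requests(first, second, fn_name, fn, tasks):
    first.install_function(fn_name, fn)
    local = tasks[: first.capacity]           # first part: processed locally
    remote = tasks[first.capacity :]          # second part: forwarded
    results = first.process(fn_name, local)
    if remote:
        second.install_function(fn_name, fn)  # only the function is sent,
        results += second.process(fn_name, remote)  # not a VM image
    return results

a = ComputingDevice("edge-A", capacity=2)
b = ComputingDevice("cloud-B", capacity=8)
print(handle_requests(a, b, "double", target_function, [1, 2, 3, 4]))
# [2, 4, 6, 8]
```

The point of the sketch is the payload: what crosses the cluster boundary is the function object and the overflow requests, not a virtual-resource image.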
  • the method further includes: the first computing device acquires reference information of at least one device.
  • The first computing device determines the second computing device according to the acquired reference information of the at least one device, where the reference information of the second computing device satisfies a preset requirement.
  • In this implementation, one or more optimal devices can be selected from different device clusters as the devices from which to borrow computing power, so as to better achieve the purpose of borrowing computing power across device clusters.
  • the reference information of the at least one device includes idle computing resources of each device in the at least one device.
  • In this implementation, devices with sufficient idle computing resources can be selected as the devices from which to borrow computing power, improving the efficiency of borrowing computing power across device clusters.
  • the reference information of the at least one device further includes delay information between each of the at least one device and the first computing device.
  • the delay information between devices can be further considered to improve the efficiency of borrowing computing power across device clusters.
  • In a possible implementation, the reference information of the at least one device further includes, for each device, the number of devices separating it from the first computing device in the topology where the first computing device is located.
  • the length of the transmission path between devices can be further considered to improve the efficiency of borrowing computing power across device clusters.
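A hedged sketch of how the reference information above (idle resources, delay, hop count) might drive device selection; the preset requirement used here, a minimum amount of idle resources, and the tie-breaking order are illustrative assumptions rather than the patent's actual policy:

```python
def select_device(candidates, min_idle=1.0):
    # candidates: list of dicts with "idle" (cores), "delay_ms", "hops".
    # Preset requirement (assumed): at least min_idle cores free.
    eligible = [c for c in candidates if c["idle"] >= min_idle]
    if not eligible:
        return None
    # Prefer more idle capacity, then lower delay, then fewer hops.
    return min(eligible, key=lambda c: (-c["idle"], c["delay_ms"], c["hops"]))

devices = [
    {"name": "B", "idle": 4.0, "delay_ms": 12, "hops": 2},
    {"name": "C", "idle": 4.0, "delay_ms": 5, "hops": 3},
    {"name": "D", "idle": 0.5, "delay_ms": 1, "hops": 1},  # too little idle
]
print(select_device(devices)["name"])  # C
```

Device D is excluded despite its short path because it fails the idle-resource requirement; B and C tie on idle capacity, so delay breaks the tie.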
  • In a possible implementation, the method further includes: the first computing device acquires the reference information of the at least one device from a first blockchain device, where each of the at least one device and the first computing device belong to different device clusters. The reference information of a third device is written by the third device to a second blockchain device, so that the second blockchain device can synchronize it to the other blockchain devices in the blockchain it maintains.
  • The first blockchain device is any blockchain device in the blockchain maintained by the second blockchain device.
  • The third device is any one of the at least one device.
  • In this implementation, a blockchain can be introduced to collect measurement information across cluster devices, where the measurement information may include idle computing resources, delay information, bandwidth, and the like.
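As a toy illustration of this measurement-collection idea (not the patent's actual consensus protocol; all names are assumptions), each blockchain device replicates written reference information to its peers, so any node can answer a later read:

```python
class BlockchainNode:
    def __init__(self):
        self.ledger = {}   # device name -> reference information
        self.peers = []

    def write(self, device, info):
        self.ledger[device] = info
        for peer in self.peers:        # synchronize to the other nodes
            peer.ledger[device] = info

    def read_all(self):
        return dict(self.ledger)

n1, n2 = BlockchainNode(), BlockchainNode()
n1.peers, n2.peers = [n2], [n1]

# The third device writes its measurements to the second blockchain device...
n2.write("device-3", {"idle": 4.0, "delay_ms": 5, "bandwidth_mbps": 100})
# ...and the first computing device can then read them from the first one.
print(n1.read_all()["device-3"]["idle"])  # 4.0
```

The essential property illustrated is that a device never needs to query every remote cluster directly; it reads one blockchain device and sees the measurements every other device has published.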
  • the method further includes: establishing a communication link between the first computing device and the second computing device.
  • Sending the target function to the second computing device by the first computing device includes: the first computing device sends the target function to the second computing device through the communication link.
  • In a possible implementation, the method further includes: if the first computing device determines that its idle computing resources have become sufficient, the first computing device stops forwarding the second part of the task requests to the second computing device through the serverless system, and disconnects the communication link after a specified period of time.
  • In this implementation, once its idle computing resources are sufficient, the first computing device no longer routes traffic to the second computing device; however, it does not disconnect the communication link immediately, but only after the specified period of time has elapsed.
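The stop-then-disconnect behaviour can be modelled as below; the class name, grace period, and timestamps are assumptions used only to illustrate that forwarding stops immediately while the link survives for the specified period:

```python
class OffloadLink:
    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.forwarding = True
        self.connected = True
        self.idle_since = None

    def on_resources_sufficient(self, now):
        self.forwarding = False        # stop routing traffic right away
        self.idle_since = now          # ...but do not disconnect yet

    def maybe_disconnect(self, now):
        if not self.forwarding and now - self.idle_since >= self.grace:
            self.connected = False     # tear down after the grace period

link = OffloadLink(grace_seconds=30)
link.on_resources_sufficient(now=100.0)
link.maybe_disconnect(now=110.0)
print(link.connected)   # True: still inside the grace period
link.maybe_disconnect(now=140.0)
print(link.connected)   # False: grace period elapsed
```

Keeping the link up for a while after forwarding stops avoids paying the link-establishment cost again if load spikes back within the grace period.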
  • In a possible implementation, the sending of the target function from the first computing device to the second computing device includes: if the first computing device determines that its own idle computing resources are insufficient, it sends the target function to the second computing device.
  • In this implementation, the first computing device sends the target function only when its idle computing resources are insufficient, rather than in advance, which is conducive to precisely achieving the purpose of borrowing computing power across device clusters.
  • In a second aspect, the present application provides a task processing system. The system includes a first computing device cluster and a second computing device cluster; the first computing device cluster has multiple computing devices including a first computing device, and the second computing device cluster has multiple computing devices including a second computing device. The first computing device is used for: receiving multiple task requests through a target function (function) in a serverless system.
  • The target function is sent to the second computing device, which belongs to a different device cluster than the first computing device. If the idle computing resources of the first computing device are insufficient, a first part of the task requests is processed in the first computing device through the serverless system, and a second part of the task requests is forwarded to the second computing device. The multiple task requests include the first part of the task requests and the second part of the task requests.
  • The second computing device is configured to: process the second part of the task requests through the received target function, and send the processing result of the second part of the task requests to the first computing device.
  • the first computing device is further configured to: acquire reference information of at least one device.
  • The second computing device is determined according to the acquired reference information of the at least one device, where the reference information of the second computing device satisfies a preset requirement.
  • the reference information of the at least one device includes idle computing resources of each device in the at least one device.
  • the reference information of the at least one device further includes delay information between each of the at least one device and the first computing device.
  • In a possible implementation, the reference information of the at least one device further includes, for each device, the number of devices separating it from the first computing device in the topology where the first computing device is located.
  • In a possible implementation, the system further includes a first blockchain device and a second blockchain device, and the first computing device is further used to: obtain the reference information of the at least one device from the first blockchain device, where each of the at least one device and the first computing device belong to different device clusters. The reference information of a third device is written by the third device to the second blockchain device, so that the second blockchain device synchronizes it to the other blockchain devices in the blockchain maintained by the second blockchain device. The first blockchain device is any blockchain device in that blockchain, and the third device is any device in the at least one device.
  • the first computing device is further configured to: establish a communication link with the second computing device.
  • In a possible implementation, the first computing device is specifically configured to: send the target function to the second computing device through the communication link.
  • the first computing device is further configured to: stop forwarding the second part of the task request to the second computing device through the serverless system if it is obtained that the first computing device has sufficient idle computing resources. After a specified amount of time, disconnect the communication link.
  • In a possible implementation, the first computing device is specifically configured to: send the target function to the second computing device if its own idle computing resources are insufficient.
  • In a third aspect, the present application provides a device for task processing, including: a transceiver module configured to: receive multiple task requests through a target function (function) in a serverless system.
  • The target function is sent to a second computing device that belongs to a different device cluster than the first computing device.
  • The processing module is configured to: if the idle computing resources of the first computing device are insufficient, process a first part of the task requests in the first computing device through the serverless system, and instruct the transceiver module to forward a second part of the task requests to the second computing device, instructing the second computing device to process the second part of the task requests through the received target function. The multiple task requests include the first part of the task requests and the second part of the task requests.
  • the transceiver module is further configured to receive the processing result of the second part of the task request sent by the second computing device.
  • The transceiver module is further configured to: acquire reference information of at least one device.
  • The processing module is further configured to: determine the second computing device according to the acquired reference information of the at least one device, where the reference information of the second computing device satisfies a preset requirement.
  • the reference information of the at least one device includes idle computing resources of each device in the at least one device.
  • the reference information of the at least one device further includes delay information between each of the at least one device and the first computing device.
  • In a possible implementation, the reference information of the at least one device further includes, for each device, the number of devices separating it from the first computing device in the topology where the first computing device is located.
  • In a possible implementation, the transceiver module is further configured to: obtain the reference information of the at least one device from a first blockchain device, where each of the at least one device and the first computing device belong to different device clusters. The reference information of a third device is written by the third device to a second blockchain device, so that the second blockchain device can synchronize it to the other blockchain devices in the blockchain it maintains.
  • the first blockchain device is any blockchain device in the blockchain maintained by the second blockchain device
  • the third device is any one of the at least one device.
  • the processing module is further configured to: establish a communication link with the second computing device.
  • the transceiver module is specifically configured to: send the target function to the second computing device through the communication link.
  • the processing module is further configured to: stop forwarding the second part of the task request to the second computing device through the serverless system if it is obtained that the first computing device has sufficient free computing resources. After a specified amount of time, disconnect the communication link.
  • In a possible implementation, the transceiver module is specifically configured to: send the target function to the second computing device if the idle computing resources of the first computing device are insufficient.
  • In a fourth aspect, the present application provides a device for task processing, including: a memory configured to store computer-readable instructions, and a processor coupled to the memory and configured to execute the computer-readable instructions in the memory, so as to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • In a fifth aspect, the present application provides a computer-readable storage medium including instructions. When the instructions are run on a computer device, the computer device executes the method described in the first aspect or any possible implementation manner of the first aspect.
  • In a sixth aspect, the present application provides a chip, coupled with a memory and used to execute a program stored in the memory, so as to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • In a seventh aspect, the embodiments of the present application provide a computer program product including computer programs/instructions which, when executed by a processor, cause the processor to execute the method in the first aspect or any optional implementation manner of the first aspect.
  • The beneficial effects of the second aspect to the seventh aspect, and of each of their possible implementation manners, can be understood with reference to the first aspect and the beneficial effects brought by each possible implementation manner of the first aspect, and will not be repeated here.
  • FIG. 1 is a schematic structural diagram of a system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a serverless application scenario.
  • FIG. 3 is a schematic flowchart of a business processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another business processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another business processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another business processing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of another business processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a system provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a service processing device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another service processing device provided by an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a system 100 provided by an embodiment of the present application.
  • the system 100 includes multiple device clusters.
  • Each device cluster may include one or more devices.
  • m device clusters are shown in FIG. 1, and each device cluster includes n devices, where m is a positive integer greater than 1 and n is a positive integer.
  • It should be noted that the embodiment of the present application does not limit the number of devices included in each device cluster: any two device clusters may include the same or different numbers of devices.
  • a device cluster described in the embodiment of the present application means that all devices included in the device cluster provide external services in a unified manner.
  • a device cluster has an independent computing power system, and any two different device clusters have different computing power systems.
  • a device cluster is usually powered by the same power supply system, and in addition, all devices in a device cluster are usually deployed at the same geographical location.
  • A device cluster is usually managed by a unified management platform. Taking a device cluster that includes a management platform and the multiple devices managed by that platform as an example, the communication relationship between the management platform and the managed devices is explained below.
  • The management platform can send control instructions to one or more devices in its device cluster to control them; it can send resource configuration information to one or more devices to configure them; and it can receive data sent by one or more devices and process that data to obtain a processing result.
  • the management platform can be deployed on the cloud.
  • the management platform is sometimes referred to as a management device, and the devices managed by the management device are referred to as managed devices.
  • One or more devices in the device cluster may include at least one of various types of sensing devices, base stations, access points, switches, routers, gateways, terminals, audio and video devices, controllers, detection devices, and other types of networked devices.
  • configurations may be performed for each managed device, for example, configuration of its computing capability, networking mode, work authority, and so on.
  • the managed device has complete data collection, processing, calculation, and control capabilities.
  • the managed device may be deployed on a terminal.
  • the managed device may be an edge device or an edge scene end device.
  • When the managed device is an edge device, it is usually a lightweight device with limited computing power. If the computing power of the managed device were provisioned according to its peak traffic, the cost of the managed device would be greatly increased.
  • The embodiment of the present application provides a solution that uses a serverless service architecture: the borrowing device transmits the relevant function to the borrowed device, and the borrowed device, having obtained the relevant function, can share the traffic of the borrowing device, thereby achieving the purpose of quickly borrowing computing power between devices across device clusters.
  • Since this application involves many concepts related to serverless computing, serverless is introduced below to aid understanding of the solutions provided by the embodiments of this application.
  • the serverless architecture is a new type of Internet architecture, in which application development does not use conventional service processes, which provides a new architecture for applications in edge computing scenarios.
  • the serverless architecture can shield tenants' servers, databases, middleware and other server facilities, and tenants no longer participate in the deployment and maintenance of server facilities, which can greatly simplify tenant deployment and operation and maintenance difficulty.
  • a serverless architecture allows application deployment to be managed at the service deployment level rather than the server deployment level.
  • the serverless architecture enables business R&D personnel to only focus on business logic without considering O&M and capacity, improving the efficiency of business iteration.
  • the serverless service architecture outsources server management, operating system management, resource allocation, capacity expansion, etc., and the services are provided by a third party and triggered by events.
  • Serverless can automatically expand computing power and capacity when the business volume is large, so as to carry more user requests, and shrink resources when the business volume drops, to avoid resource waste. This is the serverless elastic scaling mechanism: when a business request arrives, the serverless architecture allocates the relevant resources and starts running, and after the run completes, all overhead is released.
  • the serverless architecture allows developers to focus on products without managing and operating cloud or local servers, without having to consider server specifications, storage types, network bandwidth, automatic scaling, etc., and without operating and maintaining servers.
  • the virtualized resources created by the server during service deployment can exist in the form of virtual machines.
  • the virtualized resources are sometimes referred to as containers.
  • When a virtual resource (container) is created, a corresponding request value and limit value are set, where the request value represents the minimum resource requirement of the container and the limit value represents the maximum amount of resources the container can use.
  • the resources here may include resources of multiple dimensions, for example, central processing unit (central processing unit, CPU) resources, network card resources, memory resources, and the like.
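The request/limit semantics can be illustrated with a Kubernetes-style resource specification (the values and names below are assumptions for illustration): scheduling decisions use the request value, while the limit value caps what the container may consume at runtime.

```python
# Illustrative container spec with the two values described above.
container = {
    "name": "inference-fn",
    "resources": {
        "requests": {"cpu": 0.5, "memory_mib": 256},   # minimum guaranteed
        "limits":   {"cpu": 2.0, "memory_mib": 1024},  # maximum usable
    },
}

def can_schedule(node_free, requests):
    # A node is eligible only if it has at least the requested amount free
    # in every resource dimension (CPU, memory, network card, etc.).
    return all(node_free.get(k, 0) >= v for k, v in requests.items())

node = {"cpu": 1.0, "memory_mib": 512}
print(can_schedule(node, container["resources"]["requests"]))  # True
print(can_schedule(node, container["resources"]["limits"]))    # False
```

The node above can host the container (its free resources cover the request value) even though it could not satisfy the limit value; the limit only bounds runtime usage, it is not a placement requirement.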
  • FIG. 2 is a schematic diagram of an application scenario provided by the embodiment of the present application. As shown in FIG. 2 , it includes a server 10 that can create multiple virtual resources and allocate corresponding resources to each virtual resource.
  • As shown, the server 10 has created two virtual resources, namely the first virtual resource 11 and the second virtual resource 12, and performs resource allocation for the first virtual resource 11 and the second virtual resource 12.
  • When the server 10 creates the first virtual resource 11 and the second virtual resource 12, it also sets corresponding parameters for them. The parameters include the minimum resource requirement of the virtual resource, namely the request value, and the maximum amount of resources the virtual resource can use, namely the limit value.
  • The request value and the limit value of different virtual resources may differ, and the request value can be used as a basis for judgment during virtual resource scheduling.
  • the server may allocate initial resources to each virtual resource according to the request value of the virtual resource, and these resources may include CPU, memory, network card, and the like.
  • The device on which the virtual resources are deployed can receive service requests and then use the allocated resources to process them. For example, in FIG. 2, when a service request is sent to the first virtual resource 11, the first virtual resource 11 can process the service indicated by the request using its allocated resources; likewise, when a service request is sent to the second virtual resource 12, the second virtual resource 12 can process the service indicated by the request using its allocated resources, and so on.
  • Services may include online services and offline services, where online services are services that need to be processed in a timely manner, and offline services are services that do not need to be processed immediately, which is not limited in this embodiment of the present application.
  • The first virtual resource and the second virtual resource may be deployed on different devices; for example, as shown in FIG. 2, the first virtual resource is deployed on device A, and the second virtual resource is deployed on device B.
  • the service request may be directly sent to device A or device B.
  • The client may also send a request directly to the virtual resource, which is not limited in this embodiment of the present application.
  • this application sometimes also refers to the server described in Figure 2 as a central node, or a management device, and they represent the same meaning.
  • The solution provided by the embodiment of this application utilizes the serverless architecture. The serverless architecture is currently deployed inside a single device cluster, as shown in FIG. 2, where it provides an elastic expansion and contraction mechanism; device clusters, however, may need to borrow computing power from each other.
  • This application finds that the serverless architecture can be used to support the ability to transfer operators/functions between devices, and quickly achieve the purpose of cross-device clusters borrowing computing power from each other. Specifically, when a device obtains an operator/function sent by another device, it can use the operator/function to perform the corresponding task, thereby achieving the purpose of borrowing computing power between devices.
  • For example, if device A uses operator A to execute business A, and device B can also execute business A after acquiring operator A, then business A of device A can be transferred to device B for execution, thereby achieving the purpose of device A borrowing the computing power of device B.
  • In a conventional solution, virtual resources need to be mirrored and migrated between devices to realize computing power borrowing between cross-device clusters.
  • In contrast, the solution provided by the embodiment of this application only needs to transmit operators/functions to realize computing power borrowing between cross-device clusters, which greatly reduces the time required to borrow computing power between devices.
  • the solution provided by the embodiment of the present application allows all the devices in one or more clusters to send their computing power, network information and other information related to computing power borrowing to the blockchain.
  • any suitable device can be selected to borrow computing power according to the information recorded in the blockchain.
  • the solution provided by the embodiment of the present application can more efficiently implement cross-cluster devices to borrow computing power from each other, and broaden the application scenarios of computing power borrowing.
  • a data processing method provided by an embodiment of the present application is applied to a serverless system, wherein the serverless system includes multiple computing nodes and blockchain nodes. Specifically, the steps performed by each node are introduced in conjunction with FIG. 3 below:
  • the first computing node writes the idle computing resource of the first computing node to the blockchain node.
  • the computing node described in the embodiment of the present application is used to represent a node with data processing capability, which is sometimes referred to as a device, a virtual machine, a device to be managed, a node, or a site in this application.
  • The computing power of a node is used to represent the node's capability to perform computing tasks.
  • the computing power resource is the resource that the node needs to occupy when performing computing tasks.
  • It can include hardware resources or network resources, for example, central processing unit (CPU) computing power resources, graphics processing unit (GPU) computing power resources, memory resources, network bandwidth resources, disk resources, and the like.
  • Idle computing power resources refer to resources that are not currently occupied by nodes.
  • one or more types of computing power resources may be pre-specified, so that the first computing node acquires idle computing power resources of the type of computing power resources according to the pre-specified types of computing power resources.
  • For example, if the computing power resources that need to be written are pre-specified as the idle computing power resources of the CPU and the idle computing power resources of the GPU, then the first computing node only writes to the blockchain node the idle CPU computing power resources and idle GPU computing power resources of the first computing node, and does not need to write all idle computing power resources.
  • The first computing node can periodically write the idle computing power resources of the first computing node to the blockchain node, for example, write the current idle computing resources of the first computing node to the blockchain node every preset time period.
  • the first computing node may also respond to the instruction, and based on the instruction, write the current idle computing resources of the first computing node to the blockchain node.
  • the first computing node may respond to the instruction written by the blockchain node, and based on the instruction, write the current idle computing resources of the first computing node to the blockchain node.
  • the first computing node may also respond to instructions written by other nodes, and based on the instruction, write the current idle computing resources of the first computing node to the blockchain node.
  • the first computing node writes the current idle computing resources of the first computing node to the blockchain node in response to the command written by the central node, which is not limited in this embodiment of the present application.
  • a statistical module is deployed on the first computing node, which is used to count the idle computing power resources of the first computing node, and write the idle computing power resources of the first computing node to the blockchain node.
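  • The behavior of such a statistics module can be sketched as follows; the in-memory stand-in for the blockchain node and all names are illustrative assumptions (a real implementation would write to an actual blockchain node):

```python
# Illustrative sketch: a statistics module counts the node's idle computing
# power resources and writes only the pre-specified types to the blockchain node.
REPORTED_TYPES = ("cpu", "gpu")   # pre-specified types; others are not written

class BlockchainNodeStub:
    """In-memory stand-in for a blockchain node."""
    def __init__(self):
        self.records = {}         # node address -> latest idle resources

    def write(self, address, idle_resources):
        self.records[address] = dict(idle_resources)

def collect_idle_resources(total, used):
    # Idle resources are those not currently occupied by the node.
    return {k: total[k] - used.get(k, 0) for k in total}

def report(node_address, total, used, chain_node):
    idle = collect_idle_resources(total, used)
    # Only the pre-specified resource types are written to the blockchain node.
    chain_node.write(node_address, {k: idle[k] for k in REPORTED_TYPES})

chain = BlockchainNodeStub()
report("10.0.0.1",
       {"cpu": 16, "gpu": 4, "memory_gb": 64},
       {"cpu": 6, "gpu": 1, "memory_gb": 20},
       chain)
```

  • In a deployment, `report` would be invoked periodically (every preset time period) or in response to a write instruction, matching the triggering options described above.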
  • the first computing node may be a central node, and the first computing node may also be other nodes managed by the central node.
  • When the first computing node is a central node, the central node can also uniformly obtain the idle computing power resources of the other nodes managed by the central node, and write these idle computing power resources to the blockchain node.
  • In a possible implementation, the first computing node is a blockchain node, and the first computing node writes the idle computing resources of the first computing node locally. It should be noted that any type of node described in this application can be a blockchain node, which will not be repeated below.
  • After acquiring the idle computing resources of the first computing node, the blockchain node stores the address of the first computing node and the idle computing resources of the first computing node. In a possible implementation, the blockchain node may obtain the address of the first computing node in advance; then, after the blockchain node obtains the idle computing resources of the first computing node, it establishes a correspondence between the address of the first computing node and the idle computing resources of the first computing node.
  • A blockchain is made up of a growing series of records called blocks. These blocks are linked together through cryptography, and each block contains the hash value of the previous block, a timestamp, and transaction data.
  • the blockchain is essentially a distributed multi-backup database, but the biggest difference from the database is that the data storage is formed through multi-party consensus, and the hash chain is used to protect the historical data, so that the data cannot be tampered with. Compared with traditional database technology, the immutable feature of blockchain data is easier to gain the trust of users, so it can better support multi-party cooperation.
  • The blockchain nodes described in the embodiments of this application can be regarded as blocks. When new data is written to a block, the new data can be synchronized to the other blocks in the chain according to the distributed database nature of the blockchain. In a possible implementation manner, the new data may also be synchronized to each computing node in the serverless system.
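  • The hash-chain property described above can be illustrated with a minimal sketch, purely for explanation and not part of the claimed solution: each block stores the hash of the previous block, so tampering with historical data breaks the chain and is detectable.

```python
# Illustrative sketch of a hash chain: each block holds the previous block's
# hash, a timestamp, and data, so historical data cannot be tampered with
# undetected.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "timestamp": time.time(), "data": data})

def verify(chain):
    # Valid only if every block's prev_hash matches the real hash of the
    # block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"node": "A", "idle_cpu": 10})
append_block(chain, {"node": "B", "idle_cpu": 4})
intact = verify(chain)                  # chain is consistent so far
chain[0]["data"]["idle_cpu"] = 99       # tampering with historical data...
tampered_detected = not verify(chain)   # ...invalidates the chain
```

  • This is what makes the recorded computing power and network information trustworthy across parties: rewriting an earlier record changes its hash and invalidates every later block.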
  • the first computing node acquires delay information between the first computing node and the target node.
  • the target node is any computing node except the first computing node in the serverless system where the first computing node is located.
  • The first computing node may periodically perform network delay detection to obtain the network delay between the first computing node and the target node. It should be noted that the embodiment of the present application does not limit which network delay detection method is used; any network delay detection method can be adopted in the embodiment of the present application.
  • The first computing node may locally obtain the Internet protocol (IP) address of at least one target node, and based on the at least one IP address, obtain the delay between the first computing node and the computing node corresponding to each IP address, so as to obtain at least one detection result, where each detection result indicates a network transmission delay between the first computing node and one target node.
  • The first computing node may obtain the IP address of at least one target node from the central node, and based on the at least one IP address, obtain the delay between the first computing node and the computing node corresponding to each IP address, so as to obtain at least one detection result, where each detection result indicates a network transmission delay between the first computing node and one target node.
  • The first computing node can obtain the IP address of at least one target node from the blockchain node, and based on the at least one IP address, obtain the delay between the first computing node and the computing node corresponding to each IP address, so as to obtain at least one detection result, where each detection result indicates a network transmission delay between the first computing node and one target node.
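  • The delay-detection step can be sketched as follows; the probe function is a stub standing in for a real network measurement (e.g. ping or a TCP round-trip probe), and all names and values are illustrative assumptions:

```python
# Illustrative sketch: probe each target node's IP address and produce one
# detection result (IP, delay) per target node.
def probe_delay_ms(local_ip, remote_ip):
    # Stand-in for an actual network delay measurement; the embodiment does
    # not limit which detection method is used.
    table = {("10.0.0.1", "10.0.0.2"): 12.5,
             ("10.0.0.1", "10.0.0.3"): 48.0}
    return table[(local_ip, remote_ip)]

def detect_delays(local_ip, target_ips):
    # One detection result per target node: (target IP, delay in milliseconds).
    return [(ip, probe_delay_ms(local_ip, ip)) for ip in target_ips]

results = detect_delays("10.0.0.1", ["10.0.0.2", "10.0.0.3"])
```

  • The resulting list is what the first computing node would then write to the blockchain node as its delay information.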
  • the first computing node writes the acquired delay information to the blockchain node.
  • After the blockchain node obtains the delay information between the first computing node and at least one target node, it can synchronize the information to other nodes in the blockchain. In a possible implementation, after the blockchain node obtains the delay information between the first computing node and at least one target node, it can also synchronize the information to each computing node in the serverless system.
  • Step 301 and step 302 describe the first computing node writing, to the blockchain node, the idle computing resources of the first computing node and the delay information obtained by the first computing node. In addition, the first computing node can also write other types of information to the blockchain node. For example, the first computing node can write to the blockchain node the topology of the serverless system obtained by the first computing node; in a possible implementation, the first computing node can also write to the blockchain node various network information such as the network bandwidth obtained by the first computing node.
  • the prediction results of the neural network model can be written to the blockchain nodes.
  • the prediction result indicates computing power resource information such as predicted idle computing resources of the first computing node at a certain time period/point, delay information obtained by the first computing node, and the like.
  • Taking the prediction result indicating the predicted idle computing resources of the first computing node at a certain time period/point as an example, the idle computing resources of the first computing node at historical time periods/points can be used as training data to iteratively train the neural network model, so that the trained neural network model can predict the idle computing resources of the first computing node at a certain time period/point.
  • the first computing node receives the service access request.
  • the type of service access request is not limited.
  • the task may be a face recognition task, an object detection task, a classification task, and the like.
  • If the idle computing resources of the first computing node are sufficient and the first computing node is capable of executing the business independently, then the first computing node does not need to borrow computing power resources of other computing nodes, and the first computing node processes the task request locally.
  • If the idle computing resources of the first computing node are not sufficient and the first computing node is unable to complete the business by itself, then the first computing node needs to borrow the computing resources of other nodes and execute step 305, which is described in detail in step 305.
  • The service access request triggers a serverless trigger deployed on the first computing node. When the serverless trigger is triggered, the first computing node starts a function for executing the service, such as function1. If the idle computing power of the first computing node is sufficient, the first computing node successfully starts function1 and executes the task based on function1.
  • the first computing node borrows idle computing power resources of the second computing node to process the service access request.
  • the first computing node needs to borrow computing resources of other nodes.
  • a threshold may be set, and if the idle computing resource of the first computing node is lower than the threshold, it is considered that the idle computing resource of the first computing node is insufficient.
  • If the first computing node cannot successfully start a function corresponding to a certain task, it is considered that the idle computing resources of the first computing node are insufficient.
  • If the first computing node fails to execute a certain task multiple times, it is also considered that the idle computing resources of the first computing node are insufficient.
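  • The three insufficiency signals above can be combined into a single check; the threshold and failure-count values below are illustrative assumptions, since the embodiment leaves them configurable:

```python
# Illustrative sketch: decide whether the node's idle computing resources are
# insufficient, using the three signals described above.
IDLE_CPU_THRESHOLD = 2.0   # assumed threshold for idle CPU resources
MAX_TASK_FAILURES = 3      # assumed count for "failed multiple times"

def resources_insufficient(idle_cpu, function_start_ok, task_failures):
    if idle_cpu < IDLE_CPU_THRESHOLD:
        return True        # idle resources below the set threshold
    if not function_start_ok:
        return True        # the function for the task failed to start
    if task_failures >= MAX_TASK_FAILURES:
        return True        # the task failed multiple times
    return False
```

  • When this check returns true, the first computing node proceeds to borrow computing power resources from other nodes as described in step 305.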
  • the first computing node may borrow computing power resources from other nodes to process the service access request received by the first computing node.
  • the first computing node may select one or more computing nodes from multiple computing nodes in the serverless system to borrow computing power resources.
  • one or more suitable computing nodes can be selected to borrow computing power resources based on relevant information such as idle computing power resources and delay information of each node stored in the blockchain node.
  • the first computing node sends a function corresponding to a certain service to other computing nodes, and the computing node that obtains the function corresponding to the service can process the service.
  • the following describes how the first computing node borrows computing power resources of other nodes in conjunction with a possible implementation manner.
  • the serverless management module calls the interface of the optimal node selection module.
  • the serverless management module may be responsible for managing, scheduling and orchestrating various functions of the first computing node, and at the same time responsible for transferring functions between different nodes.
  • the serverless management module is deployed on the first computing node, and in a possible implementation manner, the serverless management module may also be deployed on other nodes.
  • the optimal node selection module obtains one or more suitable second computing nodes.
  • the optimal node selection module selects one or more suitable second computing nodes according to the relevant information such as idle computing power resources and delay information of each node stored in the blockchain nodes and according to predetermined rules.
  • The priority of different types of related information can be defined. For example, if the priority of idle computing resources is set to be the highest, followed by delay information and bandwidth, the optimal node selection module will preferentially select the node with the most idle computing resources as the second computing node.
  • the rules can be customized according to the requirements of actual application scenarios, such as setting the preferred 5G computing nodes or nodes with GPUs.
  • The weights of different types of related information can be defined. For example, if the weight of idle computing resources is set to 0.5, the weight of delay information to 0.3, and the weight of bandwidth to 0.2, the optimal node selection module will preferentially select the node with the highest weighted score as the second computing node.
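  • The weighted rule above can be sketched as follows; it assumes each candidate's metrics are already normalized to [0, 1] (lower delay is better, so delay is inverted), which is an illustrative assumption not stated in the embodiment:

```python
# Illustrative sketch of weighted optimal node selection: idle resources 0.5,
# delay 0.3, bandwidth 0.2, as in the example above.
WEIGHTS = {"idle": 0.5, "delay": 0.3, "bandwidth": 0.2}

def score(candidate):
    # Metrics assumed normalized to [0, 1]; delay is inverted because a
    # smaller delay is better.
    return (WEIGHTS["idle"] * candidate["idle"]
            + WEIGHTS["delay"] * (1.0 - candidate["delay"])
            + WEIGHTS["bandwidth"] * candidate["bandwidth"])

def select_best(candidates):
    # The node with the highest weighted score is chosen as the second node.
    return max(candidates, key=score)

candidates = [
    {"name": "node-B", "idle": 0.9, "delay": 0.2, "bandwidth": 0.5},
    {"name": "node-C", "idle": 0.4, "delay": 0.1, "bandwidth": 0.9},
]
best = select_best(candidates)
```

  • The same structure accommodates the customized rules mentioned above (for example, filtering candidates to 5G nodes or nodes with GPUs before scoring).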
  • the optimal node selection module may be deployed on the first computing node, or may be deployed on other nodes.
  • the cross-node network management module establishes a communication network between the first computing node and at least one second computing node.
  • any method of establishing a communication network between nodes may be used to establish a communication network between the first computing node and each second computing node.
  • the communication network may be an overlay communication network or an underlay communication network.
  • The underlay refers to the physical base layer, that is, the basic forwarding architecture of the current data center network, which makes any two points on the data center network reachable.
  • The overlay communication network refers to a virtualization technology mode superimposed on the network architecture. Its general framework realizes the bearing of applications on the network without large-scale modification of the basic network, can be separated from other network services, and is mainly based on IP network technology.
  • the serverless management module transmits the function to be transmitted from the first computing node to the second computing node.
  • the serverless management module transmits the function to be transmitted from the first computing node to the at least one second computing node through the communication network established by the cross-node network management module.
  • the function to be transmitted is a function corresponding to the service that the first computing node requests to share.
  • The function corresponding to service A is function A. For example, if the service to be shared and processed by the first computing node is service A, the function to be transmitted is function A.
  • the second computing node executes the service initiated to the first computing node by using the obtained function.
  • the first computing node can route part of the local access traffic to the second computing node, and the second computing node executes the initiated business.
  • The method may further include: (6) when it is obtained that the idle computing resources of the first computing node are sufficient, the first computing node processes the service initiated to the first computing node.
  • If the first computing node obtains that it has sufficient idle computing power resources to process the business initiated to the first computing node, the first computing node does not need to borrow the computing power resources of the second computing node.
  • the first computing node may stop routing traffic to the second computing node.
  • the communication network between the first computing node and the second computing node may be disconnected after a specified period of time.
  • the first computing node may also stop routing service traffic to the one or more second computing nodes in batches.
  • For example, if there are three second computing nodes, the stopping process can be divided into three time points: at the first time point, routing of service traffic to one second computing node is stopped; at the second time point, routing of service traffic to another second computing node is stopped, so that traffic to two second computing nodes has been stopped; at the third time point, routing of service traffic to the last second computing node is stopped, so that traffic to all three second computing nodes has been stopped.
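  • The batched stop can be sketched as follows; the explicit step sequence stands in for real timers, which is an illustrative simplification:

```python
# Illustrative sketch: stop routing business traffic to the second computing
# nodes in batches, one additional node per time point.
def stop_in_batches(second_nodes):
    stopped = []
    timeline = []
    for node in second_nodes:
        stopped.append(node)            # stop routing to one more node
        timeline.append(list(stopped))  # cumulative state at this time point
    return timeline

timeline = stop_in_batches(["B1", "B2", "B3"])
```

  • With three second computing nodes, the cumulative stopped set grows by one node per time point until traffic to all three has been stopped, matching the three-time-point example above.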
  • FIG. 4 is a schematic flowchart of a task processing method provided by an embodiment of the present application. The first node and the second node shown in FIG. 4 belong to different device clusters, and FIG. 4 describes a specific process of the first node borrowing computing power from the second node.
  • Both the first node and the second node include a real-time statistics module of local idle computing power resources.
  • The local real-time statistics module periodically writes the available idle computing power of the local CPU/GPU/memory into the blockchain, and uses the distributed database nature of the blockchain to synchronize the available idle computing power of the CPU/GPU/memory to all edge nodes; for example, in the solution shown in FIG. 4, the edge nodes include the first node and the second node.
  • Both the first node and the second node may also include a network detection module.
  • The network detection module is responsible for real-time detection of the network access delay between each node and the nodes recorded in the blockchain module, as well as the health of the network connections between nodes, and at the same time writes the network connection bandwidth parameters between the local network and other networks into the blockchain.
  • the network detection module regularly performs network delay detection according to the network information of all nodes in the blockchain (such as the IP address of each node), and writes the detection results, network bandwidth, network topology and other parameters into the blockchain.
  • the distributed database nature of the block chain synchronizes the detection results obtained by the network detection module to all edge nodes.
  • Edge-side services access services on edge nodes.
  • the type of service access request is not limited.
  • the task may be a face recognition task, an object detection task, a classification task, and the like.
  • The service access request triggers the serverless trigger. If no function indicated by the service request is running, the corresponding function is started at the edge site, such as the first node. As shown in FIG. 4, functionA-1 and functionA-2 are started, and both functionA-1 and functionA-2 are executed to process the service received in step 3.
  • the currently activated function cannot meet the computing power requirements, and the creation of a new function fails due to insufficient resources.
  • Both the first node and the second node may further include a serverless management module.
  • the serverless management module is responsible for managing, scheduling, and orchestrating operators, and is also responsible for synchronizing operators between nodes. If the currently activated function cannot meet the computing power requirement, the serverless management module calls the interface of the optimal node selection module to obtain the optimal node.
  • the optimal node selection module is responsible for providing an interface to serverless and providing optimal site information for serverless cross-node scheduling.
  • The optimal node selection module selects the corresponding optimal node according to the optimal node selection algorithm, based on the available computing power, network delay, bandwidth, network topology and other parameters recorded in the blockchain. The optimal node selection algorithm can be customized by the user; for this, refer to the above description of the optimal node selection module selecting one or more suitable second computing nodes based on relevant information such as the idle computing power resources and delay information of each node stored in the blockchain nodes and according to pre-specified rules, which will not be repeated here.
  • After the optimal node selection module selects the optimal node (for example, the second node is selected), the serverless management module notifies the cross-node network management module to establish a cross-node transmission network.
  • the cross-node network management module establishes a communication network (overlay or underlay) between the first node and the second node, and configures a route between the local and remote target sites.
  • The serverless management module of the first node synchronizes the operators that require elasticity (or operators that need to be transmitted) to the second node through the established network, and the serverless management module on the second node starts the function; for example, continuing from step 4, functionA-3 is run on the second node.
  • When the serverless management module deployed on the first node detects a decrease in business traffic, it can stop routing business traffic to the remote second site through the cross-node routing module, and after a specified period, refer to step 11: notify the second node to stop running functionA-3, and notify the cross-node network management module to release the network connection.
  • FIG. 5 is a schematic flowchart of another task processing method provided by an embodiment of the present application, which may include the following steps:
  • A first computing device receives multiple task requests through a target function in a serverless system.
  • the multiple task requests belong to the same type of task, that is, the multiple tasks need to be processed by calling the same target function.
  • the embodiment of the present application does not limit the type of the task request, for example, it may be a target detection task, a classification task, a semantic segmentation task, and the like.
  • the first computing device sends the objective function to the second computing device.
  • the second computing device and the first computing device belong to different device clusters.
  • the task request indicates the target function that needs to be called to execute the task request, and the first computing device sends the target function indicated in the task request to the second computing device.
  • If the first computing device obtains that its idle computing resources are not sufficient, it processes the first part of the task requests on the first computing device through the target function, and forwards the second part of the task requests to the second computing device.
  • the multiple task requests include a first part of task requests and a second part of task requests.
  • The first computing device chooses to perform load sharing, processing part of the task requests locally on the first computing device and sharing part of the task requests to the second computing device for processing, so as to satisfy the processing requirements of the multiple task requests.
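  • The load-sharing split can be sketched as follows; modeling local capacity as a request count is an illustrative assumption, since the embodiment does not prescribe how the split is computed:

```python
# Illustrative sketch: keep as many task requests locally as idle capacity
# allows (the first part) and forward the remainder (the second part) to the
# second computing device.
def split_requests(requests, local_capacity):
    first_part = requests[:local_capacity]    # processed on the first device
    second_part = requests[local_capacity:]   # forwarded to the second device
    return first_part, second_part

requests = [f"task-{i}" for i in range(5)]
local, forwarded = split_requests(requests, 3)
```

  • Both parts are handled by the same target function: the first part locally, the second part on the second computing device after the target function has been transmitted to it.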
  • the second computing device processes the second part of task requests by using the received target function.
  • The serverless system is deployed in the cluster where the second computing device is located, and the second computing device can process the second part of the task requests using the received target function based on the serverless system.
  • the first computing device receives a processing result of the second part of the task request sent by the second computing device.
  • different device clusters deploy serverless systems.
  • the device can transmit related functions (objective functions) to devices in other device clusters.
  • A device with sufficient computing power resources that receives the related function can share the task requests of the sending device based on the related function.
  • The first computing device acquires reference information of at least one device; the first computing device determines a second computing device according to the acquired reference information of the at least one device, where the reference information of the second computing device satisfies a preset requirement.
  • the reference information of at least one device includes idle computing resources of each device in the at least one device.
  • The preset requirement can be customized; for example, it can be set that the preset requirement is considered to be met when the idle computing power resources exceed a threshold.
  • the reference information of the at least one device further includes delay information between each device in the at least one device and the first computing device.
  • The reference information of the at least one device further includes, for each device in the at least one device, the number of devices separating it from the first computing device in the topology where the first computing device is located.
  • the blockchain can also be introduced to realize the collection of metric information across cluster nodes, where the metric information can include idle computing resources, delay information, bandwidth, and so on.
  • FIG. 6 is a schematic flowchart of another task processing method provided by an embodiment of the present application, which may include the following steps:
  • The first computing device receives multiple task requests through a target function in a serverless system.
  • the first computing device acquires reference information of at least one device from the first blockchain device.
  • Each of the at least one device and the first computing device belong to different device clusters. The reference information of the third device is written by the third device into the second blockchain device, so that the second blockchain device synchronizes the reference information of the third device to the blockchain devices, other than the second blockchain device, in the blockchain maintained by the second blockchain device. The first blockchain device is any one of the blockchain devices in the blockchain maintained by the second blockchain device, and the third device is any one of the at least one device.
  • the first computing device sends the objective function to the second computing device.
  • The reference information of the second computing device satisfies a preset requirement.
  • If the first computing device obtains that its idle computing resources are not sufficient, it processes the first part of the task requests on the first computing device through the target function, and forwards the second part of the task requests to the second computing device.
  • the second computing device processes the second part of task requests by using the received objective function.
  • the first computing device receives a processing result of the second part of the task request sent by the second computing device.
  • Step 601 , step 603 to step 606 can be understood with reference to step 501 , step 502 to step 505 in the embodiment corresponding to FIG. 5 , and will not be repeated here.
  • FIG. 7 is a schematic flowchart of another task processing method provided by an embodiment of the present application, which may include the following steps:
  • The first computing device receives multiple task requests through a target function in a serverless system.
  • the first computing device acquires reference information of at least one device from the first blockchain device.
  • the first computing device establishes a communication link with the second computing device.
  • the communication link between the first computing device and the second computing device may be established by any means of establishing a communication connection between devices.
  • the communication network may be an overlay communication network or an underlay communication network.
  • the first computing device sends the target function to the second computing device through the communication link.
  • when the first computing device determines that its idle computing resources are insufficient, it processes the first part of the task requests on the first computing device through the target function, and forwards the second part of the task requests to the second computing device.
  • the second computing device processes the second part of the task requests by using the received target function.
  • the first computing device receives a processing result of the second part of the task request sent by the second computing device.
  • the first computing device stops forwarding the second part of the task requests to the second computing device through the serverless system.
  • the first computing device disconnects the communication link after a specified time period.
  • that is, the first computing device may no longer route traffic to the second computing device; it may stop sending traffic to the second computing device through the serverless system and stop forwarding the partial task requests to the second computing device.
  • the first computing device does not disconnect the communication link immediately when it determines that its idle computing resources are sufficient; instead, it disconnects the communication link after a specified period of time.
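The two-phase release described above (routing stops immediately once idle resources recover, but the link is torn down only after a specified hold time) can be sketched as follows. The class and attribute names, and the periodic-check model, are illustrative assumptions rather than anything specified by the patent.

```python
# Hedged sketch of the delayed link teardown; names are hypothetical.
class BorrowedLink:
    def __init__(self, hold_seconds):
        self.hold_seconds = hold_seconds
        self.routing = True       # serverless system is forwarding traffic
        self.connected = True     # communication link is established
        self._idle_since = None   # when idle resources were found sufficient

    def on_resources_recovered(self, now):
        """Called when the first device finds its idle computing resources sufficient."""
        self.routing = False      # stop forwarding task requests immediately
        if self._idle_since is None:
            self._idle_since = now

    def tick(self, now):
        """Periodic check: disconnect only after the hold period elapses."""
        if not self.routing and self._idle_since is not None:
            if now - self._idle_since >= self.hold_seconds:
                self.connected = False

link = BorrowedLink(hold_seconds=60)
link.on_resources_recovered(now=0)   # traffic stops at t=0
link.tick(now=30)                    # still within the hold period: link stays up
assert link.connected
link.tick(now=61)                    # hold period elapsed: link is disconnected
```

Keeping the link up for a while avoids paying the link-setup cost again if traffic spikes back shortly after resources recover.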
  • FIG. 8 is a schematic diagram of the architecture of a service processing system provided by an embodiment of the present application.
  • the system provided by the embodiment of the present application is applicable to edge computing scenarios.
  • most edge nodes are lightweight nodes with limited computing power. When sudden traffic arrives, local computing power is insufficient and business needs cannot be met; yet if computing power were deployed according to peak demand, the cost of the edge nodes would be high, even though the central node and adjacent edge nodes would still have spare computing power at that time. In this case, the computing power resources of adjacent edge nodes or the central node are quickly borrowed to meet the sudden traffic demand.
  • the operator (i.e., the target function) can be quickly synchronized to the target node, and the target node can respond to business needs within milliseconds.
  • the borrowed computing power is automatically released according to the monitored measurement information.
  • the foregoing has introduced in detail the flow of the business processing method and the business processing system provided by this application. Based on that flow, the business processing device provided by this application is introduced below.
  • the business processing device can be used to execute the method steps in FIGS. 3 to 7.
  • FIG. 9 is a schematic structural diagram of a service processing device provided by the present application.
  • the service processing device includes a transceiver module 901 and a processing module 902 .
  • the service processing device may be the first computing node or the second computing node in FIGS. 3 to 7, and may also be a blockchain node/device, the first computing device, or the second computing device.
  • the transceiver module 901 can be used to perform steps 302, 304, 305, and other transceiving-related steps in the embodiment corresponding to FIG. 3, and optionally can also be used to perform steps 302 and 303 in that embodiment.
  • the processing module 902 is configured to execute step 302 in the embodiment corresponding to FIG. 3 and other processing-related steps.
  • the transceiver module 901 can be used to execute steps 501, 502, 503, 504, 505, and other transceiving-related steps in the embodiment corresponding to FIG. 5, and the processing module 902 can be used to execute step 503 in that embodiment and other processing-related steps.
  • the transceiver module 901 can be used to execute steps 601, 602, 603, 604, 605, 606, and other transceiving-related steps in the embodiment corresponding to FIG. 6, and the processing module 902 can be used to execute step 604 in that embodiment and other processing-related steps.
  • the transceiver module 901 can be used to execute steps 701, 702, 703, 704, 705, 706, 707, and other transceiving-related steps in the embodiment corresponding to FIG. 7, and the processing module 902 can be used to execute steps 705, 708, 709, and other processing-related steps in that embodiment.
  • the embodiment of the present application also provides a service processing device; please refer to FIG. 10, which is a schematic structural diagram of the service processing device provided in the embodiment of the present application.
  • the business processing device 1800 may be deployed with the business processing apparatus described in the embodiment corresponding to FIG. 9, and is used to implement the functions of the first computing node or the second computing node in the embodiments corresponding to FIGS. 3 to 7, or of a blockchain node/device, the first computing device, or the second computing device.
  • the business processing device 1800 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPU) 1822 (for example, one or more processors), memory 1832 (for example, one or more RAMs), and one or more storage media 1830 (for example, one or more mass storage devices) storing application programs 1842 or data 1844.
  • the memory 1832 and the storage medium 1830 may be temporary storage or persistent storage.
  • the memory 1832 is a random access memory (RAM) that can directly exchange data with the central processing unit 1822; it is used to load the data 1844, the application programs 1842, and/or the operating system 1841 so that the central processing unit 1822 can run and use them directly, and it usually serves as a temporary storage medium for the operating system or other running programs.
  • the program stored in the storage medium 1830 may include one or more modules (not shown in FIG. 10 ), and each module may include a series of instruction operations on the business processing device.
  • the central processing unit 1822 may be configured to communicate with the storage medium 1830 , and execute a series of instruction operations in the storage medium 1830 on the service processing device 1800 .
  • the storage medium 1830 stores program instructions and data corresponding to the method steps shown in any one of the foregoing embodiments shown in FIG. 3 to FIG. 7 .
  • the service processing device 1800 may also include one or more power sources 1826, one or more wired or wireless network interfaces 1850, one or more input and output interfaces 1858, and/or, one or more operating systems 1841, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
  • the embodiment of the present application also provides a business processing device.
  • the business processing device can also be called a digital processing chip or a chip.
  • the chip includes a processing unit and a communication interface.
  • the processing unit can obtain program instructions through the communication interface, and the program instructions are processed by the processing unit.
  • the processing unit is configured to execute the method steps performed by the service processing device shown in any one of the embodiments in FIG. 3 to FIG. 7 .
  • the embodiment of the present application also provides a digital processing chip.
  • the digital processing chip integrates circuits and one or more interfaces for implementing the functions of the above-mentioned processor 1801.
  • the digital processing chip can complete the method steps in any one or more of the foregoing embodiments.
  • when no memory is integrated in the digital processing chip, it can be connected to an external memory through a communication interface.
  • the digital processing chip, according to the program code stored in the external memory, implements the actions performed in the above embodiments by the first computing node or the second computing node, or by a blockchain node/device, the first computing device, or the second computing device.
  • when the service processing device provided in the embodiment of the present application is a chip, the chip specifically includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit.
  • the processing unit can execute the computer-executable instructions stored in the storage unit, so that the chip in the server executes the service processing method described in the embodiments shown in FIGS. 3-7 .
  • the aforementioned storage unit may be a storage unit in the chip, such as a register or a cache, or a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
  • the aforementioned processing unit or processor may be a central processing unit, a neural-network processing unit (NPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the processor mentioned above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the programs of the above-mentioned methods in FIGS. 3 to 7.
  • the embodiment of the present application also provides a computer-readable storage medium storing a program; when the program runs on a computer, the computer executes the steps in the methods described in the embodiments shown in FIGS. 3 to 7.
  • the embodiment of the present application also provides a computer program product which, when running on a computer, causes the computer to execute the steps performed by the first computing node or the second computing node, or by a blockchain node/device, the first computing device, or the second computing device, in the methods described in the embodiments shown in FIGS. 3 to 7.
  • the device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
  • the connection relationship between the modules indicates that they have communication connections, which can be specifically implemented as one or more communication buses or signal lines.
  • This application can be implemented by means of software plus necessary general-purpose hardware, and of course it can also be implemented by dedicated hardware, including application-specific integrated circuits, dedicated CPUs, dedicated memories, and dedicated components.
  • all functions completed by computer programs can also be realized by corresponding hardware, and the specific hardware structures used to realize the same function can vary, such as analog circuits, digital circuits, or special-purpose circuits.
  • however, a software program implementation is the better implementation mode in most cases.
  • the essence of the technical solution of this application, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc, and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device) execute the methods described in the various embodiments of this application.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, hard disk, or magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)

Abstract

The present invention relates to a task processing method, system, and apparatus, improving the efficiency of mutual borrowing of computing power by devices in different device clusters. The method comprises the following steps: a first computing device receives a plurality of task requests by means of a target function in a serverless system; the first computing device sends the target function to a second computing device, the second computing device and the first computing device belonging to different device clusters; when the first computing device determines that its own idle computing power resources are insufficient, the first computing device processes a first part of the task requests on the first computing device by means of the target function, and forwards a second part of the task requests to the second computing device so as to instruct the second computing device, which has idle computing power resources, to process the second part of the task requests by means of the received target function, the plurality of task requests comprising the first part of the task requests and the second part of the task requests; and the first computing device receives a processing result of the second part of the task requests sent by the second computing device.
PCT/CN2022/096569 2021-11-02 2022-06-01 Procédé, système et appareil de traitement de service WO2023077791A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111290138.6 2021-11-02
CN202111290138.6A CN116069492A (zh) 2021-11-02 2021-11-02 一种业务处理的方法、系统以及装置

Publications (1)

Publication Number Publication Date
WO2023077791A1 true WO2023077791A1 (fr) 2023-05-11

Family

ID=86172085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/096569 WO2023077791A1 (fr) 2021-11-02 2022-06-01 Procédé, système et appareil de traitement de service

Country Status (2)

Country Link
CN (1) CN116069492A (fr)
WO (1) WO2023077791A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190026150A1 (en) * 2017-07-20 2019-01-24 Cisco Technology, Inc. Fpga acceleration for serverless computing
CN110537169A (zh) * 2017-04-28 2019-12-03 微软技术许可有限责任公司 分布式计算系统中的集群资源管理
US10915366B2 (en) * 2018-09-28 2021-02-09 Intel Corporation Secure edge-cloud function as a service
CN112671830A (zh) * 2020-12-02 2021-04-16 武汉联影医疗科技有限公司 资源调度方法、系统、装置、计算机设备和存储介质
US20210232440A1 (en) * 2020-01-28 2021-07-29 Hewlett Packard Enterprise Development Lp Execution of functions by clusters of computing nodes

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110537169A (zh) * 2017-04-28 2019-12-03 微软技术许可有限责任公司 分布式计算系统中的集群资源管理
US20190026150A1 (en) * 2017-07-20 2019-01-24 Cisco Technology, Inc. Fpga acceleration for serverless computing
US10915366B2 (en) * 2018-09-28 2021-02-09 Intel Corporation Secure edge-cloud function as a service
US20210232440A1 (en) * 2020-01-28 2021-07-29 Hewlett Packard Enterprise Development Lp Execution of functions by clusters of computing nodes
CN112671830A (zh) * 2020-12-02 2021-04-16 武汉联影医疗科技有限公司 资源调度方法、系统、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN116069492A (zh) 2023-05-05

Similar Documents

Publication Publication Date Title
WO2022021176A1 (fr) Procédé et système de migration de ressources en douceur et de restructuration d'un réseau collaboratif nuage-périphérie
Yi et al. Lavea: Latency-aware video analytics on edge computing platform
Wang et al. Edge cloud offloading algorithms: Issues, methods, and perspectives
US10728091B2 (en) Topology-aware provisioning of hardware accelerator resources in a distributed environment
US8949847B2 (en) Apparatus and method for managing resources in cluster computing environment
CN105607954A (zh) 一种有状态容器在线迁移的方法和装置
WO2023039965A1 (fr) Procédé d'équilibrage et de planification de ressources de calcul de réseau informatique en nuage-périphérie pour un groupage du trafic, et système
CN108182105A (zh) 基于Docker容器技术的局部动态迁移方法及控制系统
CN105308929A (zh) 分布负载平衡器
CN105264865A (zh) 分布负载平衡器中的多路径路由
WO2018121201A1 (fr) Structure de service de grappe distribuée, procédé et dispositif de coopération de nœuds, terminal et support
CN109697120A (zh) 用于应用迁移的方法、电子设备
CN106681839B (zh) 弹性计算动态分配方法
WO2021120633A1 (fr) Procédé d'équilibrage de charge et dispositif associé
US20170293500A1 (en) Method for optimal vm selection for multi data center virtual network function deployment
CN104917805A (zh) 一种负载分担的方法和设备
Singh et al. Survey on various load balancing techniques in cloud computing
CN104539744A (zh) 一种两阶段协作的媒体边缘云调度方法及装置
WO2013016977A1 (fr) Procédé et système de planification uniforme de ressources distantes d'informatique en nuage
US20230136612A1 (en) Optimizing concurrent execution using networked processing units
Liu et al. Service resource management in edge computing based on microservices
JPWO2019100984A5 (fr)
CN103401951B (zh) 基于对等架构的弹性云分发方法
US20230412671A1 (en) Distributed cloud system, data processing method of distributed cloud system, and storage medium
CN106933654B (zh) 一种基于缓存的虚拟机启动方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22888827

Country of ref document: EP

Kind code of ref document: A1