CN116684418B - Computing power orchestration scheduling method, computing power network and device based on computing power service gateway - Google Patents

Info

Publication number
CN116684418B
CN116684418B (application CN202310967727.6A)
Authority
CN
China
Prior art keywords
computing, power, computing power, power service, service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310967727.6A
Other languages
Chinese (zh)
Other versions
CN116684418A
Inventor
黄剑锋 (Huang Jianfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ultrapower Software Co ltd
Original Assignee
Ultrapower Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ultrapower Software Co ltd filed Critical Ultrapower Software Co ltd
Priority to CN202310967727.6A
Publication of CN116684418A
Application granted
Publication of CN116684418B
Legal status: Active

Classifications

    All H04L entries fall under Section H (Electricity), Class H04 (Electric communication technique), Subclass H04L (Transmission of digital information, e.g. telegraphic communication):
    • H04L 67/10015 Network arrangements or protocols for supporting network services or applications; access to distributed or replicated servers, e.g. using brokers
    • H04L 12/66 Data switching networks; arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/563 Provisioning of proxy services; data redirection of data network streams
    • H04L 67/566 Provisioning of proxy services; grouping or aggregating service requests, e.g. for unified processing
    • H04L 67/61 Scheduling or organising the servicing of application requests, taking into account QoS or priority requirements
    • Y02D 10/00 (Section Y, climate change mitigation technologies in ICT) Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Some embodiments of the present application provide a computing power orchestration scheduling method, a computing power network and a device based on a computing power service gateway. The method is applied to a target computing power service gateway, where the target computing power service gateway is one of a plurality of computing power service gateways that are all connected with a computing network brain. The method includes: receiving a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user; and processing the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet, where the secondary orchestration scheduling packet is used to configure the target computing power service node and includes various configuration information. The method and the device realize efficient perception of computing power, enable computing power orchestration scheduling to be carried out at the computing power service gateway, and improve computing power service efficiency.

Description

Computing power orchestration scheduling method, computing power network and device based on computing power service gateway
Technical Field
The present application relates to the technical field of computing power services, and in particular to a computing power orchestration scheduling method, a computing power network and a device based on a computing power service gateway.
Background
With the ongoing digital and intelligent transformation of society, the scale of computing power is growing explosively and the computing power supply model is shifting toward a clustered ecology. To promote the deep integration of computing power and networks, the industry has proposed the computing power network as an integrated information infrastructure and information service system.
In a computing power network, the computing power gateway opens a boundary portal that exposes computing power services in the form of network services, shielding the access differences between computing power access and the bearer network; it advertises and schedules the information states of all computing power resources and performs computing network routing and forwarding based on computing power resource labels. To aggregate and schedule the computing power resources in the computing power network, data describing the attributes of different computing power resources is generally encoded with different protocol programming schemes and forwarded to the computing network brain for analysis. However, protocol programming is not infinitely extensible and consumes additional bytes, so the transmission of computing power data tends to be unstable and real-time perception cannot be achieved.
Therefore, how to provide an efficient computing power orchestration scheduling method in a computing power network based on a computing power service gateway has become a technical problem to be solved.
Disclosure of Invention
The technical solutions of the embodiments of the present application can realize computing power orchestration and scheduling in the computing power network based on the computing power service gateway, realize efficient perception of computing power resources in the computing power network, and improve the efficiency and performance of computing power services.
In a first aspect, some embodiments of the present application provide a method for computing power orchestration scheduling in a computing power network based on a computing power service gateway, where the method is applied to the computing power network, and the computing power network includes: a computing network operation platform, a computing network brain and a plurality of computing power service gateways. The computing network operation platform receives a computing power service order of a user and sends a computing power service request corresponding to the computing power service order to the computing network brain. The computing network brain generates a primary orchestration scheduling packet based on the computing power service request, where the primary orchestration scheduling packet includes: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy. The target computing power service gateway receives the primary orchestration scheduling packet and the algorithm policy sent by the computing network brain, and processes the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet.
Some embodiments of the present application may process and schedule a user's computing power service request through a computing power network composed of a computing network operation platform, a computing network brain and a plurality of computing power service gateways. The computing network brain can select an optimal target computing power service node from the computing power resources managed by the plurality of computing power service gateways based on the computing power service request, determine a computing power service path, and generate a primary orchestration scheduling packet. The target computing power service gateway then processes the primary orchestration scheduling packet to obtain a secondary orchestration scheduling packet, so that the target computing power service node is configured, based on the secondary orchestration scheduling packet, into a node capable of serving the computing power service request. By deploying computing power service gateways, the work of determining the various configuration information for the target computing power service node is offloaded from the computing network brain, which reduces the computational load of the computing network brain. Moreover, with the optimal target computing power service node selected by the computing network brain, the response time to the computing power service request can be reduced; at the same time, the computing power service request can quickly reach the target computing power service node over the computing power service path, which further reduces the response time to some extent and improves the efficiency and performance of the computing power service.
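To make the hand-off described above concrete, the following Python sketch models the two packet types and the gateway step that turns one into the other. The field names (service_policy, config_info, and so on) and the sizing rules are illustrative assumptions; the patent does not prescribe a data format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# A minimal sketch of the two orchestration scheduling packets described above.
# All field names are illustrative assumptions; no wire format is fixed by the patent.

@dataclass
class PrimaryOrchestrationPacket:
    target_node_id: str              # target computing power service node chosen by the brain
    target_gateway_id: str           # gateway to which that node belongs
    service_path: List[str]          # ordered hops of the computing power service path
    service_policy: Dict[str, Any]   # user's computing power service policy (type, scale, QoS ...)

@dataclass
class SecondaryOrchestrationPacket:
    target_node_id: str
    config_info: Dict[str, Any] = field(default_factory=dict)  # "various configuration information"

class ComputingPowerServiceGateway:
    """Target computing power service gateway: turns a primary packet into a secondary packet."""

    def process(self, primary: PrimaryOrchestrationPacket,
                algorithm_policy: Dict[str, Any]) -> SecondaryOrchestrationPacket:
        # Derive node-level configuration by combining the service policy with the algorithm policy.
        config = {
            "compute_power": primary.service_policy.get("scale", "small"),
            "cpu_cores": algorithm_policy.get("default_cores", 4),
            "memory_gb": algorithm_policy.get("default_memory_gb", 16),
        }
        return SecondaryOrchestrationPacket(primary.target_node_id, config)

if __name__ == "__main__":
    primary = PrimaryOrchestrationPacket(
        target_node_id="node-az2-c3-001",
        target_gateway_id="gateway-112",
        service_path=["user-edge", "router-a", "az2-ingress"],
        service_policy={"type": "model_training", "scale": "large", "qos": "performance_first"},
    )
    secondary = ComputingPowerServiceGateway().process(
        primary, {"default_cores": 32, "default_memory_gb": 128})
    print(secondary)
```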
In some embodiments, the computing power network is built on a cloud-native technology base of distributed container clusters and micro-service architecture; and the computing network operation platform, the computing network brain and the plurality of computing power service gateways are interconnected and communicate through an all-optical network base.
According to some embodiments of the application, through the networking mode, safe and reliable deterministic network interconnection can be realized, and a service infrastructure is provided for realizing efficient perception of computing power.
In some embodiments, each of the plurality of computing force service gateways manages a plurality of computing force nodes, the each computing force service gateway belonging to a respective computing force availability zone, the respective computing force availability zone being partitioned based on attributes of computing force resources in a computing force network, wherein the attributes include: at least one of a regional attribute, a computing power resource load attribute, a computing power response time attribute, and a computing power resource utilization attribute.
According to the embodiments of the application, computing power resources are divided into different available areas according to their attributes, and each available area is managed through a computing power service gateway. This enables partitioned management of computing power nodes and improves computing power perception efficiency and computing power service quality, as sketched below.
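The following sketch groups computing power nodes into available areas by a regional attribute. The node records, attribute names and region names are assumptions used only to illustrate the partitioning idea.

```python
from collections import defaultdict

# A minimal sketch of partitioning computing power nodes into available areas (AZs)
# by a regional attribute; node records and region names are illustrative assumptions.

nodes = [
    {"id": "c1-n1", "region": "North China", "load": 0.35, "response_ms": 12},
    {"id": "c3-n2", "region": "East China",  "load": 0.60, "response_ms": 18},
    {"id": "c3-n7", "region": "East China",  "load": 0.20, "response_ms": 9},
    {"id": "c7-n4", "region": "South China", "load": 0.80, "response_ms": 25},
]

def partition_into_azs(nodes, attribute="region"):
    """Group nodes into available areas keyed by the chosen attribute."""
    azs = defaultdict(list)
    for node in nodes:
        azs[node[attribute]].append(node["id"])
    return dict(azs)

print(partition_into_azs(nodes))
# e.g. {'North China': ['c1-n1'], 'East China': ['c3-n2', 'c3-n7'], 'South China': ['c7-n4']}
```

The same function could partition on a load, response-time or utilization attribute simply by passing a different attribute name.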
In some embodiments, the computing network brain generates a primary orchestration schedule package based on the computing power service request, comprising: the computing network brain selects the target computing power service node corresponding to the computing power service strategy, and performs path planning based on the network bandwidth occupation state to obtain the computing power service path; the method further comprises the steps of: and the computing network brain generates and sends the primary scheduling packet and the algorithm strategy to the target computing power service gateway.
The computing network brain of some embodiments of the application can generate the primary scheduling packet based on the computing power service request and provide the algorithm policy to the target computing power service gateway, so that the primary scheduling packet can be effectively optimized.
In some embodiments, the processing the primary scheduling packet by using the algorithm policy generates a secondary scheduling packet, including: the target power service gateway performs power network resource arrangement scheduling on the target power service node by using the algorithm policy to obtain various configuration information in the secondary arrangement scheduling packet matched with the power service policy, wherein the various configuration information comprises: calculating force parameters, core numbers, memory parameters, database information, algorithm information and middleware information.
According to some embodiments of the application, the first-level scheduling package can be effectively optimized to obtain the second-level scheduling package through the algorithm policy, so that the target computing power service node conforming to the computing power service policy is obtained through management and control, the effective processing of the computing power service request can be realized, and the computing power service quality is improved.
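The sketch below shows one way the gateway might map a computing power service policy plus an algorithm policy onto the six kinds of configuration information listed above. The policy keys and sizing rules are illustrative assumptions rather than the patent's actual logic.

```python
# A sketch of deriving the "various configuration information" of the secondary
# orchestration scheduling packet; keys and defaults are illustrative assumptions.

def orchestrate_node_config(service_policy: dict, algorithm_policy: dict) -> dict:
    scale = service_policy.get("scale", "small")
    sizing = algorithm_policy.get("sizing", {}).get(scale, {"cores": 4, "memory_gb": 16})
    return {
        "compute_power_tflops": sizing.get("tflops", 10),    # computing power parameter
        "cpu_cores": sizing["cores"],                         # core count
        "memory_gb": sizing["memory_gb"],                     # memory parameter
        "database": algorithm_policy.get("database", {"name": "metrics_db", "size_gb": 50}),
        "algorithm": algorithm_policy.get("algorithm", service_policy.get("type")),
        "middleware": algorithm_policy.get("middleware", ["message_queue", "cache"]),
    }

policy = {"type": "model_training", "scale": "large", "qos": "performance_first"}
algo = {"sizing": {"large": {"cores": 64, "memory_gb": 256, "tflops": 120}}}
print(orchestrate_node_config(policy, algo))
```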
In some embodiments, after the generating of the secondary orchestration scheduling packet, the method further comprises: the target computing power service gateway sends the secondary orchestration scheduling packet to the target computing power service node; the target computing power service gateway obtains a computing power service result of the target computing power service node completing the computing power service request; and the target computing power service gateway sends the computing power service result to the computing network operation platform.
Some embodiments of the present application implement effective supervision of the computing power service by acquiring the computing power service result of the target computing power service node executing the computing power service request and feeding it back to the computing network operation platform.
In a second aspect, some embodiments of the present application provide a computing power network for performing a method according to any of the embodiments of the first aspect, the computing power network comprising: a plurality of computing power available areas obtained by dividing the computing power network resources based on attributes of the computing power resources, where each of the plurality of computing power available areas includes: a computing power service gateway and the regional computing power nodes managed by that computing power service gateway. The computing power service gateway is configured to receive a registration request of a registering computing power node and add the registering computing power node to the resources managed by the computing power service gateway. The computing power service gateway is configured to send relevant information of the registered computing power node to the computing network brain, so that the computing network brain stores the relevant information, where the relevant information includes: the number of the registered computing power node, the computing power service type, the computing power parameters and the resource utilization information.
According to the method and the device, the computing power resources in the computing power network are divided into the plurality of computing power available areas, and the area service nodes in the optimal computing power available areas can be selected for service for different computing power service requests, so that the efficient sensing of the computing power and the request processing efficiency are improved. The computing power service gateway can provide effective business service for subsequent computing power service requests by receiving new registration requests of computing power nodes and synchronizing the registration requests to the computing network brain.
In some embodiments, the respective computing power availability areas are connected to the computing network brain, the computing network brain is connected to a computing network operation platform, wherein the computing network operation platform and the computing network brain call the computing power service gateway through a unified call interface; the computing power service gateway is used for acquiring the computing power service condition of the regional computing power node, or the computing power service gateway is used for acquiring the computing power service condition of the regional computing power node through the cloud management platform.
Some embodiments of the application realize standard capability opening and unified management through a unified call interface. The regional computing power nodes can be managed effectively through the computing power service gateway, or unified management of different computing power resources can be realized through the cloud management platform, providing a unified heterogeneous computing power measurement and standard system, matching more suitable and efficient computing power types for upper-layer applications, and offering more computing power resources as service candidates.
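A possible shape for such a unified call interface is sketched below: the operation platform and the computing network brain talk to one abstract gateway API, whether the gateway queries its regional nodes directly or delegates to a cloud management platform. The method names and the cloud-management backend are assumptions for illustration only.

```python
from abc import ABC, abstractmethod

# A sketch of a unified call interface for computing power service gateways.
# Method names and the cloud-management delegation are illustrative assumptions.

class ComputingPowerServiceGatewayAPI(ABC):
    @abstractmethod
    def get_node_status(self, node_id: str) -> dict:
        """Return the computing power service condition of a regional node."""

    @abstractmethod
    def apply_secondary_packet(self, node_id: str, config: dict) -> bool:
        """Configure a node using the secondary orchestration scheduling packet."""

class DirectGateway(ComputingPowerServiceGatewayAPI):
    """Gateway that queries and configures its regional nodes directly."""
    def get_node_status(self, node_id):
        return {"node": node_id, "load": 0.4, "available": True}   # placeholder values
    def apply_secondary_packet(self, node_id, config):
        return True

class CloudManagedGateway(ComputingPowerServiceGatewayAPI):
    """Gateway that delegates to a regional cloud management platform."""
    def __init__(self, cloud_platform):
        self.cloud = cloud_platform          # expected to expose query() and configure()
    def get_node_status(self, node_id):
        return self.cloud.query(node_id)
    def apply_secondary_packet(self, node_id, config):
        return self.cloud.configure(node_id, config)
```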
In a third aspect, some embodiments of the present application provide a method for computing power orchestration scheduling in a computing power network based on a computing power service gateway. The method is applied to a target computing power service gateway, where the target computing power service gateway is one of a plurality of computing power service gateways, and the plurality of computing power service gateways are connected with a computing network brain in the computing power network. The method comprises: receiving a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user, where the primary orchestration scheduling packet is generated by the computing network brain based on the computing power service request and includes: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy; and processing the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet.
In a fourth aspect, some embodiments of the present application provide a computing power orchestration scheduling apparatus in a computing power network based on a computing power service gateway, applied to a target computing power service gateway, where the target computing power service gateway is one of a plurality of computing power service gateways and each of the plurality of computing power service gateways is connected to a computing network brain. The apparatus comprises: an acquisition module configured to receive a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user, where the primary orchestration scheduling packet is generated by the computing network brain based on the computing power service request and includes: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy; and a generation module configured to process the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet, where the secondary orchestration scheduling packet is used to configure the target computing power service node and includes various configuration information.
In a fifth aspect, some embodiments of the application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method according to any of the embodiments of the first or third aspects.
In a sixth aspect, some embodiments of the present application provide an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor is operable to implement a method as described in any of the embodiments of the first or third aspects when the program is executed by the processor.
In a seventh aspect, some embodiments of the application provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor, is adapted to carry out the method according to any one of the embodiments of the first or third aspects.
Drawings
In order to more clearly illustrate the technical solutions of some embodiments of the present application, the drawings that are required to be used in some embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be construed as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
FIG. 1 is a diagram of a computing power network system provided by some embodiments of the present application;
FIG. 2 is one of the flow charts of a method for scheduling computing power orchestration in a computing power network based on a computing power service gateway according to some embodiments of the present application;
FIG. 3 is a second flowchart of a method for scheduling computing power in a computing power network based on a computing power service gateway according to some embodiments of the present application;
FIG. 4 is a block diagram of a computing power orchestration scheduling apparatus in a computing power network based on a computing power service gateway according to some embodiments of the present application;
fig. 5 is a schematic diagram of an electronic device according to some embodiments of the present application.
Detailed Description
The technical solutions of some embodiments of the present application will be described below with reference to the drawings in some embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
In the related art, the computing power network is a new type of information infrastructure that takes computing as the center and the network as the foundation, deeply integrating network, cloud, data, intelligence, security, edge, terminal and chain (ABCDNETS) to provide integrated services. The goal of the computing power network is to realize ubiquitous computing power, computing-network symbiosis, intelligent orchestration and integrated services, gradually turning computing power into a social-level service that, like water and electricity, can be accessed at one point and used on demand, achieving the vision that the network reaches everywhere, computing power is ubiquitous and intelligence extends to everything. The computing power in the entire computing power network may currently be provided by different hardware architectures, generally comprising four types of chips: CPU (Central Processing Unit), GPU (Graphics Processing Unit), FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit). CPUs mainly include X86 and ARM; although ARM designs were initially adopted for customized ASIC chips promoted for low-power-consumption scenarios, ARM has also come to be adopted and deployed as a general-purpose chip with its wide application in servers and embedded terminals. The GPU is mainly a proprietary architecture for fast processing of vector and graphics data; the FPGA, as a programmable logic gate array, has advantages in hardware acceleration. Processing requirements of particular scenarios call for custom dedicated chips, such as the various TPUs and NPUs currently designed for deep learning, which belong to the category of custom ASICs.
The computing network brain serves as the orchestration and management core of the computing power network; it uses artificial intelligence technology to realize intelligent perception, intelligent orchestration and scheduling, and autonomous intelligence of the computing network, thereby promoting the integrated development of computing and networking and their symbiotic growth. Intelligent perception of the computing network requires unified identification and computing power modeling of various heterogeneous computing power, as well as real-time and efficient information collection and computing power state perception for the various heterogeneous computing powers.
In the traditional method of perceiving the computing power state, the computing power gateway provides a means of converting computing power information into network services that are opened to the outside. For the computing power user, the computing power gateway provides the possibility of aggregating and dynamically scheduling computing power in the network. The channels between computing power gateways are built over the underlying bearer network in overlay mode, and both computing power and network are guaranteed through service network capabilities such as SRv6 BSID (Segment Routing over IPv6 binding segment identifier). However, because the computing power complexity in the computing power network is high, the SRv6 SID programmable space is not infinitely extensible, and a custom computing power mechanism cannot be propagated without limit, so efficient forwarding of all computing power data cannot be achieved. Moreover, IP routes can be aggregated at the boundary and advertised according to address prefix length, but multidimensional computing power routing cannot apply this mechanism. Computing power resources are stateful; if resources are advertised through computing power routes, the routes will change continuously with the usage state, affecting network stability.
In another prior-art scheme, all kinds of computing power state information data from the computing power service gateways and the computing power management and control platforms must be transmitted over the network and converged at the computing power network brain (referred to simply as the computing network brain), which has to bear enormous network bandwidth consumption. The state advertisement information of all the computing power nodes is processed by a single computing network brain node, which must bear enormous computing and processing power consumption.
As can be seen from the related art, the computing network brain in the prior art bears a large computational load, which affects the efficiency of computing power perception.
In view of this, some embodiments of the present application provide a computing power orchestration scheduling method in a computing power network based on a computing power service gateway. The method is applied to a target computing power service gateway that manages a plurality of computing power nodes, where the target computing power service gateway is one of a plurality of computing power service gateways, and the plurality of computing power service gateways are all connected to the computing network brain. In the method, the computing network brain performs a preliminary computation on the computing power service request to obtain a primary orchestration scheduling packet, and then issues the primary orchestration scheduling packet and a preset algorithm policy to the target computing power service gateway. After receiving the primary orchestration scheduling packet and the algorithm policy, the target computing power service gateway can generate a secondary orchestration scheduling packet based on the primary orchestration scheduling packet, and configure the target computing power service node through the information in the secondary orchestration scheduling packet, so that the target computing power service node satisfies the conditions for serving the computing power service request. According to some embodiments of the application, the computing power service gateway manages the computing power nodes, thereby realizing orchestration and scheduling of computing power resources in the computing power network; the target computing power service gateway can share the computing network brain's processing load for the target computing power service node, improving computing power perception efficiency and computing power service quality.
The overall composition of the computing network provided by some embodiments of the present application is described below by way of example in conjunction with fig. 1.
In some embodiments of the present application, a computing power network includes a plurality of computing power available regions partitioned from computing power network resources based on attributes of the computing power resources, wherein each of the plurality of computing power available regions includes: a computing force service gateway and regional computing force nodes managed by the computing force service gateway.
As shown in fig. 1, some embodiments of the application provide a computing power network comprising: a computing network operation platform 100, a computing network brain 110, a first available area AZ1, a second available area AZ2, a third available area AZ3 and a fourth available area AZ4. AZ1 comprises: a first computing power service gateway 111, a first cloud management platform, and resource pools c1 and c2. AZ2 comprises: a second computing power service gateway 112, a second cloud management platform, and resource pools c3 and c4. AZ3 comprises: a third computing power service gateway 113, a third cloud management platform, and resource pools c5 and c6. AZ4 comprises: a fourth computing power service gateway 114, a fourth cloud management platform, and resource pools c7 and c8.
In some embodiments of the present application, the computing power service gateway in fig. 1 communicates with the computing network operation platform 100 and the computing network brain 110 through the all-optical network chassis. Different types of computing power nodes are arranged in the resource pool so as to meet various service requirements of computing power users.
In some embodiments of the present application, each of the plurality of computing force service gateways manages a plurality of computing force nodes, the each computing force service gateway belonging to a respective computing force availability zone, the respective computing force availability zone being partitioned based on an attribute of a computing force resource in a computing force network, wherein the attribute comprises: at least one of a regional attribute, a computing power resource load attribute, a computing power response time attribute, and a computing power resource utilization attribute.
For example, in some embodiments of the present application, the computing power network is divided into AZs, i.e. available areas, such as AZ1, AZ2, AZ3 and AZ4 in fig. 1, according to the comprehensive spatial and temporal relevance of computing power resources and network resources (as one specific example of an attribute). Here, space refers to the regional attribute by which computing power can be divided, such as North China, East China and South China. Time refers to the network delay of the computing power service, the response time of the computing power node, and the like. The computing power resource load attribute specifically refers to the amount of computing power service load borne by the computing power resource. The computing power resource utilization attribute refers to the capability of the computing power resource to actually provide a service. The division manner may be selected according to the actual situation to divide the available areas of the computing power network, which is not limited in detail herein.
In some embodiments of the present application, the respective computing power availability zones are connected to the computing network brain, the computing network brain is connected to a computing network operation platform, wherein the computing network operation platform and the computing network brain invoke the computing power service gateway through a unified invocation interface; the computing power service gateway is used for acquiring the computing power service condition of the regional computing power node, or the computing power service gateway is used for acquiring the computing power service condition of the regional computing power node through the cloud management platform.
Specifically, the various heterogeneous computing power resources in each available area register with the computing power service gateway of the local area (namely the available area to which the computing power resource belongs) and accept the management, computing power orchestration and scheduling of the computing power service gateway of the local area; in this case a cloud management platform need not be set up. In other embodiments of the present application, the regional computing power service gateway may also manage, perceive, orchestrate and schedule the computing power resources in the region through the cloud management platform of the region, that is, each available area shown in fig. 1 is provided with a cloud management platform that communicates with the computing power service gateway.
In some embodiments of the present application, a power user may subscribe to a power service (of various types) via the power network operations platform 100 and place relevant power network quality of service and performance requirements. The computing network operation platform 100 sends a computing power service request to the computing network brain 110 based on the computing power service subscription ordered by the computing power user. For example, a power user may specify what the type of power service they need is, how large the power scale needs to be, how long the power service request needs to be completed, and so on.
In some embodiments of the present application, the computing network brain 110 is configured to select a target computing power service node corresponding to a computing power service policy, and perform path planning based on a network bandwidth occupation state to obtain a computing power service path; the computing network brain 110 is configured to send the computing power service scheduling request carrying the primary orchestration scheduling packet and the algorithm policy to a target computing power service gateway.
For example, in some embodiments of the application, the computing network brain 110 may select a nearby and eligible optimal computing power service node (i.e. the target computing power service node) for the user based on the time and place of the computing power service request and the computing power service policy formulated by the user. For example, if the user belongs to the East China region, the best computing power service node may be selected from computing power resources in the East China available area, and the target computing power service gateway managing that area is thereby obtained. The current network bandwidth occupancy state is then analyzed along the network paths from the user to the best computing power service node, and a network path that is not congested, has relatively wide bandwidth, and passes through relatively few routers is selected as the optimal network path (as a specific example of the computing power service path). Distributing the computing power service request to the optimal computing power service node through the optimal network path improves the response efficiency to the computing power service request and further improves the processing efficiency of the service.
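The selection logic just described can be summarized in a short sketch: pick the best node in the user's region, then pick the least-congested, shortest candidate path to it. The scoring weights and the candidate data are illustrative assumptions, not the patent's actual algorithm.

```python
# Illustrative node and path selection; data and weights are assumptions.

nodes = [
    {"id": "east-n1", "region": "East China",  "load": 0.7, "response_ms": 15},
    {"id": "east-n2", "region": "East China",  "load": 0.3, "response_ms": 10},
    {"id": "south-n1", "region": "South China", "load": 0.2, "response_ms": 40},
]
paths = {
    "east-n2": [
        {"hops": 5, "free_bandwidth_gbps": 8, "congested": False},
        {"hops": 3, "free_bandwidth_gbps": 4, "congested": True},
    ],
}

def pick_node(nodes, user_region):
    candidates = [n for n in nodes if n["region"] == user_region] or nodes
    # Lower load and lower response time are both better.
    return min(candidates, key=lambda n: n["load"] + n["response_ms"] / 100.0)

def pick_path(candidate_paths):
    usable = [p for p in candidate_paths if not p["congested"]]
    # Prefer wide free bandwidth, then fewer routers on the path.
    return max(usable, key=lambda p: (p["free_bandwidth_gbps"], -p["hops"]))

best_node = pick_node(nodes, "East China")
best_path = pick_path(paths[best_node["id"]])
print(best_node["id"], best_path)
```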
In some embodiments of the present application, the computing network brain 110 is configured to generate a primary orchestration schedule package for a computing user to select a target computing service gateway for the computing user according to the content of the computing service request, and send the primary orchestration schedule package and a related algorithm policy to the target computing service gateway. Wherein the target computing power service gateway is any one of the 4 computing power service gateways in fig. 1.
In some embodiments of the present application, the target power service gateway may select an appropriate target power service node for the power user based on the first level orchestration schedule package and the associated algorithm policy to fulfill the power service request. Finally, the target power service gateway may feed back the power service result to the power network operation platform 100. The target power service node may be characterized by a power service node unique identifier, a power service node ID, or a specific power service node.
It should be appreciated that the number of computing service gateways is set based on the number and condition of computing nodes that the computing network contains, and fig. 1 is merely a brief example for some embodiments of the present application. In practical applications, the number of available areas may be more than the above four, and the number of the computing service gateways may be more than the above four. The embodiments of the present application are not particularly limited herein.
In some embodiments of the present application, the computing power network of fig. 1 may perform a computing power orchestration scheduling method in a computing power network based on a computing power service gateway, the method comprising: the computing network operation platform receives a computing power service order of a user and sends a computing power service request corresponding to the computing power service order to the computing network brain; the computing network brain generates a primary orchestration scheduling packet based on the computing power service request, where the primary orchestration scheduling packet comprises: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy; and the target computing power service gateway receives the primary orchestration scheduling packet and the algorithm policy sent by the computing network brain, and processes the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet. The secondary orchestration scheduling packet is used to configure the target computing power service node and comprises various configuration information.
The implementation of a power orchestration schedule in a power network based on a power service gateway, as performed by a target power service gateway, according to some embodiments of the present application, is described below by way of example with reference to fig. 2.
It should be noted that the target computing power service gateway is determined when the computing network brain 110 selects a target computing power service node based on the computing power service request of the computing power user, combining the computing power type, computing power response time, transmission bandwidth, network transmission delay and computing power resource utilization of the computing power resources in each available area.
Referring to fig. 2, fig. 2 is a flowchart of a method for scheduling computing power in a computing power network based on a computing power service gateway according to some embodiments of the present application, where the method for scheduling computing power in the computing power network based on the computing power service gateway includes:
s210, receiving a primary scheduling packet and an algorithm policy, which are sent by the computing network brain and related to a computing power service request of a user, wherein the primary scheduling packet is generated by the computing network brain based on the computing power service request, and the primary scheduling packet comprises: and determining a target computing power service node, a target computing power service gateway, a computing power service path and a computing power service strategy which are determined according to the computing power service request.
For example, in some embodiments of the application, the computing power service policy may be a user requirement related to the computing power service request formulated by the user (e.g., a requirement that performance, quality of service, or service time be prioritized when serving the computing power service request). The computing network brain 110 may select the optimal target computing power service node according to the location and time of the computing power service request and the computing power service policy, determine the target computing power service gateway to which the optimal target computing power service node belongs, formulate an optimal network path (as a specific example of the computing power service path), and perform computing power service orchestration (including computing network resource orchestration and computing network traffic orchestration) to generate the primary orchestration scheduling packet. That is, the computing network brain 110 allocates the optimal computing power resource node (i.e., the optimal target computing power service node) across the computing power network domain (i.e., among the computing power resources of all available areas) according to the computing power service request. When the computing power service request is sent to the optimal computing power resource node, an optimal network path is formulated so that the request can reach the node quickly, reducing the response time. Algorithm policies formulated for different types of computing power services are also preset in the computing network brain 110, for example specifying what computing power environment, computing power scale and computing power workflow are to be adopted for different computing power services. For example, the computing network brain provides the algorithm model, the policy model, the flow model, the orchestration model and the various model-related parameter sets associated with the algorithm policy to the target computing power service gateway through the cloud-native infrastructure (e.g., a distributed memory database), or issues an orchestration scheduling instruction together with the primary orchestration scheduling packet to the target computing power service gateway containing the target computing power service node. Specifically, the algorithm model may identify the particular type of algorithm to be provided for the current computing power service request. The policy model may identify which algorithm policies are to be enforced on the current computing power service request to meet the user's performance or time requirements and provide good-quality computing power service. The flow model may identify the particular process for serving the current computing power service request. The orchestration model may determine how the algorithms, policies and flows provided for the current computing power request are configured in combination to fulfill the computing power request service. The various model-related parameter sets may provide configuration parameters, such as database parameters and middleware parameters, for the optimal computing power resource node. All or part of these models may be used to serve the optimal computing power resource node according to the actual situation, so that the user's computing power service request is completed with guaranteed quality.
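The following sketch models the algorithm policy bundle described above as a plain dictionary. The keys mirror the description (algorithm, policy, flow and orchestration models plus parameter sets); the concrete values are invented for illustration.

```python
# A sketch of the algorithm policy bundle; values are illustrative assumptions.

algorithm_policy = {
    "algorithm_model": "distributed_training_v1",        # which algorithm type serves the request
    "policy_model": {"priority": "performance_first"},    # which policies to enforce
    "flow_model": ["allocate", "configure", "execute", "report"],  # serving process
    "orchestration_model": "combine_sequentially",        # how algorithm/policy/flow are combined
    "parameter_sets": {
        "database": {"name": "training_meta", "size_gb": 100},
        "middleware": {"message_queue": "mq_cluster_a"},
    },
}

def select_models(policy: dict, use: set) -> dict:
    """Use all or part of the models according to the actual situation."""
    return {k: v for k, v in policy.items() if k in use}

# e.g. a lightweight request might only need the algorithm and flow models:
print(select_models(algorithm_policy, {"algorithm_model", "flow_model"}))
```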
In some embodiments of the application, the computational power service policy comprises: the power calculation service request corresponds to the power calculation service type, the power calculation service model, the power calculation service scale, the power calculation service quality level requirement and the power calculation related parameters.
For example, in some embodiments of the application, the computing power service policy in the primary orchestration scheduling packet may specifically include the computing power requirements of the computing power user related to the computing power service request. For example, if the computing power service request is to train a model, the computing power service type may be a model training service, the computing power service model may be the model related to the training, the computing power service scale may be determined from how large a data volume the training needs, and the computing power service quality may be parameters related to training efficiency during the training process (for example, how many iterations of the model are completed within a set time). The computing-power-related parameters may refer to power consumption parameters, frequency parameters and the like during model training.
S220, processing the primary orchestration scheduling packet using the algorithm policy to generate a secondary orchestration scheduling packet.
For example, in some embodiments of the present application, the target computing power service gateway may optimize the primary orchestration scheduling packet according to the algorithm policy to obtain a secondary orchestration scheduling packet that can be put into practice and that contains the configuration associated with the target computing power service node, so that the target computing power service node can serve the computing power service request in a targeted manner. On this basis, the target computing power service gateway can use the algorithm policy to perform targeted configuration, optimization and service orchestration and scheduling of the computing network resources of the target computing power service node, generate the secondary orchestration scheduling packet, and issue the computing power service request to the configured target computing power service node, so that effective service of the computing power service request can be realized. The secondary orchestration scheduling packet is used to configure the target computing power service node and comprises various configuration information.
In some embodiments of the present application, S220 may include: the target power service gateway performs power network resource arrangement scheduling on the target power service node by using the algorithm policy to obtain various configuration information in the secondary arrangement scheduling packet matched with the power service policy, wherein the various configuration information comprises: calculating force parameters, core numbers, memory parameters, database information, algorithm information and middleware information.
For example, in some embodiments of the present application, the target power service gateway may optimize the primary orchestration schedule package through an algorithm policy, orchestrate the scheduling, resulting in a secondary orchestration schedule package, such that the target power service node may satisfy the environmental conditions that service the power service request. Such as providing a service environment for the computing force request service, a CPU core number (as a specific example of the core number), database information (such as a database name, a database size, etc.), middleware information (such as a middleware type, a middleware number, etc.), a computing force value (as a specific example of a computing force parameter), a computing force service billing data interface, etc. In other embodiments of the present application, the target computing power service gateway may also configure or read basic information (such as computing power values, memory, core numbers, etc.) of the computing power nodes under its control through the cloud management platform or the target computing power service gateway itself, so as to facilitate the scheduling of the target computing power service gateway.
In some embodiments of the present application, the second level orchestration schedule package may further comprise: the system comprises a power computing service ID set, a power computing service model set, a power computing service quality grade code, a power computing service running mirror image, a power computing service environment parameter, a power computing value, a memory amount, a network performance parameter, a power computing service flow, a power computing service result return interface and a power computing service charging data return interface of the target power computing service node.
Specifically, the content of the second level scheduling packet may be updated based on actual situations, which the embodiments of the present application are not limited to.
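For concreteness, the sketch below lays out a secondary orchestration scheduling packet with the fields listed above. The field names paraphrase that list and every value (image name, endpoints, numbers) is invented purely for illustration; the patent does not prescribe a concrete format.

```python
# An illustrative secondary orchestration scheduling packet; all values are assumptions.

secondary_packet = {
    "service_id_set": ["svc-20230801-0001"],
    "service_model_set": ["model_training"],
    "qos_grade_code": "QOS-2",
    "service_run_image": "registry.example.internal/training:1.0",   # hypothetical image
    "service_env_params": {"framework": "torch", "precision": "fp16"},
    "compute_power_value_tflops": 120,
    "memory_gb": 256,
    "network_performance": {"bandwidth_gbps": 25, "latency_ms": 2},
    "service_flow": ["allocate", "configure", "execute", "report"],
    "result_return_interface": "/api/v1/service-result",      # hypothetical endpoint
    "billing_data_return_interface": "/api/v1/billing-data",  # hypothetical endpoint
}
```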
In some embodiments of the present application, after performing S220, the method of scheduling of computing power orchestration in a computing power network based on a computing power service gateway further comprises: acquiring a power calculation service result of the target power calculation service node for completing the power calculation service request; and sending the calculation power service result to a calculation network operation platform.
For example, in some embodiments of the present application, the target computing power service gateway may manage and control the target computing power service node to complete the computing power service corresponding to the computing power service request, and obtain the computing power service result. The result of the power service can also include power charging conditions, such as power resource usage amount and time. By sending the computing power service results to the computing network operation platform 100, the computing power user can be facilitated to view and understand the computing power service status, progress and result conditions.
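A possible shape for this feedback step is sketched below: the gateway packages the service status and billing-related usage figures and reports them to the operation platform. The payload fields and the send function are assumptions; no real platform API is implied.

```python
import json

# Illustrative result feedback from the target gateway to the operation platform.

def build_service_result(service_id: str, status: str,
                         usage_hours: float, resource_amount: int) -> dict:
    return {
        "service_id": service_id,
        "status": status,                              # e.g. "completed"
        "billing": {
            "resource_usage_amount": resource_amount,  # e.g. cores or TFLOPS consumed
            "usage_hours": usage_hours,
        },
    }

def send_to_operation_platform(result: dict) -> None:
    # Placeholder transport: in practice this would call the operation platform's interface.
    print("feedback to operation platform:", json.dumps(result))

send_to_operation_platform(build_service_result("svc-20230801-0001", "completed", 3.5, 64))
```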
In some embodiments of the present application, the computing power orchestration scheduling method in a computing power network based on a computing power service gateway further comprises: receiving a registration request of a registering computing power node, and adding the registering computing power node to the computing power resources managed by the target computing power service gateway; and transmitting relevant information of the registered computing power node to the computing network brain, where the relevant information comprises: the number of the registered computing power node, the computing power service type, the computing power parameters and the resource utilization information.
For example, in some embodiments of the application, the target computing power service gateway may receive a registration request for a new computing power node (as a specific example of registering a computing power node). After receiving the registration request of the new computing node, the target computing service gateway may assign a unique computing identifier (as a specific example of a number) to the new computing node and add the unique computing identifier to the resource pool. At the same time, the target computing power service gateway may also synchronize new computing power nodes to the computing network brain 110 so as to be within the orchestrated scheduling scope when subsequently distributing computing power service requests. In addition, the target computing power service gateway can update the computing power condition of the computing power nodes in the managed resource pool. Meanwhile, the synchronous updating of the power unified identification database of the power computing network universe is carried out based on a cloud-primary distributed memory sharing mechanism (such as a cloud-primary distributed memory database), so that the synchronous notification of the resource condition of the power computing nodes is realized, and the power computing service quality is improved.
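The registration flow just described is sketched below: the gateway assigns a unique computing power identifier to a newly registered node, adds it to its resource pool, and synchronizes the node's relevant information to the computing network brain. The identifier scheme and the brain-side registry are assumptions for illustration.

```python
import itertools

# Illustrative node registration at the target computing power service gateway.

_id_counter = itertools.count(1)

class ComputingNetworkBrainRegistry:
    """Stand-in for the computing network brain's store of registered node information."""
    def __init__(self):
        self.nodes = {}
    def sync(self, node_info: dict) -> None:
        self.nodes[node_info["number"]] = node_info

class TargetGateway:
    def __init__(self, az_name: str, brain: ComputingNetworkBrainRegistry):
        self.az_name = az_name
        self.brain = brain
        self.resource_pool = {}

    def register_node(self, service_type: str, compute_params: dict) -> str:
        number = f"{self.az_name}-node-{next(_id_counter):04d}"   # unique computing power identifier
        info = {
            "number": number,
            "service_type": service_type,
            "compute_params": compute_params,
            "resource_utilization": 0.0,
        }
        self.resource_pool[number] = info
        self.brain.sync(info)   # keep the brain's global view consistent
        return number

brain = ComputingNetworkBrainRegistry()
gateway = TargetGateway("AZ2", brain)
print(gateway.register_node("model_training", {"tflops": 120, "memory_gb": 256}))
```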
Specifically, the heterogeneous computing power unified identification database cluster nodes on the resource node side can update the time-varying computing power metric information of the computing power nodes. The time-varying computing power metric information includes time-varying information of the computing power node such as faults, alarms and failures. The time-varying attribute data of the computing power resource node is updated in real time through real-time monitoring and analysis of the full-stack data of the computing power resource node. A time-varying data update trigger threshold is used to monitor the computing power state of the computing power node; for example, when a computing power node fails, its time-varying attribute data in the time-varying computing power metric information can be changed from "normal" to "failed".
The heterogeneous computing power unified identification database cluster nodes at the target computing power service gateway can update the primary aggregation computing power metric information of computing power nodes. The primary aggregation computing power metric information comprises primary aggregation attribute data containing the computing power resource identifiers corresponding to the computing power nodes. The primary aggregation attribute data may include intermediate aggregation attribute data such as the resource availability status and resource utilization of the computing power node, where the resource availability status and resource utilization are updated based on the resource change rate. For example, when the change between the resource availability status attribute value collected in the current preset time period and the attribute value recorded in the previous preset time period exceeds 5% (which may be ±5%) of the resource occupancy change rate, the attribute values corresponding to the resource availability status and the resource utilization may be updated.
The heterogeneous computing power unified identification database cluster nodes on the computing network brain side can update the secondary aggregation computing power metric information of computing power nodes. The secondary aggregation computing power metric information may include the resource idleness and resource utilization of the computing power nodes. For example, when the occupancy rate of a computing power node exceeds a preset threshold at two time points, the resource idleness and resource utilization of the computing power node in the secondary aggregation computing power metric information may be updated.
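As a rough illustration of the three update rules above, the sketch below models the time-varying status flip on the resource-node side and the change-rate-triggered update on the gateway side. The field names, the handling of the 5% threshold and the comparison rule are assumptions paraphrased from the description, not a definitive implementation.

    # Sketch of threshold-triggered metric updates; rules are assumptions.
    from dataclasses import dataclass

    @dataclass
    class NodeMetrics:
        status: str          # "normal" / "failed" (time-varying attribute)
        availability: float  # resource availability status, 0.0 - 1.0
        utilization: float   # resource utilization, 0.0 - 1.0

    def update_time_varying(metrics: NodeMetrics, fault_detected: bool) -> None:
        # Resource-node side: flip the time-varying attribute when a fault fires.
        if fault_detected:
            metrics.status = "failed"

    def primary_update_needed(prev: NodeMetrics, curr: NodeMetrics,
                              threshold: float = 0.05) -> bool:
        # Gateway side: propagate primary-aggregation attributes only when the
        # change rate of resource occupancy exceeds +/-5% of the last record.
        if prev.availability == 0:
            return True
        change_rate = abs(curr.availability - prev.availability) / prev.availability
        return change_rate > threshold

    # Usage: record the new values only when the rule fires.
    prev = NodeMetrics(status="normal", availability=0.80, utilization=0.20)
    curr = NodeMetrics(status="normal", availability=0.70, utilization=0.30)
    if primary_update_needed(prev, curr):
        prev.availability, prev.utilization = curr.availability, curr.utilization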
Because the API (Application Programming Interface) capability interfaces that the management systems of different computing power resources provide to the computing power service gateway differ considerably, different cloud management platforms can be adapted and accessed one by one. Secondly, diversified heterogeneous computing power has different technical indexes and applicable scenarios, and a unified heterogeneous computing power measurement and labeling system is currently lacking; by extracting standard capabilities and establishing a standard data model (i.e., an algorithm policy), upper-layer applications can be matched with more suitable and efficient computing power types and offered more computing power resources as service candidates. Finally, the computing power service gateway can provide a unified calling interface for the computing network brain and operation-layer applications, realizing standard capability exposure and unified management.
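One way to picture the unified calling interface is an adapter layer that maps each vendor-specific cloud management platform API onto a standard data model. The sketch below is a generic adapter pattern under that assumption; the class names, the standardized record fields and the vendor stub are all hypothetical.

    # Illustrative adapter pattern for a unified calling interface; names are assumptions.
    from abc import ABC, abstractmethod

    class CloudAdapter(ABC):
        """Per-platform adapter mapping a vendor API onto a standard data model."""

        @abstractmethod
        def list_nodes(self) -> list:
            """Return nodes as standardized records, e.g.
            {"number": ..., "service_type": ..., "compute_params": ...,
             "resource_utilization": ...}."""

    class VendorAAdapter(CloudAdapter):
        def list_nodes(self) -> list:
            # Translate vendor-specific fields into the standard model (stubbed data).
            return [{"number": "a-1", "service_type": "GPU",
                     "compute_params": {"cores": 16}, "resource_utilization": 0.4}]

    class UnifiedGatewayAPI:
        """Single entry point exposed to the computing network brain and
        operation-layer applications."""

        def __init__(self, adapters):
            self.adapters = adapters

        def list_all_nodes(self) -> list:
            return [node for a in self.adapters for node in a.list_nodes()]

    # Usage: the brain or operation layer sees one interface regardless of vendor.
    print(UnifiedGatewayAPI([VendorAAdapter()]).list_all_nodes())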
The following is an exemplary description of a specific process for scheduling a computing power orchestration in a computing power network based on a computing power service gateway, provided by some embodiments of the present application, in conjunction with fig. 3.
Referring to fig. 3, fig. 3 is a flowchart of a method for scheduling computing power in a computing power network based on a computing power service gateway according to some embodiments of the present application.
The implementation of the above is exemplarily described below.
S310, the computing network operation platform 100 acquires the computing power service order of the computing power user, generates a computing power service request and sends the computing power service request to the computing network brain 110.
S320, the computing network brain 110 calculates computing power resources based on the computing power service request, performs orchestration scheduling on the computing power resources to generate a primary orchestration scheduling packet, and sends the primary orchestration scheduling packet together with the algorithm policy to the target computing power service gateway.
S330, the target computing power service gateway manages the target computing power service node, and performs computing network resource and service orchestration scheduling for the target computing power service node to generate a secondary orchestration scheduling packet.
S340, the target computing power service gateway sends the secondary orchestration scheduling packet to the target computing power service node.
S350, after the target computing power service node completes the computing power service corresponding to the computing power service request, the target computing power service gateway feeds back the computing power service result to the computing network operation platform 100.
It should be noted that, specific implementation details of S310 to S350 may refer to the method embodiments provided above, and detailed descriptions are omitted here for avoiding repetition.
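For readers who prefer pseudocode to the flowchart, the sketch below strings steps S310 to S350 together. Every object and method here (brain.schedule, gateway.build_secondary_package, and so on) is a stand-in stub for the components described above, not an API defined by this application.

    # High-level sketch of S310-S350; every call is a hypothetical stub.
    def s310_submit(platform, order):
        # Computing network operation platform: turn a user order into a request.
        return platform.build_request(order)

    def s320_orchestrate(brain, request):
        # Computing network brain: compute resources, build the primary package.
        primary = brain.schedule(request)            # node, gateway, path, policy
        return primary, brain.algorithm_policy(request)

    def s330_refine(gateway, primary, algorithm_policy):
        # Target gateway: orchestrate network resources and services for the node.
        return gateway.build_secondary_package(primary, algorithm_policy)

    def s340_dispatch(gateway, secondary):
        gateway.send_to_node(secondary)

    def s350_report(gateway, platform):
        result = gateway.collect_result()            # may include charging data
        platform.receive_result(result)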
Referring to fig. 4, fig. 4 is a block diagram illustrating an apparatus for scheduling computing power in a computing power network based on a computing power service gateway according to some embodiments of the present application. It should be understood that the apparatus for scheduling the calculation power arrangement in the calculation power network based on the calculation power service gateway corresponds to the above method embodiment, and can perform the steps related to the above method embodiment, and specific functions of the apparatus for scheduling the calculation power arrangement in the calculation power network based on the calculation power service gateway may be referred to the above description, and detailed descriptions thereof are omitted herein as appropriate to avoid repetition.
The apparatus for computing power orchestration scheduling in a computing power network based on a computing power service gateway of fig. 4 includes at least one software functional module that can be stored in a memory in the form of software or firmware, or be solidified in the apparatus. The apparatus comprises: an obtaining module 410, configured to receive a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user, wherein the primary orchestration scheduling packet is generated by the computing network brain based on the computing power service request and comprises: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy; and a generating module 420, configured to process the primary orchestration scheduling packet by using the algorithm policy to generate a secondary orchestration scheduling packet.
In some embodiments of the application, the computing power service policy comprises: the computing power service type, computing power service model, computing power service scale, computing power service quality level requirement and computing power related parameters corresponding to the computing power service request.
In some embodiments of the application, the generating module 420 is configured to select, from the computing power nodes governed by the target computing power service gateway, a target computing power service node that conforms to the computing power service policy based on the algorithm policy, and to perform service orchestration scheduling on the target computing power service node to generate the secondary orchestration scheduling packet.
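A simple reading of this selection step is a filter over the governed computing power nodes followed by a tie-breaking rule. The sketch below assumes hypothetical node and policy fields (service_type, qos_level, free_capacity, utilization); the actual algorithm policy may weigh quite different criteria.

    # Sketch of policy-conformant node selection; matching rules are assumptions.
    from typing import Optional

    def select_target_node(nodes: list, policy: dict) -> Optional[dict]:
        def matches(node: dict) -> bool:
            return (node["service_type"] == policy["service_type"]
                    and node["qos_level"] >= policy["qos_level_requirement"]
                    and node["free_capacity"] >= policy["service_scale"])

        candidates = [n for n in nodes if matches(n)]
        # Prefer the least-loaded conforming node; None if nothing qualifies.
        return min(candidates, key=lambda n: n["utilization"], default=None)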
In some embodiments of the application, the secondary orchestration scheduling packet comprises: the computing power service ID set, computing power service model set, computing power service quality grade code, computing power service running image, computing power service environment parameters, computing power value, memory amount, network performance parameters, computing power service flow, computing power service result return interface and computing power service charging data return interface of the target computing power service node.
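For illustration, the fields listed above can be gathered into a single record roughly as follows; the field names are paraphrased assumptions, not the packet layout mandated by the present application.

    # One possible in-memory shape of the secondary orchestration scheduling packet.
    from dataclasses import dataclass

    @dataclass
    class SecondarySchedulingPacket:
        service_ids: list              # computing power service ID set
        service_models: list           # computing power service model set
        qos_level_code: str            # computing power service quality grade code
        runtime_image: str             # computing power service running image
        env_params: dict               # computing power service environment parameters
        compute_value: float           # computing power value
        memory_amount_gb: int          # memory amount
        network_params: dict           # network performance parameters
        service_flow: list             # ordered computing power service flow
        result_return_endpoint: str    # computing power service result return interface
        charging_return_endpoint: str  # charging data return interface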
In some embodiments of the present application, in addition to the generating module 420, the apparatus for computing power orchestration scheduling in a computing power network based on a computing power service gateway further comprises: a feedback module (not shown in the figure), configured to obtain the computing power service result of the target computing power service node completing the computing power service request, and to send the computing power service result to the computing network operation platform.
In some embodiments of the present application, the apparatus for computing power orchestration scheduling in a computing power network based on a computing power service gateway further comprises: a registration module (not shown in the figure), configured to: receive a registration request of a registering computing power node, and add the registering computing power node to the computing power resources managed and controlled by the target computing power service gateway; and transmit relevant information of the registering computing power node to the computing network brain, wherein the relevant information comprises: the number of the registering computing power node, the computing power service type, the computing power parameters and resource utilization information.
In some embodiments of the present application, each of the plurality of computing power service gateways manages a plurality of computing power nodes, each computing power service gateway belongs to a respective computing power availability zone, and the respective computing power availability zones are partitioned based on attributes of the computing power resources in the computing power network, wherein the attributes comprise: at least one of a regional attribute, a computing power resource load attribute, a computing power response time attribute, and a computing power resource utilization attribute.
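The partitioning itself can be imagined as bucketing nodes by those attributes; the sketch below uses invented bucketing thresholds (load above 0.7, response time under 20 ms) purely to show the shape of such a partition.

    # Toy partition of computing power nodes into availability zones; thresholds are assumptions.
    def zone_key(node: dict) -> tuple:
        region = node.get("region", "default")
        load_band = "high-load" if node.get("load", 0.0) > 0.7 else "low-load"
        latency_band = "near" if node.get("response_ms", 0) < 20 else "far"
        return (region, load_band, latency_band)

    def partition_into_zones(nodes: list) -> dict:
        zones = {}
        for node in nodes:
            zones.setdefault(zone_key(node), []).append(node)
        return zones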
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding procedure in the foregoing method for the specific working procedure of the apparatus described above, and this will not be repeated here.
Some embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the operations of the method according to any of the embodiments described above.
Some embodiments of the present application also provide a computer program product, the computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the operations of the method according to any of the embodiments described above.
As shown in fig. 5, some embodiments of the present application provide an electronic device 500, the electronic device 500 comprising: a memory 510, a processor 520, and a computer program stored on the memory 510 and executable on the processor 520, wherein the processor 520, when reading the program from the memory 510 through the bus 530 and running it, can implement the method of any of the embodiments described above.
Processor 520 may process digital signals and may include various computing architectures, such as a complex instruction set computer architecture, a reduced instruction set computer architecture, or an architecture implementing a combination of instruction sets. In some examples, processor 520 may be a microprocessor.
Memory 510 may be used for storing instructions to be executed by processor 520 or data related to execution of the instructions. Such instructions and/or data may include code to implement some or all of the functions of one or more of the modules described in embodiments of the present application. The processor 520 of the disclosed embodiments may be configured to execute instructions in the memory 510 to implement the methods shown above. Memory 510 includes dynamic random access memory, static random access memory, flash memory, optical memory, or other memory known to those skilled in the art.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. A computing power orchestration scheduling method in a computing power network based on a computing power service gateway, characterized in that the method is applied to a computing power network, the computing power network comprising: a computing network operation platform, a computing network brain and a plurality of computing power service gateways, wherein computing power network resources are divided into a plurality of computing power availability zones based on attributes of computing power resources, the computing power service gateway of each computing power availability zone manages and controls the computing power nodes of the corresponding zone, and the computing network operation platform and the computing network brain call the computing power service gateways through a unified calling interface; wherein,
the computing network operation platform receives a computing power service order of a user and sends a computing power service request corresponding to the computing power service order to the computing network brain;
the computing network brain calculates an optimal target computing power service node and a target computing power service gateway to which the optimal target computing power service node belongs, formulates an optimal network path based on the location, time and computing power service policy of the computing power service request, and performs computing power service orchestration to generate a primary orchestration scheduling packet, wherein the primary orchestration scheduling packet comprises: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy;
the target computing power service gateway receives the primary orchestration scheduling packet and an algorithm policy sent by the computing network brain, processes the primary orchestration scheduling packet by using the algorithm policy, configures, optimizes and performs service scheduling on the computing network resources of the target computing power service node, and generates a secondary orchestration scheduling packet containing configuration information related to the target computing power service node, wherein the secondary orchestration scheduling packet is used for configuring the target computing power service node so as to serve the computing power service request.
2. The method of claim 1, wherein the computing power network is built based on cloud-native distributed container clusters and a micro-service technology architecture; and the computing network operation platform, the computing network brain and the plurality of computing power service gateways are interconnected and communicate through an all-optical network.
3. The method of claim 1 or 2, wherein each of the plurality of computing power service gateways governs a plurality of computing power nodes, each computing power service gateway belongs to a respective computing power availability zone, and the respective computing power availability zones are partitioned based on attributes of the computing power resources in the computing power network, wherein the attributes comprise: at least one of a regional attribute, a computing power resource load attribute, a computing power response time attribute, and a computing power resource utilization attribute.
4. The method of claim 1 or 2, wherein the computing network brain generates a primary orchestration scheduling packet based on the computing power service request, comprising:
the computing network brain selects the target computing power service node corresponding to the computing power service policy, and performs path planning based on the network bandwidth occupation state to obtain the computing power service path;
the method further comprises:
the computing network brain sends a computing power service scheduling request carrying the primary orchestration scheduling packet and the algorithm policy to the target computing power service gateway.
5. The method of claim 4, wherein configuring, optimizing and performing service scheduling on the computing network resources of the target computing power service node to generate a secondary orchestration scheduling packet containing configuration information related to the target computing power service node comprises:
the target computing power service gateway performs computing network resource orchestration scheduling on the target computing power service node by using the algorithm policy to obtain the items of configuration information in the secondary orchestration scheduling packet that match the computing power service policy, wherein the items of configuration information comprise: computing power parameters, core count, memory parameters, database information, algorithm information and middleware information.
6. The method of claim 1 or 2, wherein after generating the secondary orchestration schedule package, the method further comprises:
the target computing power service gateway sends the secondary scheduling packet to the target computing power service node;
the target computing power service gateway obtains the computing power service result of the target computing power service node completing the computing power service request;
and the target computing power service gateway sends the computing power service result to the computing network operation platform.
7. A computing power network for performing the method of any one of claims 1-6, the computing power network comprising:
a plurality of computing power availability zones obtained by dividing computing power network resources based on attributes of the computing power resources, wherein each of the plurality of computing power availability zones comprises a computing power service gateway and the computing power nodes of the corresponding zone, the computing power service gateway of each computing power availability zone manages and controls the computing power nodes of the corresponding zone, and the computing network operation platform and the computing network brain call the computing power service gateways through a unified calling interface;
the computing power service gateway is used for receiving a registration request for registering computing power nodes and adding the registering computing power nodes into resources managed by the computing power service gateway;
the computing power service gateway is configured to send relevant information of the registering computing power node to the computing network brain so that the computing network brain stores the relevant information, wherein the relevant information comprises: the number of the registering computing power node, the computing power service type, the computing power parameters and resource utilization information.
8. The computing power network of claim 7, wherein each computing power availability zone is connected to the computing network brain, and the computing network brain is connected to a computing network operation platform, wherein the computing network operation platform and the computing network brain call the computing power service gateway through a unified calling interface; the computing power service gateway is configured to acquire the computing power service conditions of the computing power nodes in its zone, or to acquire the computing power service conditions of the computing power nodes in its zone through a cloud management platform.
9. A computing power orchestration scheduling method in a computing power network based on a computing power service gateway, characterized in that the method is applied to a target computing power service gateway, wherein a plurality of computing power service gateways are centrally managed by the computing network brain, the computing power service gateways are determined by dividing computing power resources, each computing power service gateway manages a plurality of computing power nodes in the computing power resources, the target computing power service gateway is one of the plurality of computing power service gateways, and all of the computing power service gateways are connected to the computing network brain in the computing power network, the method comprising:
receiving a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user, wherein the primary orchestration scheduling packet is generated by the computing network brain by calculating an optimal target computing power service node and the target computing power service gateway to which the optimal target computing power service node belongs, formulating an optimal network path based on the location, time and computing power service policy of the computing power service request, and performing computing power service orchestration, and the primary orchestration scheduling packet comprises: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy;
and processing the primary orchestration scheduling packet by using the algorithm policy, configuring, optimizing and performing service scheduling on the computing network resources of the target computing power service node, and generating a secondary orchestration scheduling packet containing configuration information related to the target computing power service node, wherein the secondary orchestration scheduling packet is used for configuring the target computing power service node so as to serve the computing power service request.
10. A computing power orchestration scheduling apparatus in a computing power network based on a computing power service gateway, characterized in that the apparatus is applied to a target computing power service gateway, wherein the target computing power service gateway is one of a plurality of computing power service gateways, the plurality of computing power service gateways are all connected to the computing network brain in the computing power network, computing power network resources are divided into a plurality of computing power availability zones based on attributes of computing power resources, the computing power service gateway of each computing power availability zone manages and controls the computing power nodes of the corresponding zone, and the computing network operation platform and the computing network brain call the computing power service gateways through a unified calling interface, the apparatus comprising:
an obtaining module, configured to receive a primary orchestration scheduling packet and an algorithm policy, sent by the computing network brain, related to a computing power service request of a user, wherein the primary orchestration scheduling packet is generated by the computing network brain by calculating an optimal target computing power service node and the target computing power service gateway to which the optimal target computing power service node belongs, formulating an optimal network path based on the location, time and computing power service policy of the computing power service request, and performing computing power service orchestration, and the primary orchestration scheduling packet comprises: a target computing power service node determined according to the computing power service request, the target computing power service gateway to which the target computing power service node belongs, a computing power service path and a computing power service policy;
and a generating module, configured to process the primary orchestration scheduling packet by using the algorithm policy, configure, optimize and perform service scheduling on the computing network resources of the target computing power service node, and generate a secondary orchestration scheduling packet containing configuration information related to the target computing power service node, wherein the secondary orchestration scheduling packet is used for configuring the target computing power service node so as to serve the computing power service request.