CN114095579B - Network system for computing power processing, service processing method and equipment - Google Patents


Info

Publication number
CN114095579B
Authority
CN
China
Prior art keywords
computing power
computing
node
service
resource
Prior art date
Legal status
Active
Application number
CN202010771579.7A
Other languages
Chinese (zh)
Other versions
CN114095579A (en)
Inventor
姚惠娟
耿亮
杜宗鹏
付月霞
刘鹏
张晓秋
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202010771579.7A
Priority to PCT/CN2021/110326 (published as WO2022028418A1)
Publication of CN114095579A
Application granted
Publication of CN114095579B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1008Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Abstract

The network system for computing power processing comprises a first processing layer, a second processing layer, a third processing layer, a fourth processing layer and a fifth processing layer. The embodiment of the invention provides a new network system architecture for computing power processing that achieves close coordination between the network and computing, so that ubiquitous computing resources are interconnected over ubiquitous connectivity, cloud, edge and network work together efficiently, the utilization efficiency of computing resources is improved, and a consistent experience can be provided for users. In addition, through the newly added functional modules and interface flows defined in the embodiment of the invention, the network management node can make the underlying heterogeneous computing power resources perceivable, manageable and controllable, which facilitates joint scheduling of computing and network resources and improves resource utilization in the operator network.

Description

Network system for computing power processing, service processing method and equipment
Technical Field
The present invention relates to the field of mobile communications technologies, and in particular to a network system for computing power processing, a service processing method and a device. The network system for computing power processing is also sometimes called a computing power aware network, a computing power network, a computing-network integrated network, or a new type of network converging computing and networking.
Background
Under the development trend of cloud computing and edge computing, computing power of many different scales will be distributed at different distances from users and will provide various personalized services to them through the global network. From billions of intelligent terminals, to billions of home gateways worldwide, to thousands of edge clouds with computing capability brought by future edge computing in each city, to the large cloud data centers (DC) in each country, massive ubiquitous computing power is being formed and connected to the Internet from everywhere, creating a trend of deep convergence between computing and the network.
When computing resources are integrated into every corner of the network, each network node can become a resource provider, and a user request can be satisfied by invoking the nearest node resources rather than being limited to a specific node, avoiding waste of connections and network scheduling resources. Conventional networks, however, merely provide a pipe for data communication, are constrained by fixed network addressing mechanisms, and often cannot meet increasingly demanding quality of experience (QoE) requirements. In addition, with the development of micro-services, the traditional client-server mode is being deconstructed: the functional components into which a server-side application is decomposed are distributed on a cloud platform and uniformly scheduled by an API gateway, and can be dynamically instantiated on demand. The service logic in the server is thereby shifted towards the client side, and the client only needs to care about the computing function, not about computing resources such as servers, virtual machines and containers, in order to realize the service function.
A new-generation network architecture oriented to the future network needs to jointly consider the converged evolution of network transmission resources and computing resources (also called computing power resources), so as to achieve global optimization of the network under a ubiquitous connection and computing architecture, flexible scheduling of computing power, and reasonable distribution of services. Currently, it is difficult for conventional network architectures to achieve such cooperation between computing power resources and network transmission resources.
Disclosure of Invention
At least one embodiment of the invention provides a network system for computing power processing, a service processing method and equipment, which can realize the cooperative work of computing power resources and network transmission resources and improve the utilization efficiency of computing resources.
According to one aspect of the present invention, at least one embodiment provides a network system for computing power processing, including a first processing layer, a second processing layer and a third processing layer; wherein,
the third processing layer is configured to perform sensing, measurement and OAM management on computing power nodes, and to configure computing power resources based on a unified metrology system;
the first processing layer is configured to carry computing capabilities and applications of computing power resources, obtain a service request of a service or an application, determine target application information and computing power request parameters corresponding to the service request, and send the service request carrying the target application information and the computing power request parameters to the second processing layer; and to receive, from the second processing layer, a calculation result returned by a computing power node for the service request;
the second processing layer is configured to schedule the service request to a computing power node according to the computing power request parameters, based on the network transmission resources and on computing power resources measured under the unified metrology system; and to receive the calculation result generated by the computing power node for the service request and send it to the first processing layer.
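As an illustration of the interaction just described, the following Python sketch models how a service request carrying target application information and computing power request parameters might flow from the first processing layer through the second processing layer to a computing power node. All class and field names are hypothetical and are not defined by the invention; they only mirror the roles of the layers described in this embodiment.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ComputingPowerRequestParams:        # hypothetical SLA-style parameters
    max_latency_ms: float
    min_cpu_cores: int
    min_gpu_mem_gb: float = 0.0

@dataclass
class ServiceRequest:
    target_application: str               # target application information (e.g. an app/service ID)
    params: ComputingPowerRequestParams   # computing power request parameters
    payload: bytes = b""

class SecondProcessingLayer:
    """Schedules a request to a computing power node using network and computing state."""
    def __init__(self, nodes: Dict[str, Callable[[ServiceRequest], Any]]):
        self.nodes = nodes                # node id -> callable standing in for a computing power node

    def schedule(self, req: ServiceRequest) -> Any:
        node_id = self._select_node(req)  # selection under a unified metrology system (abstracted away)
        return self.nodes[node_id](req)   # calculation result returned towards the first processing layer

    def _select_node(self, req: ServiceRequest) -> str:
        return next(iter(self.nodes))     # placeholder policy: real logic weighs network and compute state

class FirstProcessingLayer:
    """Carries applications, builds the service request and receives the calculation result."""
    def __init__(self, routing: SecondProcessingLayer):
        self.routing = routing

    def submit(self, app_id: str, params: ComputingPowerRequestParams, payload: bytes) -> Any:
        req = ServiceRequest(target_application=app_id, params=params, payload=payload)
        return self.routing.schedule(req)

# Minimal usage under the same assumptions
routing = SecondProcessingLayer({"node-1": lambda r: f"processed {r.target_application}"})
print(FirstProcessingLayer(routing).submit("app-42", ComputingPowerRequestParams(10.0, 4), b"data"))
```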
Further, according to at least one embodiment of the present invention, the network system further includes a fourth processing layer and a fifth processing layer; wherein,
the fourth processing layer is used for providing computing power resources through a plurality of computing power nodes;
the fifth processing layer is configured to provide network transmission resources for information transmission.
Furthermore, in accordance with at least one embodiment of the present invention, the second processing layer includes a first processing module and a second processing module; wherein,
the first processing module is configured to generate a computing power topology based on the service capability and computing power resource state advertised by computing power nodes, and to generate a computing-power-aware routing table based on the computing power topology;
and the second processing module is used for forwarding the service message based on the routing table.
Furthermore, according to at least one embodiment of the present invention, the first processing module includes:
The first sub-module is used for sending initial configuration information of the computing power node to a third processing layer of the network management node, and configuring computing power resources based on the unified metrology system according to the configuration of the network management node, wherein the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
a second sub-module, configured to notify other nodes in the network of currently available service capability and computing resource status of the computing node, where the service capability is a service capability supported by the computing node, and the computing resource status includes computing capability and deployment information of the computing resource;
the third sub-module is configured to generate a computing power topology based on the received service capability and computing power resource state of the computing power nodes, and to generate a computing-power-aware routing table based on the computing power topology;
and the fourth sub-module is used for scheduling the service request to a target power node according to the power resource state, the network transmission resource and the power request parameter corresponding to the service request.
Furthermore, in accordance with at least one embodiment of the present invention, the computing power of the computing power resource includes at least one of:
A number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form.
The deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
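The parameters listed above could be grouped, for example, into a single structure that a computing power node advertises. This is only an illustrative sketch; the field names and types are assumptions, not terms defined by the claims.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DeploymentForm(Enum):
    PHYSICAL = "physical"
    VIRTUAL = "virtual"

@dataclass
class ComputingCapability:
    service_connections: Optional[int] = None  # number of service connections
    cpu_info: Optional[str] = None             # CPU information
    gpu_info: Optional[str] = None             # GPU information
    memory_bytes: Optional[int] = None         # memory information
    storage_bytes: Optional[int] = None        # storage capacity
    storage_form: Optional[str] = None         # storage form (e.g. block, object)

@dataclass
class DeploymentInfo:
    form: DeploymentForm                       # deployment form of the computing power resource
    location: str                              # deployment location, e.g. an IP address

@dataclass
class ComputingPowerResourceState:
    capability: ComputingCapability
    deployment: DeploymentInfo
```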
Furthermore, according to at least one embodiment of the present invention, the second processing module includes:
a fifth sub-module, configured to add computing power demand information of a service to a service packet, where the computing power demand information includes target application information and computing power request parameters;
a sixth sub-module, configured to generate OAM information and data through a predefined OAM mechanism and add the OAM information and data to a service packet;
a seventh sub-module, configured to generate an identification field of routing information for the computing-power-aware routing table;
and an eighth sub-module, configured to forward the service message based on the computing-power-aware routing table.
Furthermore, in accordance with at least one embodiment of the present invention, the third processing layer includes:
a ninth sub-module, configured to form a computing power capability template by performing unified abstract description on computing power resources of different computing types based on a unified metrology system;
a tenth sub-module, configured to register, update, and cancel the computing power node based on the computing power capability template, and configure computing power resources based on the unified metrology system; managing a route advertisement policy;
an eleventh sub-module, configured to generate a computing power service contract based on the computing power capability template and to generate a corresponding charging policy;
and a twelfth sub-module, configured to monitor the computing power resources of the fourth processing layer and to perform converged charging management of computing power resources and network resources for services according to the charging policy.
Furthermore, in accordance with at least one embodiment of the present invention, the third processing layer further comprises:
and the thirteenth sub-module is used for carrying out security authentication management on the computing power node.
According to another aspect of the present invention, at least one embodiment provides a service processing method applied to a computing node, including:
advertising, to other nodes in the network, the currently available service capability and computing power resource state of the computing power node, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state includes the computing capability and deployment information of the computing power resource;
receiving a service request from a computing power application node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to the computing power application node.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
sending a registration request message of the computing power node to a network management node, wherein the registration request message carries initial configuration information of the computing power node, and the initial configuration information includes at least one of the following: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment location and the computing power resource size;
and configuring, according to the configuration of the network management node, computing power resources based on the unified metrology system.
Furthermore, in accordance with at least one embodiment of the present invention, before sending the registration request message, the method further comprises:
receiving a computing power capability template sent by the network management node, wherein the computing power capability template is formed by performing a unified abstract description of computing power resources of different computing types based on a unified metrology system;
and generating initial configuration information of the computing power node based on the computing power capability template.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
when the service capability and/or computing power resource state of the computing power node changes, sending a state update message to the network management node, wherein the state update message carries change indication information of the service capability and/or computing power resource state, and advertising the changed service capability and computing power resource state of the computing power node to other nodes in the network;
and when the computing power node exits the network system for computing power processing, sending a deregistration request message to the network management node and advertising an indication of exiting the network to other nodes in the network.
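The node lifecycle described above (registration, state update, deregistration) could be sketched as the following message set sent by a computing power node to the network management node. The message types, field names and transport abstraction are assumptions for illustration; the embodiments define the flows, not a concrete API.

```python
from dataclasses import asdict, dataclass
from enum import Enum
from typing import Any, Callable, Dict

class MgmtMessageType(Enum):
    REGISTER = "register"
    STATE_UPDATE = "state_update"
    DEREGISTER = "deregister"

@dataclass
class InitialConfiguration:
    node_id: str              # computing power node identification
    compute_type: str         # computing power type, e.g. "CPU", "GPU", "FPGA"
    deployment_form: str      # deployment form, e.g. "physical" or "virtual"
    deployment_location: str  # deployment location, e.g. an IP address
    resource_size: int        # computing power resource size (in some agreed metric)

class ComputingPowerNodeClient:
    """Node-side lifecycle messages towards the network management node (transport abstracted)."""
    def __init__(self, send: Callable[[Dict[str, Any]], None]):
        self.send = send

    def register(self, cfg: InitialConfiguration) -> None:
        self.send({"type": MgmtMessageType.REGISTER.value, "config": asdict(cfg)})

    def update_state(self, node_id: str, changes: Dict[str, Any]) -> None:
        # changes carries the changed service capability and/or computing power resource state
        self.send({"type": MgmtMessageType.STATE_UPDATE.value, "node_id": node_id, "changes": changes})

    def deregister(self, node_id: str) -> None:
        self.send({"type": MgmtMessageType.DEREGISTER.value, "node_id": node_id})
```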
Furthermore, in accordance with at least one embodiment of the present invention, the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form.
The deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
According to another aspect of the present invention, at least one embodiment provides a service processing method, applied to a network management node, including:
receiving a registration request message sent by a computing node, wherein the registration request message carries initial configuration information of the computing node, and the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
and when the computing power node is allowed to register, sending a configuration message to the computing power node, and configuring computing power resources based on a unified metrology system.
Furthermore, in accordance with at least one embodiment of the present invention, prior to receiving the initial-configuration information, the method further comprises:
forming a computing power capability template by performing a unified abstract description of computing power resources of different computing types, based on a unified metrology system;
and sending the computing power capability template to the computing power node.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
receiving a state update message sent by the computing power node, wherein the state update message carries change indication information of the service capability and/or computing power resource state of the computing power node, and updating the service capability and/or computing power resource state of the computing power node according to the state update message; or,
receiving a deregistration request message sent by the computing power node, and deleting the computing power node from the network system for computing power processing according to the deregistration request message.
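Mirroring the node-side sketch above, the network management node could maintain a registry keyed by node identification and apply these messages as follows. This is again a hypothetical sketch under the same assumed message format, not the claimed implementation.

```python
from typing import Any, Dict

class NetworkManagementNode:
    """Management-side registry for computing power nodes (illustrative only)."""
    def __init__(self) -> None:
        self.registry: Dict[str, Dict[str, Any]] = {}   # node_id -> configuration and latest state

    def handle(self, msg: Dict[str, Any]) -> None:
        if msg["type"] == "register":
            cfg = msg["config"]
            self.registry[cfg["node_id"]] = {"config": cfg, "state": {}}
            # a real implementation would also return a configuration message to the node
        elif msg["type"] == "state_update":
            self.registry[msg["node_id"]]["state"].update(msg["changes"])
        elif msg["type"] == "deregister":
            self.registry.pop(msg["node_id"], None)      # delete the node from the system
```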
Furthermore, in accordance with at least one embodiment of the present invention, the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form.
The deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
generating a computing power service contract based on the computing power capability template, and generating a corresponding charging policy;
and monitoring the computing power resources of the fourth processing layer, and performing converged charging management of computing power resources and network resources for services according to the charging policy.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
and carrying out security authentication management on the computing power node.
According to another aspect of the present invention, at least one embodiment provides a service processing method of a routing node, including:
generating a computing power topology based on the service capability and computing power resource state advertised by computing power nodes, and generating a computing-power-aware routing table based on the computing power topology, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state includes the computing capability and deployment information of the computing power resource; and
receiving a service request of a service or an application, and scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources and the computing power request parameters corresponding to the service request.
Furthermore, according to at least one embodiment of the present invention, there is also provided:
and forwarding the service message based on the routing table.
Furthermore, according to at least one embodiment of the present invention, forwarding the service packet based on the routing table includes at least one of the following:
adding computing power demand information of the service to the service packet, wherein the computing power demand information includes target application information and computing power request parameters;
generating OAM information and data through a unified, built-in and user-definable OAM mechanism and adding them to the service packet;
generating an identification field of routing information for the computing-power-aware routing table.
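To illustrate the routing-node behaviour described here, the sketch below builds computing-power-aware routing entries from advertised states and selects a target computing power node by jointly checking the computing power resource state, the network transmission metrics and the request parameters. The field names, metrics and the latency-first selection policy are assumptions, not the patented algorithm.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class AdvertisedState:
    service: str          # service capability supported by the computing power node
    free_cpu_cores: int   # part of the computing power resource state (assumed field)
    location: str         # deployment location, e.g. an IP address

@dataclass
class NetworkMetric:
    latency_ms: float     # network transmission resource state towards the node
    bandwidth_mbps: float

class ComputingPowerRoutingNode:
    def __init__(self) -> None:
        self.routing_table: Dict[str, Dict[str, AdvertisedState]] = {}  # service -> node_id -> state

    def on_advertisement(self, node_id: str, state: AdvertisedState) -> None:
        # builds the computing-power-aware routing table entries from advertisements
        self.routing_table.setdefault(state.service, {})[node_id] = state

    def select_target(self, service: str, min_cpu: int, max_latency_ms: float,
                      metrics: Dict[str, NetworkMetric]) -> Optional[str]:
        best, best_latency = None, float("inf")
        for node_id, st in self.routing_table.get(service, {}).items():
            m = metrics.get(node_id)
            if m is None or st.free_cpu_cores < min_cpu or m.latency_ms > max_latency_ms:
                continue   # node cannot satisfy the computing power request parameters
            if m.latency_ms < best_latency:
                best, best_latency = node_id, m.latency_ms
        return best        # target computing power node, or None if no node qualifies
```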
According to another aspect of the present invention, at least one embodiment provides a service processing method of a computing power application node, including:
acquiring a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node;
and receiving a calculation result which is sent by the computing power node and generated aiming at the service request.
According to another aspect of the invention, at least one embodiment provides a computing node comprising a transceiver and a processor, wherein,
The processor is configured to notify other nodes in the network of currently available service capabilities and computing resource states of the computing node, where the service capabilities are service capabilities supported by the computing node, and the computing resource states include computing capabilities and deployment information of the computing resource;
the transceiver is configured to receive a service request from a computing power application node, respond to the service request, perform a corresponding calculation, and send a calculation result generated for the service request to the computing power application node.
According to another aspect of the invention, at least one embodiment provides a computing node comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the business processing method as described above.
According to another aspect of the present invention, at least one embodiment provides a network management node comprising a transceiver and a processor, wherein,
the transceiver is configured to receive a registration request message sent by the power node, where the registration request message carries initial configuration information of the power node, and the initial configuration information includes at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
the processor is configured to send a configuration message to the computing power node when the computing power node is allowed to register, and to configure computing power resources based on a unified metrology system.
According to another aspect of the present invention, at least one embodiment provides a network management node comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the business processing method as described above.
According to another aspect of the present invention, at least one embodiment provides a routing node comprising a transceiver and a processor, wherein,
the processor is configured to generate a computing power topology based on the service capability and computing power resource state advertised by computing power nodes, and to generate a computing-power-aware routing table based on the computing power topology, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state includes the computing capability and deployment information of the computing power resource; and
the transceiver is configured to receive a service request of a service or an application, and schedule the service request to a target power node according to a power resource state, a network transmission resource, and a power request parameter corresponding to the service request.
According to another aspect of the present invention, at least one embodiment provides a routing node comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the business processing method as described above.
According to another aspect of the present invention, at least one embodiment provides a computing power application node comprising a transceiver and a processor, wherein,
the processor is configured to acquire a service request of a service or an application, determine target application information and computing power request parameters corresponding to the service request, and send the service request carrying the target application information and the computing power request parameters to a computing power routing node;
the transceiver is configured to receive a calculation result generated for the service request and sent by the computing node.
According to another aspect of the invention, at least one embodiment provides a computing force application node comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the business processing method as described above.
According to another aspect of the invention, at least one embodiment provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps of the method as described above.
Compared with the prior art, the service processing method and device provided by the embodiments of the present invention provide a new network system architecture for computing power processing and achieve close coordination between the network and computing, so that ubiquitous computing resources are interconnected over ubiquitous connectivity, cloud, edge and network work together efficiently, the utilization efficiency of computing resources is improved, and a consistent experience can be provided for users. In addition, through the newly added functional modules and interface flows defined in the embodiments of the present invention, the network management node can make the underlying heterogeneous computing power resources perceivable, manageable and controllable, which facilitates joint scheduling of computing and network resources and improves resource utilization in the operator network.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a functional architecture of a computing power processing network system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an architecture of a computing power management layer according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an I3 interface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an I2 interface according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an I1 interface according to an embodiment of the present invention;
FIG. 6 is a flowchart of a service processing method applied to a computing power node according to an embodiment of the present invention;
fig. 7 is a flowchart of a service processing method applied to a network management node according to an embodiment of the present invention;
fig. 8 is a flowchart of a service processing method applied to a routing node according to an embodiment of the present invention;
fig. 9 is a flowchart of a service processing method applied to a computing power application node according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a structure of a computing node according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a network management node according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a routing node according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a computing power application node according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. "and/or" in the specification and claims means at least one of the connected objects.
The following description provides examples and does not limit the scope, applicability, or configuration as set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the spirit and scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in an order different than described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
In order to better understand the related aspects of the embodiments of the present invention, the related concepts related to the embodiments of the present invention are described below.
1) The network system for computing power processing can also be called a name such as a computing power sensing network, a computing power network, a computing network integrated network or a novel network fused with a computing network. The network system of the computing power processing comprises:
a first processing layer (sometimes referred to herein as the computing power application layer),
a second processing layer (sometimes referred to herein as the computing power routing layer),
a third processing layer (sometimes referred to herein as the computing power management layer),
a fourth processing layer (sometimes referred to herein as the computing power resource layer), and
a fifth processing layer (sometimes referred to herein as the network resource layer).
2) A computing power network element node herein refers to a network device with computing capability. Computing power network element nodes may further include computing power routing nodes and computing power nodes (computing power nodes are sometimes also referred to as compute nodes).
3) The computing power routing node is located at the second processing layer of the network system for computing power processing and is a network device that transmits advertisements of computing power resource information within that system.
4) The computing power node, located at the fourth processing layer and/or the fifth processing layer, refers to a device with computing capability that provides computing power resources, i.e. a device that processes computing tasks in the network system for computing power processing, such as a data center server or an all-in-one machine. In addition, the computing power node in the embodiment of the invention may be a computing power network element device, i.e. a network transmission device of the fifth processing layer, such as a router, that can provide both computing power resources and computing power services.
5) The computing power resource state refers to information such as the computing power state and the deployment position of computing power nodes deployed in a computing power processing network system, and the computing power resource state can be indicated through parameters of computing power resources. The parameters of the computing power resource specifically include one or more of parameters such as service connection number, CPU/GPU computing power, deployment form (physical, virtual), deployment location (such as corresponding IP address), storage capacity, storage form and the like. The computing power resource state can also be the computing power abstracted based on the computing power resource, and is used for reflecting the information such as the currently available computing power, distribution position, deployment form and the like of each computing power node in the network system for computing power processing.
6) Network transmission resources refer to the network resources used by the network system for computing power processing to transmit information, including various forwarding devices (such as routers and switches), transmission links, and transmission capabilities (such as bandwidth, delay and delay jitter). The embodiment of the invention can match services with computing power according to the service request and the computing power resource state, and schedule a suitable target computing power node to process the service. Optionally, the embodiment of the invention can also combine the computing power resource state with the network transmission resources to find, for the service request, the target computing power node and the optimal network transmission path (route), thereby achieving better scheduling of service requests and improving the user's experience of the network.
The embodiment of the invention provides a network system for computing power processing, which can also be called a computing power network, a computing-network integrated network or a new type of network converging computing and networking. Referring to FIG. 1, FIG. 1 is a schematic diagram illustrating the functional architecture of a network system for computing power processing according to an embodiment of the invention. The network system for computing power processing comprises a first processing layer, a second processing layer, a third processing layer, a fourth processing layer and a fifth processing layer; wherein:
the fourth processing layer is configured to provide computing power resources through a plurality of computing power nodes in a computing power processing network system, so that computing power resources oriented to different applications can be implemented, and corresponding computation is performed in response to a service request. In the embodiment of the invention, the fourth processing layer is a ubiquitous heterogeneous resource layer and a fifth processing layer of computing resources are provided at each corner in the network by combining various computing forces from a single-core CPU to a multi-core CPU, to a CPU+GPU+FPGA and the like in order to meet the computing requirements of the diversity of the edge computing field and facing different applications.
The fifth processing layer is configured to provide a network transmission resource for information transmission in the network system with computing power processing, where the network transmission resource is an infrastructure of network transmission, and the fifth processing layer may further report a status of the network transmission resource to the second processing layer. It can be seen that the fifth processing layer is the network infrastructure providing information transport, including access networks, metropolitan area networks, and backbones.
The third processing layer is configured to perform sensing, measurement and OAM management on computing power nodes, and to configure computing power resources based on a unified metrology system. For example, it generates computing power resource discovery information for computing power nodes, performs OAM management on the computing power nodes to obtain the computing power resource state, and reports the computing power resource discovery information and the computing power resource state to the second processing layer, thereby realizing perception, measurement and OAM management of computing power resources so that the network system for computing power processing can perceive, measure, manage and control them.
The first processing layer is used for bearing computing capacity and application of computing power resources, acquiring service requests of businesses or applications, determining target application information and computing power request parameters corresponding to the service requests, and sending the service requests carrying the target application information and computing power request parameters to the second processing layer; and receiving a calculation result returned by the computing power node for the service request from the second processing layer, and realizing the calculation service of business or application.
It can be seen that the first processing layer carries the various capabilities and applications of ubiquitous computing and passes parameters such as the user's service-level agreement (SLA) request for a service (including the computing power request) to the second processing layer, for example by adding the identification (ID) information of the service or application and the computing power request parameters (such as SLA request information) to the IPv6 extension header of the service packet.
The second processing layer is configured to schedule the service request to a computing power node according to the computing power request parameters, based on the network transmission resources and on the computing power resources measured under the unified metrology system, and to receive the calculation result generated by the computing power node for the service request and send it to the first processing layer. That is, the second processing layer flexibly schedules services to different computing power resource nodes as needed, based on the abstracted computing power resource discovery information and taking both network conditions and computing resource conditions into consideration.
In the embodiment of the invention, the fourth processing layer and the fifth processing layer are infrastructure layers of a network system for computing power processing, and the third processing layer and the second processing layer are two core functional modules for realizing a computing power perception functional system. In addition, the architecture of the network system for computing power processing is based on the defined functional modules such as a fourth processing layer, a third processing layer, a second processing layer and the like, and further defines interfaces among part of the functional modules, specifically as follows:
I1 interface: the interface between the first processing layer and the second processing layer, which supports mapping and negotiation between user demands and computing interconnection resources between the user and the network, so as to realize network programmability and automatic service adaptation.
I2 interface: the interface between the second processing layer and the fourth processing layer, which makes computing power resources perceivable and controllable by the network through sensing computing power resources and services and issuing control information.
I3 interface: the interfaces among the third processing layer, the second processing layer and the fourth processing layer, which complete operation and management functions such as registration, resource reporting, performance monitoring, fault management and charging management of computing power node devices, and realize perception, measurement, management and control of the fourth processing layer by the third processing layer.
Based on the above layer structure and inter-layer interfaces, the network system for computing power processing in the embodiment of the invention can serve as a new network architecture for deep computing-network convergence. Building on ubiquitous network connectivity and highly distributed computing nodes, it constructs a brand-new computing-power-aware network infrastructure through automatic service deployment, optimal routing and load balancing, so that the network is truly everywhere, computing power is everywhere and intelligence is everywhere. In addition, the massive applications, functions and computing resources in the network system for computing power processing can form an open ecosystem in which applications can invoke computing resources in different places on demand and in real time, improving the utilization efficiency of computing resources and ultimately optimizing user experience, computing resource utilization and network efficiency.
As shown in FIG. 1, the second processing layer includes a first processing module on the control plane and a second processing module on the data plane. The first processing module is also sometimes referred to herein as the computing power routing control module, and the second processing module as the computing power routing forwarding module.
The first processing module is configured to generate a computing power topology based on the service capability and computing power resource state advertised by computing power nodes, and to generate a computing-power-aware routing table based on the computing power topology. It can be seen that the first processing module mainly introduces information about computing power nodes into the routing domain to perform computing-power-aware routing control. The second processing module is configured to forward service packets based on the routing table.
The first processing module of the embodiment of the invention generates the computing power topology from the advertised computing power node information, further generates a new computing-power-aware routing table, supports dynamic, on-demand computing power scheduling policies based on service requirements, and thereby realizes computing-power-aware collaborative scheduling of the computing network, i.e. coordinated scheduling of computing power resources and network transmission resources.
More specifically, the first processing module includes four sub-modules: the first sub-module, the second sub-module, the third sub-module and the fourth sub-module. The first sub-module is also sometimes referred to herein as the computing power capability notification sub-module, the second sub-module as the computing power status notification sub-module, the third sub-module as the computing power route generation sub-module, and the fourth sub-module as the computing-network joint scheduling sub-module. Wherein,
The first sub-module is configured to send initial configuration information of the computing power node to a third processing layer of the network management node, and configure computing power resources based on the unified metrology system according to configuration of the network management node, where the initial configuration information includes at least one of the following information: the computing power node identity, the computing power type (e.g., CPU, GPU, FPGA, etc.), the computing power resource deployment modality (e.g., virtual deployment or physical deployment, etc.), the deployment location of the computing power resource, and the size of the computing power resource (which may be a measure of computing power, in particular).
The second sub-module is configured to notify other nodes in the network of currently available service capabilities and computing resource states of the computing node, where the service capabilities are service capabilities supported by the computing node, and the computing resource states include computing capabilities and deployment information of the computing resource. In particular, the second sub-module may use various existing routing advertisement protocols to perform the advertisement, for example, a new field is added in the existing routing protocol to carry the information such as the service capability and the status of the computing resource.
The third sub-module is configured to generate a computing power topology based on the service capability and the computing power resource state of the received computing power node, and generate a routing table for computing power perception based on the computing power topology, so as to support subsequent service forwarding. Here, the computing force topology may include a node topology for representing a topological relation between computing force nodes and a state topology for representing a topological relation between service capabilities and computing force resource states between nodes.
And the fourth sub-module is used for scheduling the service request to a target power node according to the power resource state, the network transmission resource and the power request parameter corresponding to the service request.
Here, the computing capability of the computing power resource includes at least one of: a number of service connections; information of the central processing unit (CPU); information of the graphics processing unit (GPU); memory information; storage capacity and/or storage form. The deployment information of the computing power resource includes at least one of: the deployment form of the computing power resource; the deployment location of the computing power resource (e.g. the corresponding IP address). As another implementation, the computing capability of the computing power resource may also be a general capability parameter abstracted from the above parameters, used to reflect the currently available computing power and the distribution location and form of each computing power node in the network.
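One way to read the "general capability parameter abstracted from the above parameters" is as a normalized score that collapses heterogeneous metrics into one comparable number. The reference values and weights below are purely illustrative assumptions; a real unified metrology system would define them for the whole network.

```python
def general_capability_score(cpu_cores: int, gpu_tflops: float,
                             memory_gb: float, storage_tb: float,
                             weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Collapse heterogeneous computing power metrics into one comparable number (illustrative)."""
    normalized = (
        cpu_cores / 64.0,       # relative to an assumed 64-core reference server
        gpu_tflops / 100.0,     # relative to an assumed 100 TFLOPS reference GPU pool
        memory_gb / 512.0,      # relative to an assumed 512 GB reference
        storage_tb / 10.0,      # relative to an assumed 10 TB reference
    )
    return sum(w * min(v, 1.0) for w, v in zip(weights, normalized))

# Example: a mid-sized edge node
print(round(general_capability_score(cpu_cores=32, gpu_tflops=20, memory_gb=128, storage_tb=2), 3))
```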
The second processing module of the embodiment of the invention supports a flexible and extensible new data plane for supporting an optimal computing power service experience. As shown in FIG. 5, based on the segment routing over IPv6 (SRv6) extension, not only the network path but also the computing power can be programmed. Application awareness of computing power requirements and path OAM management are realized through IP protocol extension or enhancement, building a flexible and extensible new data plane. SRv6 is an extension carried directly in the IPv6 extension header; this extension is called the segment routing header (SRH), and it does not break the standard IP header.
Specifically, the second processing module includes four sub-modules: a fifth sub-module, a sixth sub-module, a seventh sub-module and an eighth sub-module. The fifth sub-module is also sometimes referred to herein as the computing power demand sub-module, the sixth sub-module as the computing power monitoring sub-module, the seventh sub-module as the computing power route identification sub-module, and the eighth sub-module as the computing power route forwarding sub-module.
Wherein,
the fifth sub-module is configured to add the computing power demand information of the service to the service packet, where the computing power demand information includes target application information and computing power request parameters.
The sixth submodule is configured to generate OAM information and data through a predefined OAM mechanism and add the OAM information and the data to a service packet, so that the OAM information and the data can be encapsulated in a user data packet and sent together with the user data packet, thereby implementing real-time detection of computing resources and network resources. The OAM mechanism may be a unified, built-in, user-definable OAM mechanism.
The seventh sub-module is configured to generate an identification field of routing information for the new computing-power-aware routing table. Here, the identification field of the routing information can be used as an identifier for addressing the computing power service, which facilitates addressing of the corresponding computing power service.
And the eighth sub-module is used for forwarding the service message based on the routing table perceived by the computing power.
It can be seen that, on the data plane of the network system for computing power processing in the embodiment of the invention, the network perceives the computing power requirement of the application and performs dynamic computing power matching according to the service request / computing power request (the service ID of the service request and the SLA request of the computing power request may be carried in the IPv6 extension header), combined with information such as the network transmission state and the computing power resource state. In addition, the embodiment of the invention supports path programmability for the computing power service: through templating of computing power resources, computing power can be injected into the network as link-state information, for example into a routing table, so that the data plane can program both network transmission and computing power resources based on SRv6 source routing and realize computing-power-aware routing. The data plane can also monitor the computing power service in real time: through in-band OAM extensions it jointly considers the health of the network and of the computing power, flexibly adjusts the service path according to the perceived network transmission resource / computing power resource state, and realizes on-path monitoring of computing power.
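To make the idea of carrying the service ID and SLA request in an IPv6/SRv6 extension concrete, the sketch below packs and parses a hypothetical TLV. The type code, field sizes and ordering are assumptions for illustration only; the patent does not define a concrete wire format here.

```python
import struct

# Hypothetical TLV for a computing-power-aware IPv6/SRv6 extension (assumed layout)
CP_TLV_TYPE = 0xFA   # assumed experimental type code

def build_compute_tlv(service_id: int, max_latency_ms: int, min_cpu_cores: int) -> bytes:
    value = struct.pack("!IHB", service_id, max_latency_ms, min_cpu_cores)
    return struct.pack("!BB", CP_TLV_TYPE, len(value)) + value

def parse_compute_tlv(tlv: bytes):
    tlv_type, length = struct.unpack_from("!BB", tlv, 0)
    assert tlv_type == CP_TLV_TYPE and length == 7
    return struct.unpack_from("!IHB", tlv, 2)   # (service_id, max_latency_ms, min_cpu_cores)

tlv = build_compute_tlv(service_id=42, max_latency_ms=10, min_cpu_cores=4)
print(parse_compute_tlv(tlv))   # -> (42, 10, 4)
```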
As shown in FIG. 1 and FIG. 2, the third processing layer includes a ninth sub-module, a tenth sub-module, an eleventh sub-module and a twelfth sub-module. The ninth sub-module is also sometimes referred to herein as the computing power modeling sub-module, the tenth sub-module as the computing power management sub-module, the eleventh sub-module as the computing power operation sub-module, and the twelfth sub-module as the computing power OAM sub-module. Wherein,
the ninth sub-module is configured to form a computing power capability template by performing a unified abstract description of computing power resources of different computing types, based on a unified metrology system. The computing power capability template provides standard computing power measurement rules for computing power device management, contracts, charging and OAM.
The tenth sub-module is configured to register, update and cancel the computing power node based on the computing power capability template, and configure computing power resources based on the unified metrology system; and managing the route advertisement policy.
The eleventh sub-module is configured to generate a computing power service contract based on the computing power capability template and to generate a corresponding charging policy.
The twelfth sub-module is configured to monitor the computing power resources of the fourth processing layer (including computing power performance monitoring, fault management and the like) and to perform converged charging management of computing power resources and network resources according to the charging policy.
Optionally, the third processing layer further includes:
the thirteenth sub-module is used for carrying out security authentication management on the computing power node. For example, in the processes of registering, updating, and deregistering the computing node, the computing node is subjected to processes such as identity authentication and/or information transmission encryption. The thirteenth sub-module is sometimes also referred to as a power-save sub-module.
The interfaces I1 to I3 and the like according to the embodiment of the present invention are described below.
I3 interface
First, through the I3 interface the third processing layer realizes operation and management functions such as registration, resource reporting, performance monitoring, fault management and charging management of computing power node devices, and realizes perception, measurement and management of the fourth processing layer by the third processing layer. As shown in FIG. 3, the functions realized by the I3 interface include:
A) Computing power resource registration and update
Resource information such as computing power is reported from the fourth processing layer to the third processing layer according to the capability template, for example by subscription or adaptive reporting.
B) Capability template configuration delivery
The third processing layer prepares a capability template according to user requirements and delivers it to the fourth processing layer.
C) Computing power resource performance monitoring
By extending protocols such as Telemetry, the third processing layer obtains the utilization of each indicator of the fourth processing layer (such as the number of API calls and the CPU utilization).
D) Fault detection and rapid localization
Through detection protocols such as OAM, the third processing layer detects node and link faults and handles them rapidly.
E) Network-computing integration charging channel
The third processing layer realizes the fusion of the computing power resource and the network transmission resource (namely the fusion of the computing network) and the management of multi-dimensional multi-dimension charging by using the AAA protocol/novel charging protocol/blockchain and other technologies.
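As referenced in item a) above, the following is a minimal Python sketch of the resource-reporting direction of the I3 interface: the fourth processing layer periodically builds a telemetry-style report whose fields follow the capability template. The message shape and the collect_metrics callback are assumptions for illustration.

import time

def report_utilization(node_id: str, collect_metrics) -> dict:
    """Build one telemetry-style report keyed to the capability template fields."""
    return {
        "node_id": node_id,
        "timestamp": time.time(),
        "utilization": collect_metrics(),   # e.g. {"cpu": 0.42, "api_calls": 1500}
    }

# Example: a subscribed reporter sampling local metrics once per reporting interval.
sample = report_utilization("cp-node-001", lambda: {"cpu": 0.42, "api_calls": 1500})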
I2 interface
As shown in fig. 4, after the third processing layer performs configuration management on the relevant computing power resources, the second processing layer perceives the computing power resources and services through the I2 interface and issues control information, so as to implement computing power topology control (B), computing power capability notification (D), and computing power performance monitoring (E) for the computing power resources, and policy control (C) for the network, covering the computing power notification policy (a), the computing power charging policy (F), the computing-network scheduling policy, and the like. Such control information may be carried based on extensions of existing network protocols, such as BGP/IGP extensions, or by a newly designed protocol.
I1 interface
After the management and control policy configuration is completed, the service accesses the computing power processing network system through the I1 interface. The present invention supports mapping and negotiation of "user requirements" and "computing interconnection resources" between the user and the network, so as to realize network programmability and automatic service adaptation. As shown in fig. 5, in a specific implementation, in step 51, information such as target application information and computing power request parameters is carried in the header of a service message (such as an IPv6 message), thereby supporting fine-grained service identification and service requirement negotiation and bridging the network and the service. In addition, the computing power requirement extends the traditional service requirement: the traditional service requirement includes bandwidth, delay, jitter, packet loss rate, and the like, and can be further extended with elements including computing power, so that the network can learn the computing power requirement of the user, perform routing and scheduling that integrate the network and the computing power requirement, and improve the network efficiency of the computing power service.
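To make the I1 idea of carrying computing power request parameters in a message header concrete, the sketch below packs a service ID and two request parameters into a TLV that could, for example, ride in an IPv6 extension header. The TLV type value and field layout are assumptions, not a standardized encoding.

import struct

def build_compute_tlv(service_id: int, max_latency_ms: int, min_tflops: int) -> bytes:
    """Layout (assumed): Type(1) | Length(1) | service_id(4) | max_latency_ms(2) | min_tflops(2)."""
    value = struct.pack("!IHH", service_id, max_latency_ms, min_tflops)
    return struct.pack("!BB", 0x41, len(value)) + value   # 0x41 is an arbitrary, assumed TLV type

tlv = build_compute_tlv(service_id=1001, max_latency_ms=10, min_tflops=50)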
Referring to fig. 6, a service processing method provided by an embodiment of the present invention, when applied to a computing node, includes:
Step 61, advertising the currently available service capability and computing power resource state of the computing power node to other nodes in the network, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource.
Here, the other nodes include other computing power nodes and routing nodes in the computing power processing network system.
Step 62, receiving a service request from a computing power application node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to the computing power application node.
Through the above steps, the computing power node can realize information advertisement of the computing power node and processing of the service request.
Optionally, before the step 61, the computing node may further send a registration request message of the computing node to the network management node, where the registration request message carries initial configuration information of the computing node, and the initial configuration information includes at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size; then, according to the configuration of the network management node, the computing power resource based on the unified metrology system is configured.
In addition, before the registration request message is sent, the computing power node can also receive a computing power capability template sent by the network management node, wherein the computing power capability template is formed by carrying out unified abstract description on computing power resources of different computing types based on a unified metrology system; and generating initial configuration information of the computing power node based on the computing power capability template.
Optionally, when the service capability and/or the computing power resource state of the computing power node changes, the computing power node may further send a state update message to the network management node, where the state update message carries change indication information of the service capability and/or the computing power resource state, and advertise the changed service capability and computing power resource state to other nodes in the network. In addition, when the computing power node exits the computing power processing network system, it sends a logout request message to the network management node and advertises an indication message of exiting the network to other nodes in the network.
Here, the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form.
The deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
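The minimal sketch below ties together the computing power node behaviour described for fig. 6: registration with the network management node, advertisement of service capability and resource state, state update, and deregistration. The message dictionaries and the injected send callable are illustrative assumptions, not an interface defined by the patent.

class ComputingPowerNode:
    def __init__(self, node_id: str, send):
        self.node_id = node_id
        self.send = send   # callable(destination, message) supplied by the surrounding system

    def register(self, initial_config: dict) -> None:
        # before step 61: carry initial configuration information to the network management node
        self.send("management-node", {"type": "register", "node_id": self.node_id, "config": initial_config})

    def advertise(self, service_capability: list, resource_state: dict) -> None:
        # step 61: advertise currently available service capability and computing power resource state
        self.send("network", {"type": "advertise", "node_id": self.node_id,
                              "services": service_capability, "state": resource_state})

    def update(self, changes: dict) -> None:
        self.send("management-node", {"type": "status_update", "node_id": self.node_id, "changes": changes})

    def deregister(self) -> None:
        self.send("management-node", {"type": "deregister", "node_id": self.node_id})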
Referring to fig. 7, a service processing method provided in an embodiment of the present invention, when applied to a network management node, includes:
step 71, receiving a registration request message sent by the computing node, where the registration request message carries initial configuration information of the computing node, and the initial configuration information includes at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size.
And step 72, when the computing power node is allowed to register, sending a configuration message to the computing power node, and configuring computing power resources based on a unified metrology system.
Through the above steps, the embodiment of the present invention can realize registration management of the computing power node by the network management node.
Optionally, before receiving the initial configuration information, the network management node may further form a computing power capability template by performing a unified abstract description on computing power resources of different computing types based on a unified metrology system; the computing power capability template is then sent to the computing power node.
Optionally, the network management node may further receive a status update message sent by the computing node, where the status update message carries change indication information of service capability and/or computing power resource status of the computing node, and update the service capability and/or computing power resource status of the computing node according to the status update message; or, receiving a logout request message sent by the computing node, and deleting the computing node from the computing processing network system according to the logout request message.
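A matching sketch of the network management node side (steps 71-72 plus the optional update and deregistration handling) follows; the in-memory registry and message keys are assumptions chosen to mirror the node sketch above.

class ManagementNode:
    def __init__(self):
        self.registry: dict = {}   # node_id -> configured computing power resource record

    def handle_register(self, msg: dict) -> dict:
        node_id = msg["node_id"]
        self.registry[node_id] = dict(msg["config"])        # step 71: accept the initial configuration
        return {"type": "configure", "node_id": node_id,    # step 72: configure per the unified metrology system
                "metric_system": "unified"}

    def handle_status_update(self, msg: dict) -> None:
        self.registry[msg["node_id"]].update(msg["changes"])

    def handle_deregister(self, msg: dict) -> None:
        self.registry.pop(msg["node_id"], None)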
Optionally, the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form.
The deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
Optionally, the network management node may further generate a computing power service contract based on the computing power capability template, and generate a corresponding charging policy; and monitoring the computing power resources of the fourth processing layer, and carrying out fusion charging management on the computing power resources and the network resources on the service according to the charging policy.
Optionally, the network management node may also perform security authentication management on the computing node.
Referring to fig. 8, a service processing method provided in an embodiment of the present invention, when applied to a routing node, includes:
step 81, generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node, and generating a routing table of computing power perception based on the computing power topology; the service capability is a service capability supported by the computing power node, and the computing power resource state comprises computing capability and deployment information of the computing power resource.
Step 82, receiving a service request of a service or an application, and scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources, and the computing power request parameters corresponding to the service request.
Through the steps, the embodiment of the invention can realize the dispatching processing of the service request by the routing node, can realize the cooperative work of the computing resource and the network transmission resource, and improves the utilization efficiency of the computing resource.
Optionally, the routing node may further forward the service packet based on the routing table.
Optionally, the forwarding of the service packet based on the routing table includes at least one of:
adding computing power demand information of a service into a service message, wherein the computing power demand information comprises target application information and computing power request parameters;
generating OAM information and data through a unified, built-in and user-definable OAM mechanism and adding the OAM information and data into the service message;
generating an identification field of routing information of the routing table perceived by computing power.
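The sketch below illustrates steps 81-82 for the routing node: advertisements build a computing-power-aware table, and a request is scheduled to the least-loaded node that supports the requested service. The table layout and the scoring rule are assumptions for illustration only.

class RoutingNode:
    def __init__(self):
        self.table: dict = {}   # node_id -> {"services": [...], "state": {...}}

    def on_advertisement(self, node_id: str, services: list, state: dict) -> None:
        # step 81: maintain the computing power topology from advertised capability and state
        self.table[node_id] = {"services": services, "state": state}

    def schedule(self, service: str, max_cpu_load: float):
        # step 82: pick a target computing power node from resource state and request parameters
        candidates = [(node_id, record["state"]["cpu"]) for node_id, record in self.table.items()
                      if service in record["services"] and record["state"]["cpu"] <= max_cpu_load]
        return min(candidates, key=lambda c: c[1])[0] if candidates else None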
Referring to fig. 9, a service processing method provided by an embodiment of the present invention, when applied to a computing power application node, includes:
Step 91, obtaining a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node.
And step 92, receiving a calculation result generated for the service request and sent by the computing power node.
Through the above steps, the embodiment of the present invention realizes the mapping between the service request and the computing power resource at the computing power application node, supports fine-grained service identification and service requirement negotiation, and bridges the network and the service.
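For completeness, a minimal sketch of steps 91-92 on the computing power application node follows; the send and receive helpers stand in for whatever transport a deployment actually uses and are assumptions of this sketch.

def request_compute(send, receive, target_app: str, payload: bytes, max_latency_ms: int):
    # step 91: carry target application information and computing power request parameters
    send("computing-power-routing-node", {
        "target_app": target_app,
        "compute_params": {"max_latency_ms": max_latency_ms},
        "payload": payload,
    })
    # step 92: calculation result generated by the computing power node
    return receive()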
The foregoing describes various methods of embodiments of the present invention. An apparatus for carrying out the above method is further provided below.
Referring to fig. 10, a schematic structural diagram of a computing node according to an embodiment of the present invention is provided, where the computing node 1000 includes: a processor 1001, a transceiver 1002, a memory 1003, a user interface 1004 and a bus interface.
In an embodiment of the present invention, the computing node 1000 further includes: a program stored in the memory 1003 and executable on the processor 1001.
The processor 1001 performs the following steps when executing the program:
the service capability and the computing power resource state which are currently available for the computing power node are announced to other nodes in the network, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
receiving a service request from a computing power application node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to the computing power application node.
It can be appreciated that in the embodiment of the present invention, when the computer program is executed by the processor 1001, the processes of the embodiment of the service processing method shown in fig. 6 can be implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is provided herein.
In fig. 10, the bus architecture may include any number of interconnected buses and bridges, linking together various circuits including one or more processors represented by the processor 1001 and memory represented by the memory 1003. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1002 may be a plurality of elements, i.e., including a transmitter and a receiver, providing a unit for communicating with various other apparatuses over a transmission medium. For different user equipment, the user interface 1004 may also be an interface capable of externally or internally connecting required devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 1001 is responsible for managing the bus architecture and general processing, and the memory 1003 may store data used by the processor 1001 in performing operations.
The device in this embodiment corresponds to the method shown in fig. 6, and the implementations in the foregoing method embodiment are all applicable to this device embodiment, with the same technical effects. In this device, the transceiver 1002 and the memory 1003, as well as the transceiver 1002 and the processor 1001, may be communicatively connected through a bus interface; the functions of the processor 1001 may also be implemented by the transceiver 1002, and the functions of the transceiver 1002 may also be implemented by the processor 1001. It should be noted that the above device provided in the embodiment of the present invention can implement all the method steps implemented in the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects that are the same as those of the method embodiment are omitted herein.
In some embodiments of the present invention, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, performs the steps of:
the service capability and the computing power resource state which are currently available for the computing power node are announced to other nodes in the network, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
receiving a service request from a computing power application node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to the computing power application node.
When the program is executed by the processor, all the implementation modes in the service processing method applied to the computing node side can be realized, the same technical effects can be achieved, and the repetition is avoided, so that the description is omitted.
Referring to fig. 11, an embodiment of the present invention provides a schematic structural diagram of a network management node 1100, including: processor 1101, transceiver 1102, memory 1103 and bus interface, wherein:
in an embodiment of the present invention, the network management node 1100 further includes: a program stored on the memory 1103 and executable on the processor 1101, which when executed by the processor 1101, performs the steps of:
Receiving a registration request message sent by a computing node, wherein the registration request message carries initial configuration information of the computing node, and the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
and when the computing power node is allowed to register, sending a configuration message to the computing power node, and configuring computing power resources based on a unified metrology system.
It can be appreciated that in the embodiment of the present invention, when the computer program is executed by the processor 1101, the processes of the embodiment of the service processing method shown in fig. 7 can be implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is provided herein.
In fig. 11, a bus architecture may comprise any number of interconnecting buses and bridges, with various circuits of the one or more processors, as represented by the processor 1101, and the memory, as represented by the memory 1103, being linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. The transceiver 1102 may be a number of elements, i.e., including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium.
The processor 1101 is responsible for managing the bus architecture and general processing, and the memory 1103 may store data used by the processor 1101 in performing the operations.
Note that the device in this embodiment corresponds to the method shown in fig. 7, and the implementations in the foregoing method embodiment are all applicable to this device embodiment, with the same technical effects. In this device, the transceiver 1102 and the memory 1103, as well as the transceiver 1102 and the processor 1101, may be communicatively connected through a bus interface; the functions of the processor 1101 may also be implemented by the transceiver 1102, and the functions of the transceiver 1102 may also be implemented by the processor 1101. It should be noted that the above device provided in the embodiment of the present invention can implement all the method steps implemented in the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects that are the same as those of the method embodiment are omitted herein.
In some embodiments of the present invention, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, performs the steps of:
receiving a registration request message sent by a computing node, wherein the registration request message carries initial configuration information of the computing node, and the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
And when the computing power node is allowed to register, sending a configuration message to the computing power node, and configuring computing power resources based on a unified metrology system.
When the program is executed by the processor, all the implementation modes in the service processing method applied to the network management node can be realized, the same technical effect can be achieved, and the repetition is avoided, so that the description is omitted.
Referring to fig. 12, a schematic structural diagram of a routing node according to an embodiment of the present invention is provided, and a routing node 1200 includes: a processor 1201, a transceiver 1202, a memory 1203, a user interface 1204 and a bus interface.
In an embodiment of the present invention, the routing node 1200 further includes: a program stored on the memory 1203 and executable on the processor 1201.
The processor 1201, when executing the program, performs the following steps:
generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node, and generating a computing power perceived routing table based on the computing power topology; the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
and receiving a service request of a service or an application, and scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources, and the computing power request parameters corresponding to the service request.
Optionally, the processor further implements the following step when executing the program: forwarding the service message based on the routing table.
it can be understood that, in the embodiment of the present invention, when the computer program is executed by the processor 1201, the processes of the embodiment of the service processing method shown in fig. 8 can be implemented, and the same technical effects can be achieved, so that the repetition is avoided, and the description is omitted here.
In fig. 12, the bus architecture may include any number of interconnected buses and bridges, linking together various circuits including one or more processors represented by the processor 1201 and memory represented by the memory 1203. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1202 may be a plurality of elements, i.e., including a transmitter and a receiver, providing a unit for communicating with various other apparatuses over a transmission medium. For different user equipment, the user interface 1204 may also be an interface capable of externally or internally connecting required devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 1201 is responsible for managing the bus architecture and general processing, and the memory 1203 may store data used by the processor 1201 in performing operations.
The device in this embodiment corresponds to the method shown in fig. 8, and the implementations in the foregoing method embodiment are all applicable to this device embodiment, with the same technical effects. In this device, the transceiver 1202 and the memory 1203, as well as the transceiver 1202 and the processor 1201, may be communicatively connected through a bus interface; the functions of the processor 1201 may also be implemented by the transceiver 1202, and the functions of the transceiver 1202 may also be implemented by the processor 1201. It should be noted that the above device provided in the embodiment of the present invention can implement all the method steps implemented in the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects that are the same as those of the method embodiment are omitted herein.
In some embodiments of the present invention, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, performs the steps of:
generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node, and generating a computing power perceived routing table based on the computing power topology; the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
and receiving a service request of a service or an application, and scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources, and the computing power request parameters corresponding to the service request.
When the program is executed by the processor, all the implementation modes in the service processing method applied to the routing node side can be realized, the same technical effect can be achieved, and the repetition is avoided, so that the description is omitted.
Referring to fig. 13, an embodiment of the present invention provides a schematic structural diagram of a computing power application node 1320, including: a processor 1321, a transceiver 1322, a memory 1323, and a bus interface, wherein:
in an embodiment of the present invention, the computing force application node 1320 further includes: a program stored on the memory 1323 and executable on the processor 1321, which when executed by the processor 1321 performs the steps of:
acquiring a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node;
and receiving a calculation result which is sent by the computing power node and generated aiming at the service request.
It can be appreciated that in the embodiment of the present invention, when the computer program is executed by the processor 1321, the processes of the embodiment of the service processing method shown in fig. 9 can be implemented, and the same technical effects can be achieved, so that repetition is avoided, and further description is omitted here.
In fig. 13, a bus architecture may be comprised of any number of interconnected buses and bridges, and in particular, one or more processors represented by the processor 1321 and various circuits of the memory represented by the memory 1323. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. The transceiver 1322 may be a plurality of elements, i.e., including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium.
The processor 1321 is responsible for managing the bus architecture and general processing, and the memory 1323 may store data used by the processor 1321 in performing operations.
The device in this embodiment corresponds to the method shown in fig. 9, and the implementations in the foregoing method embodiment are all applicable to this device embodiment, with the same technical effects. In this device, the transceiver 1322 and the memory 1323, as well as the transceiver 1322 and the processor 1321, may be communicatively connected through a bus interface; the functions of the processor 1321 may also be implemented by the transceiver 1322, and the functions of the transceiver 1322 may also be implemented by the processor 1321. It should be noted that the above device provided in the embodiment of the present invention can implement all the method steps implemented in the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects that are the same as those of the method embodiment are omitted herein.
In some embodiments of the present invention, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, performs the steps of:
acquiring a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node;
and receiving a calculation result which is sent by the computing power node and generated aiming at the service request.
When the program is executed by the processor, all the implementation modes in the service processing method applied to the computing power application node can be realized, the same technical effects can be achieved, and the repetition is avoided, so that the description is omitted.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (23)

1. The network system for computing power processing is characterized by comprising a first processing layer, a second processing layer and a third processing layer; wherein,
the third processing layer is used for performing sensing, measurement and OAM management on the computing power nodes and configuring computing power resources based on a unified metrology system;
the first processing layer is used for bearing computing capacity and application of computing power resources, acquiring service requests of businesses or applications, determining target application information and computing power request parameters corresponding to the service requests, and sending the service requests carrying the target application information and computing power request parameters to the second processing layer; and receiving a calculation result returned by the computing power node for the service request from the second processing layer;
the second processing layer is configured to schedule the service request to a computing power node according to the computing power request parameters, based on the network transmission resources and the computing power resources based on the unified metrology system; and receive a calculation result generated by the computing power node for the service request, and send the calculation result to the first processing layer.
2. The computing power processing network system of claim 1, further comprising a fourth processing layer and a fifth processing layer; wherein,
The fourth processing layer is used for providing computing power resources through a plurality of computing power nodes;
the fifth processing layer is configured to provide network transmission resources for information transmission.
3. The power processing network system of claim 1,
the second processing layer comprises a first processing module and a second processing module; wherein,
the first processing module is used for generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node and generating a routing table perceived by computing power based on the computing power topology;
and the second processing module is used for forwarding the service message based on the routing table.
4. The computing power processing network system of claim 3, wherein the first processing module comprises:
the first sub-module is used for sending initial configuration information of the computing power node to a third processing layer of the network management node, and configuring computing power resources based on the unified metrology system according to the configuration of the network management node, wherein the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
A second sub-module, configured to notify other nodes in the network of currently available service capability and computing resource status of the computing node, where the service capability is a service capability supported by the computing node, and the computing resource status includes computing capability and deployment information of the computing resource;
the third sub-module is used for generating a computing power topology based on the received service capability and computing power resource state of the computing power node and generating a routing table of computing power perception based on the computing power topology;
and the fourth sub-module is used for scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources and the computing power request parameters corresponding to the service request.
5. The power processing network system of claim 4,
the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form;
the deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
6. The computing power processing network system of claim 3, wherein the second processing module comprises:
a fifth sub-module, configured to add computing power demand information of a service to a service packet, where the computing power demand information includes target application information and computing power request parameters;
a sixth sub-module, configured to generate OAM information and data through a predefined OAM mechanism and add the OAM information and data to a service packet;
a seventh sub-module, configured to generate an identification field of routing information of the routing table perceived by computing power;
and the eighth sub-module is used for forwarding the service message based on the routing table perceived by the computing power.
7. The computing power processing network system of claim 1, wherein the third processing layer comprises:
a ninth sub-module, configured to form a computing power capability template by performing unified abstract description on computing power resources of different computing types based on a unified metrology system;
a tenth sub-module, configured to register, update, and cancel the computing power node based on the computing power capability template, and configure computing power resources based on the unified metrology system; managing a route advertisement policy;
an eleventh sub-module, configured to generate a computing power service contract based on the computing power capability template, and generate a corresponding charging policy;
And the twelfth sub-module is used for monitoring the computing power resources of the fourth processing layer and carrying out fusion charging management on the computing power resources and the network resources on the service according to the charging policy.
8. The computing power processing network system of claim 7, wherein the third processing layer further comprises:
and the thirteenth sub-module is used for carrying out security authentication management on the computing power node.
9. A business processing method applied to a computing node, comprising:
the service capability and the computing power resource state which are currently available for the computing power node are announced to other nodes in the network, wherein the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
receiving a service request scheduled by a routing node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to a computing power application node;
sending a registration request message of the computing node to a network management node, wherein the registration request message carries initial configuration information of the computing node, and the initial configuration information comprises at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
According to the configuration of the network management node, the computing power resource based on the unified metrology system is configured.
10. The method of claim 9, wherein prior to sending the registration request message, the method further comprises:
receiving a computing power capability template sent by a network management node, wherein the computing power capability template is formed by carrying out unified abstract description on computing power resources of different computing types based on a unified metrology system;
and generating initial configuration information of the computing power node based on the computing power capability template.
11. The method as recited in claim 9, further comprising:
when the service capability and/or the computing power resource state of the computing power node changes, sending a state update message to the network management node, wherein the state update message carries change indication information of the service capability and/or the computing power resource state; and advertising the changed service capability and computing power resource state of the computing power node to other nodes in the network;
and when the computing power node exits the computing power processing network system, sending a logout request message to a network management node, and notifying other nodes in the network of an indication message for exiting the network.
12. The method of claim 9, wherein,
the computing power of the computing power resource includes at least one of:
a number of service connections;
information of a Central Processing Unit (CPU);
information of the graphics processing unit (GPU);
memory information;
storage capacity and/or storage form;
the deployment information of the computing power resource comprises at least one of the following:
the deployment form of the computing power resource;
the deployment location of the computational resource.
13. A traffic handling method for a routing node, comprising:
generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node, and generating a computing power perceived routing table based on the computing power topology; the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
and receiving a service request of a service or an application, and scheduling the service request to a target computing power node according to the computing power resource state, the network transmission resources and the computing power request parameters corresponding to the service request.
14. The method as recited in claim 13, further comprising:
and forwarding the service message based on the routing table.
15. The method of claim 13, wherein forwarding the service message based on the routing table comprises at least one of:
adding computing power demand information of a service into a service message, wherein the computing power demand information comprises target application information and computing power request parameters;
generating OAM information and data through a unified, built-in and user-definable OAM mechanism and adding the OAM information and data into the service message;
generating an identification field of routing information of the routing table perceived by computing power.
16. A business processing method of a computing force application node, comprising:
acquiring a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node;
and receiving a calculation result generated for the service request and sent by the computing power routing node, wherein the calculation result is generated by the computing power node and sent to the computing power routing node.
17. A computing node is characterized by comprising a transceiver and a processor, wherein,
the processor is configured to notify other nodes in the network of currently available service capabilities and computing resource states of the computing node, where the service capabilities are service capabilities supported by the computing node, and the computing resource states include computing capabilities and deployment information of the computing resource;
The transceiver is used for receiving a service request from a routing node, responding to the service request, executing corresponding calculation, and sending a calculation result generated for the service request to a computing power application node;
the transceiver is further configured to send a registration request message of the power node to the network management node, where the registration request message carries initial configuration information of the power node, and the initial configuration information includes at least one of the following information: the computing power node identification, the computing power type, the computing power resource deployment form, the computing power resource deployment position and the computing power resource size;
the processor is further configured to configure the computing power resource based on the unified metrology system according to the configuration of the network management node.
18. A computing node, comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the traffic processing method according to any of claims 9 to 12.
19. A routing node comprising a transceiver and a processor, wherein,
the processor is used for generating a computing power topology based on the service capability and the computing power resource state advertised by the computing power node and generating a routing table of computing power perception based on the computing power topology; the service capability is the service capability supported by the computing power node, and the computing power resource state comprises the computing capability and the deployment information of the computing power resource;
The transceiver is configured to receive a service request of a service or an application, and schedule the service request to a target computing power node according to the computing power resource state, the network transmission resources, and the computing power request parameters corresponding to the service request.
20. A routing node, comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the traffic processing method according to any of claims 13 to 15.
21. A computing power application node is characterized by comprising a transceiver and a processor, wherein,
the processor is used for acquiring a service request of a service or an application, determining target application information and computing power request parameters corresponding to the service request, and sending the service request carrying the target application information and the computing power request parameters to a computing power routing node;
the transceiver is configured to receive a calculation result generated for the service request and sent by the computing power routing node, where the calculation result is generated by the computing power node and sent to the computing power routing node.
22. A computing force application node, comprising: a processor, a memory and a program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the business processing method of claim 16.
23. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the service processing method according to any of claims 9 to 12.
CN202010771579.7A 2020-08-04 2020-08-04 Network system for computing power processing, service processing method and equipment Active CN114095579B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010771579.7A CN114095579B (en) 2020-08-04 2020-08-04 Network system for computing power processing, service processing method and equipment
PCT/CN2021/110326 WO2022028418A1 (en) 2020-08-04 2021-08-03 Computing power processing network system, and service processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010771579.7A CN114095579B (en) 2020-08-04 2020-08-04 Network system for computing power processing, service processing method and equipment

Publications (2)

Publication Number Publication Date
CN114095579A CN114095579A (en) 2022-02-25
CN114095579B true CN114095579B (en) 2024-03-22

Family

ID=80117018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010771579.7A Active CN114095579B (en) 2020-08-04 2020-08-04 Network system for computing power processing, service processing method and equipment

Country Status (2)

Country Link
CN (1) CN114095579B (en)
WO (1) WO2022028418A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114980033A (en) * 2021-02-26 2022-08-30 维沃移动通信有限公司 Method and device for realizing raw computing power service, network equipment and terminal
CN114980034A (en) * 2021-02-26 2022-08-30 维沃移动通信有限公司 Method and device for realizing raw computing power service, network equipment and terminal
CN115065631A (en) * 2022-03-02 2022-09-16 广东云下汇金科技有限公司 Calculation power scheduling method and system and data center
CN116781732A (en) * 2022-03-07 2023-09-19 中国移动通信有限公司研究院 Routing method, system and node
CN114615180A (en) * 2022-03-09 2022-06-10 阿里巴巴达摩院(杭州)科技有限公司 Calculation force network system, calculation force calling method and device
CN114827028B (en) * 2022-03-09 2023-03-28 北京邮电大学 Multi-layer computation network integrated routing system and method
CN114978908B (en) * 2022-05-11 2023-09-26 量子科技长三角产业创新中心 Evaluation and operation method and device for computing power network node
CN115118647B (en) * 2022-05-20 2024-02-09 北京邮电大学 System and method for sensing and advertising calculation force information in calculation force network
CN115086225B (en) * 2022-05-27 2023-12-05 量子科技长三角产业创新中心 Method and monitoring device for determining optimal path of calculation and storage of power network
CN114978978A (en) * 2022-06-07 2022-08-30 中国电信股份有限公司 Computing resource scheduling method and device, electronic equipment and medium
CN115002127A (en) * 2022-06-09 2022-09-02 方图智能(深圳)科技集团股份有限公司 Distributed audio system
CN115086720B (en) * 2022-06-14 2023-06-09 烽火通信科技股份有限公司 Network path calculation method and device for live broadcast service
CN117424882A (en) * 2022-07-08 2024-01-19 中兴通讯股份有限公司 Data transmission method, data processing method, electronic device, and readable medium
CN115412609B (en) * 2022-08-16 2023-07-28 中国联合网络通信集团有限公司 Service processing method, device, server and storage medium
CN115396358B (en) * 2022-08-23 2023-06-06 中国联合网络通信集团有限公司 Route setting method, device and storage medium of computing power perception network
CN115689741A (en) * 2022-09-09 2023-02-03 中国联合网络通信集团有限公司 Calculation power transaction method, device and system
CN115297014B (en) * 2022-09-29 2022-12-27 浪潮通信信息系统有限公司 Zero-trust computing network operating system, management method, electronic device and storage medium
CN115866092A (en) * 2022-11-24 2023-03-28 中国联合网络通信集团有限公司 Data forwarding method, device, equipment and storage medium
CN116074390A (en) * 2022-12-09 2023-05-05 重庆大学 New energy computing power network sensing and routing system and method
CN115858179B (en) * 2023-02-16 2023-05-09 北京虹宇科技有限公司 Method, device and equipment for automatically discovering and networking computing unit collaborative service
CN116346938B (en) * 2023-05-25 2023-08-18 新华三技术有限公司 Calculation power access method and device, electronic equipment and storage medium
CN116634208A (en) * 2023-07-26 2023-08-22 合肥英特灵达信息技术有限公司 Service algorithm scheduling method, system, device, terminal and storage medium
CN116708294B (en) * 2023-08-08 2023-11-21 三峡科技有限责任公司 Method for realizing intelligent application sensing and message forwarding based on APN6 network
CN117376032B (en) * 2023-12-06 2024-04-16 华润数字科技有限公司 Security service scheduling method and system, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107294773A (en) * 2017-05-30 2017-10-24 浙江工商大学 A kind of Network collocation method of software definable
CN110213363A (en) * 2019-05-30 2019-09-06 华南理工大学 Cloud resource dynamic allocation system and method based on software defined network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10769212B2 (en) * 2015-07-31 2020-09-08 Netapp Inc. Extensible and elastic data management services engine external to a storage domain

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107294773A (en) * 2017-05-30 2017-10-24 浙江工商大学 A kind of Network collocation method of software definable
CN110213363A (en) * 2019-05-30 2019-09-06 华南理工大学 Cloud resource dynamic allocation system and method based on software defined network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
于清林, "From Edge Computing to Computing Power Network" (从边缘计算到算力网络), 《产业科技创新》, No. 3, 2020-01-25, pp. 49-51 *
何涛 et al., "Computing Power Network Technology Oriented to 6G Requirements" (面向6G需求的算力网络技术), 《移动通信》, No. 6, 2020-06-15, pp. 131-135 *
曹畅 et al., "Computing Power Network Orchestration Technology Based on the Coordination of Communication Cloud and Bearer Network" (基于通信云和承载网协同的算力网络编排技术), 《电信科学》, No. 7, 2020-07-20 *
蔡慧, "Computing-Aware Network Technology White Paper Release" (算力感知网络技术白皮书发布会), 2019 Edge Computing Industry Summit (2019边缘计算产业峰会) *

Also Published As

Publication number Publication date
CN114095579A (en) 2022-02-25
WO2022028418A1 (en) 2022-02-10

Similar Documents

Publication Publication Date Title
CN114095579B (en) Network system for computing power processing, service processing method and equipment
US11038972B2 (en) Service providing method, apparatus, and system
Shah et al. Cloud-native network slicing using software defined networking based multi-access edge computing: A survey
CN109600246B (en) Network slice management method and device
CN113448721A (en) Network system for computing power processing and computing power processing method
Vilalta et al. TelcoFog: A unified flexible fog and cloud computing architecture for 5G networks
JP6408602B2 (en) Method and communication unit for service implementation in an NFV system
CN110476453A (en) For providing the service granting that network is sliced to client
CN108886531A (en) Network and application management are carried out using service layer's ability
EP2932657A1 (en) Information centric networking based service centric networking
CN108632058A (en) The management method and device of network slice
CN103428025A (en) Method, apparatus and system for managing virtual network service
CN104322011A (en) Connectivity service orchestrator
WO2022184094A1 (en) Network system for processing hash power, and service processing method and hash power network element node
CN113726843A (en) Edge cloud system, data transmission method, device and storage medium
Bu et al. Enabling adaptive routing service customization via the integration of SDN and NFV
CN111510383B (en) Route calculation method and related equipment
CN114546632A (en) Calculation force distribution method, calculation force distribution platform, calculation force distribution system and computer readable storage medium
Podleski et al. Multi-domain Software Defined Network: exploring possibilities in
Ceccarelli et al. Framework for abstraction and control of TE networks (ACTN)
Figuerola et al. PHOSPHORUS: Single-step on-demand services across multi-domain networks for e-science
Latre et al. The fluid internet: service-centric management of a virtualized future internet
CN113839995A (en) Cross-domain resource management system, method, device and storage medium
Mendiola et al. Enhancing network resources utilization and resiliency in multi-domain bandwidth on demand service provisioning using SDN
CN102316086B (en) The trunking method of business datum and relay node

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant