WO2021052132A1 - Network edge computing method, device, equipment, and medium - Google Patents

Network edge computing method, device, equipment, and medium

Info

Publication number
WO2021052132A1
WO2021052132A1 · PCT/CN2020/111469 · CN2020111469W
Authority
WO
WIPO (PCT)
Prior art keywords
service
address
container
virtual
edge data
Prior art date
Application number
PCT/CN2020/111469
Other languages
English (en)
French (fr)
Inventor
陈闯
苗辉
Original Assignee
贵州白山云科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 贵州白山云科技股份有限公司
Priority to US17/761,707 (published as US20220394084A1)
Publication of WO2021052132A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 61/00: Network arrangements, protocols or services for addressing or naming
    • H04L 61/50: Address allocation
    • H04L 61/5007: Internet protocol [IP] addresses
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1029: Accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L 67/1097: Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/50: Network services
    • H04L 67/53: Network services using third party service providers
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context
    • H04L 2101/00: Indexing scheme associated with group H04L 61/00
    • H04L 2101/60: Types of network addresses
    • H04L 2101/618: Details of network addresses
    • H04L 2101/663: Transport layer addresses, e.g. aspects of transmission control protocol [TCP] or user datagram protocol [UDP] ports

Definitions

  • This application relates to, but is not limited to, edge computing technology, and in particular to a network edge computing method, device, equipment, and medium.
  • Applying edge computing technology in a distributed content delivery network allows most user data computation and data control to sink to the local device closest to the user without relying on the cloud. This greatly improves data-processing efficiency and reduces the load on the cloud and the central database. At the same time, however, new problems arise: a distributed content delivery network contains a large number of node servers that must support a variety of single or combined services such as caching, scheduling, computing, monitoring, and storage. How to provide edge computing services quickly and efficiently in such large-scale, complex server clusters therefore becomes a key issue.
  • To overcome these problems, this document provides a network edge computing method, device, and medium for TCP services.
  • a network edge computing method includes:
  • the edge data node receives the service request
  • the edge data node routes the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request, and the container performs processing.
  • the routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request includes:
  • after receiving the service request, the public port routes the service request to one or more containers corresponding to the service according to a load balancing mechanism.
  • the above method also includes:
  • the virtual IP address and port information are transmitted to a third party in advance, where the third party includes at least a third party trusted by the initiator of the service request.
  • the edge data node is one or more edge data nodes in an edge data node cluster corresponding to the virtual IP address and port information;
  • the edge data node is one or more edge data nodes in an edge data node cluster selected based on a load balancing strategy and corresponding to the virtual IP address and port information.
  • the service request includes a TCP request and/or a UDP request.
  • the method before the edge data node receives the service request, the method further includes:
  • the edge data node receives a service creation request, and the service creation request includes at least container configuration information for creating the service;
  • the edge data node creates a container corresponding to the service on the server in the edge data node according to the container configuration information.
  • the container configuration information includes at least any one or more of the following:
  • the number of containers, the resource information used by the container, and the container image address.
  • the edge data node creates a container corresponding to the service on the server in the edge data node according to the container configuration information, including:
  • the edge data node selects a plurality of servers whose available resources meet the container use resource information according to the container use resource information, and creates a container corresponding to the service on the selected server according to the container image address.
  • the above method also includes:
  • the edge data node uses a pre-configured public port corresponding to the virtual IP address and port information of the service, and configures the corresponding public port for the created container respectively;
  • the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information are issued to the edge data node by the management center or independently configured by the edge data node.
  • a network edge computing device including:
  • the first module is set to receive service requests
  • the second module is configured to route the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request, and the container will process it.
  • the second module routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request includes:
  • querying, from the mapping relationship between the public ports of the device and the virtual IP address and port information of the service, the public port corresponding to the virtual IP address and port information in the service request, and sending the service request to the queried public port;
  • when the service request is received on the public port of any server in the device, routing the service request to one or more containers corresponding to the service according to a cluster load balancing mechanism.
  • the above device also includes:
  • the third module is configured to receive a service creation request, the service creation request includes at least the container configuration information for creating the service, and the container corresponding to the service is created on the server in the device according to the container configuration information.
  • the container configuration information includes at least any one or more of the following:
  • the number of containers, the resource information used by the container, and the container image address.
  • the third module based on the container configuration information, creates a container corresponding to the service on the server in the device, including:
  • selecting, according to the container resource usage information, multiple servers whose available resources match the container resource usage information, and creating containers corresponding to the service on the selected servers according to the container image address.
  • the third module is also set to use a pre-configured public port corresponding to the virtual IP address and port information of the service, and configure corresponding public ports for the created container respectively;
  • the pre-configured public port corresponding to the virtual IP address and port information of the service is issued by the management center to the device, or is independently configured by the device.
  • the service request includes a TCP request and/or a UDP request.
  • a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the steps of the network edge computing method and system described above.
  • a computer device including a processor, a memory, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the steps of the network edge computing method and system described above.
  • This article provides a network edge computing method, device, equipment, and media. Without the intervention of a service provider, users can use pre-obtained virtual IP address and port information to directly initiate service requests to edge data nodes to achieve edge computing of services.
  • Fig. 1 is a schematic flowchart of a method for network edge computing according to an exemplary embodiment.
  • Fig. 2 is a schematic diagram showing a network architecture for implementing edge computing according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of the structure of the management center in the network architecture shown in FIG. 2.
  • FIG. 4 is a schematic diagram of the structure of an edge data node in the network architecture shown in FIG. 2.
  • FIG. 5 is a schematic diagram of the principle of cluster management among multiple nodes in the network architecture shown in FIG. 2.
  • FIG. 6 is a schematic diagram of the mapping relationship between services and public ports in each node in the network architecture shown in FIG. 2.
  • FIG. 7 is a schematic diagram of the deployment principle of each service in the network architecture shown in FIG. 2.
  • FIG. 8 is a schematic diagram of the principle of replicas of various services on different edge data nodes in the network architecture shown in FIG. 2.
  • Fig. 9 is a flowchart of a method for a user to initiate a service request to an edge data node in the network architecture shown in Fig. 2.
  • Fig. 10 is a flowchart of a method for implementing service access by edge data nodes in the network architecture shown in Fig. 2.
  • Fig. 11 is a block diagram showing a computer device for network edge computing according to an exemplary embodiment.
  • This embodiment provides a network edge computing method.
  • the implementation process of the method is shown in FIG. 1, and includes the following operation steps:
  • Step S11 the edge data node receives the service request
  • the service request mentioned in this article may include any one or both of TCP request and UDP request.
  • step S12 the edge data node routes the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request, and the container performs processing.
  • the virtual IP address in the service request may be a virtual IP address corresponding to the service.
  • the port information may include information such as a port number or port identifier corresponding to the service.
  • the container corresponding to the service includes a container that has been deployed on the edge data node to provide the service processing function.
  • the container may be deployed on one or more servers of the edge data node.
  • step S12 can be divided into the following operations:
  • Step S12a: from the mapping relationship between the public ports of the edge data node and the virtual IP address and port information of the service, query the public port corresponding to the virtual IP address and port information in the received service request, and send the received service request to the queried public port;
  • Step S12b: after the public port receives the service request, route the service request to one or more containers corresponding to the service according to a load balancing mechanism.
  • Generally, service requests are routed to containers deployed on servers with lighter loads.
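  • As an illustration of this two-step flow (a minimal sketch, not taken from the patent; the IP address, ports, container ids, and load figures below are invented), the public-port lookup of step S12a and the lighter-loaded-container choice of step S12b could look like this:

```python
# Step S12a: map the (virtual IP, port) pair in the request to a public port.
# Step S12b: pick one of the service's containers behind that port by load.
# All names and data are hypothetical.

# public-port mapping: (virtual IP, port) -> public port on the node's servers
PORT_MAP = {("10.255.0.7", 9000): 7000}

# containers registered behind each public port, with a mock load figure
CONTAINERS = {7000: [{"id": "c1", "load": 0.7}, {"id": "c2", "load": 0.2}]}

def route(virtual_ip: str, port: int) -> str:
    public_port = PORT_MAP[(virtual_ip, port)]        # step S12a: mapping lookup
    replicas = CONTAINERS[public_port]
    target = min(replicas, key=lambda c: c["load"])   # step S12b: lighter-loaded container
    return target["id"]

print(route("10.255.0.7", 9000))  # -> c2
```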
  • the edge data node that performs the operation may be one or more edge data nodes in the edge data node cluster corresponding to the virtual IP address and port information. It may also be one or more edge data nodes in the edge data node cluster selected based on the load balancing strategy and corresponding to the virtual IP address and port information.
  • the technical solution of this embodiment does not require intervention by the service provider, and the user can directly initiate a service request to the edge data node by using the virtual IP address and port information obtained in advance to implement edge computing of the service.
  • the edge data node can also perform the following operations:
  • the edge data node may transmit the virtual IP address and port information to the initiator of the service request in advance. In this manner, the initiator of the service request can initiate a service request to the edge data node according to the received virtual IP address and port information.
  • the edge data node may also transmit the virtual IP address and port information to a third party in advance, where the third party at least includes a third party trusted by the initiator of the service request. In this way, the initiator of the service request can obtain the virtual IP address and port information through a third party. Thus, according to the acquired virtual IP address and port information, a service request is initiated to the edge data node.
  • the above-mentioned edge data node can process the service request correspondingly because the container corresponding to the service is already deployed in the edge data node. Therefore, on the basis of the above operations, the above method may also include the operation of creating a container corresponding to the service, which is specifically as follows:
  • Step a The edge data node receives a service creation request, and the service creation request includes at least the container configuration information of the created service.
  • the container configuration information may include any one or more of the number of containers, the resource information used by the container, and the container image address.
  • Step b The edge data node creates a container corresponding to the service on the server in the edge data node according to the container configuration information.
  • the edge data node may select multiple servers whose available resources match the container use resource information according to the container use resource information, and create a container corresponding to the service on the selected server according to the container image address. Since one or more containers can be created on one server in the edge data node, the number of selected servers is less than or equal to the number of containers.
  • the edge data node can also use the pre-configured public port corresponding to the virtual IP address and port information of the service to configure the corresponding public port for the created container.
  • the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information are issued to the edge data node by the management center or independently configured by the edge data node.
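  • The container-creation step described above can be pictured with the following hypothetical sketch; the configuration fields (replicas, cpu, mem_gb, image), the server inventory, and the registry URL are assumptions for illustration, not the patent's actual data model:

```python
# Pick servers whose free resources cover the requested container resources,
# then place the requested number of containers on them. A server may host
# more than one container, so the number of servers used is <= the replica count.
config = {"replicas": 3, "cpu": 2, "mem_gb": 4, "image": "registry.example.com/tcp-svc:1.0"}

servers = [
    {"name": "srv-1", "free_cpu": 8, "free_mem_gb": 16},
    {"name": "srv-2", "free_cpu": 1, "free_mem_gb": 2},   # too small, skipped
    {"name": "srv-3", "free_cpu": 4, "free_mem_gb": 8},
]

eligible = [s for s in servers
            if s["free_cpu"] >= config["cpu"] and s["free_mem_gb"] >= config["mem_gb"]]

placement = []
for i in range(config["replicas"]):
    placement.append((eligible[i % len(eligible)]["name"], config["image"]))

print(placement)   # three containers spread over srv-1 and srv-3
```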
  • This embodiment provides a network edge computing device, which mainly includes a first module and a second module.
  • the first module is configured to receive service requests.
  • the service request received herein may include either or both of TCP and UDP service requests.
  • the second module is configured to route the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request for processing by the container.
  • for example, the second module can query, from the mapping relationship between the public ports of the device and the virtual IP address and port information of the service, the public port corresponding to the virtual IP address and port information in the received service request, and send the service request to the queried public port;
  • when the service request is received on the public port of any server in the device, the service request can be routed to one or more containers corresponding to the service according to the cluster load balancing mechanism.
  • the above-mentioned device may also have the function of creating a container corresponding to the service.
  • a third module can be added. The third module receives the service creation request, and obtains the container configuration information for creating the service according to the service deployment request.
  • the container configuration information of the edge computing service includes at least one or more of the number of containers, the resource information used by the container, and the container image address.
  • the method of creating a container may be to select, according to the container resource usage information, a plurality of servers whose available resources meet the container resource usage information, and to create containers corresponding to the service on the selected servers according to the container image address.
  • one or more containers can be created on one server, so the total number of selected servers is less than or equal to the number of containers.
  • the above-mentioned third module may also use pre-configured public ports corresponding to the virtual IP address and port information of the service to configure corresponding public ports for the created containers.
  • the pre-configured public port corresponding to the virtual IP address and port information of the service may be issued by the management center to the device, or may be independently configured by the device.
  • the service request contains the virtual IP address and port information of the service, that is, the port accessed by the user (for example, when the initiated service request is a TCP request, the port accessed by the user is a TCP port; when the initiated service request is a UDP request, the port accessed by the user is a UDP port), and a user can thereby be uniquely identified.
  • the edge data node routes the user's service request to one or more relatively idle containers for processing, which greatly improves the user's service experience.
  • an edge computing network architecture shown in FIG. 2 is taken as an example to introduce an implementation manner of the foregoing Embodiment 1 and Embodiment 2. It can be seen from Figure 2 that the overall architecture for edge computing includes at least two parts: a management center and edge data nodes.
  • the management center is used to control and manage all edge data nodes, send creation and management commands to each edge data node, and collect information reported by each edge data node.
  • Edge data nodes are used to process user requests.
  • each node can be considered a self-managed cluster, which can perform load balancing on received user requests and can horizontally scale and automatically migrate the containers of this edge data node, thereby providing high availability.
  • the containers involved in this article may include, but are not limited to, docker containers.
  • the management center is shown in Figure 3, which may include the following components:
  • Application program interface server: mainly receives service deployment requests for edge data nodes, and decides, according to the configuration information in the received service deployment request and the server information of each node stored in the database, to create the corresponding containers on the specified node.
  • the corresponding operation command is sent to the cluster management module of the edge data node, which can instruct the cluster management module of the edge data node to perform any one or more of operations such as creation, destruction, capacity expansion, and migration of the local container.
  • the management center may send a service creation request to the designated edge data node according to the configuration information of the service, where the service creation request includes container configuration information of the container deployed on the designated edge data node.
  • in addition, the virtual IP address and port information of the created service and the corresponding public port can be configured, and the configured virtual IP address and port information of the service and the corresponding public port are delivered to the edge data node.
  • the management center can also return the configured virtual IP address and port information of the service to the initiator of the service deployment request, which is used to indicate that the user using the service can use the virtual IP address and port information of the service to initiate a service request.
  • the initiator of the service deployment request may include the service provider.
  • the configured virtual IP address and port information of the service may be returned in various ways. For example, after the operation of configuring the virtual IP address and port information of the TCP service is completed, the virtual IP address and port information of the service are returned to the initiator of the service deployment request.
  • the virtual IP address and port information of the service can be returned to the initiator of the service deployment request either directly or indirectly.
  • the virtual IP address and port information of the service can be sent to a third party designated by the initiator of the service deployment request. After that, the third party can interact with the initiator of the service deployment request to transmit the virtual IP address and port information of the service.
  • Log center: collects user log data, processes and stores it so that users can view it later, analyzes the user logs, mines abnormal data, and raises alerts on special logs.
  • Monitoring center: sends monitoring requests to the cluster monitoring tool of each edge data node.
  • the monitoring request is used to collect the status information of the containers and servers in the edge data node's cluster, and can be sent to the cluster monitoring tool of each edge data node periodically.
  • the status information of a container can include the container's resource occupancy (such as container memory, CPU, and network usage), and the status information of a server can include the server's running load status, and so on.
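  • A rough sketch of such a periodic poll is given below; the node list, the collect_status() helper, and the report fields are illustrative assumptions rather than the patent's actual monitoring interface:

```python
# Periodically ask each node's cluster monitoring tool for container and server status.
EDGE_NODES = ["node-1", "node-2"]

def collect_status(node: str) -> dict:
    # stand-in for a call to the node's cluster monitoring tool
    return {
        "node": node,
        "containers": [{"id": "c1", "cpu": 0.31, "mem": 0.55, "net_mbps": 12}],
        "servers": [{"name": "srv-1", "load": 0.42}],
    }

def poll_once() -> list:
    return [collect_status(n) for n in EDGE_NODES]

if __name__ == "__main__":
    for report in poll_once():           # in practice this would run on a timer
        print(report["node"], report["servers"][0]["load"])
```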
  • Database: mainly used to store user information, cluster information, server information on edge data nodes, and other information.
  • the user information includes at least a user identification (for example, user IP, etc.).
  • the cluster information includes at least the status of the cluster, the number of tasks running in the cluster, and so on.
  • the server information on the edge data node at least includes server identification, server load status, and so on.
  • the foregoing database may also save the configuration information of the service after the service is created on the edge data node.
  • the edge data node is equivalent to a network edge computing device shown in Embodiment 2.
  • the edge data node may include the following components, where the cluster management module, the database cache module, the virtual server cluster module, and the service request processing module all adopt a redundant design to avoid single points of failure.
  • Cluster management module (which integrates the first module and the third module in Embodiment 2): responsible for creating, deleting, and migrating containers in the node according to the operation commands issued by the management center, for managing each server in the node, and for collecting the server status information in the node and reporting it to the management center.
  • the cluster management modules between different nodes can be independent of each other, and each node is a self-managed cluster, as shown in Figure 5. In this way, it can ensure that the control granularity is finer, and there is no need to maintain complex relationships through tags.
  • the container in each node is only managed by the cluster management module in the node, so there is no need to store the corresponding relationship between the node and the container.
  • the node server in each node is only managed by the cluster management module in the node, and there is no need to mark the association between the storage node and the node server.
  • multiple nodes can also form a self-managed cluster.
  • the above approach of building clusters in units of nodes, with the cluster management modules of different nodes unrelated to each other, also makes it possible to detect the liveness of containers and servers more accurately. If instead all machine rooms formed a single cluster with the cluster management module deployed at the central node, the network environments from the central node to the different edge machine rooms would differ, making it easy to misjudge the liveness of containers and nodes and perform wrong migrations.
  • another benefit of limiting a cluster to one node is that once a service is associated with a public port, all servers in the cluster need to listen on that public port; by building separate clusters per node, servers in unrelated nodes are prevented from listening on that public port.
  • each node can maintain a set of service-corresponding container-to-public port mapping relationships, that is, one-to-one correspondence between containers and public ports corresponding to servers in the node.
  • the mapping relationship between the container and the public port corresponding to the service in the node can be configured by the management center side, or can be configured by the cluster management module on the node side. Since the cluster management modules in different nodes are not related to each other, the port mapping relationship maintained in each node does not affect each other.
  • each service corresponds to containers, and a container can only be used by that service within the cluster. Specifically, the port to which the container is mapped is called a public port. In this way, different applications (also called services) can use the same container port.
  • service 1 and service 2 in Figure 6 are both configured to use port 80 inside the container, but as can be seen from the port mapping relationship, they map to different public ports: service 1 uses port 8000 and service 2 uses port 8001. Moreover, when a container is migrated from one server to another, the IP of the container is kept inside the service and the mapped public port does not change, so the upper layer does not need to care about container migration.
  • the mapping relationship between the application, the container, and the public port can be stored in the database of the management center. For example, for a TCP service, the virtual IP address and port information and their mapping relationship with the public port can be saved.
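  • The per-node mapping sketched around Figure 6 might be modelled as below; the service names, ports, and server names are invented, and migrate() is a hypothetical helper showing that the public port survives a container move:

```python
# Both services expose port 80 inside their containers but are published on
# different public ports, so the public port stays stable even if a container
# moves to another server. Values are made up.
mapping = {
    "service-1": {"container_port": 80, "public_port": 8000, "server": "srv-A"},
    "service-2": {"container_port": 80, "public_port": 8001, "server": "srv-B"},
}

def migrate(service: str, new_server: str) -> None:
    mapping[service]["server"] = new_server   # only the hosting server changes
    # public_port is untouched, so callers above this layer are unaffected

migrate("service-1", "srv-C")
print(mapping["service-1"])   # public_port is still 8000
```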
  • Running module: responds to user-initiated edge computing service requests by running different containers.
  • Database cache module: caches the edge data node's accesses to the database of the central cluster (that is, the database of the management center mentioned above), and queries the central cluster's database only when the cache cannot be hit.
  • Virtual server cluster module: provides high reliability for service request processing.
  • Service request processing module (equivalent to the second module in Embodiment 2): receives the service request and routes it to one or more containers corresponding to the service according to the virtual IP address and port information in the service request, and the containers process it. That is, it is responsible for the mapping relationship between the service's virtual IP address and port information and the public port, and routes edge computing requests to different containers of the edge data node according to the virtual IP address and port information used by the user who initiates the edge computing service request.
  • the service requested by the user can be composed of services deployed on multiple nodes, and each service is a collection of a set of containers.
  • the principle is shown in Figure 7.
  • the number of containers in the same service is called the number of replicas of the service, and the service will ensure that the specified number of replica containers are running and are distributed on different servers, as shown in Figure 8.
  • the service can send the request to different containers for corresponding processing according to its internal load balancing mechanism. This process is transparent to the user, so the user only sees the service.
  • the edge data node listens for the service request initiated by the user through the public port, and the cluster management module then routes the request to one or more containers corresponding to the service through the cluster's load balancing process. Generally, the request is routed to a container deployed on a more lightly loaded server.
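  • The replica guarantee mentioned above (a fixed number of containers spread over distinct servers) could be sketched as follows; spread_replicas() is a hypothetical helper and the round-robin placement is only one possible strategy:

```python
# Keep the specified number of containers running for a service and spread
# them over distinct servers.
from itertools import cycle

def spread_replicas(replica_count: int, servers: list[str]) -> dict[str, str]:
    """Assign replica ids to servers round-robin so copies land on different servers."""
    assignment = {}
    server_cycle = cycle(servers)
    for i in range(replica_count):
        assignment[f"replica-{i}"] = next(server_cycle)
    return assignment

print(spread_replicas(3, ["srv-1", "srv-2", "srv-3", "srv-4"]))
# {'replica-0': 'srv-1', 'replica-1': 'srv-2', 'replica-2': 'srv-3'}
```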
  • users can request the creation of various types of services, such as TCP services and UDP services.
  • This embodiment provides a network edge computing method, including the following operations:
  • the edge data node receives the service request and, according to the virtual IP address and port information in the service request, queries, from the mapping relationship between the public ports of the edge data node and the virtual IP address and port information of the service, the public port corresponding to the virtual IP address and port information in the service request, and sends the service request to the queried public port;
  • after the service request is received on the public port of any server in the edge data node, the service request is routed to one of the containers corresponding to the service according to the cluster load balancing mechanism, and that container performs the corresponding processing.
  • before the client initiates a service request to the edge node, it can send the original service request to the service provider, and the service provider returns the virtual IP address and port information; or the service provider pre-issues the virtual IP address and port information to the client, and the client obtains the virtual IP address and port locally.
  • the process in which the edge data node receives the service request initiated by the user can be as follows: the user uses the stored virtual IP address and port information to initiate the service request to the management center (which can also be a network element that implements scheduling or routing functions for edge computing, for example deployed on the service provider's equipment); the management center determines the edge data node corresponding to the virtual IP address and port information according to the virtual IP address and port information of the service request initiated by the user, and sends the IP address of that edge data node to the user.
  • the user then initiates a service request to this edge data node, where the service request includes the virtual IP address and port information.
  • in addition, the corresponding service can also be created in the network edge data node in advance; that is, the edge data node receives a service creation request sent by the management center, and the service creation request can include the container configuration information for creating the service. The edge data node then creates containers corresponding to the service on servers in the edge data node according to the container configuration information contained in the received service creation request, after which the edge data node can provide the service to users.
  • the container configuration information involved in this article may include any one or more of the number of containers, the resource information used by the container, and the container image address.
  • the edge data node creates a container corresponding to the service on the server in the edge data node according to the container configuration information. You can refer to the following operations:
  • the edge data node can select multiple servers whose available resources meet the container usage resource information according to the container usage resource information;
  • a container corresponding to the service is created on the selected server according to the container image address.
  • One or more containers can be created on one server. Therefore, the number of selected servers may be less than or equal to the number of containers.
  • the following method can be used to create the container:
  • the edge data node can use the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information, to create a container corresponding to the service on the server in the edge data node.
  • the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information, may be pre-configured by the management center and issued to the edge data node, may be independently configured by the edge data node, or may be configured by the service provider through an interface; this document places no special restriction on this.
  • the virtual IP address and port information as a whole correspond to the public port.
  • This embodiment provides another network edge computing method, including the following operations:
  • the management center receives the service deployment request, and obtains the configuration information of the service creation according to the service deployment request.
  • the configuration information of the edge computing service includes at least the specified edge data node information and the container configuration information of the creation service;
  • the management center sends a service creation request to the designated edge data node according to the configuration information of the service, where the service creation request includes container configuration information of the container deployed on the designated edge data node.
  • the container configuration information of the container has been introduced in the foregoing, and will not be repeated here.
  • the foregoing method may also perform the following operations:
  • the management center saves the configuration information of the service after the service is created on the edge data node.
  • when the management center sends a service creation request to the designated edge data node, it can also configure for that edge data node the virtual IP address and port information of the service and the public port corresponding to the virtual IP address and port information, and deliver the virtual IP address and port information of the service and the corresponding public port to the edge data node.
  • the management center may also return the virtual IP address and port information of the created service to the initiator of the service deployment request, such as the service provider.
  • the virtual IP address and port information of the service are used to indicate that the user using the service can use the virtual IP address and port information to initiate a service request.
  • the management center may also receive a service request containing virtual IP address and port information sent by the user.
  • the management center determines edge data node information corresponding to the virtual IP address and port information contained in the service request, and returns the determined edge data node information to the user, where the edge data node information includes at least the IP address of the edge data node.
  • the edge data node information determined by the management center may be the IP address of the edge data node whose geographic location and/or logical location is closest to the user.
  • the edge data node whose logical location is closest to the initiator of the service request mentioned herein may include an edge data node belonging to the same operator as the initiator of the service request and/or an edge data node with the smallest data transmission delay.
  • for example, the operator to which the initiator of the service request belongs can be determined, and an edge data node belonging to this operator can be selected as the edge data node whose logical location is closest to the initiator of the service request; or the edge data node with the smallest data transmission delay can be determined as the edge data node whose logical location is closest to the initiator of the service request.
  • alternatively, the operator to which the initiator of the service request belongs can be determined, and the edge data node with the smallest data transmission delay among that operator's edge data nodes can be determined as the edge data node whose logical location is closest to the initiator of the service request.
  • the data transmission delay includes node processing delay, queuing delay, transmission delay, propagation delay, and so on.
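  • A possible reading of this node-selection rule is sketched below; the operator names, node IPs, and delay figures are invented, and the prefer-same-operator-then-lowest-delay policy is an assumption consistent with, but not dictated by, the text above:

```python
# Prefer nodes run by the requester's carrier, then take the one with the
# smallest measured transmission delay.
nodes = [
    {"ip": "203.0.113.10", "operator": "carrier-A", "delay_ms": 18},
    {"ip": "203.0.113.20", "operator": "carrier-B", "delay_ms": 9},
    {"ip": "203.0.113.30", "operator": "carrier-A", "delay_ms": 12},
]

def pick_node(user_operator: str) -> dict:
    same_carrier = [n for n in nodes if n["operator"] == user_operator]
    candidates = same_carrier or nodes            # fall back to all nodes
    return min(candidates, key=lambda n: n["delay_ms"])

print(pick_node("carrier-A")["ip"])   # -> 203.0.113.30
```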
  • the following takes actual applications as an example to introduce the process of creating a service at an edge data node, a user initiating a service request to an edge data node, and an edge data node achieving service access.
  • the embodiment of the present invention provides a process for creating a service in an edge computing network.
  • the process mainly includes the following operations:
  • Step S1 The user (the user here is a service provider, and may also be referred to as an administrator user) sends a deploy application request (deploy app) to the application program interface server of the management center;
  • the deployment application request may include the type information of the service requested to be deployed and the location information (such as node information) of the deployed service.
  • Step S2: the application program interface server queries the database of the management center for the virtual IP address and port information available on the node where the service is requested to be deployed, as well as the available public ports, and allocates for the service to be deployed a virtual IP address and port information available on the edge data node and an available public port; for example, it assigns a virtual IP address and port 9000, and public port 7000, on node 1.
  • the available virtual IP address and port information are free virtual IP addresses and free ports, that is, virtual IP addresses and ports not occupied by other users or services; available public ports are idle ports, that is, ports not occupied by other services.
  • step S3 the application program interface server sends a creation request to the cluster management module of the edge data node 1, and the cluster management module is responsible for the specific creation.
  • the creation request sent by the application program interface server includes virtual IP address and port information allocated for the service, as well as corresponding public port information, and container configuration information.
  • the container configuration information may include any one or more of the number of containers (which may also be referred to as the number of replicas of the service), the resource information used by the container, and the container image address.
  • Step S4: the cluster management module selects several servers according to the CPU, memory, and other resource constraints and the cluster load balancing mechanism, and creates containers for running the service on the selected servers.
  • for example, the cluster management module selects multiple servers according to the number of containers in the container configuration information, so the number of selected servers may be less than or equal to the number of containers; it also selects servers that can satisfy the container resource information based on the container resource information in the container configuration information. After the servers are selected, the containers are created according to the container image address in the container configuration information.
  • Step S5: the application program interface server adds the mapping relationship between the virtual IP address and port information corresponding to the service and the public port to the service request processing module of edge data node 1, and has the service request processing module listen on the public port corresponding to the virtual IP address and port information and forward requests to that public port.
  • Step S6 the application program interface server records the mapping relationship between the virtual IP address and port information corresponding to the service and the public port in the database.
  • Step S7: the application program interface server returns the virtual IP address and port information corresponding to the service to the service provider, and the service provider issues it to the user end.
  • the user end needs to record the virtual IP address and port information, and subsequent accesses can directly use the virtual IP address and port information; alternatively, based on the user's original access request (for example, a domain-name access request), the service provider returns the corresponding virtual IP address and port information to the user.
  • in the above flow, the available virtual IP address and port information of the edge data node and the corresponding available public port are allocated to the created service by the application program interface server on the management center side, but this is only an illustration.
  • the edge data node can also independently allocate available virtual IP address and port information for the service, as well as the corresponding available public port; after the edge data node allocates the virtual IP address and port information corresponding to the service and the corresponding public port, it can also report them to the management center.
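  • The allocation performed in steps S2 and S6 above can be pictured with the following hypothetical sketch; the address pools, the port numbers, and the in-memory port_map record are illustrative stand-ins for the management-center database:

```python
# Pick an unused (virtual IP, port) pair and an unused public port on the node,
# then record the mapping so later requests can be resolved.
free_vip_ports = [("10.255.0.7", 9000), ("10.255.0.7", 9001)]
free_public_ports = [7000, 7001]
port_map: dict[tuple[str, int], int] = {}        # (virtual IP, port) -> public port

def create_service(name: str) -> dict:
    vip_port = free_vip_ports.pop(0)
    public_port = free_public_ports.pop(0)
    port_map[vip_port] = public_port             # step S6: persist the mapping
    return {"service": name, "virtual_ip": vip_port[0], "port": vip_port[1],
            "public_port": public_port}

print(create_service("tcp-demo"))
```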
  • FIG. 9 is a schematic flowchart of a method for realizing a service request provided in an exemplary embodiment. As can be seen from Figure 9, the method mainly includes the following operations:
  • Step S91 The user sends a service request carrying virtual IP address and port information to the management center. After receiving the service request, the management center returns the IP of an edge data node to the user according to the local scheduling algorithm;
  • the IP of the edge data node returned by the management center to the user can be determined as follows: according to the virtual IP address and port information of the service request initiated by the user, the edge data node corresponding to the virtual IP address and port information is determined according to a load balancing mechanism, and the IP of the determined edge data node is fed back to the user.
  • there may be one or more edge data nodes corresponding to the virtual IP address and port information.
  • the edge data node corresponding to the virtual IP address and port information includes the edge data node whose geographic location and/or logical location is closest to the initiator of the service request. Specifically, the process of determining the edge data node corresponding to the virtual IP address and port information has been described in the foregoing, and will not be repeated here.
  • Step S92 The user initiates a service request to the edge data node according to the received IP of the edge data node.
  • the service request includes the virtual IP address and port information, and the container group service in the edge data node that receives the service request provides the corresponding service to the user.
  • the management center, which schedules the service requested by the user, can store the correspondence between each edge data node and the virtual IP address and port information as well as the public port.
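  • A minimal sketch of this scheduling step is shown below; the node table and the round-robin choice are assumptions (the text above only requires that some load-balancing or proximity rule picks one of the candidate nodes):

```python
# The management center keeps the nodes that serve each (virtual IP, port) pair
# and hands one back to the caller.
import itertools

NODE_TABLE = {("10.255.0.7", 9000): ["198.51.100.1", "198.51.100.2"]}
_rr = {key: itertools.cycle(ips) for key, ips in NODE_TABLE.items()}

def schedule(virtual_ip: str, port: int) -> str:
    """Return the IP of an edge data node that hosts the requested service."""
    return next(_rr[(virtual_ip, port)])

print(schedule("10.255.0.7", 9000))   # first call -> 198.51.100.1
print(schedule("10.255.0.7", 9000))   # next call  -> 198.51.100.2
```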
  • FIG. 10 is a schematic flowchart of a network edge computing method provided in this embodiment. The method mainly includes the following operations:
  • Step S101: the user uses the stored virtual IP address and port information to directly access (that is, send a service request to) the service request dispatch server of the edge data node, where the service request dispatch server can perform layer-4 load balancing.
  • Step S102: the service request dispatch server looks up the corresponding public port according to the virtual IP address and port information in the user's request and checks whether the request is legal; if the request is legal, proceed to step S103; if the request is illegal, prompt the user that the operation is invalid or illegal and end this flow.
  • This operation step can also be omitted, so that there is no need to access the management center database.
  • step S103 the service request dispatch server sends the service request to the public port of the node (that is, the public port found by the search).
  • step S104 the cluster routes the service request to a certain container under the designated server according to the load balancing mechanism.
  • the cluster can route the service request to an idle container corresponding to the service or any non-idle container according to the load balancing mechanism.
  • step S105 the container processes the user's request and returns the result to the user.
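  • Steps S101 to S105 can be condensed into the following toy flow, with each component reduced to a function; the legality check, the port map, and the container handler are placeholders rather than the patent's real interfaces:

```python
# S102 validates, S103 maps to the public port, S104 hands the request to a
# container behind that port, S105 returns the container's result.
PORT_MAP = {("10.255.0.7", 9000): 7000}
HANDLERS = {7000: lambda payload: payload.upper()}     # stand-in for the container

def dispatch(virtual_ip: str, port: int, payload: str) -> str:
    key = (virtual_ip, port)
    if key not in PORT_MAP:                            # S102: legality check
        return "invalid request"
    public_port = PORT_MAP[key]                        # S103: forward to public port
    handler = HANDLERS[public_port]                    # S104: cluster picks a container
    return handler(payload)                            # S105: container returns result

print(dispatch("10.255.0.7", 9000, "hello edge"))      # -> HELLO EDGE
print(dispatch("10.255.0.7", 9999, "oops"))            # -> invalid request
```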
  • the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed, implements the steps of the network edge computing method described above.
  • Fig. 11 is a block diagram showing a computer device 110 for network edge computing according to an exemplary embodiment.
  • the computer device 110 may be provided as a server. Referring to Fig. 11, the computer device 110 includes a processor 111, and the number of processors can be set to one or more as needed.
  • the computer device 110 further includes a memory 112 for storing instructions executable by the processor 111, such as application programs. The number of memories can be set to one or more as required.
  • the stored application programs can be one or more.
  • the processor 111 is configured to execute instructions to execute the aforementioned network edge computing method.
  • this application may be provided as methods, devices (equipment), or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program codes.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information (such as computer readable instructions, data structures, program modules, or other data) , Including but not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridge, magnetic tape, magnetic disk storage or other magnetic storage device, or can be used for Any other medium that stores desired information and can be accessed by a computer.
  • communication media usually contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded on a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, so as to execute on the computer or other programmable equipment.
  • the instructions provide steps for implementing the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • the edge data node that performs the operation may be one or more edge data nodes in an edge data node cluster corresponding to the virtual IP address and port information. It may also be one or more edge data nodes in the edge data node cluster selected based on the load balancing strategy and corresponding to the virtual IP address and port information.
  • users can use pre-obtained virtual IP address and port information to directly initiate service requests to edge data nodes to realize edge computing of services.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a network edge computing method, device, and medium. The network edge computing method includes: an edge data node receives a service request initiated by a user, queries the public port corresponding to the service request according to the virtual IP address and port information corresponding to the service request, and sends the service request to the queried public port; after the service request is received on the public port of any server in the edge data node, the service request is routed, according to a cluster load balancing mechanism, to one of the containers corresponding to the service, and that container processes it. With the technical solution provided by this application, a user can initiate a service request directly to an edge data node using pre-obtained virtual IP address and port information, without the service provider's involvement, thereby realizing edge computing of the service.

Description

Network edge computing method, device, equipment, and medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 19, 2019, with application number 201910885903.5 and entitled "Network edge computing method, device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to, but is not limited to, edge computing technology, and in particular to a network edge computing method, device, equipment, and medium.
Background
Applying edge computing technology in a distributed content delivery network allows most user data computation and data control to sink to the local device closest to the user without relying on the cloud. This greatly improves data-processing efficiency and reduces the load on the cloud and the central database. At the same time, however, new problems arise: a distributed content delivery network contains a large number of node servers that must support a variety of single or combined services such as caching, scheduling, computing, monitoring, and storage. How to provide edge computing services quickly and efficiently in such large-scale, complex server clusters therefore becomes a key issue.
Summary
To overcome the problems in the related art, this document provides a network edge computing method, apparatus, and medium for TCP services.
According to a first aspect of this document, a network edge computing method is provided, the method including:
an edge data node receives a service request;
the edge data node routes the service request, according to the virtual IP address and port information in the service request, to one or more containers corresponding to the service, and the container(s) process the request.
In the above method, routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request includes:
querying, from the mapping relationship between the public ports of the edge data node and the virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the service request, and sending the service request to the queried public port;
after the public port receives the service request, routing the service request to one or more containers corresponding to the service according to a load balancing mechanism.
The above method further includes:
transmitting the virtual IP address and port information to the initiator of the service request in advance;
or transmitting the virtual IP address and port information to a third party in advance, where the third party at least includes a third party trusted by the initiator of the service request.
In the above method, the edge data node is one or more edge data nodes in the edge data node cluster corresponding to the virtual IP address and port information;
or the edge data node is one or more edge data nodes, selected based on a load balancing strategy, in the edge data node cluster corresponding to the virtual IP address and port information.
In the above method, the service request includes a TCP request and/or a UDP request.
In the above method, before the edge data node receives the service request, the method further includes:
the edge data node receives a service creation request, which at least includes container configuration information for creating the service;
the edge data node creates, according to the container configuration information, the container(s) corresponding to the service on servers within this edge data node.
In the above method, the container configuration information includes at least any one or more of the following:
the number of containers, container resource usage information, and the container image address.
In the above method, the edge data node creating the container(s) corresponding to the service on servers within this edge data node according to the container configuration information includes:
the edge data node selects, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and creates the container(s) corresponding to the service on the selected servers according to the container image address.
The above method further includes:
the edge data node uses the pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container;
where the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information, are delivered to the edge data node by a management center, or are configured by the edge data node autonomously.
According to another aspect of this document, a network edge computing apparatus is provided, including:
a first module configured to receive a service request;
a second module configured to route the service request, according to the virtual IP address and port information in the service request, to one or more containers corresponding to the service, where the container(s) process the request.
In the above apparatus, the second module routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request includes:
querying, according to the virtual IP address and port information in the service request, from the mapping relationship between the public ports of this apparatus and the virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the service request, and sending the service request to the queried public port;
when the service request is received on the public port of any server within this apparatus, routing the service request to one or more containers corresponding to the service according to a cluster load balancing mechanism.
The above apparatus further includes:
a third module configured to receive a service creation request, which at least includes container configuration information for creating the service, and to create, according to the container configuration information, the container(s) corresponding to the service on servers within this apparatus.
In the above apparatus, the container configuration information includes at least any one or more of the following:
the number of containers, container resource usage information, and the container image address.
In the above apparatus, the third module creating the container(s) corresponding to the service on servers within this apparatus according to the container configuration information includes:
selecting, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and creating the container(s) corresponding to the service on the selected servers according to the container image address.
In the above apparatus, the third module is further configured to use the pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container;
where the pre-configured public port corresponding to the virtual IP address and port information of the service is delivered to this apparatus by a management center, or is configured by this apparatus autonomously.
In the above apparatus, the service request includes a TCP request and/or a UDP request.
According to another aspect of this document, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed, implements the steps of the network edge computing method described above.
According to another aspect of this document, a computer device is provided, including a processor, a memory, and a computer program stored on the memory, where the processor, when executing the computer program, implements the steps of the network edge computing method described above.
This document provides a network edge computing method, apparatus, device, and medium, with which users can initiate service requests directly to edge data nodes using pre-obtained virtual IP address and port information, realizing edge computing of services without the involvement of the service provider.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit this document.
Brief Description of the Drawings
The drawings, which form a part of this application, are provided for a further understanding of this application. The illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an undue limitation on this application. In the drawings:
Fig. 1 is a schematic flowchart of a network edge computing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a network architecture for implementing edge computing according to an exemplary embodiment.
Fig. 3 is a schematic structural diagram of the management center in the network architecture shown in Fig. 2.
Fig. 4 is a schematic structural diagram of an edge data node in the network architecture shown in Fig. 2.
Fig. 5 is a schematic diagram of the cluster management principle among multiple nodes in the network architecture shown in Fig. 2.
Fig. 6 is a schematic diagram of the mapping relationship between services and public ports within each node in the network architecture shown in Fig. 2.
Fig. 7 is a schematic diagram of the deployment of each service in the network architecture shown in Fig. 2.
Fig. 8 is a schematic diagram of replicas of each service on different edge data nodes in the network architecture shown in Fig. 2.
Fig. 9 is a flowchart of a method by which a user initiates a service request to an edge data node in the network architecture shown in Fig. 2.
Fig. 10 is a flowchart of a method by which an edge data node implements service access in the network architecture shown in Fig. 2.
Fig. 11 is a block diagram of a computer device for network edge computing according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application. It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other arbitrarily.
Embodiment 1
This embodiment provides a network edge computing method. As shown in Fig. 1, the method includes the following operation steps:
Step S11: an edge data node receives a service request;
The service requests involved herein may include either or both of TCP requests and UDP requests.
Step S12: the edge data node routes the service request, according to the virtual IP address and port information in the service request, to one or more containers corresponding to the service, and the container(s) process the request.
The virtual IP address in the service request may be the virtual IP address corresponding to the service. The port information may include information such as a port number or port identifier corresponding to the service.
The containers corresponding to the service include containers that have already been deployed on the edge data node and provide the processing functions of the service. The containers may be deployed on one or more servers of the edge data node.
Step S12 can be divided into the following operations:
Step S12a: querying, from the mapping relationship between the public ports of the edge data node and the virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the received service request, and sending the received service request to the queried public port;
Step S12b: after the public port receives the service request, routing the service request to one or more containers corresponding to the service according to a load balancing mechanism, generally to a container deployed on a server with a lighter load. (A minimal sketch of these two steps is given below.)
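The following is a minimal sketch of steps S12a and S12b, assuming an in-memory mapping table and a simple least-loaded policy; the names `PORT_MAP`, `CONTAINERS`, and `route_request`, and all addresses and load figures, are illustrative and not taken from the patent.

```python
# Step S12a/S12b sketch: look up the public port for the (virtual IP, port)
# pair carried in the request, then pick a container of the service on the
# most lightly loaded server. All names and data are illustrative.

# (virtual IP, virtual port) -> public port, configured in advance
PORT_MAP = {("10.0.0.8", 9000): 7000}

# public port -> containers of the service, each with a load figure
CONTAINERS = {
    7000: [
        {"id": "svc-a-1", "server": "server-1", "load": 0.72},
        {"id": "svc-a-2", "server": "server-2", "load": 0.31},
    ],
}

def route_request(vip: str, vport: int) -> dict:
    """Step S12a: resolve the public port; step S12b: pick a container."""
    public_port = PORT_MAP.get((vip, vport))
    if public_port is None:
        raise LookupError("no public port mapped for this virtual IP/port")
    containers = CONTAINERS[public_port]
    # Route to the container on the most lightly loaded server.
    return min(containers, key=lambda c: c["load"])

if __name__ == "__main__":
    print(route_request("10.0.0.8", 9000))  # -> svc-a-2 on server-2
```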
In this embodiment, the edge data node that performs the operations may be one or more edge data nodes in the edge data node cluster corresponding to the virtual IP address and port information, or one or more edge data nodes, selected based on a load balancing strategy, in the edge data node cluster corresponding to the virtual IP address and port information.
As can be seen from the above description, with the technical solution of this embodiment, users can initiate service requests directly to edge data nodes using pre-obtained virtual IP address and port information, realizing edge computing of services without the involvement of the service provider.
In addition, on the basis of the above method, the edge data node may also perform the following operations:
The edge data node may transmit the virtual IP address and port information to the initiator of the service request in advance. In this way, the initiator of the service request can initiate a service request to the edge data node according to the received virtual IP address and port information.
The edge data node may also transmit the virtual IP address and port information to a third party in advance, where the third party at least includes a third party trusted by the initiator of the service request. In this way, the initiator of the service request can obtain the virtual IP address and port information through the third party and then initiate a service request to the edge data node using the obtained virtual IP address and port information.
As is clear from the above, the edge data node can process the service request because containers corresponding to the service have already been deployed within the edge data node. Therefore, on the basis of the above operations, the method may further include an operation of creating the containers corresponding to the service, as follows:
Step a: the edge data node receives a service creation request, which at least includes container configuration information for creating the service.
Herein, the container configuration information may include any one or more of the number of containers, container resource usage information, and the container image address.
Step b: the edge data node creates, according to the container configuration information, the container(s) corresponding to the service on servers within this edge data node.
For example, the edge data node may select, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and create the container(s) corresponding to the service on the selected servers according to the container image address. Since one or more containers can be created on a single server within the edge data node, the number of selected servers is less than or equal to the number of containers.
In addition, when creating the containers corresponding to the service, the edge data node may also use the pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container. The pre-configured virtual IP address and port information of the service, and the public port corresponding to them, are delivered to the edge data node by the management center, or are configured by the edge data node autonomously.
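A sketch of steps a and b is shown below, under assumed data structures for the server inventory and the container configuration (count, resources, image address). The `Server` and `ContainerConfig` classes, the round-robin placement, and the example image address are illustrative, not the patent's actual implementation.

```python
# Sketch of creating a service's containers on an edge data node: pick servers
# whose free resources satisfy the container's needs, then place one container
# per replica using the image address. Number of servers <= number of containers.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    free_cpu: float   # cores
    free_mem: int     # MiB

@dataclass
class ContainerConfig:
    count: int        # number of containers (replicas)
    cpu: float        # cores required per container
    mem: int          # MiB required per container
    image: str        # container image address

def create_service_containers(servers, cfg: ContainerConfig):
    fits = [s for s in servers if s.free_cpu >= cfg.cpu and s.free_mem >= cfg.mem]
    if not fits:
        raise RuntimeError("no server meets the container resource requirements")
    placements = []
    for i in range(cfg.count):
        srv = fits[i % len(fits)]  # several containers may share one server
        placements.append((srv.name, cfg.image, f"container-{i}"))
    return placements

servers = [Server("server-1", 4.0, 8192), Server("server-2", 4.0, 4096)]
cfg = ContainerConfig(count=3, cpu=2.0, mem=2048, image="registry.example.com/svc:1.0")
print(create_service_containers(servers, cfg))
```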
Embodiment 2
This embodiment provides a network edge computing apparatus, which mainly includes a first module and a second module.
The first module is configured to receive a service request.
The service requests received herein may include either or both of TCP and UDP service requests.
The second module is configured to route the service request, according to the virtual IP address and port information in the service request, to one or more containers corresponding to the service, where the container(s) process the request.
Specifically, the second module may query, according to the virtual IP address and port information in the received service request, from the mapping relationship between the public ports of this apparatus and the virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the received service request, and send the service request to the queried public port;
when the service request is received on the public port of any server within this apparatus, the service request may be routed to one or more containers corresponding to the service according to a cluster load balancing mechanism.
In addition, the above apparatus may also have the function of creating the containers corresponding to the service. In this case, a third module may be added. The third module receives a service creation request and obtains, from the service deployment request, the container configuration information for creating the service.
In this embodiment, the container configuration information of the edge computing service includes at least one or more of the number of containers, container resource usage information, and the container image address.
Specifically, containers may be created by selecting, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and creating the container(s) corresponding to the service on the selected servers according to the container image address. One or more containers can be created on a single server, so the total number of selected servers is less than or equal to the number of containers.
The third module may also use the pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container. The pre-configured public port corresponding to the virtual IP address and port information of the service may be delivered to this apparatus by the management center, or configured by this apparatus autonomously.
As can be seen from the above, the virtual IP address and port information of the service contained in the service request, i.e. the port accessed by the user (for example, when the initiated service request is a TCP request, the port accessed by the user includes a TCP port; when it is a UDP request, the port accessed by the user includes a UDP port), uniquely identifies a user. The edge data node routes this user's service request to one or more relatively idle containers for processing, which greatly improves the user's service experience.
Embodiment 3
This embodiment takes the edge computing network architecture shown in Fig. 2 as an example to describe one implementation of Embodiments 1 and 2. As can be seen from Fig. 2, the overall architecture for implementing edge computing includes at least two parts: a management center and edge data nodes.
The management center is used to control and manage all edge data nodes, to send creation and management commands to each edge data node, and to collect information reported by each edge data node.
The edge data nodes are used to process user requests. Each node can be regarded as a self-managed cluster that performs load balancing on received user requests and performs horizontal scaling, automatic migration, and the like for the containers of this edge data node, thereby providing high availability.
The containers involved herein may include, but are not limited to, docker containers.
In the network architecture shown in Fig. 2, the management center, as shown in Fig. 3, may include the following components:
Application programming interface (API) server: mainly receives service deployment requests for edge data nodes; determines, according to the configuration information carried in the received service deployment request and the server information of each node stored in the database, that corresponding containers are to be created on designated nodes; and sends the corresponding operation commands to the cluster management module of the edge data node, which may instruct that module to perform any one or more of creating, destroying, scaling, and migrating local containers.
Specifically, the management center may send a service creation request to a designated edge data node according to the configuration information of the service, where the service creation request contains the container configuration information of the containers to be deployed on that designated edge data node. When sending the service creation request, the management center may configure the virtual IP address and port information of the service to be created as well as the corresponding public port, and deliver the configured virtual IP address and port information and the corresponding public port to the edge data node.
In addition, the management center may return the configured virtual IP address and port information of the service to the initiator of the service deployment request, to indicate that users of the service can initiate service requests using that virtual IP address and port information. Herein, the initiator of the service deployment request may include the service provider. The configured virtual IP address and port information of the service may be returned in various ways; for example, it may be returned to the initiator after the operation of configuring the virtual IP address and port information of the TCP service is completed, or when the response to the service creation request returned by the edge data node is received and confirms that the edge data node has successfully created the service. It may be returned to the initiator of the service deployment request directly or indirectly. When sent indirectly, the virtual IP address and port information of the service may be sent to a third party designated by the initiator of the service deployment request; the third party and the initiator may then interact with each other to transfer the virtual IP address and port information of the service.
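The sketch below illustrates how the API server might handle a deployment request: allocate a free virtual IP/port and a free public port on the chosen node, push the creation command to that node, and return the virtual IP/port to the service provider. The helper objects `db` and `node_client` and their methods are assumed stubs invented for illustration.

```python
# Sketch of the management-center API server handling a service deployment
# request. The db / node_client helpers are illustrative stand-ins for the
# management center's database and its channel to the node's cluster manager.

def handle_deploy_request(db, node_client, deploy_req):
    node = deploy_req["node"]                        # designated edge data node
    vip, vport = db.allocate_virtual_endpoint(node)  # free virtual IP and port
    public_port = db.allocate_public_port(node)      # free public port
    node_client.create_service(
        node=node,
        vip=vip, vport=vport, public_port=public_port,
        containers=deploy_req["container_config"],   # count / resources / image
    )
    db.save_mapping(node, (vip, vport), public_port)
    # Returned to the service deployment initiator (e.g. the service provider),
    # directly or via a designated third party.
    return {"virtual_ip": vip, "virtual_port": vport}
```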
Log center: collects users' log data, and may process the log data before storing it so that users can view it later; it may also analyze user logs, mine abnormal data, and raise alerts for special log entries.
Monitoring center: sends monitoring requests to the cluster monitoring tools of the edge data nodes. The monitoring requests may be used to collect container state information and server state information in the clusters of the edge data nodes, and may be sent periodically to the cluster monitoring tools of each edge data node. Container state information may include container occupancy ratios (for example, container memory, CPU, and network usage), and server state information may include server operating load status.
Database: mainly used to store information such as user information, cluster information, and server information on the edge data nodes. The user information at least includes user identifiers (for example, user IPs). The cluster information at least includes the status of the cluster and the number of tasks running in the cluster. The server information on the edge data nodes at least includes server identifiers and server load status.
The above database may also store the configuration information of a service after the edge data node has finished creating it.
In the network architecture shown in Fig. 2, an edge data node corresponds to the network edge computing apparatus described in Embodiment 2. In this embodiment, an edge data node, as shown in Fig. 4, may include the following components, where the cluster management module, the database cache module, the virtual server cluster module, and the service request processing module all use redundant designs to avoid single points of failure.
Cluster management module (which integrates the first module and the third module of Embodiment 2): responsible, according to the operation commands delivered by the management center, for creating, deleting, and migrating containers within this node, for managing the servers within this node, and for collecting and reporting the server state information of this node to the management center. The cluster management modules of different nodes may be independent of each other, and each node is a self-managed cluster, as shown in Fig. 5. This guarantees finer control granularity and avoids maintaining complex relationships through labels. For example, the containers within each node are managed only by the cluster management module of that node, so there is no need to store correspondences between nodes and containers. Likewise, the node servers within each node are managed only by that node's cluster management module, so associations between nodes and node servers need not be recorded either. Of course, multiple nodes may also jointly form one self-managed cluster.
Building clusters on a per-node basis, with the cluster management modules of different nodes independent of each other, also allows the liveness of containers and servers to be detected more accurately. If all machine rooms used a single cluster with the cluster management module deployed at a central node, the network environments from the central node to the edge machine rooms would vary, making it quite possible to misjudge the liveness of containers and nodes and perform erroneous migrations. Another benefit of limiting a cluster system to managing a single node is that, once a server is associated with a public port, all servers in the cluster need to listen on that public port; by building separate clusters for different nodes, servers on nodes that do not need it can avoid listening on that public port.
In addition, each node may maintain its own set of mappings between the containers corresponding to services and public ports, i.e. a one-to-one correspondence between the containers on the servers within the node and public ports. The mapping between a node's service containers and public ports may be configured by the management center side, or by the cluster management module on the node side. Since the cluster management modules of different nodes are independent of each other, the port mappings maintained within each node do not affect one another. As shown in Fig. 6, each server corresponds to one container, and that container can only be used by that server within the cluster; the port to which the container is mapped is called the public port. In this way, different applications (also called services) can use the same port. For example, in Fig. 6, both server 1 and server 2 are configured to call port 80 inside the container, but as the port mapping shows, when mapped to concrete public ports, server 1 uses port 8000 and server 2 uses port 8001. Moreover, when a container migrates from one server to another, the server internally maintains the change of the container's IP while the public port it is mapped to does not change, so upper layers do not need to care about container migration. The mappings among applications, containers, and public ports may be stored in the database of the management center; for example, for a TCP service, the mapping between the virtual IP address and port information and the public port may be stored. (An illustrative port-mapping table is shown below.)
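The snippet below illustrates the Fig. 6 mapping with invented addresses: two servers each expose a container's internal port 80 through a different node-level public port, and a migration only rewrites the backend entry while the public port stays fixed.

```python
# Illustration of the Fig. 6 port mapping. Container IPs and server names are
# invented; the point is that the public port is the stable handle.
port_map = {
    # public port -> (server, container IP, container port)
    8000: {"server": "server-1", "container_ip": "172.17.0.2", "container_port": 80},
    8001: {"server": "server-2", "container_ip": "172.17.0.5", "container_port": 80},
}

def migrate(public_port: int, new_server: str, new_container_ip: str) -> None:
    """Container migration only rewrites the backend entry; callers keep
    using the same public port, so upper layers are unaffected."""
    port_map[public_port].update(server=new_server, container_ip=new_container_ip)

migrate(8001, "server-3", "172.17.0.9")
print(port_map[8001])   # still reached via public port 8001
```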
Runtime module: responds to edge computing service requests initiated by users by running the different containers.
Database cache module: the edge data node accesses the database of the central cluster (i.e. the database of the management center described above); when the cache misses, it then queries the database of the central cluster.
Virtual server cluster module: provides high reliability for service request processing.
Service request processing module (corresponding to the second module in Embodiment 2): receives a service request, routes it, according to the virtual IP address and port information in the service request, to one or more containers corresponding to the service, and the container(s) process it. That is, it is responsible for the mapping between the virtual IP address and port information of services and the public ports, and routes the edge computing request to different containers on the edge data node according to the virtual IP address and port information used by the user initiating the request.
The practical application of the above edge computing network architecture is described below.
First, based on the above network architecture, a service requested by a user can be composed of servers deployed on multiple nodes, and each server is a set of containers; the principle is shown in Fig. 7. The number of containers on the same server is called the server's replica count; the server internally guarantees that the specified number of replica containers are running and distributed across different servers, as shown in Fig. 8. In this way, when a user initiates a service request to an edge data node, the server internally sends the request to different containers for processing according to a load balancing mechanism. This process is transparent to the user, so from the user's point of view, only the server is visible.
Specifically, the edge data node listens for service requests initiated by users on the public port, and the cluster management module then routes the request, through the cluster's load balancing processing, to one or more containers corresponding to the service, generally to a container deployed on a server with a lighter load.
Based on the above edge computing network architecture, users can request the creation of various types of services, such as TCP services and UDP services.
This embodiment provides a network edge computing method including the following operations:
an edge data node receives a service request, queries, according to the virtual IP address and port information in the service request, from the mapping relationship between the public ports of the edge data node and the virtual IP address and port information of services, the public port corresponding to the IP address and port information in the service request, and sends the service request to the queried public port;
after the public port of any server within the edge data node receives the service request, the service request is routed, according to a cluster load balancing mechanism, to one of the containers corresponding to the service, and that container performs the corresponding processing.
Before the client initiates a service request to the edge node, it may send an original service request to the service provider, and the service provider returns the virtual IP address and port information. Alternatively, the service provider delivers the virtual IP address and port information to the client in advance, and the client obtains the virtual IP address and port locally.
The process of the edge data node receiving a user-initiated service request may be as follows: the user initiates a service request, using the stored virtual IP address and port information, to the management center (which may also be a network element that implements scheduling or routing for edge computing, for example deployed on the service provider's equipment); the management center determines, according to the virtual IP address and port information of the user's service request, the edge data node corresponding to this virtual IP address and port information, and sends the IP address of the determined edge data node to the user. The user then initiates a service request to that edge data node, where the service request contains the virtual IP address and port information. (A client-side sketch of this two-step flow is given below.)
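A minimal client-side sketch of this two-step access flow is shown below, assuming a plain TCP transport and a JSON exchange with the scheduling element; the wire format and addresses are invented for illustration and are not defined by the patent.

```python
# Sketch of the two-step access flow: ask the scheduling element (management
# center) which edge data node serves the stored virtual IP/port, then send the
# actual service request to that node. Wire format is illustrative only.
import json
import socket

def query_edge_node(mgmt_addr, vip, vport):
    """Step 1: the management center returns the IP of an edge data node."""
    with socket.create_connection(mgmt_addr) as s:
        s.sendall(json.dumps({"vip": vip, "vport": vport}).encode())
        return json.loads(s.recv(4096).decode())["node_ip"]

def send_service_request(node_ip, vip, vport, payload: bytes):
    """Step 2: the request sent to the node still carries the virtual IP/port,
    which the node uses to find the public port and route to a container."""
    with socket.create_connection((node_ip, vport)) as s:
        s.sendall(payload)
        return s.recv(4096)
```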
When implementing network edge computing of a service based on the above method, the corresponding service may also be created in advance on the network edge data node: the edge data node receives a service creation request sent by the management center, which may include container configuration information for creating the service. The edge data node then creates, according to the container configuration information contained in the received service creation request, the container(s) corresponding to the service on servers within this edge data node, after which this edge data node can provide the service to users. The container configuration information involved herein may include any one or more of the number of containers, container resource usage information, and the container image address.
Based on the above method, in another optional embodiment, the process by which the edge data node creates the containers corresponding to the service on servers within this edge data node according to the container configuration information may refer to the following operations:
the edge data node may select, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information;
the container(s) corresponding to the service are created on the selected servers according to the container image address, where one or more containers can be created on a single server, so the number of selected servers may be less than or equal to the number of containers.
In another optional embodiment, when the edge data node creates the containers corresponding to the service on servers within this edge data node according to the container configuration information, the creation may be performed as follows:
the edge data node may use the pre-configured virtual IP address and port information of the service, and the public port corresponding to that virtual IP address and port information, to create the containers corresponding to the service on servers within this edge data node. In this embodiment, the pre-configured virtual IP address and port information of the service, and the public port corresponding to them, may be pre-configured by the management center and delivered to the edge data node, configured autonomously by the edge data node, or configured by the service provider through an interface; this is not specifically limited herein. The virtual IP address and port information, taken as a whole, corresponds to one public port.
This embodiment provides another network edge computing method, including the following operations:
the management center receives a service deployment request and obtains, according to the service deployment request, the configuration information for creating the service; the configuration information of the edge computing service at least includes the designated edge data node information and the container configuration information for creating the service;
the management center sends, according to the configuration information of the service, a service creation request to the designated edge data node, where the service creation request contains the container configuration information of the containers to be deployed on the designated edge data node.
The container configuration information has been described above and will not be repeated here.
Based on the above method, in an optional embodiment, the method may further perform the following operation:
the management center stores the configuration information of the service after the edge data node has finished creating it.
Based on the above method, in an optional embodiment, when the management center sends the service creation request to the designated edge data node, it may also configure for the edge data node the virtual IP address and port information of the service and the public port corresponding to the virtual IP address and port information of the service, and deliver the virtual IP address and port information of the service and the corresponding public port to the edge data node.
Based on the above method, in another optional embodiment, the management center may also return the virtual IP address and port information of the created service to the initiator of the service deployment request, for example the service provider. The virtual IP address and port information of the service is used to indicate that users of the service can initiate service requests using this virtual IP address and port information.
Based on the above method, in another optional embodiment, the management center may also receive a service request, sent by the user side, that contains the virtual IP address and port information. The management center determines the edge data node information corresponding to the virtual IP address and port information contained in the service request and returns the determined edge data node information to the user, where the edge data node information at least includes the IP address of the edge data node. In this embodiment, the edge data node information determined by the management center may be the IP address of the edge data node geographically and/or logically closest to the user.
The edge data node logically closest to the location of the initiator of the service request may include an edge data node belonging to the same operator as the one to which the initiator of the service request belongs and/or the edge data node with the smallest data transmission delay. For example, the operator to which the initiator of the service request belongs may be determined, and an edge data node belonging to that operator may be selected as the logically closest edge data node. Alternatively, the edge data node with the smallest data transmission delay may be chosen as the logically closest edge data node. It is also possible to pick, among the edge data nodes under the operator to which the initiator of the service request belongs, the edge data node with the smallest data transmission delay as the logically closest one. The data transmission delay includes node processing delay, queuing delay, transmission delay, propagation delay, and so on.
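A sketch of one possible "logically closest" node selection is given below: prefer nodes on the requester's operator (ISP), then break ties by the smallest measured transmission delay. The node records, ISP labels, and delay figures are illustrative only.

```python
# Sketch of picking the "logically closest" edge data node: prefer nodes on
# the requester's operator, then take the smallest measured delay; fall back
# to all candidates if no node shares the requester's operator.
nodes = [
    {"ip": "203.0.113.10", "isp": "ISP-A", "delay_ms": 18.0},
    {"ip": "203.0.113.20", "isp": "ISP-B", "delay_ms": 9.0},
    {"ip": "203.0.113.30", "isp": "ISP-A", "delay_ms": 12.0},
]

def pick_node(user_isp: str, candidates):
    same_isp = [n for n in candidates if n["isp"] == user_isp]
    pool = same_isp or candidates          # fall back if no same-ISP node
    return min(pool, key=lambda n: n["delay_ms"])

print(pick_node("ISP-A", nodes)["ip"])     # -> 203.0.113.30
```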
Taking practical applications as examples, the processes of an edge data node creating a service, a user initiating a service request to an edge data node, and an edge data node implementing service access are described below.
An embodiment of the present invention provides a flow for creating a service in an edge computing network. The process mainly includes the following operations:
Step S1: a user (here the service provider, who may also be called an administrator user) sends a deploy application request (deploy app) to the API server of the management center;
The deploy application request may contain the type information of the service to be deployed and the location information of the deployment (such as node information).
Step S2: the API server queries, from the database of the management center, the available virtual IP addresses and port information and the available public ports on the node where the service is to be deployed, and allocates to the service a virtual IP address and port information available on that edge data node as well as an available public port; for example, a virtual IP address with port 9000 and public port 7000 are allocated on node 1.
The available virtual IP address and port information may be a free virtual IP address and a free port, i.e. not occupied by other users or services. The available public port may be a free port not occupied by other services.
Step S3: the API server sends a creation request to the cluster management module of edge data node 1, and the cluster management module carries out the actual creation.
The creation request sent by the API server contains the virtual IP address and port information allocated for the service, the corresponding public port information, and the container configuration information.
Herein, the container information may include any one or more of the number of containers (which may also be called the number of replicas of the server), container resource usage information, and container image address information.
Step S4: the cluster management module selects several servers according to constraints such as CPU and memory and according to the cluster load balancing mechanism, and creates the containers running the service on the selected servers.
The cluster management module selects servers according to the number of containers in the container configuration information, i.e. the number of selected servers may be less than or equal to the number of containers. When selecting servers, it chooses, according to the container resource usage information in the container configuration information, servers that can satisfy that resource usage. After the servers are selected, the containers are created according to the container image address in the container configuration information.
Step S5: the API server adds the mapping between the service's virtual IP address and port information and the public port to the service request processing module of edge data node 1, and has the service request processing module listen on the public port corresponding to the virtual IP address and port information and forward requests to that public port.
Step S6: the API server records the mapping between the service's virtual IP address and port information and the public port in the database.
Step S7: the API server returns the service's virtual IP address and port information to the service provider, who delivers it to the client. The client needs to record the virtual IP address and port information and can use it directly for subsequent access; alternatively, the service provider returns the corresponding virtual IP address and port information to the user based on the user's original access request (for example, a domain name access request).
In the above steps, when allocating the edge data node's available virtual IP address and port information and the corresponding available public port for the created service, allocation by the API server on the management center side is used, but this is only an example. In practical applications, the edge data node side may also autonomously allocate for the service an available virtual IP address and port information and the corresponding available public port; after the edge data node allocates them, it may also report the service's virtual IP address and port information and the corresponding public port to the management center.
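The sketch below illustrates the allocation and bookkeeping of steps S2 and S6: find a free virtual IP/port pair and a free public port, then record the mapping so the node's service request processing module can listen and forward. The address ranges and the in-memory "database" are invented for illustration.

```python
# Sketch of steps S2/S6: allocate a free virtual endpoint and public port,
# then record the (virtual IP, port) -> public port mapping.
used_endpoints = {("10.0.0.8", 9000)}          # already allocated (vip, port)
used_public_ports = {7000}
mapping_db = {}                                 # (vip, vport) -> public port

def allocate(vips, vport_range, public_range):
    for vip in vips:
        for vport in vport_range:
            if (vip, vport) not in used_endpoints:
                for pport in public_range:
                    if pport not in used_public_ports:
                        used_endpoints.add((vip, vport))
                        used_public_ports.add(pport)
                        mapping_db[(vip, vport)] = pport
                        return vip, vport, pport
    raise RuntimeError("no free virtual endpoint or public port")

print(allocate(["10.0.0.8", "10.0.0.9"], range(9000, 9100), range(7000, 7100)))
# e.g. ('10.0.0.8', 9001, 7001)
```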
After a service has been created as described above, users can initiate service requests to edge data nodes. Fig. 9 shows a schematic flowchart of a method for implementing a service request provided in an exemplary embodiment. As can be seen from Fig. 9, the method mainly includes the following operations:
Step S91: the user sends a service request carrying the virtual IP address and port information to the management center; after receiving the service request, the management center returns the IP of an edge data node to the user according to a local scheduling algorithm;
In this step, the IP of the edge data node returned to the user may be obtained by determining, according to the virtual IP address and port information of the user's service request and a load balancing mechanism, the edge data node corresponding to this virtual IP address and port information, and feeding back the IP of the determined edge data node to the user.
There may be one or more edge data nodes corresponding to this virtual IP address and port information. They include the edge data node geographically and/or logically closest to the initiator of the service request. The process of determining the edge data node corresponding to this virtual IP address and port information has been described above and is not repeated here.
Step S92: the user initiates a service request to the edge data node according to the received IP of that node, where the service request contains the virtual IP address and port information; the container group server within the edge data node that receives the service request provides the corresponding service to the user.
As can be seen from the above, the management center provides a user request scheduling service and may store the correspondence information between each edge data node, the virtual IP address and port information, and the public ports.
Fig. 10 is a schematic flowchart of a network edge computing method provided in this embodiment. The method mainly includes the following operations:
Step S101: the user directly accesses (i.e. sends a service request to) the service request dispatch server of the edge data node using the stored virtual IP address and port information; the service request dispatch server here can perform layer-4 load balancing.
Step S102: the service request dispatch server looks up the corresponding public port according to the virtual IP address and port information requested by the user and checks whether the request is legitimate; if it is legitimate, proceed to step S103; if it is illegitimate, prompt the user that the operation is invalid or illegal and end this flow.
This step may also be omitted, in which case there is no need to access the management center database.
Step S103: the service request dispatch server sends the service request to the node's public port (i.e. the public port found by the lookup).
Step S104: the cluster routes the service request to a container under the designated server according to the load balancing mechanism.
In this step, the cluster may route the service request, according to the load balancing mechanism, to an idle container corresponding to the service, or to any non-idle container.
Step S105: the container processes the user's request and returns the result to the user. (A sketch of this node-side dispatch flow is given below.)
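The snippet below sketches steps S101–S105 on the node side: check the requested virtual IP/port against the mapping (S102), forward to the resolved public port (S103), let the cluster pick a container (S104, here a round-robin stand-in for the load balancing mechanism), and return the container's reply (S105). All names and data are illustrative.

```python
# Node-side dispatch sketch for steps S101-S105.
PORT_MAP = {("10.0.0.8", 9000): 7000}          # (virtual IP, port) -> public port
BACKENDS = {7000: ["container-a", "container-b"]}
_rr = {}                                        # round-robin cursor per public port

def dispatch(vip, vport, payload):
    public_port = PORT_MAP.get((vip, vport))
    if public_port is None:                     # S102: illegitimate request
        return "invalid or illegal operation"
    backends = BACKENDS[public_port]            # S103: forward to public port
    i = _rr.get(public_port, 0)                 # S104: round-robin stand-in for
    _rr[public_port] = (i + 1) % len(backends)  #       the load balancing mechanism
    chosen = backends[i]
    return f"{chosen} handled {len(payload)} bytes"   # S105: container replies

print(dispatch("10.0.0.8", 9000, b"hello"))
```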
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed, implements the steps of the network edge computing method described above. The specific manner in which the computer program is executed has been described in detail in the method embodiments and is not elaborated here.
Fig. 11 is a block diagram of a computer device 110 for network edge computing according to an exemplary embodiment. For example, the computer device 110 may be provided as a server. Referring to Fig. 11, the computer device 110 includes a processor 111; the number of processors may be one or more as needed. The computer device 110 further includes a memory 112 for storing instructions executable by the processor 111, such as application programs. The number of memories may be one or more as needed, and one or more application programs may be stored thereon. The processor 111 is configured to execute the instructions so as to perform the network edge computing method described above.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, an apparatus (device), or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program code. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data), including but not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
This application is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce computer-implemented processing, and the instructions executed on the computer or other programmable apparatus thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Herein, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, such that an article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such an article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the article or device that includes that element.
Although preferred embodiments of this application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of this application.
Obviously, those skilled in the art can make various changes and variations to this application without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include them.
Industrial Applicability
This document provides a network edge computing method and apparatus. The edge data node that performs the operations may be one or more edge data nodes in the edge data node cluster corresponding to the virtual IP address and port information, or one or more edge data nodes, selected based on a load balancing strategy, in the edge data node cluster corresponding to the virtual IP address and port information. With this document, users can initiate service requests directly to edge data nodes using pre-obtained virtual IP address and port information, realizing edge computing of services without the involvement of the service provider.

Claims (18)

  1. A network edge computing method, the method comprising:
    an edge data node receiving a service request;
    the edge data node routing the service request, according to virtual IP address and port information in the service request, to one or more containers corresponding to a service, the container(s) processing the request.
  2. The method according to claim 1, wherein routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request comprises:
    querying, from a mapping relationship between public ports of the edge data node and virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the service request, and sending the service request to the queried public port;
    after the public port receives the service request, routing the service request to one or more containers corresponding to the service according to a load balancing mechanism.
  3. The method according to claim 1, wherein the method further comprises:
    transmitting the virtual IP address and port information to an initiator of the service request in advance;
    or transmitting the virtual IP address and port information to a third party in advance, wherein the third party at least comprises a third party trusted by the initiator of the service request.
  4. The method according to claim 1, wherein
    the edge data node is one or more edge data nodes in an edge data node cluster corresponding to the virtual IP address and port information;
    or the edge data node is one or more edge data nodes, selected based on a load balancing strategy, in the edge data node cluster corresponding to the virtual IP address and port information.
  5. The method according to any one of claims 1 to 4, wherein the service request comprises a TCP request and/or a UDP request.
  6. The method according to claim 5, wherein before the edge data node receives the service request, the method further comprises:
    the edge data node receiving a service creation request, the service creation request at least comprising container configuration information for creating the service;
    the edge data node creating, according to the container configuration information, the container(s) corresponding to the service on servers within this edge data node.
  7. The method according to claim 6, wherein the container configuration information at least comprises any one or more of the following:
    a number of containers, container resource usage information, and a container image address.
  8. The method according to claim 7, wherein the edge data node creating, according to the container configuration information, the container(s) corresponding to the service on servers within this edge data node comprises:
    the edge data node selecting, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and creating the container(s) corresponding to the service on the selected servers according to the container image address.
  9. The method according to claim 6, the method further comprising:
    the edge data node using a pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container;
    wherein the pre-configured virtual IP address and port information of the service, and the public port corresponding to the virtual IP address and port information, are delivered to the edge data node by a management center, or are configured by the edge data node autonomously.
  10. A network edge computing apparatus, comprising:
    a first module configured to receive a service request;
    a second module configured to route the service request, according to virtual IP address and port information in the service request, to one or more containers corresponding to a service, the container(s) processing the request.
  11. The apparatus according to claim 10, wherein the second module routing the service request to one or more containers corresponding to the service according to the virtual IP address and port information in the service request comprises:
    querying, according to the virtual IP address and port information in the service request, from a mapping relationship between public ports of this apparatus and virtual IP address and port information of services, the public port corresponding to the virtual IP address and port information in the service request, and sending the service request to the queried public port;
    when the service request is received on the public port of any server within this apparatus, routing the service request to one or more containers corresponding to the service according to a cluster load balancing mechanism.
  12. The apparatus according to claim 10, the apparatus further comprising:
    a third module configured to receive a service creation request, the service creation request at least comprising container configuration information for creating the service, and to create, according to the container configuration information, the container(s) corresponding to the service on servers within this apparatus.
  13. The apparatus according to claim 12, wherein the container configuration information at least comprises any one or more of the following:
    a number of containers, container resource usage information, and a container image address.
  14. The apparatus according to claim 13, wherein the third module creating, according to the container configuration information, the container(s) corresponding to the service on servers within this apparatus comprises:
    selecting, according to the container resource usage information, multiple servers whose available resources meet the container resource usage information, and creating the container(s) corresponding to the service on the selected servers according to the container image address.
  15. The apparatus according to claim 12 or 13, wherein
    the third module is further configured to use a pre-configured public port corresponding to the virtual IP address and port information of the service to configure a corresponding public port for each created container;
    wherein the pre-configured public port corresponding to the virtual IP address and port information of the service is delivered to this apparatus by a management center, or is configured by this apparatus autonomously.
  16. The apparatus according to any one of claims 10 to 14, wherein
    the service request comprises a TCP request and/or a UDP request.
  17. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed, implements the steps of the method according to any one of claims 1-9.
  18. A computer device comprising a processor, a memory, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-9.
PCT/CN2020/111469 2019-09-19 2020-08-26 Network edge computing method, apparatus, device, and medium WO2021052132A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/761,707 US20220394084A1 (en) 2019-09-19 2020-08-26 Method and apparatus for computing network edge, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910885903.5 2019-09-19
CN201910885903.5A CN112532668B (zh) 2019-09-19 Network edge computing method, apparatus, and medium

Publications (1)

Publication Number Publication Date
WO2021052132A1 true WO2021052132A1 (zh) 2021-03-25

Family

ID=74883330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111469 WO2021052132A1 (zh) 2019-09-19 2020-08-26 一种网络边缘计算方法、装置、设备及介质

Country Status (3)

Country Link
US (1) US20220394084A1 (zh)
CN (2) CN112532668B (zh)
WO (1) WO2021052132A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113612866A (zh) * 2021-08-04 2021-11-05 北京金山云网络技术有限公司 Address detection method and apparatus, computer device, and storage medium
CN115225450A (zh) * 2022-09-20 2022-10-21 南京艾泰克物联网科技有限公司 Multi-data-room virtualization cluster management system based on edge computing
CN116055496A (zh) * 2022-12-30 2023-05-02 广州趣研网络科技有限公司 Monitoring data collection method and apparatus, electronic device, and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179190B (zh) * 2021-06-29 2022-01-07 深圳智造谷工业互联网创新中心有限公司 Edge controller, edge computing system, and configuration method therefor
CN113285885B (zh) * 2021-07-23 2021-12-17 阿里云计算有限公司 Service-mesh-based edge traffic control method, device, and storage medium
CN113626371A (zh) * 2021-08-27 2021-11-09 深圳供电局有限公司 Edge computing system and method based on a hybrid x86 and ARM architecture
CN114866421B (zh) * 2022-05-13 2024-05-14 西安广和通无线通信有限公司 Port management method, apparatus, device, and computer-readable storage medium
CN115242754A (zh) * 2022-07-08 2022-10-25 京东科技信息技术有限公司 Information return method, request response method, message sending method, and apparatus
US11915059B2 (en) * 2022-07-27 2024-02-27 Oracle International Corporation Virtual edge devices
CN115991223B (zh) * 2023-03-23 2023-06-27 北京全路通信信号研究设计院集团有限公司 Rail transit computing system and method
CN116743845B (zh) * 2023-08-15 2023-11-03 中移(苏州)软件技术有限公司 Edge service discovery method, apparatus, node device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067890A (zh) * 2018-08-20 2018-12-21 广东电网有限责任公司 CDN node edge computing system based on docker containers
CN109640319A (zh) * 2019-01-16 2019-04-16 腾讯科技(深圳)有限公司 Scheduling method and apparatus based on access information, and electronic device
CN109802934A (zh) * 2018-12-13 2019-05-24 中国电子科技网络信息安全有限公司 MEC system based on a container cloud platform
US20190243438A1 (en) * 2018-02-08 2019-08-08 Korea Advanced Institute Of Science And Technology Method and system for deploying dynamic virtual object for reducing power in mobile edge computing environment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101699801B (zh) * 2009-10-30 2011-09-28 孙喜明 Data transmission method and virtual peer-to-peer network system for transmitting data
US11438278B2 (en) * 2015-06-29 2022-09-06 Vmware, Inc. Container-aware application dependency identification
CN106020930B (zh) * 2016-05-13 2019-07-23 深圳市中润四方信息技术有限公司 Application management method and system based on application containers
CN105847108B (zh) * 2016-05-24 2019-01-15 中国联合网络通信集团有限公司 Inter-container communication method and apparatus
CN106067858B (zh) * 2016-05-24 2019-02-15 中国联合网络通信集团有限公司 Inter-container communication method, apparatus, and system
CN108737468B (zh) * 2017-04-19 2021-11-12 中兴通讯股份有限公司 Cloud platform service cluster, and construction method and apparatus
CN108958927B (zh) * 2018-05-31 2023-04-18 康键信息技术(深圳)有限公司 Container application deployment method and apparatus, computer device, and storage medium
CN108833163B (zh) * 2018-06-13 2020-08-28 平安科技(深圳)有限公司 Linux virtual server creation method and apparatus, computer device, and storage medium
CN109725949B (zh) * 2018-12-25 2021-10-19 南京邮电大学 Mobile edge computing offloading system and method based on mobile agents

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190243438A1 (en) * 2018-02-08 2019-08-08 Korea Advanced Institute Of Science And Technology Method and system for deploying dynamic virtual object for reducing power in mobile edge computing environment
CN109067890A (zh) * 2018-08-20 2018-12-21 广东电网有限责任公司 CDN node edge computing system based on docker containers
CN109802934A (zh) * 2018-12-13 2019-05-24 中国电子科技网络信息安全有限公司 MEC system based on a container cloud platform
CN109640319A (zh) * 2019-01-16 2019-04-16 腾讯科技(深圳)有限公司 Scheduling method and apparatus based on access information, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CMCC: "Use Case of Edge Computing and Radio Network Exposure", 3GPP DRAFT; R3-186040_USE CASE OF EDGE COMPUTING AND RADIO NETWORK EXPOSURE, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG3, no. Chengdu, China; 20181008 - 20181012, 29 September 2018 (2018-09-29), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP051529305 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113612866A (zh) * 2021-08-04 2021-11-05 北京金山云网络技术有限公司 Address detection method and apparatus, computer device, and storage medium
CN113612866B (zh) * 2021-08-04 2023-01-20 北京金山云网络技术有限公司 Address detection method and apparatus, computer device, and storage medium
CN115225450A (zh) * 2022-09-20 2022-10-21 南京艾泰克物联网科技有限公司 Multi-data-room virtualization cluster management system based on edge computing
CN116055496A (zh) * 2022-12-30 2023-05-02 广州趣研网络科技有限公司 Monitoring data collection method and apparatus, electronic device, and storage medium
CN116055496B (zh) * 2022-12-30 2024-04-05 广州趣研网络科技有限公司 Monitoring data collection method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
US20220394084A1 (en) 2022-12-08
CN112532668B (zh) 2022-08-02
CN112532675B (zh) 2023-04-18
CN112532668A (zh) 2021-03-19
CN112532675A (zh) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2021052132A1 (zh) Network edge computing method, apparatus, device, and medium
US10715485B2 (en) Managing dynamic IP address assignments
CA3033217C (en) Method for virtual machine to access physical server in cloud computing system, apparatus, and system
JP6073246B2 (ja) Large-scale storage system
US11611481B2 (en) Policy management method and system, and apparatus
JP7270755B2 (ja) Metadata routing in distributed systems
US20050108394A1 (en) Grid-based computing to search a network
CN112532669B (zh) Network edge computing method, apparatus, and medium
WO2012068867A1 (zh) Virtual machine management system and method for using same
CN114374696A (zh) Container load balancing method, apparatus, device, and storage medium
WO2021052129A1 (zh) Network edge computing method, apparatus, device, and medium
US11159607B2 (en) Management for a load balancer cluster
US9760370B2 (en) Load balancing using predictable state partitioning
WO2023207189A1 (zh) Load balancing method and system, computer storage medium, and electronic device
CN114500450B (zh) Domain name resolution method, device, and computer-readable storage medium
WO2021042845A1 (zh) Virtual local area network service management method and virtual local area network global management device
CN111294383B (zh) Internet of Things service management system
CN110213180B (zh) Network resource management method and apparatus, and cloud platform
US20230328137A1 (en) Containerized gateways and exports for distributed file systems
CN117130733A (zh) Data request adaptation method and apparatus for a data middle platform interfacing with a big data cluster

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20866061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20866061

Country of ref document: EP

Kind code of ref document: A1