CN115733746A - Service grid unit deployment method, device, equipment and storage medium - Google Patents

Service grid unit deployment method, device, equipment and storage medium Download PDF

Info

Publication number
CN115733746A
Authority
CN
China
Prior art keywords
service
cloud server
server node
container
control plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211398492.5A
Other languages
Chinese (zh)
Inventor
Wang Shaokun (王绍坤)
Huang Mingliang (黄明亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202211398492.5A
Publication of CN115733746A
Legal status: Pending

Abstract

The invention relates to a method, an apparatus, a device, and a storage medium for deploying a service grid unit. By sinking the deployment of a centralized agent unit onto a data processor, CPU, memory, and network bandwidth resources on the cloud server node are effectively saved, and the network delay caused by the system calls and context switches that deploying a service grid agent on the cloud server node would introduce is avoided.

Description

Service grid unit deployment method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for deploying a service grid unit.
Background
A service grid (Service Mesh) is an infrastructure layer in cloud services. It focuses on the reliable delivery of service requests among microservices and, on that basis, provides traffic control operations such as service discovery, load balancing, request routing, and rule configuration.
In practice, the service grid is usually implemented as a set of distributed lightweight network proxies deployed alongside the microservice applications through Sidecar injection, transparently to the application program. This allows users to move service communication and the related management and control functions out of the services and onto the infrastructure-layer Sidecars, completely decoupling them from the business system.
Although a distributed service grid can meet application requirements such as service governance, service decoupling, and configuration-based traffic control, it also brings the following problems. Because every service container deployed in the service grid must be injected with its own Sidecar, CPU and memory resources on the cloud server node are heavily occupied and system power consumption increases. At the same time, the distributed service grid introduces Sidecar agents into an already complex distributed environment, greatly increasing the complexity of the overall links and of operation and maintenance. In addition, injecting a Sidecar agent into every service container of an application service also increases the delay caused by system calls and context switches.
Disclosure of Invention
To solve the technical problem or at least partially solve the technical problem, the present disclosure provides a deployment method, an apparatus, a device, and a storage medium for a service grid unit.
In a first aspect, the present disclosure provides a method for deploying a service grid unit, including:
sending service container deployment information to a cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information;
sending a first installation instruction to a data processor installed on the cloud server node to cause the data processor to install a centralized agent unit, so that the centralized agent unit and the plurality of service containers constitute a service grid unit;
the centralized agent unit is configured to proxy data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
Optionally, before sending the service container deployment information to the cloud server node, the method further includes:
installing a container orchestration engine on the central control node;
and adding the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates the service container deployment information based on the service instances registered on the cloud server node.
Optionally, before sending the service container deployment information to the cloud server node, the method further includes:
sending a second installation instruction to the cloud server node to enable the cloud server node to install a container creation engine;
the sending service container deployment information to a cloud server node to enable the cloud server node to create a plurality of service containers according to the service container deployment information includes:
and sending service container deployment information to a cloud server node so that the container creation engine creates a plurality of service containers for the service instances registered on the cloud server node according to the service container deployment information.
Optionally, the method further includes:
installing a control plane platform on the central control node, and synchronizing configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic according to the configuration information, the control plane traffic being traffic generated by the policy and/or configuration used to instruct the centralized agent unit to manage data plane traffic entering and exiting the plurality of service containers on the cloud server node;
sending a third installation instruction to the cloud server node to enable the cloud server node to install a control plane proxy unit, where the control plane proxy unit is configured to forward the control plane traffic to the centralized proxy unit.
In a second aspect, the present disclosure provides a deployment apparatus for a service grid unit, including:
the first deployment module is used for sending service container deployment information to a cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information;
a second deployment module, configured to send a first installation instruction to a data processor installed on the cloud server node, so that the data processor installs a centralized agent unit, so that the centralized agent unit and the plurality of service containers form a service grid unit;
the centralized agent unit is configured to proxy data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
Optionally, before the first deployment module sends the service container deployment information to the cloud server node, the first deployment module is further configured to install a container orchestration engine on the central control node, and to add the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates the service container deployment information based on the service instances registered on the cloud server node.
Optionally, before sending the service container deployment information to the cloud server node, the first deployment module is further configured to send a second installation instruction to the cloud server node, so that the cloud server node installs the container creation engine;
the first deployment module is specifically configured to send service container deployment information to a cloud server node when sending the service container deployment information to the cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information, so that the container creation engine creates a plurality of service containers for service instances registered on the cloud server node according to the service container deployment information.
Optionally, the apparatus further includes a third deployment module, configured to install a control plane platform on the central control node and synchronize configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic according to the configuration information, where the control plane traffic is traffic generated by the policy and/or configuration used to instruct the centralized agent unit to manage data plane traffic entering and exiting the plurality of service containers on the cloud server node; and to send a third installation instruction to the cloud server node to enable the cloud server node to install a control plane proxy unit, where the control plane proxy unit is configured to forward the control plane traffic to the centralized proxy unit.
In a third aspect, the present disclosure provides an electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to the first aspect.
Compared with the prior art, the technical scheme provided by the disclosure has the following advantages:
according to the deployment method, the device, the equipment and the storage medium of the service grid unit, the service container deployment information is sent to the cloud server node, so that the cloud server node creates a plurality of service containers, meanwhile, a first installation instruction is sent to the data processor of the cloud server node, so that the data processor is installed with the centralized agent unit, so that the centralized agent unit and the service containers form the service grid unit.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below; obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a deployment method of a service grid unit according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a service grid system provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a deployment apparatus of a service grid unit provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
The Sidecar is a very important container design pattern in the cloud-native service grid. It splits network forwarding and L7 (application-layer) proxy capability out of the main container into a separate Sidecar container, that is, application functions are split from the application itself into a separate process. This allows functions to be added to an application without intrusion, avoiding extra configuration code in the application for integrating third-party components. In the existing distributed service grid, a Sidecar is injected into every deployed service container, and the service grid is composed of the Sidecars corresponding to the plurality of service containers, providing traffic proxy services for the microservice programs running in the service containers. As a result, CPU and memory resources on the cloud server node are heavily occupied and system power consumption increases; moreover, the distributed service grid introduces Sidecar agents into an already complex distributed environment, greatly increasing the complexity of the overall links and of operation and maintenance. In addition, injecting a Sidecar agent into every service container of an application service also increases the delay caused by system calls and context switches.
Fig. 1 is a flowchart of a deployment method of a service grid unit according to an embodiment of the present disclosure. The method may be performed by a deployment apparatus of a service grid unit, which may be implemented in software and/or hardware, and which may be configured in an electronic device, such as a server. In addition, fig. 2 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure, and the method may be applied to the application scenario shown in fig. 2, where the application scenario shown in fig. 2 includes a central control node 201, a cloud server node 202, and a data processor 203.
The following describes a deployment method of the service grid unit shown in fig. 1 with reference to an application scenario shown in fig. 2, for example, the central control node 201 in fig. 2 may execute the method. The method comprises the following steps:
s101, sending service container deployment information to a cloud server node, so that the cloud server node creates a plurality of service containers according to the service container deployment information.
Generally, a service cluster consists of a central control node and a plurality of cloud server nodes, which usually belong to different servers. The cloud server nodes are used by clients to register microservices, and they provide service containers in which the programs of the registered microservices run; that is, the service containers provide the running environment for the service instances registered on the cloud server nodes. The cloud server node in the embodiment of the present disclosure is equipped with a data processing unit (DPU), which works on the cloud server node and provides it with a high-bandwidth, low-latency heterogeneous network computing acceleration engine. When the service grid is deployed to a plurality of cloud server nodes, the deployment idea is similar for each cloud server node, so one cloud server node is taken as an example below.
Illustratively, the central control node 201 of the service cluster sends service container deployment information to the cloud server node 202, and the cloud server node 202 creates a plurality of service containers according to the service container deployment information. The number of service containers may be determined by the service instances registered on the cloud server node 202, or a customized number of service containers may be created in advance on the cloud server node 202, with the service instances running in the created service containers after clients register them. The service containers are configured dynamically, that is, service containers are added or deleted according to the service instances that clients register on the cloud server node: after a client registers a service instance, a service container is created for the registered service instance, and after a client deregisters a service instance, the service container of the deregistered instance is deleted. Taking the scenario shown in fig. 2 as an example, the service containers created by the cloud server node 202 include 2021, 2022, and 2023.
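For illustration only (not part of the original disclosure), the following Python sketch shows a central control node pushing service container deployment information to a cloud server node over HTTP; the endpoint path, payload layout, and node address are assumptions made for the example, not an interface defined by this disclosure.

```python
# Hypothetical sketch of S101: the central control node sends service container
# deployment information to a cloud server node, one container spec per registered
# service instance. Endpoint, payload fields, and address are illustrative assumptions.
import requests

CLOUD_SERVER_NODE = "http://192.0.2.10:8080"  # assumed address of cloud server node 202

def send_deployment_info(service_instances):
    deployment_info = {
        "containers": [
            {"name": f"svc-{inst['name']}", "image": inst["image"]}
            for inst in service_instances
        ]
    }
    # The cloud server node is expected to create one service container per entry.
    resp = requests.post(
        f"{CLOUD_SERVER_NODE}/deploy-containers", json=deployment_info, timeout=5
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(send_deployment_info([{"name": "orders", "image": "registry.example/orders:1.0"}]))
```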
S102, sending a first installation instruction to a data processor installed on a cloud server node to enable the data processor to install the centralized agent unit, so that the centralized agent unit and the plurality of business containers form a service grid unit.
The centralized agent unit is used to proxy the data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
Illustratively, the central control node 201 sends a first installation instruction to the data processor 203 of the cloud server node 202. The data processor 203 carries a system on chip (SoC), which may run an Ubuntu system, for example, and after the data processor 203 receives the first installation instruction, the centralized agent unit 204 is installed on the SoC. The centralized agent unit 204 proxies the data plane traffic entering and exiting the plurality of service containers installed on the cloud server node 202, where the data plane traffic is traffic generated by communication between service instances running in different service containers. For example, the centralized agent unit may use the Envoy centralized proxy program, a lightweight layer-7 service proxy that runs as an independent process in a microservice architecture and can work alongside applications in a Sidecar manner or serve as an edge proxy of a network. The Envoy centralized agent may specifically be deployed with Docker Compose, which is responsible for the fast orchestration of the service container cluster, and the Envoy centralized agent can then centrally proxy the data plane traffic entering and exiting the plurality of service containers.
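As one possible shape of this step (an assumption for illustration, not a procedure prescribed by the disclosure), the sketch below shows the DPU SoC bringing up an Envoy container with Docker Compose after receiving the first installation instruction; the compose file content, image tag, and file paths are illustrative.

```python
# Hypothetical sketch of S102 on the DPU SoC: write a Docker Compose file for the
# Envoy centralized agent and start it. File path, image tag, and compose content
# are illustrative assumptions.
import subprocess
import textwrap
from pathlib import Path

COMPOSE_FILE = Path("/opt/centralized-agent/docker-compose.yml")  # assumed path on the SoC

COMPOSE_CONTENT = textwrap.dedent("""\
    services:
      envoy:
        image: envoyproxy/envoy:v1.27.0
        network_mode: host          # proxy data plane traffic for the local service containers
        volumes:
          - ./envoy.yaml:/etc/envoy/envoy.yaml:ro
""")

def install_centralized_agent() -> None:
    COMPOSE_FILE.parent.mkdir(parents=True, exist_ok=True)
    COMPOSE_FILE.write_text(COMPOSE_CONTENT)
    # Start (or update) the centralized agent in the background.
    subprocess.run(["docker", "compose", "-f", str(COMPOSE_FILE), "up", "-d"], check=True)

if __name__ == "__main__":
    install_centralized_agent()
```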
Thus, the centralized agent unit 204 installed on the data processor 203 and the plurality of service containers installed on the cloud server node 202 constitute a service grid unit that provides traffic proxy services for the service instances running in those service containers, and the service grid units corresponding to the plurality of cloud server nodes in the service cluster together constitute the service grid of the entire service cluster. In other words, the microservices (i.e., service instances) registered across the service cluster communicate with each other, the data processor carrying the centralized agent unit and the plurality of service containers on its cloud server node form one service grid unit, and the service grid formed by the service grid units corresponding to the plurality of cloud server nodes provides data plane traffic proxy services for the service instances registered on the whole service cluster; the resulting service grid can be managed by a control plane component.
Since the centralized agent unit 204 centrally proxies the data plane traffic entering and exiting the service containers 2021, 2022, and 2023 on the cloud server node 202, there is no need to inject a Sidecar into each of the service containers 2021, 2022, and 2023; the plurality of Sidecars are integrated into one centralized agent unit, which reduces the deployment complexity and the operation and maintenance complexity of the cloud-native service grid. In addition, because the centralized agent is installed on the data processor of the cloud server node, the services provided by the service grid, such as service discovery, load balancing, request routing, and rule-based traffic forwarding, are completely offloaded onto the data processor. This effectively saves central processing unit (CPU), memory, and network bandwidth resources on the cloud server node, allows more service instances to be registered on cloud service resources of the same scale, and saves the computing and traffic resources of cloud service users. Meanwhile, a service request or response no longer has to pass through the Sidecar of each service container before reaching the corresponding service container, which shortens the forwarding path, and because the data plane traffic proxied by the service grid is accelerated by the data processor, the delay caused by system calls and context switches can be effectively reduced.
In the embodiment of the present disclosure, service container deployment information is sent to the cloud server node so that the cloud server node creates a plurality of service containers, and a first installation instruction is sent to the data processor of the cloud server node so that the centralized agent unit is installed on the data processor and forms a service grid unit together with the plurality of service containers. Integrating a plurality of Sidecars into one centralized agent unit reduces the deployment complexity and the operation and maintenance complexity of the cloud-native service grid. Installing the centralized agent on the data processor of the cloud server node offloads the services provided by the service grid onto the data processor, which effectively saves the CPU, memory, and network bandwidth resources on the cloud server node that would otherwise carry the Sidecar processes. In addition, because the data plane traffic proxied by the service grid is accelerated by the data processor, the network delay caused by the system calls and context switches that deploying the service grid agent on the cloud server node would introduce is avoided.
On the basis of the foregoing embodiment, before sending the service container deployment information to the cloud server node, the method further includes: installing a container orchestration engine on the central control node of the service cluster; and adding the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates the service container deployment information based on the service instances registered on the cloud server node.
In practical applications, the number of containerized applications deployed in a service cluster is very large; manually deploying service containers on cloud server nodes cannot meet business requirements, and later maintenance of the service containers would be difficult. Container orchestration engines were therefore developed, and containers can be deployed and maintained automatically by using a container orchestration engine.
Illustratively, the container orchestration engine 205 is installed on the central control node 201, then the cloud server node 202 is added to the deployment network of the container orchestration engine 205 as a worker node, and the container orchestration engine 205 is used to automatically orchestrate service containers for the service instances registered on the cloud server node 202 and generate the corresponding service container deployment information. For example, the container orchestration engine may use Kubernetes, an open-source system for managing containerized applications across the multiple hosts of a service cluster. It provides mechanisms for automated container deployment, planning, updating, and maintenance, which makes container deployment simpler and more efficient. Kubernetes can provide a policy for automatically deploying service containers for the service instances registered on the cloud server node 202 and generate the corresponding service container deployment information.
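As a hypothetical illustration of this step (names, namespace, and node name are assumptions, not values from this disclosure), the following sketch uses the Kubernetes Python client to have the orchestration engine create the service container deployment for one registered service instance and pin it to the cloud server node acting as the worker node.

```python
# Hypothetical sketch: the central control node asks the container orchestration
# engine (Kubernetes) to deploy one service container for a registered service
# instance on a given worker (cloud server) node. Names and namespace are illustrative.
from kubernetes import client, config

def deploy_service_instance(name: str, image: str, node_name: str) -> None:
    config.load_kube_config()  # run on the central control node with cluster access
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    node_name=node_name,  # pin the service container to the cloud server node
                    containers=[client.V1Container(name=name, image=image)],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_service_instance("orders", "registry.example/orders:1.0", "cloud-server-node-202")
```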
According to the embodiment of the present disclosure, the container orchestration engine is installed on the central control node, and the cloud server node is added to the deployment network of the container orchestration engine as a working node, so that the container orchestration engine automatically deploys service containers for the service instances registered on the cloud server node and generates the corresponding service container deployment information, thereby meeting the requirements of the business scenario.
On the basis of the foregoing embodiment, before sending the service container deployment information to the cloud server node, the method further includes: sending a second installation instruction to the cloud server node to enable the cloud server node to install the container creation engine; sending the service container deployment information to the cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information, wherein the method comprises the following steps: and sending the service container deployment information to the cloud server node so that the container creation engine creates a plurality of service containers for the service instances registered on the cloud server node according to the service container deployment information.
Kubernetes, as a container orchestration engine, provides the mechanism (i.e., the policy) for automatically deploying containers but cannot directly create service containers. On the premise that a normally running Kubernetes cluster already exists, Kubernetes deploys containers by creating Pods, where a Pod can contain one or more service containers and is a group of one or more containers that share storage, a network, and a specification of how to run the containers. Therefore, a component that actually creates the service containers is also needed, and that component is the container creation engine.
For example, the central control node 201 sends a second installation instruction to the cloud server node 202, the cloud server node 202 installs the container creation engine 206 after receiving the second installation instruction, and the container creation engine 206 is used to execute the container deployment policy in the service container deployment information. After installing the container creation engine 206, the container creation engine 206 creates business containers 2021, 2022, 2023 for the service instances registered on the cloud server node 202 according to the business container deployment information. The container creation engine may use Docker, and may also use other types of container creation engines, which are not limited in this disclosure.
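As a hypothetical illustration of the container creation engine's role (the payload layout and image names are assumptions for the example), the sketch below uses the Docker SDK for Python on the cloud server node to create one service container per entry in the received deployment information.

```python
# Hypothetical sketch: the cloud server node's container creation engine (Docker)
# creates one service container per entry of the received deployment information.
# The deployment_info layout and image names are illustrative assumptions.
import docker

def create_service_containers(deployment_info: dict) -> list:
    engine = docker.from_env()  # client for the local Docker daemon (container creation engine 206)
    created = []
    for spec in deployment_info["containers"]:
        container = engine.containers.run(
            image=spec["image"],
            name=spec["name"],
            detach=True,  # keep the registered service instance running in the background
        )
        created.append(container)
    return created

if __name__ == "__main__":
    info = {"containers": [{"name": "svc-orders", "image": "registry.example/orders:1.0"}]}
    for c in create_service_containers(info):
        print(c.name, c.status)
```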
According to the embodiment of the disclosure, the second installation instruction is sent to the cloud server node, so that the cloud server node installs the container creation engine, and the container creation engine is used for creating a plurality of service containers according to the service container deployment information, thereby meeting the service requirements of the service cluster.
Fig. 3 is a schematic structural diagram of a service grid system provided in the embodiment of the present disclosure. On the basis of the above embodiment, the method further includes: installing a control plane platform on the central control node, and synchronizing configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic according to the configuration information, where the control plane traffic is traffic generated by the policy and/or configuration used to instruct the centralized agent unit to manage the data plane traffic entering and exiting the plurality of service containers on the cloud server node; and sending a third installation instruction to the cloud server node so that the cloud server node installs a control plane proxy unit, where the control plane proxy unit is used to forward the control plane traffic to the centralized proxy unit.
For example, after the container orchestration engine 307 (i.e., Kubernetes) on the central control node 301 has deployed the service containers for the cloud server nodes of the service cluster, taking the cloud server node 302 as an example, the centralized proxy unit 304 (i.e., Envoy) installed on the data processor 303 and the service containers 3021, 3022, and 3023 on the cloud server node 302 form a service grid unit. The centralized proxy unit 304 cannot by itself make dynamic traffic management policies and needs to manage the data plane traffic entering and exiting the service containers 3021, 3022, and 3023 through the traffic management policy provided by the control plane component.
Therefore, the control plane platform 305 is installed on the central control node 301. The control plane platform may use Istio, which provides a simple way to establish a network over the deployed services, with load balancing, service-to-service authentication, monitoring, and other functions, without requiring any change to the service code. After Istio is installed on the central control node 301, the configuration information of the plurality of service containers on the cloud server node 302 is synchronized from the container orchestration engine 307. Taking Kubernetes as an example, the configuration information of each service container, including dynamic information and CRDs (Custom Resource Definitions, introduced in Kubernetes 1.7 to extend the Kubernetes API so that new resource types can be added to it), may be synchronized from the Kubernetes API server to Istio. Istio can then generate, according to the configuration information of each service container on the cloud server node 302, the policy and/or configuration that instructs the centralized proxy unit 304 to manage the data plane traffic entering and exiting the plurality of service containers on the cloud server node, and the traffic generated by transmitting this policy and/or configuration is the control plane traffic. Meanwhile, a third installation instruction is sent to the cloud server node 302 based on the Kubernetes deployment policy, so that the cloud server node installs the control plane proxy unit 306. When the control plane platform uses Istio, the installed control plane proxy unit is the Istio control plane agent, which synchronizes the control plane traffic destined for the Envoy centralized agent from the Istio control plane platform and forwards it to the Envoy centralized agent. That is, after the control plane proxy unit 306 synchronizes the control plane traffic from the control plane platform 305, it forwards the control plane traffic to the centralized proxy unit 304; this control plane traffic is the traffic generated by the policy and/or configuration used to instruct the centralized proxy unit 304 to manage the data plane traffic entering and exiting the service containers 3021, 3022, and 3023.
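A much-simplified, hypothetical stand-in for this control plane flow is sketched below: it reads service endpoint configuration from the Kubernetes API and pushes a routing table to the centralized agent on the DPU. In an Istio/Envoy deployment this exchange actually happens over the xDS gRPC APIs; the HTTP endpoint, payload shape, and host name here are assumptions for illustration only.

```python
# Hypothetical, simplified control plane sketch: synchronize endpoint configuration
# from the Kubernetes API and forward a routing table ("control plane traffic") to
# the centralized agent on the DPU. Endpoint URL and payload shape are assumptions.
import requests
from kubernetes import client, config

CENTRALIZED_AGENT = "http://dpu-soc.local:9901"  # assumed address of the centralized agent

def sync_control_plane(namespace: str = "default") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    routes = []
    for ep in core.list_namespaced_endpoints(namespace).items:
        addresses = [
            addr.ip
            for subset in (ep.subsets or [])
            for addr in (subset.addresses or [])
        ]
        routes.append({"service": ep.metadata.name, "endpoints": addresses})
    # Forward the generated policy/configuration to the centralized proxy unit.
    requests.post(f"{CENTRALIZED_AGENT}/routes", json={"routes": routes}, timeout=5)

if __name__ == "__main__":
    sync_control_plane()
```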
In summary, the control plane platform and the control plane proxy units on the multiple cloud server nodes form a control plane component, which is used to manage the service grids formed by the service grid units corresponding to the multiple cloud server nodes described in the above embodiments, and the control plane component and the service grids form an overall service grid system of the service cluster.
In the embodiment of the present disclosure, a control plane platform is installed on the central control node, so that the control plane platform generates, from the service container configuration information synchronized by the container orchestration engine, the control plane traffic produced by the policy and/or configuration used to instruct the centralized agent unit to manage the data plane traffic entering and exiting the plurality of service containers on the cloud server node, and a control plane proxy unit is installed on the cloud server node to forward the control plane traffic to the centralized agent unit. A control plane component for managing the service grid is thereby formed, meeting the requirements of the service cluster's business scenarios.
Fig. 4 is a schematic structural diagram of a deployment apparatus of a service grid unit according to an embodiment of the present disclosure. The deployment apparatus of the service grid unit may be a component or assembly in a server as in the above embodiments. The deployment apparatus for a service grid unit provided in the embodiment of the present disclosure may execute the processing flow provided in the embodiment of the deployment method of the service grid unit. As shown in fig. 4, the deployment apparatus 400 for a service grid unit includes: a first deployment module 401, configured to send service container deployment information to the cloud server node, so that the cloud server node creates a plurality of service containers according to the service container deployment information; and a second deployment module 402, configured to send a first installation instruction to the data processor installed on the cloud server node, so as to cause the data processor to install the centralized agent unit, so that the centralized agent unit and the plurality of service containers form a service grid unit. The centralized agent unit is configured to proxy data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
In some embodiments, the first deployment module 401 is further configured to install a container orchestration engine on a central control node of the service cluster before sending the service container deployment information to the cloud server node; and adding the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates service container deployment information based on the service instances registered on the cloud server node.
In some embodiments, the first deployment module 401 is further configured to send a second installation instruction to the cloud server node to cause the cloud server node to install the container creation engine, before sending the service container deployment information to the cloud server node; the first deployment module 401 is specifically configured to send service container deployment information to the cloud server node when sending the service container deployment information to the cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information, so that the container creation engine creates a plurality of service containers for the service instances registered on the cloud server node according to the service container deployment information.
In some embodiments, the apparatus further comprises a third deployment module 403, configured to install a control plane platform on the central control node and synchronize configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic according to the configuration information, the control plane traffic being traffic generated by the policy and/or configuration used to instruct the centralized agent unit to manage the data plane traffic entering and exiting the plurality of service containers on the cloud server node; and to send a third installation instruction to the cloud server node so that the cloud server node installs a control plane proxy unit, where the control plane proxy unit is used to forward the control plane traffic to the centralized proxy unit.
The deployment apparatus of the service grid unit in the embodiment shown in fig. 4 may be used to implement the technical solution of the foregoing method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be a server as described in the above embodiments. The electronic device provided in the embodiment of the present disclosure may execute the processing flow provided in the embodiment of the deployment method of the service grid unit. As shown in fig. 5, the electronic device 500 includes: a memory 501, a processor 502, a computer program, and a communication interface 503, where the computer program is stored in the memory 501 and configured to be executed by the processor 502 to implement the deployment method of the service grid unit described above.
In addition, the embodiment of the present disclosure further provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the deployment method of the service grid unit described in the foregoing embodiment.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for deploying a service grid unit, comprising:
sending service container deployment information to a cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information;
sending a first installation instruction to a data processor installed on the cloud server node to cause the data processor to install a centralized agent unit, so that the centralized agent unit and the plurality of service containers constitute a service grid unit;
the centralized agent unit is configured to proxy data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
2. The method of claim 1, wherein prior to sending the service container deployment information to the cloud server node, further comprising:
installing a container orchestration engine on the central control node;
and adding the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates the service container deployment information based on the service instances registered on the cloud server node.
3. The method of claim 2, wherein prior to sending the traffic container deployment information to the cloud server node, further comprising:
sending a second installation instruction to the cloud server node to enable the cloud server node to install a container creation engine;
the sending of the service container deployment information to the cloud server node to enable the cloud server node to create a plurality of service containers according to the service container deployment information includes:
and sending the service container deployment information to a cloud server node, so that the container creation engine creates a plurality of service containers for the service instances registered on the cloud server node according to the service container deployment information.
4. The method of claim 2, wherein the method further comprises:
installing a control plane platform on the central control node, and synchronizing configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic according to the configuration information, the control plane traffic being traffic generated by the policy and/or configuration used to instruct the centralized agent unit to manage data plane traffic entering and exiting the plurality of service containers on the cloud server node;
sending a third installation instruction to the cloud server node to enable the cloud server node to install a control plane proxy unit, where the control plane proxy unit is configured to forward the control plane traffic to the centralized proxy unit.
5. A deployment apparatus for serving a grid cell, comprising:
the first deployment module is used for sending service container deployment information to a cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information;
a second deployment module, configured to send a first installation instruction to a data processor installed on the cloud server node, so that the data processor installs a centralized agent unit, so that the centralized agent unit and the plurality of service containers form a service grid unit; the centralized agent unit is configured to proxy data plane traffic entering and exiting the plurality of service containers, where the data plane traffic is traffic generated by communication between service instances running in different service containers.
6. The apparatus of claim 5, wherein the first deployment module, prior to sending the service container deployment information to the cloud server node, is further to install a container orchestration engine on a central control node; and to add the cloud server node into a deployment network of the container orchestration engine as a working node, so that the container orchestration engine generates the service container deployment information based on the service instances registered on the cloud server node.
7. The apparatus of claim 6, wherein the first deployment module, prior to sending the traffic container deployment information to the cloud server node, is further to send a second installation instruction to the cloud server node to cause the cloud server node to install a container creation engine;
the first deployment module is specifically configured to send service container deployment information to a cloud server node when sending the service container deployment information to the cloud server node so that the cloud server node creates a plurality of service containers according to the service container deployment information, so that the container creation engine creates a plurality of service containers for service instances registered on the cloud server node according to the service container deployment information.
8. The apparatus of claim 6, further comprising a third deployment module to install a control plane platform on the central control node and synchronize configuration information of the plurality of service containers on the cloud server node from the container orchestration engine to the control plane platform, so that the control plane platform generates control plane traffic from the configuration information, the control plane traffic being traffic generated by the policy and/or configuration used to instruct the centralized proxy unit to manage data plane traffic to and from the plurality of service containers on the cloud server node; and to send a third installation instruction to the cloud server node to enable the cloud server node to install a control plane proxy unit, where the control plane proxy unit is configured to forward the control plane traffic to the centralized proxy unit.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN202211398492.5A 2022-11-09 2022-11-09 Service grid unit deployment method, device, equipment and storage medium Pending CN115733746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211398492.5A CN115733746A (en) 2022-11-09 2022-11-09 Service grid unit deployment method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211398492.5A CN115733746A (en) 2022-11-09 2022-11-09 Service grid unit deployment method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115733746A 2023-03-03

Family

ID=85294962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211398492.5A Pending CN115733746A (en) 2022-11-09 2022-11-09 Service grid unit deployment method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115733746A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210019194A1 (en) * 2019-07-16 2021-01-21 Cisco Technology, Inc. Multi-cloud service mesh orchestration platform
WO2021249268A1 (en) * 2020-06-09 2021-12-16 阿里巴巴集团控股有限公司 Method for creating service mesh instance, service mesh system, and multi-cluster system
CN112929230A (en) * 2021-01-22 2021-06-08 中信银行股份有限公司 Test processing method and device, electronic equipment and computer readable storage medium
CN113949702A (en) * 2021-08-30 2022-01-18 浪潮软件科技有限公司 Multi-layer network protocol processing method and device for service grid
CN113542437A (en) * 2021-09-16 2021-10-22 阿里云计算有限公司 Network system, network proxy method and device
CN114896025A (en) * 2022-05-16 2022-08-12 网易(杭州)网络有限公司 Architecture optimization method and device of service grid, computer equipment and storage medium
CN115022408A (en) * 2022-06-07 2022-09-06 中国工商银行股份有限公司 Data transmission method and device based on service grid and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116107564A (en) * 2023-04-12 2023-05-12 中国人民解放军国防科技大学 Data-oriented cloud native software architecture and software platform
CN116107564B (en) * 2023-04-12 2023-06-30 中国人民解放军国防科技大学 Data-oriented cloud native software device and software platform
CN117201302A (en) * 2023-07-28 2023-12-08 中科驭数(北京)科技有限公司 Centralized agent upgrading method, device, equipment and medium for service grid
CN117176819A (en) * 2023-09-27 2023-12-05 中科驭数(北京)科技有限公司 Service network service-based unloading method and device

Similar Documents

Publication Publication Date Title
CN115733746A (en) Service grid unit deployment method, device, equipment and storage medium
US10511506B2 (en) Method and device for managing virtualized network function
EP3427439B1 (en) Managing planned adjustment of allocation of resources in a virtualised network
CN112000448A (en) Micro-service architecture-based application management method
CN109542457A (en) A kind of system and method for the Distributed Application distribution deployment of edge calculations network
CN111858054B (en) Resource scheduling system and method based on edge computing in heterogeneous environment
CN113190378B (en) Edge cloud disaster recovery method based on distributed cloud platform
CN102821000A (en) Method for improving usability of PaaS platform
CN110837418A (en) High-concurrency web system based on container and implementation method
CN111797173B (en) Alliance chain sharing system, method and device, electronic equipment and storage medium
CN114691567A (en) Multi-cloud interface adaptation method, system and storage medium based on micro-service
WO2017008839A1 (en) Managing resource allocation in a network functions virtualisation infrastructure
JP2023538852A (en) Systems and methods for zero-touch interworking of network orchestration with data platforms and analytics in virtualized 5G deployments
CN111245634A (en) Virtualization management method and device
CN110366056B (en) Method, device, equipment and storage medium for realizing ASON business model
CN109525413B (en) CDN network function virtualization management method, device and system
CN116566984A (en) Routing information creation method and device of k8s container cluster and electronic equipment
Khichane et al. Cloud native 5G: an efficient orchestration of cloud native 5G system
CN114840329A (en) Cloud and native hybrid integration method based on block chain
CN110855739A (en) Container technology-based remote and heterogeneous resource unified management method and system
CN107426109B (en) Traffic scheduling method, VNF module and traffic scheduling server
CN116450351A (en) Edge container scheduling algorithm
CN114615268B (en) Service network, monitoring node, container node and equipment based on Kubernetes cluster
WO2022260330A1 (en) Improvements in and relating to multi-access edge computing
CN113515458B (en) Method and system for reducing test environment resource consumption based on Envoy plug-in

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination