CN112600931A - API gateway deployment method and device - Google Patents

API gateway deployment method and device

Info

Publication number
CN112600931A
CN112600931A (application CN202011527486.6A)
Authority
CN
China
Prior art keywords
kong
instance
deployed
deployment
deploying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011527486.6A
Other languages
Chinese (zh)
Other versions
CN112600931B (en)
Inventor
鲍伟伟
张建伟
熊宇豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Cloud Technologies Co Ltd
Original Assignee
New H3C Cloud Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Cloud Technologies Co Ltd filed Critical New H3C Cloud Technologies Co Ltd
Priority to CN202011527486.6A priority Critical patent/CN112600931B/en
Publication of CN112600931A publication Critical patent/CN112600931A/en
Application granted granted Critical
Publication of CN112600931B publication Critical patent/CN112600931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/025 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP], for remote control or remote monitoring of applications
    • H04L 12/66 — Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L 61/5007 — Internet protocol [IP] address allocation
    • H04L 67/06 — Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L 67/30 — Profiles (under H04L 67/2866 Architectures; Arrangements)
    • H04L 67/34 — Network arrangements or protocols involving the movement of software or configuration parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application relates to the field of microservice technologies, and in particular to an API gateway deployment method and apparatus. The method comprises the following steps: creating, based on Helm of the containerized application management platform, an application package that includes the manifest file templates required for deploying Kong instances; polling a preset parameter list configured for each Kong instance to be deployed, and generating, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance; and generating the corresponding Kong instances based on the respective deployment files.

Description

API gateway deployment method and device
Technical Field
The present application relates to the field of microservice technologies, and in particular to a method and an apparatus for API gateway deployment.
Background
The most basic role of an API service is to accept client requests and return responses. With today's rapidly evolving microservice architectures, however, the situation is no longer that simple. Developers usually also need to address API security protection, flow control, analysis and monitoring, billing, and so on, and an API gateway is the standard way to solve these problems.
The API gateway is an API management tool that sits between clients and the set of back-end services. It provides a unified entry point for API callers, receives API calls from clients, and forwards them to back-end services through a routing mechanism. Kong is a high-performance, easily extensible API gateway project built on Nginx and Lua modules, and it provides identity verification, rate limiting, load balancing, logging, protocol conversion, and other functions in the form of plug-ins.
A single Kong gateway instance is likely to become a performance bottleneck for the system, so implementing a multi-instance, distributed, high-performance, highly available API gateway scheme is an important issue. Kong officially provides a deployment scheme based on Docker container technology, in which Kong and PostgreSQL are deployed as containers in a Docker environment. To achieve cluster deployment of Kong, one only needs to deploy multiple Kong containers on top of the official scheme; all Kong nodes connect to the same PostgreSQL database to keep the API gateway configuration synchronized.
However, when each Kong instance is deployed merely as a container, instances deployed across servers cannot be effectively orchestrated, managed, or scheduled, and scaling Kong instances must be done through docker commands or interfaces, making the process complex.
Disclosure of Invention
The present application provides an API gateway deployment method and apparatus to solve the prior-art problem that individual instances cannot be efficiently managed, scheduled, and scaled.
In a first aspect, the present application provides a method for deploying an API gateway, the method including:
creating, based on Helm of the containerized application management platform, an application package that includes the manifest file templates required for deploying Kong instances;
polling a preset parameter list configured for each Kong instance to be deployed, and generating, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance;
and generating the corresponding Kong instances based on the respective deployment files.
Optionally, the parameter list corresponding to a Kong instance to be deployed includes at least: IP address information configured for the Kong instance to be deployed, port information configured for the Kong instance to be deployed, and an instance name uniquely identifying the Kong instance to be deployed.
Optionally, the parameter list corresponding to one Kong instance to be deployed further includes: node information for deploying the Kong instance in the cluster, and the CPU and memory start restrictions and run restrictions of the Kong instance to be deployed.
Optionally, the step of generating corresponding Kong instances respectively based on the deployment files includes:
upon receiving a Kong instance deployment request, determining a target node in the cluster for deploying the Kong instance;
and according to each deployment file, respectively deploying a corresponding Kong instance on the target node.
Optionally, the method further comprises:
and adding the IP address of each Kong instance deployed on the target node to the network card of the target node, so that when a Kong instance is started it communicates with external devices through its corresponding IP address on the network card.
In a second aspect, the present application provides an API gateway deployment apparatus, the apparatus comprising:
a creating unit, configured to create, based on Helm of the containerized application management platform, an application package including the manifest file templates required for deploying Kong instances;
a first generating unit, configured to poll a preset parameter list configured for each Kong instance to be deployed and to generate, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance;
and a second generating unit, configured to generate the corresponding Kong instances based on the respective deployment files.
Optionally, the parameter list corresponding to a Kong instance to be deployed includes at least: IP address information configured for the Kong instance to be deployed, port information configured for the Kong instance to be deployed, and an instance name uniquely identifying the Kong instance to be deployed.
Optionally, the parameter list corresponding to one Kong instance to be deployed further includes: node information for deploying the Kong instance in the cluster, and the CPU and memory start restrictions and run restrictions of the Kong instance to be deployed.
Optionally, when generating corresponding Kong instances based on each deployment file, the second generating unit is specifically configured to:
upon receiving a Kong instance deployment request, determining a target node in the cluster for deploying the Kong instance;
and according to each deployment file, respectively deploying a corresponding Kong instance on the target node.
Optionally, the apparatus further comprises:
and an adding unit, configured to add the IP address of each Kong instance deployed on the target node to the network card of the target node, so that when a Kong instance is started it communicates with external devices through its corresponding IP address on the network card.
In a third aspect, an embodiment of the present application provides an API gateway deployment apparatus, where the API gateway deployment apparatus includes:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps of the method according to any one of the above first aspects in accordance with the obtained program instructions.
In a fourth aspect, the present application further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the steps of the method according to any one of the above first aspects.
In summary, the API gateway deployment method provided in the embodiments of the present application creates, based on Helm of the containerized application management platform, an application package including the manifest file templates required for deploying Kong instances; polls a preset parameter list configured for each Kong instance to be deployed and generates, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance; and generates the corresponding Kong instances based on the respective deployment files.
With the API gateway deployment method provided in the embodiments of the present application, the Kong instance cluster is deployed automatically through the Helm tool, the Kong instances are managed and scheduled as a cluster, scaling Kong instances becomes simple and flexible, and the availability of the API gateway is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them.
Fig. 1 is a detailed flowchart of an API gateway deployment method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an API gateway deployment apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another API gateway deployment apparatus provided in the embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Moreover, depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The API gateway deployment method provided in the embodiments of the present application is described in detail below with reference to specific application scenarios. Referring to fig. 1, a detailed flowchart of an API gateway deployment method provided in an embodiment of the present application is shown; the method includes the following steps:
step 100: based on the helm of the containerized application management platform, an application package is created that includes manifest file templates needed for deploying Kong instances.
In practice, Helm is the package management tool for Kubernetes. Each package is called a Chart, and a Chart contains templated Kubernetes manifest files located in its templates directory. These templates are YAML files that are rendered and submitted to Kubernetes to generate the desired Kubernetes resources, such as Service, Deployment, StatefulSet, Job, etc.
In this embodiment, the containerized application management platform may be Kubernetes; a Helm Chart can then be created based on Helm in Kubernetes, where the Helm Chart includes templated Kubernetes manifest files located in the templates directory, such as kong.yaml.
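For orientation, such a chart might be laid out as follows (only kong.yaml is named in this document; the remaining file names follow the standard Helm chart convention and are assumptions):

```
kong-chart/
├── Chart.yaml          # chart metadata (name, version)
├── values.yaml         # parameter lists for the Kong instances to be deployed
└── templates/
    └── kong.yaml       # templated Kubernetes manifests for the Kong instances
```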
Step 110: poll a preset parameter list configured for each Kong instance to be deployed, and generate, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance.
In the embodiment of the present application, in a preferred implementation, the parameter list corresponding to a Kong instance to be deployed includes at least: IP address information configured for the Kong instance, port information configured for the Kong instance, and an instance name that uniquely identifies the Kong instance.
For example, if the Kong instances to be deployed are Kong instance 1, Kong instance 2, and Kong instance 3, the parameters corresponding to Kong instance 1 may include: kong1, kongIP1, kongHttpAdminPort1, kongHttpsAdminPort1, kongHttpProxyPort1, and kongHttpsProxyPort1, where kong1 uniquely identifies Kong instance 1, kongIP1 is the IP address of Kong instance 1, kongHttpAdminPort1 and kongHttpsAdminPort1 are the Admin HTTP and HTTPS ports, respectively, and kongHttpProxyPort1 and kongHttpsProxyPort1 are the Proxy HTTP and HTTPS ports, respectively.
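A hedged sketch of what such a per-instance parameter list in values.yaml could look like (the key names and port values are illustrative assumptions, not taken from the patent's actual chart; the ports shown are Kong's usual defaults):

```yaml
# values.yaml — one entry per Kong instance to be deployed
kongInstances:
  - name: kong1
    kongIP: 192.168.1.100
    kongHttpAdminPort: 8001
    kongHttpsAdminPort: 8444
    kongHttpProxyPort: 8000
    kongHttpsProxyPort: 8443
  - name: kong2
    kongIP: 192.168.1.101
    kongHttpAdminPort: 8001
    kongHttpsAdminPort: 8444
    kongHttpProxyPort: 8000
    kongHttpsProxyPort: 8443
```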
The embodiment of the application provides a simple generation rule for the Kong instance name: the IPv4 address of the Kong instance is converted to hexadecimal and used as the instance name; for example, the Kong instance with IP "192.168.1.100" gets the instance name "c0a80164". Because the instance name generation rule is tied to the IP address, when deleting the Kong instance with a specified IP or adding a Kong instance, the addition and deletion of the target Deployment can be realized conveniently.
Of course, in the embodiment of the present application, the foregoing Kong instance name generation rule is only used for illustration, and is not used to limit the present application.
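The naming rule and its inverse can be sketched in a few lines of Python (an illustration of the stated rule, not the patent's actual implementation):

```python
def kong_instance_name(ipv4: str) -> str:
    """Convert a dotted IPv4 address to the hex instance name.

    Each octet becomes two lowercase hex digits, so the name is
    always 8 characters and can be mapped back to the address.
    """
    return "".join(f"{int(octet):02x}" for octet in ipv4.split("."))


def ip_from_instance_name(name: str) -> str:
    """Invert the rule: recover the IPv4 address from the instance name."""
    return ".".join(str(int(name[i:i + 2], 16)) for i in range(0, 8, 2))
```

For the document's example, `kong_instance_name("192.168.1.100")` yields `"c0a80164"`, and the inverse mapping makes it easy to locate the Deployment belonging to a given IP.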
Further, the parameter list corresponding to a Kong instance to be deployed may also include: node information for deploying the Kong instance in the cluster, and the CPU and memory start restrictions and run restrictions of the Kong instance to be deployed.
For example, assume the cluster contains 9 nodes, with nodes 1 to 3 in area 1, nodes 4 to 6 in area 2, and nodes 7 to 9 in area 3, and the area configured for deploying the API gateway is area 3; the parameter list corresponding to a Kong instance to be deployed may then carry the node information for deploying that instance (e.g., that the API gateway is deployed on nodes 7 to 9).
Further, in the embodiment of the present application, the CPU start restriction (requests) is the minimum CPU resource guaranteed to the corresponding Kong instance, and the CPU run restriction (limits) is the maximum CPU resource the instance may use; likewise, the memory start restriction is the minimum memory resource guaranteed to the instance, and the memory run restriction is the maximum memory resource the instance may use.
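In a Kubernetes container spec these restrictions correspond to the resources.requests and resources.limits fields; a hedged example (the values are illustrative only):

```yaml
resources:
  requests:            # start restriction: minimum guaranteed to the instance
    cpu: "500m"
    memory: "512Mi"
  limits:              # run restriction: maximum the instance may use
    cpu: "2"
    memory: "2Gi"
```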
Specifically, the kong.yaml file included in the Helm Chart configures the Kubernetes resource template related to Kong; through this template, the Deployment of each Kong instance can be generated, thereby controlling the generation of multi-instance Pods. A loop statement in kong.yaml flexibly controls the configuration of the Deployments. During parameter configuration, the parameter values of each Kong instance to be deployed are written into the values.yaml file of the Chart.
The replicas parameter in the Deployment is the number of Pod copies of the Kong instance; it is set to 1 in the embodiment of the present application, so that each Kong instance is controlled by a separate Deployment.
It should be noted that the environment variables KONG_ADMIN_LISTEN and KONG_PROXY_LISTEN in the Deployment configure the listening parameters of the Kong Admin and Proxy interfaces. The Proxy port is used to proxy back-end services, while the Admin port is used to manage the Kong configuration, i.e., to add, delete, modify, and query it.
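Putting these pieces together, the templated kong.yaml could be sketched roughly as follows (a hedged illustration; the patent does not publish its actual template, and the value keys, image tag, and listen formats are assumptions):

```yaml
{{- range .Values.kongInstances }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-{{ .name }}
spec:
  replicas: 1                    # each Kong instance controlled by its own Deployment
  selector:
    matchLabels:
      app: kong-{{ .name }}
  template:
    metadata:
      labels:
        app: kong-{{ .name }}
    spec:
      hostNetwork: true
      containers:
        - name: kong
          image: kong
          env:
            - name: KONG_ADMIN_LISTEN
              value: "{{ .kongIP }}:{{ .kongHttpAdminPort }}, {{ .kongIP }}:{{ .kongHttpsAdminPort }} ssl"
            - name: KONG_PROXY_LISTEN
              value: "{{ .kongIP }}:{{ .kongHttpProxyPort }}, {{ .kongIP }}:{{ .kongHttpsProxyPort }} ssl"
{{- end }}
```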
Step 120: generate the corresponding Kong instances based on the respective deployment files.
In the embodiment of the present application, when generating the corresponding Kong instances based on the deployment files, a preferred implementation is: upon receiving a Kong instance deployment request, determine the target node in the cluster for deploying the Kong instances, and deploy a corresponding Kong instance on the target node according to each deployment file.
Because the parameter list corresponding to a Kong instance to be deployed carries the node information for deploying that instance, the target node in the cluster can be determined, and the Kong instance is then deployed on that node based on preset rules.
For example, after the deployment file corresponding to each Kong instance to be deployed has been generated, a deployment request may be sent to the Helm server. Specifically, the Tiller Server is the server side of Helm and communicates with the client via gRPC. After Tiller receives a "helm install" request from the client, it interacts with the Kubernetes API service based on the deployment file of each Kong instance to be deployed to complete the deployment of each instance.
Further, the IP address of each Kong instance deployed on the target node is added to the network card of the target node, so that when a Kong instance starts it communicates with external devices through its corresponding IP address on the network card.
For example, to enable external devices to access the API gateway through a designated IP address, hostNetwork is set to true so that the Pod can use the network devices of its host node. Before the Kong instance is started, the IP address to be bound (the IP address of the Kong instance) is written to the network card of the host where the Pod resides, so that the instance can use the IP address normally at startup. This write can be implemented by adding an "ip addr add" command to the entrypoint script in the Kong image. In addition, a preStop callback is invoked before the Pod terminates to execute an "ip addr del" command that removes the IP.
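The bind/unbind pair described above could be expressed roughly as follows in the Pod spec (a sketch; the interface name eth0, the /32 mask, and the literal address are assumptions):

```yaml
spec:
  hostNetwork: true              # Pod uses the host node's network devices
  containers:
    - name: kong
      image: kong
      # The image entrypoint is assumed to run, before starting Kong:
      #   ip addr add 192.168.1.100/32 dev eth0
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "ip addr del 192.168.1.100/32 dev eth0"]
```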
Further, when Kong instances need to be added to or removed from the Kong cluster, the cluster must be upgraded. The upgrade steps are basically the same as deployment: only the parameter configuration needs to be modified, and a "helm upgrade" request is sent during the upgrade.
Based on the same inventive concept as the above embodiments, refer to fig. 2, which is a schematic structural diagram of an API gateway deployment apparatus provided in an embodiment of the present application; the apparatus includes:
a creating unit 20, configured to create, based on Helm of the containerized application management platform, an application package including the manifest file templates required for deploying Kong instances;
a first generating unit 21, configured to poll a preset parameter list configured for each Kong instance to be deployed and to generate, from the manifest file templates in the application package, a deployment file for each Kong instance to be deployed, wherein the deployment file corresponding to one Kong instance to be deployed includes at least the parameter values configured for that instance;
and a second generating unit 22, configured to generate the corresponding Kong instances based on the respective deployment files.
Optionally, the parameter list corresponding to a Kong instance to be deployed includes at least: IP address information configured for the Kong instance to be deployed, port information configured for the Kong instance to be deployed, and an instance name uniquely identifying the Kong instance to be deployed.
Optionally, the parameter list corresponding to one Kong instance to be deployed further includes: node information for deploying the Kong instance in the cluster, and the CPU and memory start restrictions and run restrictions of the Kong instance to be deployed.
Optionally, when generating corresponding Kong instances based on each deployment file, the second generating unit is specifically configured to:
upon receiving a Kong instance deployment request, determining a target node in the cluster for deploying the Kong instance;
and according to each deployment file, respectively deploying a corresponding Kong instance on the target node.
Optionally, the apparatus further comprises:
and an adding unit, configured to add the IP address of each Kong instance deployed on the target node to the network card of the target node, so that when a Kong instance is started it communicates with external devices through its corresponding IP address on the network card.
The above units may be one or more integrated circuits configured to implement the above methods, for example, one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). As another example, when one of the above units is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these units may be integrated together and implemented as a system-on-a-chip (SoC).
Further, at the hardware level, a schematic hardware architecture of the API gateway deployment apparatus provided in this embodiment of the present application may be as shown in fig. 3. The apparatus may include: a memory 30 and a processor 31, wherein
the memory 30 is used for storing program instructions, and the processor 31 calls the program instructions stored in the memory 30 and executes the above method embodiments in accordance with the obtained program instructions. The specific implementation and technical effects are similar and are not described here again.
Optionally, the present application further provides an API gateway deployment apparatus, including at least one processing element (or chip) for executing the above method embodiments.
Optionally, the present application also provides a program product, such as a computer-readable storage medium, having stored thereon computer-executable instructions for causing the computer to perform the above-described method embodiments.
Here, a machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., an optical disc, a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. An API gateway deployment method, the method comprising:
creating, based on Helm of a containerized application management platform, an application package comprising the manifest file templates required for deploying Kong instances;
polling a preset parameter list configured for each Kong instance to be deployed, and generating, from the manifest file templates in the application package, a deployment file corresponding to each Kong instance to be deployed, wherein the deployment file corresponding to a Kong instance to be deployed comprises at least the parameter values configured for that Kong instance; and
generating corresponding Kong instances based on the respective deployment files.
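The patent gives no code, but the second step of claim 1 (rendering each manifest file template against each instance's parameter list) can be sketched in Python. The template content, file name, and parameter keys (`instance_name`, `ip`, `port`) below are illustrative assumptions, not taken from the patent; a real implementation would render the Helm chart's own templates.

```python
from string import Template

# Hypothetical manifest template standing in for a Helm chart template;
# the file name and placeholder keys are illustrative assumptions.
MANIFEST_TEMPLATES = {
    "kong-deployment.yaml": Template(
        "apiVersion: apps/v1\n"
        "kind: Deployment\n"
        "metadata:\n"
        "  name: $instance_name\n"
        "spec:\n"
        "  template:\n"
        "    spec:\n"
        "      containers:\n"
        "      - name: kong\n"
        "        env:\n"
        "        - name: KONG_PROXY_LISTEN\n"
        "          value: $ip:$port\n"
    ),
}

def render_deployment_files(parameter_lists):
    """Poll each per-instance parameter list and render every manifest
    template into a deployment file for that instance."""
    deployment_files = {}
    for params in parameter_lists:
        for name, tpl in MANIFEST_TEMPLATES.items():
            # One rendered file per (instance, template) pair.
            key = f"{params['instance_name']}/{name}"
            deployment_files[key] = tpl.substitute(params)
    return deployment_files
```

Polling the parameter lists in one pass like this yields one deployment file set per Kong instance, which the final step of claim 1 then applies.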
2. The method of claim 1, wherein the parameter list corresponding to a Kong instance to be deployed comprises at least: IP address information configured for the Kong instance to be deployed, port information configured for the Kong instance to be deployed, and an instance name uniquely identifying the Kong instance to be deployed.
3. The method of claim 2, wherein the parameter list corresponding to a Kong instance to be deployed further comprises: node information for deploying the Kong instance in the cluster, and the startup limits and running limits on the CPU and memory of the Kong instance to be deployed.
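As a rough illustration of the parameter list enumerated in claims 2 and 3, the sketch below validates one per-instance parameter dictionary. The key names (`ip`, `port`, `instance_name`, `node`, the resource-limit keys) are hypothetical; the patent specifies only the categories of information, not a schema.

```python
def validate_parameter_list(params):
    """Check that a per-instance parameter list carries the mandatory
    fields of claim 2 before accepting the optional fields of claim 3."""
    required = ("ip", "port", "instance_name")          # claim 2
    optional = ("node", "cpu_request", "cpu_limit",     # claim 3
                "memory_request", "memory_limit")
    missing = [k for k in required if k not in params]
    if missing:
        raise ValueError(f"missing mandatory fields: {missing}")
    # Keep only recognized keys, mandatory first.
    return {k: params[k] for k in required + optional if k in params}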
4. The method of any one of claims 1-3, wherein generating a corresponding Kong instance based on each deployment file comprises:
upon receiving a Kong instance deployment request, determining a target node in the cluster for deploying the Kong instances; and
deploying, according to each deployment file, a corresponding Kong instance on the target node.
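The deployment step of claim 4 can be sketched as follows. The request shape, the fallback node name, and the injected `apply_fn` callback are all illustrative assumptions; in practice `apply_fn` would submit each manifest to the cluster (e.g. via the Kubernetes API) with the target node pinned.

```python
def deploy_kong_instances(deployment_request, deployment_files, apply_fn):
    """On receiving a deployment request, pick the target node named in the
    request (falling back to a default) and apply every deployment file there."""
    target_node = deployment_request.get("node", "default-node")
    deployed = []
    for name, manifest in deployment_files.items():
        apply_fn(target_node, manifest)  # hypothetical cluster-apply callback
        deployed.append((target_node, name))
    return deployed
```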
5. The method of claim 4, wherein the method further comprises:
adding the IP address of each Kong instance deployed on the target node to the network card of the target node, so that, when a Kong instance is started, the Kong instance communicates with external devices based on its corresponding IP address on the network card.
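Attaching each instance's IP to the node's network card, as claim 5 describes, amounts on Linux to an `ip addr add` per address. The sketch below only builds those commands rather than executing them; the interface name and the /32 prefix are assumptions, not details from the patent.

```python
def nic_add_commands(instance_ips, interface="eth0"):
    """Build the `ip addr add` commands that would attach each deployed Kong
    instance's IP address to the target node's network card."""
    return [
        f"ip addr add {ip}/32 dev {interface}"  # one host route per instance IP
        for ip in instance_ips
    ]
```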
6. An API gateway deployment apparatus, the apparatus comprising:
a creating unit, configured to create, based on Helm of a containerized application management platform, an application package comprising the manifest file templates required for deploying Kong instances;
a first generating unit, configured to poll a preset parameter list configured for each Kong instance to be deployed, and to generate, from the manifest file templates in the application package, a deployment file corresponding to each Kong instance to be deployed, wherein the deployment file corresponding to a Kong instance to be deployed comprises at least the parameter values configured for that Kong instance; and
a second generating unit, configured to generate corresponding Kong instances based on the respective deployment files.
7. The apparatus of claim 6, wherein the parameter list corresponding to a Kong instance to be deployed comprises at least: IP address information configured for the Kong instance to be deployed, port information configured for the Kong instance to be deployed, and an instance name uniquely identifying the Kong instance to be deployed.
8. The apparatus of claim 7, wherein the parameter list corresponding to a Kong instance to be deployed further comprises: node information for deploying the Kong instance in the cluster, and the startup limits and running limits on the CPU and memory of the Kong instance to be deployed.
9. The apparatus according to any one of claims 6 to 8, wherein, when generating a corresponding Kong instance based on each deployment file, the second generating unit is specifically configured to:
upon receiving a Kong instance deployment request, determine a target node in the cluster for deploying the Kong instances; and
deploy, according to each deployment file, a corresponding Kong instance on the target node.
10. The apparatus of claim 9, wherein the apparatus further comprises:
an adding unit, configured to add the IP address of each Kong instance deployed on the target node to the network card of the target node, so that, when a Kong instance is started, the Kong instance communicates with external devices based on its corresponding IP address on the network card.
CN202011527486.6A 2020-12-22 2020-12-22 API gateway deployment method and device Active CN112600931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011527486.6A CN112600931B (en) 2020-12-22 2020-12-22 API gateway deployment method and device

Publications (2)

Publication Number Publication Date
CN112600931A true CN112600931A (en) 2021-04-02
CN112600931B CN112600931B (en) 2022-05-24

Family

ID=75200013


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157339A (en) * 2021-04-23 2021-07-23 东云睿连(武汉)计算技术有限公司 Application service expansion method, system, storage medium and device based on OSB
CN114221949A (en) * 2021-11-30 2022-03-22 北京航天云路有限公司 API gateway implementation method suitable for public cloud platform

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108270726A (en) * 2016-12-30 2018-07-10 杭州华为数字技术有限公司 Application example dispositions method and device
CN108809722A (en) * 2018-06-13 2018-11-13 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium of deployment Kubernetes clusters
CN108958927A (en) * 2018-05-31 2018-12-07 康键信息技术(深圳)有限公司 Dispositions method, device, computer equipment and the storage medium of container application
CN110457114A (en) * 2019-07-24 2019-11-15 杭州数梦工场科技有限公司 Application cluster dispositions method and device
CN111176788A (en) * 2019-12-24 2020-05-19 优刻得科技股份有限公司 Method and system for deploying main nodes of Kubernetes cluster
CN111371679A (en) * 2020-03-09 2020-07-03 山东汇贸电子口岸有限公司 Method for realizing API gateway based on kubernets and Kong
CN111935312A (en) * 2020-09-21 2020-11-13 深圳蜂巢互联(南京)科技研究院有限公司 Industrial Internet container cloud platform and flow access control method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU, Juan et al.: "Basic Principles and Simulation of the Container Network Calico", Information & Computer (Theoretical Edition) *
YANG, Jianping et al.: "Design of a Novel Intelligent Gateway Based on Container Technology", Automation Panorama *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant