CN114490393A - Single-cluster multi-tenant management system - Google Patents

Single-cluster multi-tenant management system

Info

Publication number
CN114490393A
CN114490393A
Authority
CN
China
Prior art keywords
service
containerized
area
gateway
partition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210102294.3A
Other languages
Chinese (zh)
Inventor
王蔚
高剑
张立东
刘智勇
范明柯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Financial Futures Information Technology Co., Ltd.
Original Assignee
Shanghai Financial Futures Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Financial Futures Information Technology Co., Ltd.
Priority to CN202210102294.3A
Publication of CN114490393A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3664 - Environments for testing or debugging software
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 - Hypervisors; Virtual machine monitors
    • G06F 9/45558 - Hypervisor-specific management and integration aspects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 - Hypervisors; Virtual machine monitors
    • G06F 9/45558 - Hypervisor-specific management and integration aspects
    • G06F 2009/45595 - Network integration; Enabling network access in virtual machine instances

Abstract

The invention discloses a single-cluster multi-tenant management system that can create multiple sets of development and test environments in which all zones of a tenant are interconnected while the zones of different tenants are isolated from one another. The technical scheme is as follows: on the one hand, isolation among multiple environments is guaranteed even when multiple tenants share one set of development and test infrastructure; traffic between partitions of the same partition group is completely and uniformly controlled by the gateways and Ingress, and no access traffic flows between different partition groups, so that traffic is isolated across partition groups while remaining connected within each group. On the other hand, the invention provides a partition group mechanism that makes services transparent within a partition group: cross-partition routing and addressing inside every partition group are uniformly encapsulated by the gateways and Ingress, so that services and their developers need not know which partition group they belong to and can use the same service invocation mechanism as the online environment.

Description

Single-cluster multi-tenant management system
Technical Field
The invention relates to a single-cluster multi-tenant management system, and in particular to a single-cluster multi-tenant management system for Kubernetes (an open-source container management platform) based on Ingress and Namespace.
Background
The traditional online environment of the financial industry, as shown in fig. 1, generally has the following characteristics:
1. Multiple highly isolated network zones: owing to the sensitivity of the financial industry to information security, the online environment of a financial enterprise usually consists of several highly isolated network zones, divided according to the type of business carried and the security level required. Between network zones, isolation is enforced with technologies such as firewalls, and all necessary cross-zone access is uniformly controlled through gateways.
2. Traditional and cloud-native architectures in parallel: in recent years, cloud-native technology has become an important engine of business growth, and the financial industry has joined the ranks of cloud-native architecture transformation. However, cloud-native transformation of a business architecture cannot be accomplished overnight, so the coexistence of cloud-native and traditional application architectures will remain the status quo for a long time. Architecturally, a network zone may therefore consist of a traditional business architecture or of a cluster under a cloud-native architecture.
3. Micro-service architecture: the micro-service architecture is one of the three pillars of cloud native. It makes the dependencies between services explicit, decouples them, and lets development focus more on business logic, which makes it the natural choice for the architecture transformation of most financial enterprises today. Because micro-services are business-focused, and given the complexity of financial business, completing one financial transaction often requires multiple services deployed in multiple zones to cooperate through RESTful interface calls.
In summary, financial business architecture transformation based on cloud native and micro-services brings enterprises value such as efficient resource use and application agility, but the complex online network architecture of the financial industry also poses a greater challenge for the multi-tenant management mechanism of the test environment: how to use a multi-tenant mechanism, within limited development and test resources, to quickly create multiple sets of non-containerized and containerized environments consistent with the online environment, such that all zones of one tenant are interconnected while the zones of different tenants are isolated, is one of the problems the industry currently needs to solve.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
The invention aims to solve the above problems by providing a single-cluster multi-tenant management system that can create multiple sets of development and test environments consistent with the online environment, in which all zones of a tenant are interconnected while the zones of different tenants are isolated.
The technical scheme of the invention is as follows: the invention discloses a single-cluster multi-tenant management system comprising a plurality of partition groups. Each partition group comprises a plurality of partitions, each corresponding one-to-one to an online zone of the production environment; some of the partitions in a partition group provide non-containerized zones and the others provide containerized zones, and together the non-containerized and containerized zones form a complete simulation of the online environment. Different tenants customize the names of their partition groups according to the intended use of the environment.
According to an embodiment of the single-cluster multi-tenant management system, the non-containerized zones consist of Linux hosts of a cloud platform, and the containerized zones consist of a number of namespaces in a Kubernetes cluster, wherein each non-containerized zone and each containerized zone is composed of a series of services and a gateway.
According to an embodiment of the single-cluster multi-tenant management system of the present invention, each service is a micro-service exposed over the HTTP protocol. The access path of a service is [local zone gateway address]/[S], where S is the service name; the service access address in a non-containerized zone is IP[G]/[S], where G is the gateway name; the service access address in a containerized zone is [NG]-[Z].k8s.net/[S], where NG is the partition group name and Z is the partition name. A gateway is deployed in each zone to control access traffic between zones; service access within a zone does not pass through the gateway. The routing database of each gateway stores the access addresses of the other zones. The gateway exposure address of a non-containerized zone is IP[G], and the gateway exposure address of a containerized zone is [NG]-[Z].k8s.net.
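This addressing scheme can be illustrated with a minimal Python sketch; the zone names, gateway IP, and service names below are hypothetical examples, not values fixed by the invention:

```python
def service_access_address(s, containerized, g_ip=None, ng=None, z=None):
    """Compute a service's cross-zone access address per the scheme above."""
    if containerized:
        # containerized zone: [NG]-[Z].k8s.net/[S]
        return f"{ng}-{z}.k8s.net/{s}"
    # non-containerized zone: IP[G]/[S]
    return f"{g_ip}/{s}"

print(service_access_address("s1", False, g_ip="10.0.1.20"))  # 10.0.1.20/s1
print(service_access_address("s3", True, ng="ng1", z="z3"))   # ng1-z3.k8s.net/s3
```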
According to an embodiment of the single-cluster multi-tenant management system of the present invention, each partition group further includes Ingress, a network proxy based on routing rules, which is configured to perform the following processing:
first, the master node of the container cluster registers the root domain name .k8s.net with the DNS service, exposing the container cluster to the local area network;
then, on the basis that each containerized partition corresponds to one Ingress, the Ingress maps a domain name to the gateway of that containerized partition by way of domain name exposure, wherein inter-partition traffic within the same partition group is uniformly controlled by the gateways and Ingress, and no access traffic exists between different partition groups.
According to an embodiment of the single-cluster multi-tenant management system of the present invention, the multi-tenant management mechanism of the system includes a partition group creation method comprising the following steps:
Step 1: the user inputs the name of the partition group;
Step 2: partitions are created in a loop according to the built-in list of online zones;
Step 3: when the zone is a non-containerized zone, creating the partition requires creating a virtual machine on the cloud platform, allocating an IP, and deploying the zone's components, including its services and gateway; after deployment completes, the gateway routing address of the zone is generated, and step 5 is then executed;
Step 4: when the zone is a containerized zone, the Kubernetes API is called to create the partition, an Ingress instance matching the partition is created, the domain name address of the partition is registered and its mapping to the gateway service under the partition is configured, the gateway routing address of the zone is then generated, and step 5 is then executed;
Step 5: the gateway routing addresses of all zones in the partition group, containerized and non-containerized, are aggregated and stored centrally, and the aggregated result is distributed and configured to the gateway of each zone.
According to an embodiment of the single-cluster multi-tenant management system of the present invention, the multi-tenant management mechanism of the system further includes a service routing method under the same partition group, the service routing method comprising a calling process in which a service in a non-containerized zone calls a service in a containerized zone, and a calling process in which a service in a containerized zone calls a service in a non-containerized zone.
According to an embodiment of the single-cluster multi-tenant management system of the present invention, the calling process in which service S1 in non-containerized zone Z1 calls service S3 in containerized zone Z3 further includes:
Step 1: service S1 uses the service registration and discovery of the non-containerized zone to obtain the access address of service S3 in containerized zone Z3, namely IP[G1]/S3, and initiates a call request, which first reaches gateway G1 of zone Z1 where S1 resides;
Step 2: gateway G1 computes the access address of service S3 as NG1-Z3.k8s.net/S3 by querying its routing information table, resolves the domain name of this access address through the DNS service, and the call request reaches the container cluster;
Step 3: the container cluster matches, via the domain name address, the Ingress to which containerized zone Z3 belongs, and the Ingress translates the call request into cluster-internal addressing: containerized zone Z3, gateway G3, path "/S3";
Step 4: gateway G3 translates the call request into a cluster-internal address according to the path "/S3" in the cluster-internal addressing, i.e., it looks up service S3 in the same partition.
According to an embodiment of the single-cluster multi-tenant management system of the present invention, the calling process in which service S4 in containerized zone Z4 calls service S2 in non-containerized zone Z2 further includes:
Step 1: service S4, located in containerized zone Z4, obtains the access address of service S2 through the service registration and discovery of the containerized zone, the registered address being the G4 gateway;
Step 2: gateway G4 computes the access address of service S2 by querying its routing information table, and the call request reaches gateway G2 of non-containerized zone Z2;
Step 3: gateway G2 obtains the access address of service S2 through the service registration and discovery of the non-containerized zone, and the call request reaches service S2.
Compared with the prior art, the invention has the following beneficial effects. On the one hand, the invention guarantees isolation among multiple environments while multiple tenants share one development and test environment: within a partition group, traffic between partitions is completely and uniformly controlled by the gateways and Ingress, and no access traffic flows between partition groups, thereby isolating traffic across partition groups while keeping traffic connected inside each group. On the other hand, the invention provides a partition group mechanism that makes services transparent within a partition group: cross-partition routing and addressing inside every partition group are uniformly encapsulated by the gateways and Ingress, so that services and their developers need not know which partition group they belong to and can use the same service invocation mechanism as the online environment.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 shows an architecture diagram of a traditional online environment in the financial industry.
Fig. 2 shows an overall architecture diagram of an embodiment of the single-cluster multi-tenant management system of the present invention.
FIG. 3 illustrates a flow diagram for creating a partition group as configured in the system embodiment shown in FIG. 2.
Fig. 4 is a flow chart illustrating a service routing method under the same partition group configured in the system embodiment shown in fig. 2.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
Fig. 2 illustrates the overall architecture of an embodiment of the single-cluster multi-tenant management system of the present invention. Referring to fig. 2, the single-cluster multi-tenant management system of this embodiment includes a plurality of partition groups.
A partition group (NG) comprises a plurality of partitions that correspond one-to-one to the online zones (Zone, Z) of the production environment, and provides a complete online simulation environment formed jointly by non-containerized zones (i.e., zones NG1-Z1 and NG1-Z2 in the figure) and containerized zones (i.e., zones NG1-Z3 and NG1-Z4 in the figure). Different tenants customize the names of their partition groups according to the intended use of the environment, e.g. NG = m12-trade-it-test, i.e., an integration test environment for the month-12 trading release.
The traditional environment zones (i.e., the non-containerized zones) contained in a partition group are composed of Linux hosts of the cloud platform, such as zones NG1-Z1 and NG1-Z2 in partition group NG1, and zones NG2-Z1 and NG2-Z2 in partition group NG2, shown in FIG. 2. The containerized zones contained in a partition group are composed of several namespaces (Namespace, N for short) in a Kubernetes cluster, such as partitions NG1-Z3 and NG1-Z4 in partition group NG1, and partitions NG2-Z3 and NG2-Z4 in partition group NG2, shown in FIG. 2. Each zone Z (whether non-containerized or containerized) is composed of a series of services (Service, S for short) and a gateway (Gateway, G for short). The services are micro-services exposed over the HTTP protocol; the access path of a service is [local zone gateway address]/[S], the service access address of a non-containerized zone is IP[G]/[S], and the service access address of a containerized zone is [NG]-[Z].k8s.net/[S]. The gateways are deployed in their respective zones to control access traffic between zones; service access within a zone does not pass through the gateway. The routing database of each gateway stores the access addresses of the other zones. The gateway exposure address of a non-containerized zone is IP[G], i.e., the LAN IP of the host where the zone resides; the gateway exposure address of a containerized zone is [NG]-[Z].k8s.net.
A partition group also includes Ingress (i.e., network proxies based on routing rules, IN for short), such as Ingress NG1-IN3 and NG1-IN4 in the illustrated partition group NG1, and Ingress NG2-IN3 and NG2-IN4 in partition group NG2. Ingress is a Kubernetes resource and one of the common ways Kubernetes exposes services. First, the master node of the container cluster registers the root domain name .k8s.net with the DNS service, exposing the container cluster to the local area network. Then, on the basis that one containerized zone (e.g., zone NG1-Z3 as shown) corresponds to one Ingress, the Ingress maps a domain name to the gateway of that containerized zone by way of domain name exposure. For example, when the controller behind the Ingress recognizes the domain name [NG]-[Z].k8s.net, it forwards the request, by mapping and proxying, to the gateway of zone [Z] of partition group [NG] indicated in the domain name.
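A minimal sketch of this one-Ingress-per-partition mapping, using the official Kubernetes Python client, is given below; the partition group ng1, partition z3, gateway service name g3, and port 8080 are illustrative assumptions (Kubernetes object names must be lowercase):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

ng, z, gateway_svc, port = "ng1", "z3", "g3", 8080
namespace = f"{ng}-{z}"            # partition name N = [NG]-[Z]
host = f"{ng}-{z}.k8s.net"         # exposed domain name of the partition

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name=f"{ng}-in-{z}", namespace=namespace),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host=host,  # the controller matches this host header...
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    # ...and proxies matching requests to the partition gateway
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name=gateway_svc,
                            port=client.V1ServiceBackendPort(number=port))))]))]))

client.NetworkingV1Api().create_namespaced_ingress(namespace, ingress)
```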
Inter-partition traffic within the same partition group is uniformly controlled by the gateways and Ingress, and no access traffic exists between different partition groups. This guarantees traffic connectivity inside a partition group, consistent with the traffic flow of the online zones, while at the same time guaranteeing isolation among the multiple environments even though multiple tenants share one cloud platform and one container cluster.
To implement the multi-tenant management mechanism in the system shown in fig. 2, a partition group creation method shown in fig. 3 and a service routing method under the same partition group shown in fig. 4 need to be configured. The two methods are explained below separately.
FIG. 3 shows the flow by which the system creates a partition group. The user only needs to enter one parameter, the partition group name (e.g., new-production-test), through the interface, and the method of FIG. 3 automatically generates for the user a set of development and test environments whose zones are interconnected, whose traffic forms a closed loop, and which is consistent with the online zones. The detailed procedure is as follows; a condensed code sketch of the flow is given after the steps.
Step 1: the user enters the name of the partition group, i.e., the value of NG.
Step 2: partitions are created in a loop according to the built-in list of online zones; the name of each partition is N = [NG]-[Z].
Step 3: when the zone is a traditional zone (non-containerized zone), creating the partition requires creating a virtual machine on the cloud platform, allocating an IP, and deploying the zone's components such as its services and gateway; after deployment completes, the gateway routing address of the zone is generated, e.g. Addr[NG1-Z1] = IP[G1]. Step 5 is then executed.
Step 4: when the zone is a containerized zone, the Kubernetes API is called to create the partition (the partition group is NG; zones and partitions are in one-to-one correspondence, a partition corresponding to the zone concept of the production line), with the partition name N = [NG]-[Z]. At the same time, an Ingress instance matching the partition is created, the domain name address [NG]-[Z].k8s.net of the partition is registered, and the mapping between the domain name address and the gateway service [G] under partition [N] is configured. Finally, the gateway routing address of the zone is generated, Addr[[NG]-[Z]] = [NG]-[Z].k8s.net. Step 5 is then executed.
Step 5: the gateway routing addresses of all zones (containerized and non-containerized) in the partition group are aggregated and stored centrally, and the aggregated result is distributed and configured to the gateway of each zone.
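The following Python sketch condenses steps 1 to 5; the helper bodies are placeholders (prints) standing in for the real cloud-platform and Kubernetes API calls, and the zone list and addresses are illustrative assumptions:

```python
ZONE_LIST = [("z1", "vm"), ("z2", "vm"), ("z3", "k8s"), ("z4", "k8s")]

def create_vm_zone(ng, z, index):
    # placeholder for: create VM on the cloud platform, allocate an IP,
    # deploy the zone's services and gateway
    print(f"cloud platform: provision {ng}-{z}")
    return f"10.0.{index}.20"                    # Addr = IP[G]

def create_k8s_zone(ng, z):
    # placeholder for: create namespace [NG]-[Z], create the matching Ingress,
    # register its domain name and map it to the partition's gateway service
    print(f"kubernetes: create namespace and ingress for {ng}-{z}")
    return f"{ng}-{z}.k8s.net"                   # Addr = [NG]-[Z].k8s.net

def create_partition_group(ng):                  # step 1: NG from the user
    addr = {}
    for i, (z, kind) in enumerate(ZONE_LIST):    # step 2: loop over zones
        addr[f"{ng}-{z}"] = (create_vm_zone(ng, z, i) if kind == "vm"
                             else create_k8s_zone(ng, z))   # steps 3 and 4
    # step 5: store the table centrally, then distribute it to every gateway
    print("distribute routing table:", addr)

create_partition_group("new-production-test")
```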
Fig. 4 shows the flow of the service routing method under the same partition group configured in the system embodiment shown in fig. 2. As shown in fig. 4, in partition group NG1, service S1 located in non-containerized zone Z1 calls service S3 located in containerized zone Z3; the calling procedure is as follows:
Step 1: service S1 obtains the access address of service S3 in partition Z3, namely IP[G1]/S3, using the service registration and discovery algorithm of the non-containerized zone, and initiates a call request, which first reaches gateway G1 of zone NG1-Z1 where S1 resides.
The service registration and discovery algorithm of the non-containerized zone is implemented using ZooKeeper, as follows. Within the same non-containerized zone, as shown in fig. 1, service S1 registers its own IP information IP[S1] in ZooKeeper; if service S2 in zone Z1 needs to access service S1, it can read the access address of S1, namely IP[S1], from ZooKeeper. Similarly, gateway G1 is responsible for registering itself in ZooKeeper as the service access address of all other zones, with the registered value IP[G1]; for example, when service S1 calls service S5, it reads IP[G1] from ZooKeeper and calls it.
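A minimal sketch of this registration and discovery pattern with the kazoo ZooKeeper client follows; the znode layout and IP addresses are assumptions, since the patent does not fix them:

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Service S1 registers its own IP under its service name.
zk.create("/services/s1", b"10.0.1.31", ephemeral=True, makepath=True)
# Gateway G1 registers itself as the address of every remote service, e.g. S5.
zk.create("/services/s5", b"10.0.1.20", ephemeral=True, makepath=True)

ip_s1, _ = zk.get("/services/s1")   # b"10.0.1.31": same zone, called directly
ip_s5, _ = zk.get("/services/s5")   # b"10.0.1.20": remote, routed via IP[G1]
zk.stop()
```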
Step 2: gateway G1 computes the access address of service S3 as NG1-Z3.k8s.net/S3 by querying its routing information table (this lookup is sketched in code below), resolves the domain name of this access address through the DNS service, and the call request reaches the container cluster.
Step 3: the container cluster matches, via the domain name address, the Ingress NG1-IN3 to which partition NG1-Z3 belongs, and the Ingress translates the call request into cluster-internal addressing: partition NG1-Z3, gateway G3, path (URL) "/S3".
Step 4: gateway G3 translates the call request into the cluster-internal address "S3:8080" according to the path "/S3", i.e., it looks up service S3 in the same partition.
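The gateway lookup in step 2 can be sketched as follows in Python; the routing table contents and the service-to-zone mapping are illustrative assumptions:

```python
ROUTING_TABLE = {                  # distributed in step 5 of group creation
    "ng1-z1": "10.0.1.20",         # non-containerized zone: IP[G]
    "ng1-z3": "ng1-z3.k8s.net",    # containerized zone: [NG]-[Z].k8s.net
}
SERVICE_ZONE = {"s1": "ng1-z1", "s3": "ng1-z3"}  # which zone hosts a service

def route(service):
    """Translate a service name into its cross-zone access address."""
    return f"{ROUTING_TABLE[SERVICE_ZONE[service]]}/{service}"

print(route("s3"))   # ng1-z3.k8s.net/s3 -> DNS points at the cluster, Ingress
                     # NG1-IN3 matches the host and proxies to gateway G3,
                     # which resolves "/s3" to the in-partition service S3:8080
```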
As shown in fig. 4, in partition group NG2, service S4 located in containerized zone Z4 calls service S2 located in non-containerized zone Z2; the calling procedure is as follows:
Step 1: service S4, located in partition NG2-Z4, obtains the access address of service S2, namely S2:8080, through the service registration and discovery algorithm of the containerized zone; this address is in fact registered to the G4 gateway.
The service registration and discovery algorithm of the containerized zone is implemented as follows: inside the container cluster, Kubernetes DNS is used to implement the registration and discovery mechanism for services. As shown in fig. 1, service S5 registers its own service name S5 and its cluster-internal virtual address in the Kubernetes DNS list, and service S6 can then call service S5 directly using the domain name of S5. Similarly, the G3 gateway is responsible for registering the service names of all external services, such as S1, together with the cluster-internal virtual address of the G3 gateway itself, into the Kubernetes DNS list, so that when service S5 needs to call service S1, the domain name S1 it uses actually reaches gateway G3.
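One plausible realization of this registration (an assumption, since the patent does not specify the mechanism) is a Kubernetes ExternalName Service that aliases the external service name s1 to the partition gateway g3, so that an in-cluster call to http://s1 lands on the gateway:

```python
from kubernetes import client, config

config.load_kube_config()

alias = client.V1Service(
    metadata=client.V1ObjectMeta(name="s1", namespace="ng1-z3"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        # cluster DNS name of the partition's gateway service g3
        external_name="g3.ng1-z3.svc.cluster.local"))

client.CoreV1Api().create_namespaced_service("ng1-z3", alias)
```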
Step 2: the G4 gateway calculates the access address of the service S2 as IP [ G2]/S2 by inquiring the routing information table, and calls the request to reach the G2 of the Z2 area gateway.
And step 3: and the gateway G2 obtains the access address of the service S2 as IP [ S2]/S2 through a service registration and discovery algorithm of the non-containerized area, and calls the request to reach the service S2.
As shown in fig. 4, cross-partition routing and addressing in all partition groups are uniformly encapsulated by the gateways and Ingress; services and their developers need not know which partition group they belong to and can use the same service invocation mechanism as the online environment, which guarantees the transparency of services within a partition group.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk (disk) and disc (disc), as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks (disks) usually reproduce data magnetically, while discs (discs) reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A single-cluster multi-tenant management system, characterized in that it comprises a plurality of partition groups, each partition group comprising a plurality of partitions, each partition corresponding one-to-one to an online zone of a production environment, wherein some of the partitions contained in a partition group provide non-containerized zones and the others provide containerized zones, the non-containerized zones and the containerized zones jointly forming a complete online simulation environment, and different tenants customize the names of their corresponding partition groups according to the intended use of the environment.
2. The single-cluster multi-tenant management system of claim 1, wherein the non-containerized zones consist of Linux hosts of a cloud platform and the containerized zones consist of namespaces in a Kubernetes cluster, and wherein each non-containerized zone and each containerized zone is composed of a series of services and a gateway.
3. The single-cluster multi-tenant management system of claim 2, wherein each service is a micro-service exposed over the HTTP protocol; the access path of a service is [local zone gateway address]/[S], where S is the service name; the service access address in a non-containerized zone is IP[G]/[S], where G is the gateway name; the service access address in a containerized zone is [NG]-[Z].k8s.net/[S], where NG is the partition group name and Z is the partition name; a gateway is deployed in each zone to control access traffic between zones, and service access within a zone does not pass through the gateway; the routing database of each gateway stores the access addresses of the other zones; the gateway exposure address of a non-containerized zone is IP[G]; and the gateway exposure address of a containerized zone is [NG]-[Z].k8s.net.
4. The single-cluster multi-tenant management system of claim 3, wherein each partition group further comprises Ingress, a network proxy based on routing rules, the Ingress being further configured to perform the following processing:
first, the master node of the container cluster registers the root domain name .k8s.net with the DNS service, exposing the container cluster to the local area network;
then, on the basis that each containerized partition corresponds to one Ingress, the Ingress maps a domain name to the gateway of that containerized partition by way of domain name exposure, wherein inter-partition traffic within the same partition group is uniformly controlled by the gateways and Ingress, and no access traffic exists between different partition groups.
5. The single-cluster multi-tenant management system of claim 4, wherein the multi-tenant management mechanism of the system comprises a partition group creation method comprising the following steps:
Step 1: the user inputs the name of the partition group;
Step 2: partitions are created in a loop according to the built-in list of online zones;
Step 3: when the zone is a non-containerized zone, creating the partition requires creating a virtual machine on the cloud platform, allocating an IP, and deploying the zone's components including its services and gateway; after deployment completes, the gateway routing address of the zone is generated, and step 5 is then executed;
Step 4: when the zone is a containerized zone, the Kubernetes API is called to create the partition, an Ingress instance matching the partition is created, the domain name address of the partition is registered and its mapping to the gateway service under the partition is configured, the gateway routing address of the zone is then generated, and step 5 is then executed;
Step 5: the gateway routing addresses of all zones in the partition group, containerized and non-containerized, are aggregated and stored centrally, and the aggregated result is distributed and configured to the gateway of each zone.
6. The single-cluster multi-tenant management system of claim 5, wherein the multi-tenant management mechanism of the system further comprises a service routing method under the same partition group, the service routing method comprising a calling process in which a service in a non-containerized zone calls a service in a containerized zone, and a calling process in which a service in a containerized zone calls a service in a non-containerized zone.
7. The single-cluster multi-tenant management system of claim 6, wherein the calling process in which service S1 in non-containerized zone Z1 calls service S3 in containerized zone Z3 further comprises:
Step 1: service S1 obtains the access address of service S3 in containerized zone Z3, namely IP[G1]/S3, using the service registration and discovery of the non-containerized zone, and initiates a call request, which first reaches gateway G1 of zone Z1 where S1 resides;
Step 2: gateway G1 computes the access address of service S3 as NG1-Z3.k8s.net/S3 by querying its routing information table, resolves the domain name of this access address through the DNS service, and the call request reaches the container cluster;
Step 3: the container cluster matches, via the domain name address, the Ingress to which containerized zone Z3 belongs, and the Ingress translates the call request into cluster-internal addressing: containerized zone Z3, gateway G3, path "/S3";
Step 4: gateway G3 translates the call request into a cluster-internal address according to the path "/S3" in the cluster-internal addressing, i.e., it looks up service S3 in the same partition.
8. The single-cluster multi-tenant management system of claim 7, wherein the calling process in which service S4 in containerized zone Z4 calls service S2 in non-containerized zone Z2 further comprises:
Step 1: service S4, located in containerized zone Z4, obtains the access address of service S2 through the service registration and discovery of the containerized zone, the registered address being the G4 gateway;
Step 2: gateway G4 computes the access address of service S2 by querying its routing information table, and the call request reaches gateway G2 of non-containerized zone Z2;
Step 3: gateway G2 obtains the access address of service S2 through the service registration and discovery of the non-containerized zone, and the call request reaches service S2.
CN202210102294.3A 2022-01-27 2022-01-27 Single-cluster multi-tenant management system Pending CN114490393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210102294.3A CN114490393A (en) 2022-01-27 2022-01-27 Single-cluster multi-tenant management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210102294.3A CN114490393A (en) 2022-01-27 2022-01-27 Single-cluster multi-tenant management system

Publications (1)

Publication Number Publication Date
CN114490393A true CN114490393A (en) 2022-05-13

Family

ID=81475726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210102294.3A Pending CN114490393A (en) 2022-01-27 2022-01-27 Single-cluster multi-tenant management system

Country Status (1)

Country Link
CN (1) CN114490393A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115604199A (en) * 2022-10-09 2023-01-13 上海道客网络科技有限公司(Cn) Service routing method and system for cloud native platform micro-service gateway



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination