CN114168156A - Multi-tenant data persistence method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN114168156A
Authority
CN
China
Prior art keywords
tenant
bucket
zeppelin
data persistence
computing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111432747.0A
Other languages
Chinese (zh)
Inventor
张�浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yishi Huolala Technology Co Ltd
Original Assignee
Shenzhen Yishi Huolala Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yishi Huolala Technology Co Ltd filed Critical Shenzhen Yishi Huolala Technology Co Ltd
Priority to CN202111432747.0A priority Critical patent/CN114168156A/en
Publication of CN114168156A publication Critical patent/CN114168156A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/61 Installation
    • G06F8/63 Image based installation; Cloning; Build to order

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a multi-tenant data persistence method, a multi-tenant data persistence device, a computer-readable storage medium and a computer device. The multi-tenant data persistence method is used for a Zeppelin platform, and the Zeppelin platform is deployed in the hosts of a K8S cluster. The method comprises the following steps: mounting an S3 bucket in each compute node of the K8S cluster; and when a tenant uses the Zeppelin platform, creating a separate user directory for the tenant in the computing node corresponding to the tenant, wherein the user directories of multiple tenants corresponding to the same computing node are located in the S3 bucket mounted on the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other. The multi-tenant data persistence method, the multi-tenant data persistence device, the computer-readable storage medium and the computer device in the embodiments of the present application can implement persistent storage and recording of tenant-related operation data, and the data of different tenants will not affect each other.

Description

Multi-tenant data persistence method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a multi-tenant data persistence method, a multi-tenant data persistence apparatus, a computer-readable storage medium, and a computer device.
Background
Most Internet enterprises have similar notebook products, which perform data analysis, data modeling and data visualization in an interactive manner. Zeppelin Notebook is one of the widely used products for big data analysis, modeling and visualization.
Based on Zeppelin, a multi-tenant notebook application that is deployed in K8S and connects to a big data platform can be custom-built. However, a service deployed in K8S has no data persistence function by default, and when the Zeppelin container exits abnormally or the service stops, the notebook code written by users and the data they upload are at risk of being lost.
There are several ways to solve this problem. For example, data persistence can be implemented by mounting a local disk; however, this approach must programmatically handle the creation and mounting of multi-user data volumes, and when the same data volume directory is mounted in a multi-tenant scenario, multiple users editing the same data causes code and data confusion. In addition, if only the host disk directory is mounted, the mounted files become inconsistent when containers are rescheduled to different hosts after a restart or stop. For another example, a distributed file system can be manually mounted on all the computing nodes in the cluster; however, this requires manually installing mount software and setting configuration files on every host, and operation and maintenance problems such as machine restarts and cluster expansion still need to be handled.
Disclosure of Invention
In order to solve at least one of the above technical drawbacks, the present application provides a multi-tenant data persistence method, a multi-tenant data persistence apparatus, a computer-readable storage medium, and a computer device according to the following technical solutions.
The embodiment of the application provides a multi-tenant data persistence method, which is used for a Zeppelin platform, wherein the Zeppelin platform is deployed in a host of a K8S cluster. The multi-tenant data persistence method comprises the following steps: mounting an S3 bucket in each compute node of the K8S cluster; and when a tenant uses the Zeppelin platform, creating a separate user directory for the tenant in the computing node corresponding to the tenant, wherein the user directories of a plurality of tenants corresponding to the same computing node are located in the S3 bucket mounted by the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other.
In certain embodiments, the mounting of an S3 bucket in each compute node of the K8S cluster comprises: making a mirror image of the software related to the mounting of the S3 bucket; and mounting, by a DaemonSet in the K8S cluster, the S3 bucket into each of the compute nodes based on the mirrored software.
In some embodiments, the software associated with the mount of the S3 bucket includes a start script and a dockerfile.
In certain embodiments, the mounting, by the DaemonSet in the K8S cluster, of the S3 bucket into each compute node based on the mirrored software comprises: creating a ConfigMap object containing basic configuration information for mounting the S3 bucket; and creating a DaemonSet object, wherein the DaemonSet object mounts the S3 bucket into a target directory of each computing node based on the start script, the dockerfile and the ConfigMap object.
In some embodiments, the creating a separate user directory for the tenant in the compute node corresponding to the tenant when the tenant uses the Zeppelin platform includes: when any tenant uses the Zeppelin platform, determining whether the tenant needs to save data; and if so, creating a separate user directory for the tenant in the computing node corresponding to the tenant.
In some embodiments, the creating a separate user directory for a tenant in the computing node corresponding to the tenant comprises: designating the mount type as Host; determining the target directory in the target computing node to be mounted according to the Host; creating an initialization container, and mounting the target directory of the target computing node in the initialization container; creating the user directory for the tenant in the target directory; and creating a Zeppelin service container, and mounting the user directory corresponding to the tenant in the Zeppelin service container.
In some embodiments, the creating a separate user directory for a tenant in the computing node corresponding to the tenant further comprises: after the user directory is created for the tenant in the target directory, setting an operation permission for the user directory.
The embodiment of the application further provides a multi-tenant data persistence device, which is used for a Zeppelin platform, and the Zeppelin platform is deployed in a host of the K8S cluster. The multi-tenant data persistence device comprises a mounting module and an operation module. The mounting module is used for mounting an S3 bucket in each computing node of the K8S cluster. The operation module is used for creating a separate user directory for the tenant in the computing node corresponding to the tenant when the tenant uses the Zeppelin platform, wherein the user directories of a plurality of tenants corresponding to the same computing node are located in the S3 bucket mounted on the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other.
The embodiment of the application also provides a computer readable storage medium. The computer readable storage medium stores thereon a computer program, and the computer program, when executed by a processor, implements the multi-tenant data persistence method according to any one of the above embodiments.
The embodiment of the application also provides computer equipment. The computer device comprises one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to execute the multi-tenant data persistence method described in any one of the above embodiments.
Compared with the prior art, the application has the following beneficial effects:
the multi-tenant data persistence method, the multi-tenant data persistence device, the computer-readable storage medium and the computer device in the embodiments of the present application use the Linux FUSE technology in combination with Docker and the K8S DaemonSet to mount the S3 file directory on the computing nodes in the K8S cluster, and the Zeppelin services created by multiple tenants persist the code written by users and the uploaded data through the file directory mounted on the computing node.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a method flow diagram of a data persistence method of certain embodiments of the present application;
FIG. 2 is a schematic diagram of a data persistence device according to some embodiments of the present application;
FIG. 3 is a method flow diagram of a data persistence method of certain embodiments of the present application;
FIG. 4 is a method flow diagram of a data persistence method of certain embodiments of the present application;
FIG. 5 is a schematic diagram of a data persistence method according to some embodiments of the present application;
FIG. 6 is a method flow diagram of a data persistence method of certain embodiments of the present application;
FIG. 7 is a method flow diagram of a data persistence method of certain embodiments of the present application;
FIG. 8 is a schematic diagram of a computer-readable storage medium in communication with a processor according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a computer device according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring to fig. 1, an embodiment of the present application provides a multi-tenant data persistence method. The method is used for a Zeppelin platform, and the Zeppelin platform is deployed in a host machine of a K8S cluster. The multi-tenant data persistence method comprises the following steps:
01: mounting of the S3 bucket in each compute node of the K8S cluster;
02: when a tenant uses the Zeppelin platform, a separate user directory is created for the tenant in a computing node corresponding to the tenant, wherein the user directories of multiple tenants corresponding to the same computing node are located in an S3 bucket mounted on the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other.
Referring to fig. 2, the present embodiment further provides a multi-tenant data persistence apparatus 10. The multi-tenant data persistence method according to the embodiment of the present application can be implemented by the multi-tenant data persistence device 10 according to the embodiment of the present application. The multi-tenant data persistence device 10 includes a mount module 11 and an operation module 12. Step 01 may be implemented by the mounting module 11. Step 02 may be implemented by the operation module 12.
That is, the mount module 11 may be used to mount the S3 bucket in each compute node of the K8S cluster. The operation module 12 may be used to create a separate user directory for a tenant in a computing node corresponding to the tenant when the tenant uses the Zeppelin platform. Wherein, the user directories of a plurality of tenants corresponding to the same computing node are positioned in the S3 storage bucket mounted by the computing node, and the user directories of different tenants in the same S3 storage bucket are isolated from each other.
Among them, K8S (i.e., Kubernetes) is an open-source container cluster management system from Google. On the basis of the Docker technology, K8S provides a complete set of functions such as deployment and operation, resource scheduling, service discovery and dynamic scaling for containerized applications, and can improve the convenience of large-scale container cluster management. The Zeppelin platform is an application deployed in the K8S cluster. Its functions are as follows: it performs big data query and computation in a visual Web interface, provides rich graphical displays such as bar charts and pie charts for the results, supports querying data with SQL statements, supports writing code scripts in various development languages (Java, Python, Scala, Shell and the like), and supports various data processing engines for processing data, so that users can conveniently create interactive documents and charts using various programming languages. As an example, the Zeppelin platform may be, for example, Zeppelin Notebook.
The Zeppelin service currently deployed in K8S has no data persistence function by default, and relevant operation data of a tenant (also understood as a user or a project group) on the Zeppelin platform, such as written notebook code and uploaded data, can face the risk of loss when the Zeppelin service container abnormally exits or the service stops.
In the multi-tenant data persistence method and the multi-tenant data persistence device 10 according to the embodiments of the present application, an S3 bucket is mounted in each compute node of the K8S cluster, and then, when a tenant uses the Zeppelin platform, a separate user directory can be created for the tenant in the S3 bucket of the corresponding compute node, and the user directories of different tenants are isolated from each other, thereby implementing persistent storage and recording of user-related operation data, and the data of different tenants do not affect each other.
Referring to FIG. 3, in certain embodiments, step 01, mounting an S3 bucket in each compute node of the K8S cluster, comprises:

011: making a mirror image of the software related to the mounting of the S3 bucket;

012: mounting, by the DaemonSet in the K8S cluster, the S3 bucket into each compute node based on the mirrored software.
Referring to fig. 2, in some embodiments, step 011 and step 012 can be implemented by the mount module 11. That is, the mounting module 11 may be further configured to make a mirror image of the software related to the mounting of the S3 bucket, and to mount the S3 bucket into each compute node by the DaemonSet in the K8S cluster based on the mirrored software.
In some embodiments, the software associated with the mount of the S3 bucket includes a start script and a dockerfile.
In particular, the start script is mainly used to obtain the S3 bucket-related authentication parameters and configuration parameters. The authentication parameters include, for example, the access keys, i.e., the Key and the Secret Key. The authentication parameters are used to secure the access service: the S3 storage service identifies which tenant is currently accessing which S3 bucket through the access keys described above. The configuration parameters include, for example, the number of retries when the mount encounters a network problem, whether other tenants are allowed to operate on the files, and the like. The command executed by the start script is, for example: /usr/bin/s3fs ${BUCKET} ${MNT_POINT} -d -d -f -o endpoint=<endpoint>,allow_other,retries=5.
The dockerfile is a script interpreted by the Docker program; it is a text file for building an image, and its content contains the instructions and descriptions required to build the image. In addition to specifying the above start script, the dockerfile also includes a minimal Alpine base image layer, the system dependency packages, and the s3fs (i.e., S3 FUSE) program that performs the S3 bucket file mount operation, and the finally built image is pushed to an image repository.
In the multi-tenant data persistence method and the multi-tenant data persistence device 10 according to the embodiments of the present application, a Docker image is used to encapsulate the software and dependencies required for the S3 FUSE mount, which avoids modifying the host environment and at the same time reduces the cost of installation, operation and maintenance.
Referring to fig. 4, in some embodiments, step 012, mounting the S3 bucket into each computing node by the DaemonSet in the K8S cluster based on the mirrored software, includes:
0121: creating a ConfigMap object containing basic configuration information for mounting the S3 bucket;
0122: a DaemonSet object is created that mounts the S3 bucket into the target directory of each compute node based on the startup script, the dockerfile, and the ConfigMap object.
Referring back to fig. 2, in some embodiments, both steps 0121 and 0122 can be implemented by the mounting module 11. That is, the mounting module 11 may be further configured to create a ConfigMap object that contains the basic configuration information for mounting the S3 bucket. The mount module 11 may be further configured to create a DaemonSet object that mounts the S3 bucket into the target directory of each compute node based on the start script, the dockerfile, and the ConfigMap object.
Specifically, a ConfigMap object is first created in the K8S cluster. The ConfigMap object is an API object in the K8S cluster; it is equivalent to a collection of the configuration parameters or environment parameters required by a running program, and is mainly used to decouple a program's static code from its dynamically changing parameters, which facilitates modification of application configuration parameters. The ConfigMap object contains the basic configuration information for mounting the S3 bucket, such as the S3 bucket name and the aforementioned Key and Secret Key. After creating the ConfigMap object, a DaemonSet object for performing the mount operation flow is created in the K8S cluster. The DaemonSet object uses the aforementioned mirror image containing the software related to the S3 bucket mount. The DaemonSet object may mount the ConfigMap object to obtain the parameters for program execution and mount the S3 bucket into the target directory of each compute node. For example, referring to fig. 5, Node1 and Node2 in fig. 5 are computing nodes, each computing node corresponds to a host, and each host has a target directory: the /mnt/s3fs directory. The software and dependencies required for the s3fs FUSE mount are encapsulated in the Docker image built from the dockerfile, and the S3 bucket is mounted into the /mnt/s3fs directory by means of the DaemonSet object, thereby realizing the mounting and use of distributed object storage on each compute node.
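As an illustrative sketch only, the ConfigMap object and the DaemonSet object described above could look roughly as follows. The object names, bucket name, endpoint, image address, credential Secret and environment variable names are assumptions made for this example and are not taken from the present disclosure; the referenced image is assumed to be the one built from the aforementioned dockerfile and start script.

# Hypothetical sketch; all names, the image address and the credential Secret are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3fs-mount-config
data:
  BUCKET: "zeppelin-notebooks"         # S3 bucket name (assumed)
  MNT_POINT: "/mnt/s3fs"               # target directory on every compute node
  ENDPOINT: "http://s3.example.local"  # object storage endpoint (assumed)
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: s3fs-mounter
spec:
  selector:
    matchLabels:
      app: s3fs-mounter
  template:
    metadata:
      labels:
        app: s3fs-mounter
    spec:
      containers:
        - name: s3fs
          image: registry.example.com/s3fs-mount:latest   # image built from the dockerfile (address assumed)
          # The start script baked into the image reads the parameters above and runs, for example:
          #   /usr/bin/s3fs ${BUCKET} ${MNT_POINT} -d -d -f -o endpoint=<endpoint>,allow_other,retries=5
          envFrom:
            - configMapRef:
                name: s3fs-mount-config
          env:
            - name: S3_ACCESS_KEY                 # authentication parameters consumed by the start script
              valueFrom:                          # (variable and Secret names are assumed)
                secretKeyRef:
                  name: s3-credentials
                  key: access-key
            - name: S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: s3-credentials
                  key: secret-key
          securityContext:
            privileged: true                      # FUSE mounting and Bidirectional propagation need a privileged container
          volumeMounts:
            - name: host-mnt
              mountPath: /mnt/s3fs
              mountPropagation: Bidirectional     # make the FUSE mount visible on the host
      volumes:
        - name: host-mnt
          hostPath:
            path: /mnt/s3fs
            type: DirectoryOrCreate

The Bidirectional mount propagation is what lets the s3fs FUSE mount performed inside the DaemonSet pod appear under /mnt/s3fs on the host itself, so that the Zeppelin pods on the same node can later reach it through a Host-type mount.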
In the multi-tenant data persistence method and the multi-tenant data persistence device 10 according to the embodiments of the present application, a Docker image is used to encapsulate the software and dependencies required for the S3 FUSE mount, avoiding modification of the host environment while reducing the cost of installation, operation and maintenance.
Referring to FIG. 6, in some embodiments, step 02 creates a separate user directory for a tenant in a compute node corresponding to the tenant when the tenant uses the Zeppelin platform, including:
021: when any tenant uses the Zeppelin platform, judging whether the tenant needs to store data or not;
022: and if so, creating a separate user directory for the tenant in the computing node corresponding to the tenant.
Referring back to fig. 2, in some embodiments, step 021 and step 022 can both be implemented by operational module 12. That is, the operation module 12 may be configured to determine whether a tenant needs to perform data saving when any tenant uses the Zeppelin platform, and create a separate user directory for the tenant in a computing node corresponding to the tenant when the tenant needs to perform data saving.
Specifically, in the process of the tenant using the Zeppelin platform, if the tenant needs to save the written code and the uploaded data to achieve data persistence, it is determined that a separate user directory needs to be created for the tenant in the computing node corresponding to the tenant. In this way, a user directory is created only for the tenants that need it rather than for all tenants, which avoids unnecessary operations.
Referring to FIG. 7, in certain embodiments, step 022, creating a separate user directory for the tenant in the compute node corresponding to the tenant, comprises:
0221: the mounting type is designated as Host;
0222: determining a target directory in a target computing node to be mounted according to the Host;
0223: creating an initialization container, and mounting a target directory in a target computing node to be mounted in the initialization container;
0224: creating a user directory for the tenant in the target directory;
0225: creating a Zeppelin service container, and mounting a user directory corresponding to the tenant in the Zeppelin service container.
Referring back to fig. 2, in some embodiments, steps 0221 to 0225 may be implemented by the operation module 12. That is, the operation module 12 can be configured to designate the mount type as Host, and determine the target directory in the target computing node that needs to be mounted according to the Host. The operation module 12 can also be used to create an initialization container, mount the target directory of the target computing node in the initialization container, and create the user directory for the tenant in the target directory. The operation module 12 may also be configured to create a Zeppelin service container, and mount the user directory corresponding to the tenant in the Zeppelin service container.
Specifically, referring to fig. 5, when it is determined that the tenant needs to save data, the mount type is fixed as Host. Through the parameter Host, the container in the computing node can determine which target directory of the computing node needs to be mounted; in other words, it can determine which target directory on the host needs to be mounted. Subsequently, an initialization container is created, and the target directory on the target host determined in step 0222 is mounted in the initialization container. Subsequently, a separate user directory is created for the tenant under the target directory, and this separate user directory can be used to store the data of the current tenant. Subsequently, a Zeppelin service container is created. When the Zeppelin service container mounts the target directory on the host, the user directory corresponding to the current tenant under the target directory is designated for mounting according to the tenant. Thus, isolation between the data of different tenants is achieved: different tenants mount the same S3 bucket but different user directories within that bucket.
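As an illustrative sketch only, steps 0221 to 0226 could be expressed as a pod specification along the following lines; the tenant name "tenant-a", the images, the mount paths and the permission bits are assumptions made for this example rather than details taken from the present disclosure.

# Hypothetical sketch of a per-tenant Zeppelin service pod ("tenant-a" is an assumed tenant name).
apiVersion: v1
kind: Pod
metadata:
  name: zeppelin-tenant-a
spec:
  volumes:
    - name: s3-dir
      hostPath:
        path: /mnt/s3fs                     # Host mount type: the target directory mounted by the DaemonSet
        type: Directory
  initContainers:
    - name: init-user-dir
      image: busybox:1.36                   # assumed helper image
      # Create the tenant's separate user directory under the target directory and
      # set its operation permission (steps 0223, 0224 and 0226).
      command: ["sh", "-c", "mkdir -p /mnt/s3fs/tenant-a && chmod 770 /mnt/s3fs/tenant-a"]
      volumeMounts:
        - name: s3-dir
          mountPath: /mnt/s3fs
  containers:
    - name: zeppelin
      image: apache/zeppelin:0.10.1         # assumed image; a custom Zeppelin service image would be used in practice
      volumeMounts:
        - name: s3-dir
          mountPath: /zeppelin/notebook     # only the tenant's own directory is visible inside the container
          subPath: tenant-a                 # step 0225: mount the user directory corresponding to the tenant

Because the initialization container finishes before the Zeppelin service container starts, the user directory is guaranteed to exist, with its operation permission already set, by the time the service container mounts it through subPath.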
Similarly, if data needs to be saved while a Hive pod, Spark pod or Shell pod (as shown in fig. 5) started by the Zeppelin service is running, the data can also be saved through the above-mentioned user directory of the target tenant mounted from the same S3 bucket; a sketch of such an interpreter pod is given below, and details are not repeated here.
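Under the same assumptions, an interpreter pod started by the Zeppelin service for this tenant (a Spark pod, for example) could reuse the volume pattern above so that its output lands in the same user directory; the pod name, image and mount point below are likewise assumptions.

# Hypothetical fragment: a Spark interpreter pod for the same tenant reusing the tenant's user directory.
apiVersion: v1
kind: Pod
metadata:
  name: spark-interpreter-tenant-a
spec:
  volumes:
    - name: s3-dir
      hostPath:
        path: /mnt/s3fs
        type: Directory
  containers:
    - name: spark
      image: apache/spark:3.3.1             # assumed interpreter image
      volumeMounts:
        - name: s3-dir
          mountPath: /data                  # assumed mount point inside the interpreter container
          subPath: tenant-a                 # the same user directory in the same S3 bucket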
Referring back to FIG. 7, in certain embodiments, step 022, creating a separate user directory for the tenant in the compute node corresponding to the tenant, further comprises:
0226: and after the user directory is created for the tenant in the target directory, setting operation permission for the user directory.
Referring back to fig. 2, in some embodiments, step 0226 can be implemented by the operation module 12. That is, the operation module 12 may be further configured to set the operation permission for the user directory after the user directory is created for the tenant in the target directory.
The operation permission may include read, write and other operation permissions. Setting the operation permission ensures that the container can read and write data in the user directory at run time.
To sum up, the multi-tenant data persistence method and the multi-tenant data persistence device 10 in the embodiments of the present application use the Linux FUSE technology in combination with Docker and the K8S DaemonSet to mount the S3 file directory on the computing nodes in the K8S cluster, and the Zeppelin Notebook services created by multiple tenants persist the code written by the tenant and the uploaded data by mounting the file directory on the computing node. In addition, the mount logic of the S3 bucket is decoupled from the business logic of Zeppelin, so the two do not interfere with each other. Moreover, since different tenants mount different user directories, secure isolation of code and data is realized. In addition, the software and dependencies required for the S3 FUSE mount are packaged in a Docker image, which avoids modifying the host environment while reducing the cost of installation, operation and maintenance. In addition, with the K8S DaemonSet object, when the K8S cluster is expanded, the mounting of the distributed S3 file system on the newly added nodes is performed automatically, reducing manual operation and maintenance intervention.
The solution of the present application aims to solve the productionization problems of the Zeppelin platform by combining the cloud-native K8S DaemonSet, containerization and the distributed data storage technology S3, so that when users analyze and visualize data on the platform they do not need to pay attention to the underlying storage resources or worry about code and data loss or data privacy, and system developers and operation and maintenance personnel do not need to worry about distributed data storage maintenance when facing container failures and cluster scaling.
The contents of the method embodiments of the present application are all applicable to the apparatus embodiments, the functions specifically implemented by the apparatus embodiments are the same as those of the method embodiments, and the beneficial effects achieved by the apparatus embodiments are also the same as those achieved by the method described above.
Further, referring to fig. 8, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the multi-tenant data persistence method described in any of the above embodiments. The computer-readable storage medium includes, but is not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magnetic-optical disks, ROMs (Read-Only memories), RAMs (Random Access memories), EPROMs (Erasable Programmable Read-Only memories), EEPROMs (Electrically Erasable Programmable Read-Only memories), flash memories, magnetic cards, or optical cards. That is, a storage device includes any medium that stores or transmits information in a form readable by a device (e.g., a computer, a cellular phone), and may be a read-only memory, a magnetic or optical disk, or the like.
The contents of the method embodiments of the present application are all applicable to the storage medium embodiments, the functions specifically implemented by the storage medium embodiments are the same as those of the method embodiments, and the beneficial effects achieved by the storage medium embodiments are also the same as those achieved by the method described above, and for details, refer to the description of the method embodiments, and are not described herein again.
In addition, referring to fig. 9, an embodiment of the present application further provides a computer device, where the computer device described in this embodiment may be a server, a personal computer, a network device, and other devices. The computer device includes: one or more processors, memory, one or more computer programs stored in the memory and configured to be executed by the one or more processors, the one or more computer programs configured to perform the multi-tenant data persistence methods of any of the embodiments above.
The contents of the method embodiments of the present application are all applicable to the computer device embodiment, the functions specifically implemented by the computer device embodiment are the same as those of the method embodiments, and the beneficial effects achieved by the computer device embodiment are also the same as those achieved by the method described above.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, several modifications and improvements can be made without departing from the principle of the present application, and these modifications and improvements should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A multi-tenant data persistence method is used for a Zeppelin platform, the Zeppelin platform is deployed in hosts of a K8S cluster, and the method is characterized by comprising the following steps:
mounting an S3 bucket in each compute node of the K8S cluster;
when the tenant uses the Zeppelin platform, creating a separate user directory for the tenant in the computing node corresponding to the tenant, wherein the user directories of a plurality of tenants corresponding to the same computing node are located in the S3 bucket mounted by the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other.
2. The multi-tenant data persistence method of claim 1, wherein the mounting of the S3 bucket in each compute node of the K8S cluster comprises:
making a mirror image of the software related to the mounting of the S3 bucket;
mounting, by a DaemonSet in the K8S cluster, the S3 bucket into each of the compute nodes based on the mirrored software.
3. The multi-tenant data persistence method of claim 2, wherein the software associated with the mount of the S3 bucket comprises a start script and a dockerfile.
4. The multi-tenant data persistence method of claim 3, wherein the mounting, by the DaemonSet in the K8S cluster, of the S3 bucket into each of the compute nodes based on the mirrored software comprises:
creating a ConfigMap object containing basic configuration information for mounting the S3 bucket;
and creating a DaemonSet object, wherein the DaemonSet object mounts the S3 bucket into the target directory of each computing node based on the start script, the dockerfile and the ConfigMap object.
5. The multi-tenant data persistence method of claim 4, wherein the creating a separate user directory for the tenant in the compute node corresponding to the tenant while the tenant uses the Zeppelin platform comprises:
when any tenant uses the Zeppelin platform, judging whether the tenant needs to save data or not;
if yes, creating an independent user directory for the tenant in the computing node corresponding to the tenant.
6. The multi-tenant data persistence method of claim 5, wherein the creating a separate user directory for a tenant in the compute node corresponding to the tenant comprises:
the mounting type is designated as Host;
determining the target directory in the target computing node to be mounted according to the Host;
creating an initialization container, and mounting the target directory in the target computing node to be mounted in the initialization container;
creating the user directory for the tenant in the target directory;
creating a Zeppelin service container, and mounting the user directory corresponding to the tenant in the Zeppelin service container.
7. The multi-tenant data persistence method of claim 6, wherein the creating a separate user directory for a tenant in the compute node corresponding to the tenant further comprises:
and after the user directory is created for the tenant in the target directory, setting operation permission for the user directory.
8. A multi-tenant data persistence apparatus for a Zeppelin platform deployed in a host of a K8S cluster, comprising:
a mount module to mount a S3 bucket in each compute node of the K8S cluster;
an operation module, configured to create, in the computing node corresponding to the tenant, a separate user directory for the tenant when the tenant uses the Zeppelin platform, wherein the user directories of a plurality of tenants corresponding to the same computing node are located in the S3 bucket mounted by the computing node, and the user directories of different tenants in the same S3 bucket are isolated from each other.
9. A computer-readable storage medium, wherein a computer program is stored thereon, which when executed by a processor implements the multi-tenant data persistence method of any of claims 1 through 7.
10. A computer device, comprising:
one or more processors;
a memory;
one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs configured to: performing the multi-tenant data persistence method of any of claims 1 through 7.
CN202111432747.0A 2021-11-29 2021-11-29 Multi-tenant data persistence method and device, storage medium and computer equipment Pending CN114168156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111432747.0A CN114168156A (en) 2021-11-29 2021-11-29 Multi-tenant data persistence method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111432747.0A CN114168156A (en) 2021-11-29 2021-11-29 Multi-tenant data persistence method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN114168156A true CN114168156A (en) 2022-03-11

Family

ID=80481440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111432747.0A Pending CN114168156A (en) 2021-11-29 2021-11-29 Multi-tenant data persistence method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN114168156A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114584564A (en) * 2022-03-23 2022-06-03 北京邮电大学深圳研究院 Mobile terminal side data addressing and analyzing technology for privacy resource protection
CN114584564B (en) * 2022-03-23 2023-08-18 北京邮电大学深圳研究院 Mobile terminal side data addressing and analyzing method for protecting privacy resources

Similar Documents

Publication Publication Date Title
US11178207B2 (en) Software version control without affecting a deployed container
US9983891B1 (en) Systems and methods for distributing configuration templates with application containers
CN111338854B (en) Kubernetes cluster-based method and system for quickly recovering data
US8560826B2 (en) Secure virtualization environment bootable from an external media device
US20150128141A1 (en) Template virtual machines
US11748006B1 (en) Mount path management for virtual storage volumes in a containerized storage environment
US20120084768A1 (en) Capturing Multi-Disk Virtual Machine Images Automatically
US9135038B1 (en) Mapping free memory pages maintained by a guest operating system to a shared zero page within a machine frame
CN107797767A (en) One kind is based on container technique deployment distributed memory system and its storage method
US9792131B1 (en) Preparing a virtual machine for template creation
CN111090498B (en) Virtual machine starting method and device, computer readable storage medium and electronic equipment
US11144292B2 (en) Packaging support system and packaging support method
WO2016206414A1 (en) Method and device for merging multiple virtual desktop architectures
US11709692B2 (en) Hot growing a cloud hosted block device
Mavridis et al. Orchestrated sandboxed containers, unikernels, and virtual machines for isolation‐enhanced multitenant workloads and serverless computing in cloud
US11150981B2 (en) Fast recovery from failures in a chronologically ordered log-structured key-value storage system
US9986043B2 (en) Technology for service management applications and cloud workload migration
CN114168156A (en) Multi-tenant data persistence method and device, storage medium and computer equipment
US20220138023A1 (en) Managing alert messages for applications and access permissions
US9104544B1 (en) Mitigating eviction by maintaining mapping tables
CN113986858B (en) Linux compatible android system shared file operation method and device
CN103389909A (en) Rendering farm node virtualization deployment system and application thereof
US20230108778A1 (en) Automated Generation of Objects for Kubernetes Services
CN111949378B (en) Virtual machine starting mode switching method and device, storage medium and electronic equipment
CN114237814A (en) Virtual machine migration method and device across virtualization platforms and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination