CN115640021A - Secondary mirror image warehouse deployment method and system of global warehouse - Google Patents


Info

Publication number
CN115640021A
Authority
CN
China
Prior art keywords
warehouse
repo
global
mirror image
container
Prior art date
Legal status
Pending
Application number
CN202211193478.1A
Other languages
Chinese (zh)
Inventor
蔡泽坤
Current Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202211193478.1A
Publication of CN115640021A


Abstract

The embodiments of the present application provide a secondary mirror image warehouse deployment method and system for a global warehouse, belonging to the technical field of container image warehouses. The method comprises the following steps: establishing ssh mutual trust between the global warehouse and each of its associated repo nodes, and constructing a storage space on each repo node to obtain a repo warehouse serving as the secondary cluster mirror image warehouse, wherein each repo node is a linux server of a sub-cluster associated with the global warehouse; configuring a yum source in each repo warehouse, installing the required dependencies, deploying a customized harbor instance, and periodically synchronizing the metadata information of the global warehouse to each repo warehouse; and, in the image addresses used when each sub-cluster application creates a container, changing the global warehouse domain name to the repo warehouse domain name, so that images are pulled from the repo warehouse. According to the scheme of the invention, a secondary mirror image warehouse is added in each cluster for image distribution, which offloads request pressure from the primary warehouse, improves high availability in multi-cluster scenarios, and provides a performance buffer for horizontal/vertical capacity expansion.

Description

Secondary mirror image warehouse deployment method and system of global warehouse
Technical Field
The application relates to the technical field of mirror image warehouses, in particular to a secondary mirror image warehouse deployment method and a secondary mirror image warehouse deployment system of a global warehouse.
Background
Virtualization has become a widely recognized way of sharing server resources, giving system administrators great flexibility to build operating system instances on demand. Because hypervisor virtualization still has some performance and resource-usage efficiency issues, a new type of virtualization technology called the container has emerged to help solve them. Container technology is the technique of effectively dividing the resources of a single operating system into isolated groups, so as to better balance conflicting resource usage demands among those groups. A very widely used project in container technology is Harbor (an enterprise-level container mirror image warehouse), an open-source container mirror image warehouse project for storing cloud-native artifacts such as images and Helm charts. As a CNCF (Cloud Native Computing Foundation) graduated mirror image warehouse project, Harbor provides capabilities such as user permission control, policy management and vulnerability scanning, and helps users manage artifacts on Kubernetes (a container orchestration system) or docker in a safe and controllable way.
In many fields, for example financial institutions, multi-cluster Kubernetes application scenarios are adopted. When Harbor serves as the mirror image warehouse of the Kubernetes clusters, all nodes in a cluster (master, infra and node) pull their images from the Harbor warehouse; the upper limit of the number of nodes depends on the resources of the master and infra nodes and can reach several hundred. In a financial multi-cluster scenario, a single harbor serves the node nodes of multiple clusters, the number of image pull requests increases dramatically, and image pulls may fail at peak periods, so that pods in Kubernetes cannot run normally, which brings a series of availability problems. Aiming at the problem that a pod cannot be started because an image cannot be pulled in deployment and service-change scenarios in a multi-cluster environment, a new secondary mirror image warehouse deployment method for a global warehouse needs to be created.
Disclosure of Invention
The embodiments of the present application aim to provide a secondary mirror image warehouse deployment method and system for a global warehouse, so as to solve the problem that a pod cannot be started because an image cannot be pulled in deployment and service-change scenarios in a multi-cluster environment.
In order to achieve the above object, a first aspect of the present application provides a method for deploying a secondary mirror image warehouse of a global warehouse, where the global warehouse and the secondary mirror image warehouse are both constructed based on the Harbor architecture, and the method comprises: establishing ssh mutual trust between the global warehouse and each of its associated repo nodes, and constructing a storage space on each repo node to obtain a repo warehouse serving as the secondary mirror image warehouse, wherein each repo node is a linux server of a sub-cluster associated with the global warehouse; carrying out yum source deployment in each repo warehouse, installing the required dependencies, and deploying a customized harbor instance; periodically synchronizing the metadata information of the global warehouse and each repo warehouse; and establishing a mutual trust connection between each sub-cluster and the corresponding repo warehouse, so that image pulls are directed to the corresponding repo warehouse when a sub-cluster application creates a container.
In this embodiment of the present application, the constructing a storage space at each repo node comprises: creating an lvm at each repo node based on the Harbor persistent storage requirement; and performing logical volume partition allocation based on the created lvm, and creating a corresponding logical volume partition for the secondary mirror image warehouse.
In this embodiment of the present application, before the yum source deployment is performed in each repo warehouse, the method further comprises: adding domain name resolution to the network configuration file and the system hosts file on each repo node, so that the global warehouse and each repo warehouse can access each other over the network.
In this embodiment of the present application, synchronizing the file information in the global warehouse to each repo warehouse based on the deployed yum source comprises: installing preset components based on the yum source; driving the preset components to run, and copying the harbor installation package image to each repo warehouse; loading the harbor installation package image, and enabling the registry container in the Harbor architecture to access the global warehouse through domain name resolution processing; and starting the loaded harbor, and synchronously copying the files in the global warehouse to each repo warehouse based on the registry container.
In an embodiment of the present application, the preset components at least comprise: a docker component, a docker-compose component, and an nfs-utils component.
In this embodiment of the application, the driving the preset components to run and copying the harbor installation package image to each repo warehouse comprises: driving the docker component to copy the harbor installation package image into each repo warehouse.
In this embodiment of the present application, the enabling the registry container in the Harbor architecture to access the global warehouse through domain name resolution processing comprises: performing docker-compose file deployment in the loaded harbor based on the docker-compose component; and adding extra_hosts resolution of the global warehouse domain name for the registry container in the deployed docker-compose file, so that the registry container can access the global warehouse.
In an embodiment of the present application, the starting the loaded harbor comprises: adding a proxy configuration to config.yml in the registry container, wherein the proxy configuration comprises: the global warehouse domain name, a preset deployment account login name and a login password; and starting the loaded harbor based on the proxy configuration.
In an embodiment of the present application, before synchronously copying the files in the global warehouse into each repo warehouse based on the registry container, the method further comprises: configuring the authentication mode of each repo warehouse to ldap, which comprises: adding extra_hosts resolution of the ldap domain name for the core container in the deployed docker-compose file, so that the core container can access the ldap server; and modifying the authentication mode of the harbor of the repo warehouse to ldap, and configuring the ldap configuration parameters to be consistent with the global warehouse.
In this embodiment of the present application, the periodically synchronizing the metadata information of the global warehouse and each repo warehouse comprises: driving and executing a synchronization script once every predetermined time, wherein the synchronization script comprises: traversing the project information of the global warehouse and the repo warehouse, and deleting project information that exists in the repo warehouse but not in the global warehouse; if project information in the global warehouse does not exist in the repo warehouse, creating corresponding project information for the missing project information of the global warehouse; traversing the member information in the global warehouse, and if corresponding member information does not exist in the repo warehouse, creating corresponding member information for the missing member information of the global warehouse; and reversely traversing the member information of the repo warehouse against the member information of the global warehouse, and deleting member information that exists in the repo warehouse but not in the global warehouse.
In an embodiment of the present application, the method further comprises: logging the information creation operations and information deletion operations, and recording the corresponding execution time.
In an embodiment of the present application, the method further comprises: driving and executing a capacity clearing script once every predetermined time, wherein the capacity clearing script comprises: checking the current usage of the storage directory of the registry container, and comparing the current usage of the storage directory with a preset storage directory usage threshold; and if the current usage of the storage directory is greater than the preset storage directory usage threshold, executing a repo node reset operation.
A second aspect of the present application provides a secondary mirror image warehouse deployment system of a global warehouse, where the global warehouse and the secondary mirror image warehouse are both constructed based on the Harbor architecture, and the system comprises: a construction unit, configured to establish ssh mutual trust between the global warehouse and its associated repo nodes, and construct a storage space at each repo node to obtain a repo warehouse serving as the secondary mirror image warehouse; a deployment unit, configured to carry out yum source deployment in each repo warehouse, install the required dependencies, and deploy a customized harbor instance; and an operation and maintenance unit, configured to periodically synchronize the metadata information of the global warehouse and each repo warehouse, and to establish a mutual trust connection between each sub-cluster and the corresponding repo warehouse, so that image pulls are directed to the corresponding repo warehouse when a sub-cluster application creates a container.
In this embodiment, the operation and maintenance unit is further configured to: drive and execute a capacity clearing script once every predetermined time, wherein the capacity clearing script comprises: checking the current usage of the storage directory of the registry container, and comparing the current usage of the storage directory with a preset storage directory usage threshold; and if the current usage of the storage directory is greater than the preset storage directory usage threshold, executing a repo node reset operation.
A third aspect of the present application provides a computer device, wherein the computer device is configured to execute the secondary mirror warehouse deployment method of the global warehouse.
A fourth aspect of the present application provides a machine-readable storage medium having stored thereon instructions, which when executed by a processor, cause the processor to be configured to perform the above-mentioned secondary mirror warehouse deployment method of a global warehouse.
A fifth aspect of the present application provides a computer program product, which includes a computer program, and when executed by a processor, the computer program implements the secondary mirror warehouse deployment method of the global warehouse described above.
Through the above technical scheme, a secondary mirror image warehouse is added in each cluster to perform image distribution, which offloads request pressure from the primary warehouse, improves high availability in multi-cluster scenarios, and provides a performance buffer for horizontal/vertical capacity expansion.
Additional features and advantages of embodiments of the present application will be described in detail in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the detailed description serve to explain the embodiments of the application and not to limit the embodiments of the application. In the drawings:
FIG. 1 is a flow chart schematically illustrating steps of a secondary mirror warehouse deployment method of a global warehouse according to an embodiment of the present application;
FIG. 2 is a system diagram schematically illustrating a secondary mirror warehouse deployment system of a global warehouse according to an embodiment of the present application;
fig. 3 schematically shows an internal structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the specific embodiments described herein are only used for illustrating and explaining the embodiments of the present application and are not used for limiting the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that if directional indications (such as up, down, left, right, front, back, etc.) are referred to in the embodiments of the present application, the directional indications are only used to explain the relative positional relationship between the components, the motion situation, etc. in a specific posture (as shown in the attached drawings), and if the specific posture is changed, the directional indications are correspondingly changed.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present application.
Virtualization has become a widely recognized way of sharing server resources, giving system administrators great flexibility to build operating system instances on demand. Because hypervisor virtualization still has some performance and resource-usage efficiency issues, a new type of virtualization technology called the container has emerged to help solve them. Container technology is the technique of effectively dividing the resources of a single operating system into isolated groups, so as to better balance conflicting resource usage demands among those groups.
A very widely used project in container technology is Harbor (an enterprise-level container mirror image warehouse), an open-source container mirror image warehouse project for storing cloud-native artifacts such as images and Helm charts. As a CNCF (Cloud Native Computing Foundation) graduated mirror image warehouse project, Harbor provides capabilities such as user permission control, policy management and vulnerability scanning, and helps users manage artifacts on Kubernetes (a container orchestration system) or docker in a safe and controllable way.
Kubernetes, K8s for short, is an open-source system for managing containerized applications across multiple hosts in a cloud platform. It aims to make deploying containerized applications simple and efficient, and provides mechanisms for application deployment, planning, updating and maintenance. In Kubernetes, multiple containers can be created, each running an application instance; management, discovery and access of this group of application instances are then realized through a built-in load balancing policy, without requiring operation and maintenance personnel to perform complicated manual configuration and processing.
In many fields, for example financial institutions, multi-cluster Kubernetes application scenarios are adopted. When Harbor serves as the mirror image warehouse of the Kubernetes clusters, all nodes in a cluster (master, infra, node) pull their images from the Harbor warehouse; the upper limit of the number of nodes depends on the resources of the master and infra nodes and can reach several hundred. In a financial-grade multi-cluster scenario, a single harbor bears the node nodes of multiple clusters, the number of image pull requests increases sharply, and image pulls may fail at peak periods, so that pods in Kubernetes cannot run normally, which brings a series of availability problems. In a real application scenario, an architecture in which each container cloud platform corresponds to one mirror image warehouse can therefore encounter a performance bottleneck, and in deployment and service-change scenarios it may encounter the problem that a pod cannot be started because an image cannot be pulled.
Based on the above problem, the cloud platform must take the pressure on the mirror image warehouse into account whether it scales out the number of clusters or scales up the number of nodes, which greatly constrains the achievable cluster scale. Aiming at the problems in the existing scheme that a single harbor bears the node nodes of multiple clusters, image pull requests increase sharply, and image pulls may fail at peak periods so that pods in Kubernetes cannot run normally, a new secondary mirror image warehouse deployment method for a global warehouse is created.
Fig. 1 schematically shows a flowchart of a secondary mirror warehouse deployment method of a global warehouse according to an embodiment of the present application. As shown in fig. 1, in an embodiment of the present application, a secondary mirror image warehouse deployment method for a global warehouse is provided, including the following steps:
step S10: and establishing ssh mutual trust between the global warehouse and the related repo nodes, and constructing a storage space at each repo node to obtain a repo warehouse serving as a secondary mirror image warehouse.
Specifically, when the secondary mirror image warehouse is constructed, the idea is that a linux server is applied to each sub-cluster as a secondary mirror image warehouse deployment node of the sub-cluster and is used as a repo node, and then a corresponding storage space is configured at the repo node to construct the secondary mirror image warehouse, so that left and right mirror image data of the global warehouse can be completely copied to the secondary mirror image warehouse. Therefore, when the subsequent multi-cluster synchronous image extraction is carried out, the secondary image warehouse can assist the global warehouse to carry out shunting, and the problem that the pod in the Kubernetes cannot normally run due to the fact that image pulling failure is possibly generated in the peak period is avoided.
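As an illustrative sketch only, the ssh mutual trust can be established with standard openssh tools; the host name repo-node-1 and the root account below are assumptions for the example and are not prescribed by the scheme:
# Run on the global warehouse host with the deployment account; repo-node-1 is an example host name
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa    # generate a key pair (skip if one already exists)
ssh-copy-id root@repo-node-1                        # install the public key on the repo node
ssh root@repo-node-1 hostname                       # verify that password-free login works
The same steps are repeated in the opposite direction if two-way trust is required.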
In a possible embodiment, the configuration of the repo node is scaled up according to the number of node nodes in the sub-cluster; for example, with 10 node nodes as a baseline, the CPU should be no less than 2 cores, the memory no less than 8 GB, and the data disk no less than 100 GB.
Several construction modes of the secondary mirror image warehouse are described below, in order to illustrate the technical effects that the scheme of the present invention can achieve; from these different embodiments, the best specific construction mode of the secondary mirror image warehouse is determined.
When the secondary mirror image warehouse is deployed, it should remain compatible with the original architecture as far as possible while providing the image caching function, so that the image distribution problem is solved with minimal change. Changes to the system are reduced as much as possible: image pulls are performed in the same way as pulling from the traditional global warehouse, without any change to the pull process, and when the secondary mirror image warehouse is built on the existing system it can be connected seamlessly to the traditional architecture, which reduces the workload of developers.
Furthermore, the global mirror image warehouse in current multi-cluster container cloud platform architectures mostly adopts Harbor, a CNCF graduated project that is the most mature and most widely used in the industry, as the warehouse architecture, and functions such as RBAC access control contained in the Harbor architecture are tightly integrated with Kubernetes. The standard scenario assumed in the model selection of this scheme is therefore that the global warehouse uses the Harbor architecture. Several candidate architecture modes of the secondary mirror image warehouse are then given:
1) Synchronize the file system used by the global mirror image warehouse to the repo node of each sub-cluster through a file system synchronization function, thereby achieving the effect of a full copy.
2) Build a complete harbor warehouse at each repo node, configure the source address as the global warehouse based on harbor's replication rules, and copy the full set of images of the global warehouse to the secondary warehouse in pull mode, that is, create a completely new image warehouse at the same level as the global warehouse at each repo node. This method cannot synchronize the account permission information that exists in the global warehouse, its application scenarios under the Kubernetes architecture are limited, and it cannot complete permission control for image pulls. That is, with such a secondary mirror image warehouse, a user who originally has access rights to the global warehouse may not be able to access the corresponding secondary mirror image warehouse, and therefore cannot pull the corresponding images.
3) Create only a standalone docker-registry container at each repo node, and configure the global warehouse as the mirror source through the registry's proxy configuration. This scheme does not involve synchronization-related operations, but it cannot perform project-based permission control on images and is inconsistent with the global warehouse architecture, which greatly increases the operation and maintenance management cost.
4) Build a complete harbor warehouse at the repo node, configure a proxy in the registry container of the Harbor architecture with the global warehouse as the remote mirror source, and let a script that runs periodically on the repo node call the API of the global warehouse to complete the synchronization of project and member information. This scheme does not involve file-level synchronization operations, communication with the global warehouse is completed through its interface, and it is consistent with the global warehouse architecture and convenient to manage.
From the four candidate architecture modes of the secondary mirror image warehouse introduced above, it can be clearly seen that the fourth mode best achieves the expected effect of the scheme of the present invention, namely: while a complete harbor warehouse is built at the repo node, a proxy is configured in the registry container of the Harbor architecture, the global warehouse is configured as the remote mirror source, and a script running periodically on the repo node calls the API of the global warehouse to complete the synchronization of project and member information. The scheme of the present invention adopts exactly this mode to construct the secondary mirror image warehouse.
Based on the above introduction, the connection information of each sub-cluster is collected first, and one linux server is selected from each sub-cluster as the hosting server of the secondary mirror image warehouse. The selected linux server serves as the repo node of the corresponding sub-cluster. Then, based on the Harbor persistent storage requirement, an lvm (Logical Volume Manager) is created at each repo node, logical volume partitions are allocated based on the created lvm, and a corresponding logical volume partition is created for the secondary mirror image warehouse. lvm is a mechanism for managing disk partitions in the linux environment; it is a logical layer built on top of hard disks and partitions to improve the flexibility of disk partition management. Its working principle is simple: the underlying physical hard disks are abstracted and encapsulated and then presented to upper-layer applications in the form of logical volumes. In the traditional disk management mechanism, the upper-layer application accesses the file system and thereby reads the underlying physical hard disk directly, whereas in lvm the underlying hard disks are encapsulated, and operations on them are no longer performed on a partition but through what is called a logical volume. For example, if a physical hard disk is added, the upper-layer services do not perceive it, because they are presented only with logical volumes. The biggest characteristic of lvm is that disks can be managed dynamically: the size of a logical volume can be adjusted dynamically without losing existing data, and adding a new hard disk does not change the existing upper-layer logical volumes. As a dynamic disk management mechanism, the logical volume technology greatly improves the flexibility of disk management. The scheme of the present invention relies on the dynamic management capability of lvm to perform the corresponding logical volume partition allocation, ensuring the flexibility of volume allocation on the linux server.
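The following is a minimal sketch of the storage construction on a repo node; the data disk /dev/vdb, the volume group and logical volume names and the mount point are illustrative assumptions rather than values fixed by the scheme:
# Assumed spare data disk /dev/vdb on the repo node
pvcreate /dev/vdb                                   # register the disk as an lvm physical volume
vgcreate vg_harbor /dev/vdb                         # create a volume group for the secondary warehouse
lvcreate -l 100%FREE -n lv_harbor vg_harbor         # allocate a logical volume partition
mkfs.xfs /dev/vg_harbor/lv_harbor                   # format the logical volume
mkdir -p /data/harbor
mount /dev/vg_harbor/lv_harbor /data/harbor         # mount it as the harbor data directory
echo "/dev/vg_harbor/lv_harbor /data/harbor xfs defaults 0 0" >> /etc/fstab   # persist across reboots
Because the logical volume size can be adjusted dynamically, the partition can later be extended without losing existing data.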
Step S20: carry out yum source deployment in each repo warehouse, install the required dependencies, and deploy a customized harbor instance.
Specifically, after the storage space of the repo warehouse has been built, network interconnection between the global warehouse and each corresponding repo warehouse needs to be established. Although a network connection already exists between each repo node and the global warehouse, after the volumes have been split and allocated by lvm, the connection between the newly allocated volume and both the global warehouse and the sub-cluster still needs to be established. Based on this, domain name resolution needs to be added in /etc/hosts so that the corresponding access is possible.
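As a sketch, the resolution can be added as follows on each repo node; global.harbor.io is the global warehouse domain name also used in the proxy configuration below, and the IP address is an assumed example:
# /etc/hosts on the repo node (the IP address is an example)
echo "10.0.0.10  global.harbor.io" >> /etc/hosts
ping -c 1 global.harbor.io    # verify that the global warehouse domain name now resolves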
Furthermore, because a corresponding secondary warehouse needs to be built on the Harbor architecture inside the repo warehouse, after the storage space is opened up, the harbor installation package needs to be loaded into the storage space and installed normally, so that the repo warehouse can build a mirror image warehouse with the Harbor architecture. Based on this, a yum (Yellowdog Updater, Modified) source needs to be configured at the repo node. A yum source is equivalent to a directory index: when software is installed through the yum mechanism and dependent software also needs to be installed, yum searches for the dependencies according to the paths defined in the yum source and installs them. Its working principle is that all RPM software packages are stored on a server, the dependency relationships of the RPM files are analysed by the relevant functions, and these data are recorded into files stored in a specific directory on the server. When some software needs to be installed, the dependency relationship files recorded on the server are downloaded first (over WWW or FTP), the downloaded records are analysed, and then all related software is obtained and downloaded at once for installation.
The required components, such as the docker component, the docker-compose component and the nfs-utils component, are then installed based on the deployed yum source.
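A minimal sketch of the yum source configuration and component installation follows; the repository id, baseurl and package names are illustrative assumptions and may differ depending on the internal source actually used:
# /etc/yum.repos.d/local.repo on the repo node (baseurl points to an assumed internal source)
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local]
name=local yum source
baseurl=http://10.0.0.10/repo/
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache
yum install -y docker-ce docker-compose nfs-utils    # the preset components required by the scheme
systemctl enable --now docker                         # start the docker component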
docker is a packaging of the Linux container that provides a simple and easy-to-use container interface and is currently the most popular Linux container solution. docker packages an application program together with its dependencies into a file; running this file creates a virtual container, and the program runs in this virtual container as if it were running on a real physical machine, so that environment issues no longer need to be a concern. Overall, the docker interface is fairly simple: users can easily create and use containers and put their own applications into them, and a container can be version-managed, copied, shared and modified just like ordinary code. The scheme of the present invention mainly relies on the docker component to load harbor.
Furthermore, docker-compose is a tool for orchestrating docker containers. It defines and runs multi-container applications, can start multiple containers with a single command, and removes the need for shell scripts to start the containers. docker-compose manages multiple docker containers through one configuration file in which all containers are defined as services; the application is then started, stopped and restarted with the docker-compose command, together with the services in the application and all containers they depend on, which is well suited to scenarios in which multiple containers are used in combination for development.
Further, nfs-utils is a dependency component for NFS information transfer and is deployed on the repo node to support subsequent file transfers.
After the components are installed, the docker component is started, the harbor installation package is copied to each repo node and loaded accordingly, and the docker-compose file is then modified to add the required domain name resolution.
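A sketch of distributing and loading the harbor installation package is shown below; the installer version, file names and paths are illustrative assumptions:
# On the global warehouse host, relying on the ssh mutual trust established earlier
scp harbor-offline-installer-v2.5.0.tgz root@repo-node-1:/data/harbor/
# On the repo node
tar -zxvf /data/harbor/harbor-offline-installer-v2.5.0.tgz -C /data/harbor/
docker load -i /data/harbor/harbor/harbor.v2.5.0.tar.gz    # load the harbor component images into docker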
Specifically, extra_hosts resolution of the global warehouse domain name is added for the registry container in the docker-compose file deployed by harbor, so that the container can access the global warehouse through the domain name. The registry container is the back-end component that stores the artifacts. Harbor is a complete mirror image warehouse service composed of multiple components, one of which is the registry container; in addition, it comprises a core container serving as the interface processor, a proxy serving as the front-end web display, postgresql serving as the database, and jobservice serving as the task controller.
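As an illustration, the fragment added to the docker-compose file and the subsequent restart might look as follows; the path and the IP address of the global warehouse are assumed examples:
# Fragment added under the registry service in the harbor docker-compose.yml
# (shown here as a comment; the IP address is an example):
#
#   registry:
#     extra_hosts:
#       - "global.harbor.io:10.0.0.10"
#
# Recreate the containers so that the change takes effect
docker-compose -f /data/harbor/harbor/docker-compose.yml up -d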
Furthermore, a proxy configuration is added to the registry's config.yml and the registry is started. Under the harbor main directory, the registry config.yml configuration file in the common directory is modified, and the proxy configuration is added at the end of the file, for example:
proxy:
  remoteurl: http://global.harbor.io
  username: admin
  password: adminpasswd
wherein remoteurl is the global warehouse domain name and password is the admin account password. Preferably, in order to ensure that all images can be cached, that is, that all image files can be accessed with the corresponding rights, the account and password used here are the corresponding administrator account and password.
After the proxy configuration has been added, the corresponding harbor is started so that global warehouse data can be synchronized. In order to remain consistent with the global warehouse architecture and facilitate management, the authentication mode of the repo warehouse needs to be configured to be consistent with that of the global warehouse.
Specifically, in a multi-cluster container cloud platform scenario, the global harbor warehouse and Kubernetes jointly use an external ldap as the user authentication management component, and the secondary warehouse also needs to access the same set of ldap in order to keep the authentication system consistent when receiving image pull requests from the sub-cluster nodes. Configuring the authentication mode of each repo warehouse to ldap comprises: adding extra_hosts resolution of the ldap domain name for the core container in the deployed docker-compose file, so that the core container can access the ldap server; and modifying the authentication mode of the harbor of the repo warehouse to ldap, and configuring the ldap configuration parameters to be consistent with the global warehouse.
In one possible implementation, extra_hosts resolution of the ldap domain name is added for the core container in the docker-compose file deployed by harbor, allowing the container to access the ldap server through the domain name. ldap is then selected as the authentication mode of harbor, and the ldap url, dn, password, search attributes and other parameters are configured to be consistent with the global warehouse.
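A sketch of this configuration, assuming Harbor's v2.0 configurations API and placeholder ldap parameters (the repo warehouse domain name, ldap domain name, DN and passwords are examples only):
# Fragment added under the core service in docker-compose.yml so that the core
# container can resolve the ldap server (shown as a comment; the IP is an example):
#
#   core:
#     extra_hosts:
#       - "ldap.example.com:10.0.0.20"
#
# Switch the repo warehouse to ldap authentication, mirroring the global warehouse settings
curl -s -u admin:adminpasswd -X PUT -H "Content-Type: application/json" \
  http://repo.harbor.io/api/v2.0/configurations \
  -d '{"auth_mode":"ldap_auth","ldap_url":"ldap://ldap.example.com","ldap_search_dn":"cn=admin,dc=example,dc=com","ldap_search_password":"secret","ldap_base_dn":"dc=example,dc=com"}'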
In the embodiment of the present invention, as a secondary warehouse solution based on the container cloud native Kubernetes, the k8s side only needs slight changes to adapt to the secondary warehouse image distribution scheme. The repo node serves as a harbor warehouse placed inside the cluster and exposes its service externally over http; the nodes of the sub-cluster need to configure the repo domain name as an insecure-registry, after which they can pull images from the secondary warehouse; and the resolution of the repo ip and the repo domain name is configured in /etc/hosts. In the image addresses used when each sub-cluster application creates a container, the global warehouse domain name is changed to the repo warehouse domain name. In the imagePullSecret delivered by Kubernetes in each namespace for pulling images, the global warehouse domain name likewise needs to be changed to the repo warehouse domain name; the user name and password do not need to change because a unified ldap server is used.
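On the k8s side, the node-level changes can be sketched as follows; the repo warehouse domain name repo.harbor.io and the IP address are assumptions for the example:
# On each node of the sub-cluster
echo "10.0.1.10  repo.harbor.io" >> /etc/hosts        # resolution of the repo ip and domain name
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["repo.harbor.io"]
}
EOF
# In a real environment, merge this entry with any existing daemon.json settings instead of overwriting them
systemctl restart docker                               # let docker trust the http-exposed repo warehouse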
Step S30: periodically synchronize the metadata information of the global warehouse and each repo warehouse.
Specifically, the synchronization script is driven and executed once every predetermined time (for example, every 5 minutes), and comprises: traversing the project information of the global warehouse and the repo warehouse, and deleting project information that exists in the repo warehouse but not in the global warehouse; if project information in the global warehouse does not exist in the repo warehouse, creating the corresponding project information for the missing items; traversing the member information in the global warehouse, and if the corresponding member information does not exist in the repo warehouse, creating the corresponding member information for the missing items; and reversely traversing the member information of the repo warehouse against that of the global warehouse, and deleting member information that exists in the repo warehouse but not in the global warehouse.
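A minimal sketch of such a synchronization script is given below, assuming Harbor's v2.0 REST API, the example domain names used above and the administrator account from the proxy configuration; only project creation is shown, and project deletion and member synchronization follow the same pattern:
#!/bin/bash
# sync.sh: one-way metadata synchronization from the global warehouse to the repo warehouse (sketch)
GLOBAL=http://global.harbor.io/api/v2.0
REPO=http://repo.harbor.io/api/v2.0
AUTH="admin:adminpasswd"

global_projects=$(curl -s -u "$AUTH" "$GLOBAL/projects?page_size=100" | jq -r '.[].name')
repo_projects=$(curl -s -u "$AUTH" "$REPO/projects?page_size=100" | jq -r '.[].name')

# Create in the repo warehouse any project that exists in the global warehouse but is missing locally
for p in $global_projects; do
  if ! echo "$repo_projects" | grep -qx "$p"; then
    curl -s -u "$AUTH" -X POST -H "Content-Type: application/json" \
      "$REPO/projects" -d "{\"project_name\":\"$p\",\"metadata\":{\"public\":\"false\"}}"
    echo "$(date '+%F %T') created project $p"    # log the creation operation and its execution time
  fi
done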
Further, the capacity clearing script is driven and executed once every predetermined time, and comprises: checking the current usage of the storage directory of the registry container, comparing the current usage of the storage directory with a preset storage directory usage threshold, and, if the current usage is greater than the preset threshold, executing a repo node reset operation.
In one possible implementation, the repo node is configured with a capacity clean-up script as a scheduled task (for example, through cron). The script periodically checks the storage directory usage of the registry, supports a custom threshold, and clears the registry and harbor-db contents when the threshold is exceeded, which is equivalent to resetting the repo node without affecting functionality; the script executes every five minutes.
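A minimal sketch of such a capacity clean-up script, assuming the example data paths used earlier and a threshold expressed as a percentage of the data disk; in practice the directories cleared must match the actual harbor data layout:
#!/bin/bash
# clean.sh: reset the repo warehouse when the registry storage directory exceeds the threshold (sketch)
# scheduled for example via cron: */5 * * * * /usr/local/bin/clean.sh
DATA_DIR=/data/harbor/data            # harbor data directory on the logical volume (example path)
THRESHOLD=80                          # custom threshold: percent of the data disk that may be used

usage=$(df --output=pcent "$DATA_DIR" | tail -1 | tr -dc '0-9')
if [ "$usage" -gt "$THRESHOLD" ]; then
  cd /data/harbor/harbor || exit 1
  docker-compose down                                    # stop the harbor containers
  rm -rf "$DATA_DIR"/registry/* "$DATA_DIR"/database/*   # clear the registry and harbor-db contents
  docker-compose up -d                                   # bring harbor back up; metadata is re-synchronized by the sync script
  echo "$(date '+%F %T') repo node reset, usage was ${usage}%"
fi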
Step S40: establish a mutual trust connection between each sub-cluster and the corresponding repo warehouse, so that image pulls are directed to the corresponding repo warehouse when a sub-cluster application creates a container.
Specifically, after the metadata of the repo warehouse and the global warehouse have been synchronized, whenever an application in a sub-cluster subsequently creates a container, the traditional requirement of pulling images from the global warehouse can be dropped, and images can instead be pulled from the repo warehouse of that sub-cluster. For this purpose, a mutual trust connection between each sub-cluster and the corresponding repo warehouse needs to be established. This mutual trust connection is established on the basis of the existing mutual trust between each sub-cluster and the global warehouse, and each sub-cluster can keep its original image pull rules and pull the corresponding images from the repo warehouse without any system change. Furthermore, if an image pull from the repo warehouse fails, the sub-cluster can directly attempt the pull from the global warehouse, without switching any image pull rules. This guarantees the success rate of image pulls to a certain extent and improves the flexibility of image pulling.
In the embodiment of the present invention, the addresses from which the nodes in each sub-cluster pull images uniformly point to the secondary mirror image warehouse. After receiving a request, the secondary mirror image warehouse first searches for the image in the local registry; if it exists locally, the requested image content is returned directly, and if it does not, the image content is requested from the global warehouse, stored in the local warehouse and then returned to the client. Compared with the case where the clusters share only one mirror image warehouse, the secondary warehouse relieves the pressure on the primary warehouse and reserves a performance buffer for cluster capacity expansion, while occupying limited resources, only slightly modifying the original architecture and not affecting existing functions.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least a portion of the steps in fig. 1 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 2, a secondary mirror image warehouse deployment system of a global warehouse is provided, where the global warehouse and the secondary mirror image warehouse are both built based on the Harbor architecture, and the system comprises: a construction unit, configured to establish ssh mutual trust between the global warehouse and its associated repo nodes, and construct a storage space at each repo node to obtain a repo warehouse serving as the secondary mirror image warehouse; a deployment unit, configured to carry out yum source deployment in each repo warehouse and synchronize the file information in the global warehouse to each repo warehouse based on the deployed yum source; and an operation and maintenance unit, configured to periodically synchronize the file information of the global warehouse and of each repo warehouse.
In this embodiment, the operation and maintenance unit is further configured to: drive and execute a capacity clearing script once every predetermined time, wherein the capacity clearing script comprises: checking the current usage of the storage directory of the registry container, and comparing the current usage of the storage directory with a preset storage directory usage threshold; and if the current usage of the storage directory is greater than the preset storage directory usage threshold, executing a repo node reset operation.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the application provides a storage medium, wherein a program is stored on the storage medium, and when the program is executed by a processor, the method for deploying the secondary mirror image warehouse of the global warehouse is realized.
In one embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 3. The computer apparatus includes a processor a01, a network interface a02, a memory (not shown in the figure), and a database (not shown in the figure) connected through a system bus. Wherein the processor a01 of the computer device is arranged to provide computing and control capabilities. The memory of the computer apparatus includes an internal memory a03 and a nonvolatile storage medium a04. The nonvolatile storage medium a04 stores an operating system B01, a computer program B02, and a database (not shown in the figure). The internal memory a03 provides an environment for running the operating system B01 and the computer program B02 in the nonvolatile storage medium a04. The network interface a02 of the computer apparatus is used for communicating with an external terminal through a network connection. The computer program B02 is executed by the processor a01 to implement a secondary mirror warehouse deployment method of a global warehouse.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the secondary mirror warehouse deployment system of the global warehouse provided by the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 3. The memory of the computer device may store each program module of the secondary image warehouse deployment system constituting the global warehouse, and the computer program constituted by each program module causes the processor to execute the steps in the secondary image warehouse deployment method of the global warehouse according to the embodiments of the present application described in the present specification.
The present application also provides a computer program product adapted to perform the above-mentioned secondary mirror warehouse deployment method of a global warehouse when executed on a data processing device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus comprising the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (17)

1. A deployment method of a secondary mirror image warehouse of a global warehouse is characterized in that the global warehouse and the secondary mirror image warehouse are constructed based on a Harbor architecture, and the method comprises the following steps:
establishing ssh mutual trust between the global warehouse and each related repo node thereof, and constructing a storage space in each repo warehouse to obtain a repo warehouse serving as a secondary mirror image warehouse; wherein each repo node is a linux server of each sub-cluster associated with the global repository;
carrying out yum source deployment in each repo warehouse, installing the required dependencies, and deploying a customized harbor instance;
periodically synchronizing metadata information of the global warehouse and each repo warehouse;
establishing a mutual trust connection between each sub-cluster and the corresponding repo warehouse, so that image pulls are directed to the corresponding repo warehouse when a sub-cluster application creates a container.
2. The method of claim 1, wherein the constructing the storage space at each repo node comprises:
based on the Harbor persistent storage requirement, establishing a lvm at each repo node;
and performing logical volume partition allocation based on the created lvm, and creating a corresponding logical volume partition for the secondary mirror image warehouse.
3. The method of claim 1, wherein prior to yum source deployment in each repo warehouse, the method further comprises:
and adding domain name resolution to the network configuration file and the system file in each repo warehouse, so that the global warehouse and each repo warehouse realize network intercommunication access based on the domain name.
4. The method of claim 1, wherein the synchronizing file information in the global repository to each repo repository based on the deployed yum source comprises:
performing preset component installation based on the yum source;
driving the preset components to run, and copying the harbor installation package image to each repo warehouse;
loading the harbor installation package image, and enabling a registry container in the Harbor architecture to access the global warehouse through domain name resolution processing;
starting the loaded harbor, and establishing image cache authentication and configuration with the global warehouse based on the registry container.
5. The method according to claim 4, characterized in that said preset components comprise at least:
a docker component, a docker-compose component, and an nfs-utils component.
6. The method of claim 5, wherein the driving the preset components to run and copying the harbor installation package image into each repo warehouse comprises:
driving the docker component to copy the harbor installation package image into each repo warehouse.
7. The method of claim 5, wherein the enabling a registry container in the Harbor architecture to access the global warehouse through domain name resolution processing comprises:
performing docker-compose file deployment in the loaded harbor based on the docker-compose component;
adding extra_hosts resolution of the global warehouse domain name for the registry container in the deployed docker-compose file, so that the registry container can access the global warehouse.
8. The method of claim 4, wherein the starting the loaded harbor comprises:
adding a proxy configuration to config.yml in the registry container; wherein the proxy configuration comprises: the global warehouse domain name, a preset deployment account login name and a login password;
starting the loaded harbor based on the proxy configuration.
9. The method of claim 7, wherein before synchronously copying files in the global warehouse into the respective repo warehouses based on the registry container, the method further comprises:
configuring the authentication mode of each repo warehouse to ldap, comprising:
adding extra-hosts resolution of an ldap domain name for the core container in the deployed docker-compose file, so that the core container can access the ldap server;
and modifying the authentication mode of the harbor of the repo warehouse to ldap, and configuring the ldap parameters to be consistent with those of the global warehouse.
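An illustrative sketch of the ldap configuration in claim 9; the extra-hosts fragment mirrors the approach of claim 7, and the authentication mode is switched through harbor's configurations API (the URLs, credentials, and ldap parameters below are placeholders):

```bash
# 1) Let the core container resolve the ldap server by name (fragment to merge
#    under the "core" service in docker-compose.yml, printed here for illustration).
cat <<'EOF'
  core:
    extra_hosts:
      - "ldap.example.com:10.0.0.20"
EOF

# 2) Switch the repo warehouse to ldap authentication, keeping the parameters
#    consistent with those of the global warehouse.
curl -u admin:Harbor12345 -X PUT \
  -H "Content-Type: application/json" \
  "https://repo-harbor.example.com/api/v2.0/configurations" \
  -d '{
        "auth_mode": "ldap_auth",
        "ldap_url": "ldap://ldap.example.com",
        "ldap_base_dn": "dc=example,dc=com",
        "ldap_search_dn": "cn=admin,dc=example,dc=com",
        "ldap_search_password": "ChangeMe123"
      }'
```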
10. The method of claim 1, wherein the periodically synchronizing metadata information between the global warehouse and each repo warehouse comprises:
executing a synchronization script once every preset time period, wherein the synchronization script comprises the following steps:
traversing the project information of the global warehouse and the repo warehouse, and deleting project information that exists in the repo warehouse but not in the global warehouse;
if project information in the global warehouse does not exist in a repo warehouse, creating the corresponding project information in the repo warehouse for the missing project information;
traversing the member information in the global warehouse, and if corresponding member information does not exist in the repo warehouse, creating the corresponding member information in the repo warehouse for the missing member information;
and reversely traversing the member information of the repo warehouse against that of the global warehouse, and deleting member information that exists in the repo warehouse but not in the global warehouse.
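A compact sketch of the synchronization script in claim 10, using harbor's project API; only the project traversal is shown (member synchronization would follow the same pattern against the project-members API), it reads the first page only, and the hostnames and credentials are placeholders:

```bash
#!/usr/bin/env bash
# Illustrative one-way metadata sync: make the repo warehouse's project list
# match the global warehouse's. Requires curl and jq.
GLOBAL="https://global-harbor.example.com"
REPO="https://repo-harbor.example.com"
AUTH="admin:Harbor12345"   # placeholder deployment account

global_projects=$(curl -s -u "$AUTH" "$GLOBAL/api/v2.0/projects?page_size=100" | jq -r '.[].name')
repo_projects=$(curl -s -u "$AUTH" "$REPO/api/v2.0/projects?page_size=100" | jq -r '.[].name')

# Create projects that exist in the global warehouse but are missing locally.
for p in $global_projects; do
  if ! grep -qx "$p" <<< "$repo_projects"; then
    curl -s -u "$AUTH" -X POST -H "Content-Type: application/json" \
      "$REPO/api/v2.0/projects" -d "{\"project_name\": \"$p\"}"
  fi
done

# Delete projects that exist locally but no longer exist in the global warehouse
# (simplified: a project with remaining repositories cannot be deleted directly).
for p in $repo_projects; do
  if ! grep -qx "$p" <<< "$global_projects"; then
    curl -s -u "$AUTH" -X DELETE -H "X-Is-Resource-Name: true" \
      "$REPO/api/v2.0/projects/$p"
  fi
done
```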
11. The method of claim 10, further comprising:
logging the information creation operations and the information deletion operations, and recording the corresponding execution time.
12. The method of claim 1, further comprising:
executing a capacity clearing script once every preset time period, wherein the capacity clearing script comprises the following steps:
checking the current usage of the storage directory of the registry container, and comparing the current usage of the storage directory with a preset storage directory usage threshold;
and if the current usage of the storage directory is greater than the preset storage directory usage threshold, executing a repo node reset operation.
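An illustrative sketch of the capacity check in claim 12; the registry storage directory, the threshold, and the log path are assumptions, and the actual reset steps are omitted:

```bash
#!/usr/bin/env bash
# Check how full the registry container's storage directory is and trigger the
# repo node reset step when usage exceeds a preset threshold.
STORAGE_DIR="/data/harbor/registry"   # assumed registry storage directory
THRESHOLD=85                          # percent, example value

usage=$(df --output=pcent "$STORAGE_DIR" | tail -1 | tr -dc '0-9')
if [ "$usage" -gt "$THRESHOLD" ]; then
  echo "$(date '+%F %T') usage ${usage}% > ${THRESHOLD}%, resetting repo node" \
    >> /var/log/repo_capacity.log
  # Reset steps (stop harbor, clear cached layers, restart) would go here.
fi
```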
13. A secondary mirror image warehouse deployment system of a global warehouse, wherein the global warehouse and the secondary mirror image warehouse are constructed based on a Harbor architecture, and the system comprises:
a construction unit, configured to establish ssh mutual trust between the global warehouse and each of its associated repo nodes, and construct a storage space at each repo node to obtain a repo warehouse as a secondary mirror image warehouse;
a deployment unit, configured to perform yum source configuration in each repo warehouse, install required dependencies, and deploy a customized harbor instance;
an operation and maintenance unit, configured to:
periodically synchronize metadata information between the global warehouse and each repo warehouse;
and establish a mutual trust connection between each sub-cluster and its corresponding repo warehouse, so as to specify pulling of mirror images from the corresponding repo warehouse when each sub-cluster application creates a container.
14. The system of claim 13, wherein the operation and maintenance unit is further configured to:
execute a capacity clearing script once every preset time period, wherein the capacity clearing script comprises the following steps:
checking the current usage of the storage directory of the registry container, and comparing the current usage of the storage directory with a preset storage directory usage threshold;
and if the current usage of the storage directory is greater than the preset storage directory usage threshold, executing a repo node reset operation.
15. A computer device, wherein the computer device is configured to execute the secondary mirror image warehouse deployment method of a global warehouse according to any one of claims 1 to 12.
16. A machine-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to perform the secondary mirror image warehouse deployment method of a global warehouse according to any one of claims 1 to 12.
17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the secondary mirror image warehouse deployment method of a global warehouse according to any one of claims 1 to 12.
CN202211193478.1A 2022-09-28 2022-09-28 Secondary mirror image warehouse deployment method and system of global warehouse Pending CN115640021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193478.1A CN115640021A (en) 2022-09-28 2022-09-28 Secondary mirror image warehouse deployment method and system of global warehouse

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211193478.1A CN115640021A (en) 2022-09-28 2022-09-28 Secondary mirror image warehouse deployment method and system of global warehouse

Publications (1)

Publication Number Publication Date
CN115640021A (en) 2023-01-24

Family

ID=84941375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211193478.1A Pending CN115640021A (en) 2022-09-28 2022-09-28 Secondary mirror image warehouse deployment method and system of global warehouse

Country Status (1)

Country Link
CN (1) CN115640021A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination