CN111125003B - Lightweight and rapid container image distribution method - Google Patents
Lightweight and rapid container image distribution method
- Publication number
- CN111125003B (application CN201911162941.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- service
- service center
- container
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/188—Virtual file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/11—File system administration, e.g. details of archiving or snapshots
- G06F16/128—Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion
Abstract
The invention discloses a lightweight, rapid container image distribution method comprising: step 1, layering the container image, and step 2, distributing the container image, wherein: in step 1, the container image is divided into two top-level layer classes, application service and public image; and step 2 comprises pulling container image files from a service center. The invention effectively reduces redundancy and saves network and storage resources.
Description
Technical Field
The invention relates to the technical field of computer virtualization, and in particular to a lightweight and rapid container image distribution method.
Background
In container technology, generated images are stored in layered form. A Docker container image is a virtual file system formed by union-mounting multiple file-system layers; from the user's perspective, only the "combined image" resulting from the union mount is visible. Each union-mounted file-system level can also be regarded as an image layer. When current container technology instantiates a container from an image, all layers of the image are union-mounted according to the image's inheritance relationships, so a great deal of redundancy often arises while packaging and transmitting images, consuming network and storage resources and even becoming the performance bottleneck of the whole system.
The invention therefore provides a lightweight, rapid container image distribution method that effectively reduces redundancy and saves network and storage resources.
Disclosure of Invention
The invention is realized by adopting the following technical scheme:
a container image lightweight and quick dispensing method comprising: step 1, layering of container mirror images, and step 2, distributing the container mirror images, wherein: in the step 1, the container mirror image is divided into two major class levels of application service and public mirror image; said step 2 comprises pulling the container image file from the service center.
The method comprises the following steps: the public mirror layer comprises a kernel layer, an operating system layer, a public component layer, a development language layer and a development framework layer.
In the method, step 2 comprises: each node in the group uploads its container image metadata to the image registry of the service center.
In the method, step 2 comprises: step 2.1, setting up an image registry in the service center, wherein each node of the cluster where the service center is located runs an image loader, and the image loader reports the image information held by its machine to the service center's image registry;
and step 2.2, when a node needs to pull an image, the node's image loader first queries the image registry of its group's service center for the distribution of the required image layers, then selects several nodes and pulls the required image data files from the selected nodes in parallel.
In the method, step 2 comprises: step 2.1, each service center sends the container image metadata stored in its image registry to the other service centers, and each service center stores the container image metadata of the other service centers in its own image registry;
and step 2.2, when a node needs to pull an image, the node queries the image registry of its group's service center for the distribution of the required image layers; if the image exists within the service center's group, several nodes in the group are selected and the required image data files are pulled from the selected nodes in parallel; if the image is absent from the service center's group but present in other groups, the service center selects other service centers, pulls the required image data files from the selected centers in parallel, and forwards them to the nodes of its group.
In the method, an image center of the service centers is established among the multiple service centers, and container image file transfer between service centers is completed through this image center.
The method further comprises step 3: optimizing the container image build file.
In the method, step 3 comprises:
step 3.1, judging whether multiple consecutive instructions are instructions of the same command, and if so, merging the consecutive adjacent instructions of the same command into one instruction;
step 3.2, judging whether an ADD command is to be run, and if so, judging whether the source address of the ADD command is a local compressed file; if not, changing the ADD command to a COPY command;
step 3.3, judging whether CMD and/or ENTRYPOINT commands are to be run, and if so, expressing the parameters of the CMD and/or ENTRYPOINT commands as an array.
The method further comprises step 4: warming up the container image.
In the method, step 4 comprises: preloading the kernel layer, the operating system layer, the common component layer and the development language layer on the nodes.
In the method, when a container image needs to be deployed, only the language layer, the development framework layer and the application service layer related to the specific service are loaded.
In the method, when loading the language layer, the development framework layer and the application service layer related to the specific service, if the local node already holds them, they are loaded from the local node; otherwise, step 2 is executed.
Drawings
FIG. 1 is a diagram of the image layering mechanism;
FIG. 2 is a diagram of the image preloading strategy;
FIG. 3 is a schematic diagram of P2P image data distribution within a service center;
FIG. 4 is a schematic diagram of P2P image data distribution between service centers.
Detailed Description
Specific embodiments of the present invention are described in detail below with reference to the drawings.
As shown in FIGS. 1 to 4, the lightweight and rapid container image distribution method of the present invention comprises four steps: step 1, layering the container image; step 2, optimizing the container image build file; step 3, warming up the container image; and step 4, distributing the container image.
Step 1. Container image layering
For the container layering mechanism, the invention proposes the following scheme: the container image is divided into two top-level layer classes, application service and public image. The application service layer is the service program written by the developer (e.g., source code or compiled executables), and the public image layer is divided into five sub-layers, from bottom to top: a kernel layer (e.g., bootfs), an operating system layer (e.g., CentOS, Ubuntu), a common component layer (e.g., ssh, wget), a development language layer (e.g., Java, Python), and a development framework layer (e.g., Spring, Django, Flask).
After all images are built according to this hierarchical relationship, building a new application image later only requires referencing one existing image-layer node and adding the application's own business-logic program. Combined with the registry's incremental upload and download mechanisms, a new application can be quickly uploaded to the container image registry and then quickly distributed to other nodes. This greatly speeds up image building and reduces management overhead.
As shown in FIG. 1, given an existing image layered by the above principle that comprises bootfs, CentOS, a common component layer, Python, Django and Service, creating a Service1 image only requires referencing the common component layer and adding the three layers unique to that image: the Java, Spring and Service1 application files.
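The sharing described above can be sketched in a few lines. This is an illustrative model, not code from the patent: images are represented as ordered lists of layer names (names assumed from the Fig. 1 example), and we compute which layers a new application image actually adds over what already exists.

```python
# Illustrative sketch: each image is an ordered list of layer names,
# bottom to top. Layer names below are assumptions based on Fig. 1.
def layers_to_add(existing_image, new_image):
    """Return the layers of new_image not already present in existing_image."""
    shared = set(existing_image)
    return [layer for layer in new_image if layer not in shared]

service = ["bootfs", "centos", "common_components", "python", "django", "service"]
service1 = ["bootfs", "centos", "common_components", "java", "spring", "service1"]

# Only the layers above the shared common-component layer need to be
# built and transferred for Service1, matching the Fig. 1 example.
print(layers_to_add(service, service1))  # ['java', 'spring', 'service1']
```

Everything up to and including the common component layer is referenced rather than rebuilt, which is what makes incremental upload and download effective.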
Step 2. Optimizing the container image build file
Images are generally built from Dockerfile syntax, which contains RUN, ENV, COPY, ADD and other instructions; each instruction, when executed, generates one image layer.
In the public image layers (the kernel layer, operating system layer, common component layer, development language layer and development framework layer), at most one instruction each is needed to complete the build of the final application service, but the application service layer usually requires multiple instructions, i.e., generates multiple sub-layers.
Dockerfile syntax is simple and flexible, and many commands can be written in several ways. If the user does not know Dockerfile syntax in detail, the image generated from the written Dockerfile may have problems such as too many layers, excessive space usage and long build times.
By studying the image generation process and building on Dockerfile syntax, the invention develops a Dockerfile optimization strategy that optimizes and simplifies the structure of the application service layer and reduces the image file size by the following method:
Step 2.1, judge whether multiple consecutive instructions are instructions of the same command, and if so, merge them into one instruction. For example, multiple RUN commands can be combined into one RUN command using the && operator, and multiple ENV commands can be combined into one ENV command. Since each instruction generates one image layer, this reduces the number of image layers in the application service layer.
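Step 2.1 can be sketched as a simple pass over parsed instructions. This is an assumed representation (a list of command/argument pairs), not an actual Dockerfile parser:

```python
def merge_consecutive(instructions):
    """Merge runs of consecutive Dockerfile instructions sharing a command.

    `instructions` is a list of (command, argument) pairs, e.g.
    ("RUN", "apt-get update"). Consecutive RUN arguments are joined with
    ' && '; other repeated commands (e.g. ENV) are joined with a space.
    Each merged run becomes a single instruction, hence a single layer.
    """
    merged = []
    for cmd, arg in instructions:
        if merged and merged[-1][0] == cmd:
            sep = " && " if cmd == "RUN" else " "
            merged[-1] = (cmd, merged[-1][1] + sep + arg)
        else:
            merged.append((cmd, arg))
    return merged

dockerfile = [
    ("RUN", "apt-get update"),
    ("RUN", "apt-get install -y wget"),
    ("ENV", "LANG=C.UTF-8"),
    ("ENV", "PORT=8080"),
    ("COPY", ". /app"),
]
print(merge_consecutive(dockerfile))
```

Five instructions (five layers) collapse to three, without changing what the build does.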
Step 2.2, judge whether an ADD command is to be run; if so, judge whether the source address of the ADD command is a local compressed file, and if not, change the ADD command to a COPY command.
The reason is that both the ADD and COPY commands in a Dockerfile copy the directory or file at the source address to the target location in the image file system, but ADD additionally decompresses archives; if a local file is merely being copied to the container target location, the lighter-weight COPY command should be used.
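A minimal sketch of this rewrite rule follows. The archive-extension list and URL check are my assumptions for illustration; the patent only says "local compressed file":

```python
def normalize_add(cmd, src, dest):
    """Rewrite ADD to COPY when ADD's auto-extraction is not needed.

    ADD transparently unpacks local archives (and can fetch URLs); if the
    source is neither a remote URL nor a local compressed file, plain COPY
    is sufficient. The extension list is an illustrative assumption.
    """
    archive_exts = (".tar", ".tar.gz", ".tgz", ".tar.bz2", ".tar.xz", ".zip")
    if cmd == "ADD" and not src.startswith(("http://", "https://")) \
            and not src.endswith(archive_exts):
        cmd = "COPY"
    return f"{cmd} {src} {dest}"

print(normalize_add("ADD", "app.jar", "/opt/"))      # COPY app.jar /opt/
print(normalize_add("ADD", "rootfs.tar.gz", "/"))    # ADD rootfs.tar.gz /
```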
Step 2.3, judge whether CMD or ENTRYPOINT commands are to be executed, and if so, express the parameters of the CMD and ENTRYPOINT commands as an array.
The reason is that Dockerfile syntax allows the parameters of CMD and ENTRYPOINT to be specified in two ways: the first (shell form) places space-separated parameters after the command, and the second (exec form) uses an array. With the first form, Docker prepends /bin/sh -c to the user-specified command, which can cause unexpected errors, so the parameters of CMD and ENTRYPOINT are uniformly specified in array form.
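A hedged sketch of the shell-form to exec-form (array) conversion. It handles simple quoted arguments via `shlex`; genuinely shell-dependent lines (pipes, `&&`) need the shell and are out of scope for this illustration:

```python
import json
import shlex

def to_exec_form(cmd, argline):
    """Convert a shell-form CMD/ENTRYPOINT line to exec (JSON-array) form.

    Shell form makes Docker wrap the command in `/bin/sh -c`; exec form
    runs it directly with the given argv.
    """
    return f"{cmd} {json.dumps(shlex.split(argline))}"

print(to_exec_form("ENTRYPOINT", "python app.py --port 8080"))
# ENTRYPOINT ["python", "app.py", "--port", "8080"]
```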
Step 3. Warming up the container image
The necessary common base image layers are preloaded, based on analysis of image-layer management information together with information such as node hardware characteristics and task assignments, improving image distribution efficiency and reducing network transfer volume. For example, before executing tasks, a node preloads base image layers such as the development language layer and common components according to rules and preset requirements; later, when application services are deployed, only the missing, service-specific content needs to be transferred thanks to the image layering mechanism, greatly reducing the amount of data to transmit.
As shown in FIG. 2, the kernel layer, operating system layer, common component layer and development language layer are preloaded on the nodes according to rules to warm up the container image;
when the container images represented by Service1, Service2 and Service3 need to be deployed, only the missing Java language layer, the development framework layer and the application service layer related to the specific service are loaded. This greatly reduces the time and resources consumed by image transfer and loading.
Step 4. Container image distribution
When a unique application service layer (i.e., an application service file not stored on the node) is loaded, the image layers not held locally must be obtained over the network. The invention builds an intelligent P2P-based image distribution system that addresses the low efficiency, low success rate and wasted network bandwidth of image file distribution and download, relieves the download pressure on the image registry, and removes the performance bottleneck.
Image P2P distribution is discussed in two scenarios: one is P2P image distribution between a service center and the physical nodes of its cluster; the other is image distribution between service centers and their nodes, i.e., image distribution between weakly connected nodes.
P2P image distribution within a service center comprises the following steps:
Step 4.1.1, set up a single image registry in the service center; an image loader is installed on every physical node of the service center's cluster on which the container engine is deployed, and each image loader reports the image metadata held by its machine to the registry, so that the registry holds the distribution data of all image layers in the cluster.
Step 4.1.2, when the container engine needs to pull an image, the image loader first queries the image registry of its group's service center for the distribution of the required image layers, then selects several nodes, and the container engine pulls the required image layers from the selected nodes in parallel. Compared with pulling all image layers sequentially from the registry, this greatly speeds up image acquisition.
As shown in FIG. 3, host A holds container image layers 1 and 2, host B holds container image layer 3, and both hosts send their image information to the registry through their own image loaders. When a host C needs to load image layers 1, 2 and 3, the registry first looks up the locations of all the layers and gives host C a pull plan: pull layers 1 and 2 from host A and layer 3 from host B, all in parallel. Host C pulls the image files in parallel according to the plan, accelerating image acquisition.
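The Fig. 3 flow can be sketched as a layer index plus a pull planner. This is a hypothetical illustration (class and host names are mine, and "least-loaded holder" is an assumed selection policy; the patent only says several nodes are selected):

```python
from collections import defaultdict

class LayerIndex:
    """Registry-side index: which cluster hosts hold which image layers."""

    def __init__(self):
        self.holders = defaultdict(list)   # layer id -> hosts holding it

    def report(self, host, layers):
        """A host's image loader reports its locally held layers."""
        for layer in layers:
            self.holders[layer].append(host)

    def pull_plan(self, wanted):
        """Map each wanted layer to a source host, spreading load so the
        puller can fetch all layers in parallel from different peers."""
        load = defaultdict(int)
        plan = {}
        for layer in wanted:
            host = min(self.holders[layer], key=lambda h: load[h])
            plan[layer] = host
            load[host] += 1
        return plan

index = LayerIndex()
index.report("hostA", ["layer1", "layer2"])
index.report("hostB", ["layer3"])
print(index.pull_plan(["layer1", "layer2", "layer3"]))
# {'layer1': 'hostA', 'layer2': 'hostA', 'layer3': 'hostB'}
```

Host C would then open one download per entry in the plan, rather than fetching every layer serially from the central registry.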
Container image distribution between service centers comprises the following steps:
Step 4.2.1, the image loader of each service center obtains container image metadata from its own image registry and sends it to the other service centers, and each service center's image loader stores the container image metadata of the other service centers in its own registry;
Step 4.2.2, when a node's container engine needs to pull an image, the image loader first queries the image registry of its group's service center for the distribution of the required image layers. If the image exists within the service center's group, several nodes in the group are selected and the required image layers are pulled from them in parallel; if the image is absent from the service center's group but present in other groups, the service center selects other service centers, and its image loader pulls the required image data files from the selected centers in parallel and forwards them to the nodes of its group.
As shown in FIG. 4, service center A holds image layers 1 and 3, service center B holds image layer 2, and the image loaders of A and B inform the other centers of the layer data they hold. When a node of service center C needs to load image layers 1, 2 and 3 but holds none of them, the corresponding layers are pulled from A and B in parallel.
In FIG. 4, the image registry of each service center corresponds to the image registry of the service center in FIG. 3.
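The two-tier lookup of step 4.2.2 (in-group peers first, then a peer service center) can be sketched as a routing function. The data structures and return values here are illustrative assumptions, not defined by the patent:

```python
def route_layer(layer, local_index, remote_centers):
    """Decide where to fetch a layer: in-group holders first, then a peer
    service center that advertises it, otherwise report a miss."""
    if layer in local_index:                     # held inside our group
        return ("in-group", local_index[layer])
    for center, layers in remote_centers.items():  # advertised by a peer center
        if layer in layers:
            return ("via-center", center)
    return ("registry-miss", None)

# Service center C's view: layer2 is held locally; A and B advertised theirs.
local = {"layer2": ["node5"]}
remote = {"centerA": {"layer1", "layer3"}, "centerB": {"layer2"}}
print(route_layer("layer1", local, remote))  # ('via-center', 'centerA')
print(route_layer("layer2", local, remote))  # ('in-group', ['node5'])
```

Layers routed "via-center" are pulled by the service center in parallel and forwarded to the requesting node, as described above.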
In the inter-center distribution scenario, the image loaders of the centers can exchange information in two ways: either every pair of center image loaders communicates directly, or a more centralized image center is established to store the image information held by each center.
With few service centers, pairwise communication is certainly fastest; but as the number of service centers grows, a full pairwise mesh complicates the network, so a centralized image center of the service centers is established to simplify the network topology, and image file transfer between service centers is then completed through this image center.
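The trade-off is the usual full-mesh versus hub topology count: n(n-1)/2 pairwise channels versus n channels to a central image center. A one-line check (the function name is mine):

```python
def links(n, centralized=False):
    """Metadata channels among n service centers: full pairwise mesh
    versus one channel per center to a central image center."""
    return n if centralized else n * (n - 1) // 2

print(links(3), links(3, True))    # 3 centers: 3 vs 3 channels
print(links(10), links(10, True))  # 10 centers: 45 vs 10 channels
```

At three centers the two schemes cost the same, which matches the observation that pairwise communication wins only while centers are few.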
In summary, by planning image layering rationally, the invention manages and transfers images on a per-layer basis, eliminating the previous redundant transfer of base image layers; image warm-up reduces the number of layers that actually need to be transferred and thus the download volume; P2P distribution raises the image distribution speed, relieves registry pressure, and further reduces the network resources consumed by image downloads; and optimizing the image build file improves the image composition process, simplifying the image layering and shrinking the image file size.
Claims (2)
1. A lightweight and rapid container image distribution method comprising: step 1, layering the container image, and step 2, distributing the container image, wherein: in step 1, the container image is divided into two top-level layer classes, application service and public image; and step 2 comprises pulling container image files from a service center;
step 2 comprises: step 2.1, setting up an image registry in the service center, wherein each node of the cluster where the service center is located runs an image loader, and the image loader reports the image information held by its machine to the service center's image registry; each service center sends the container image metadata stored in its image registry to the other service centers, and each service center stores the container image metadata of the other service centers in its own registry; an image center of the service centers is established among the multiple service centers, and container image file transfer between service centers is completed through this image center;
step 2.2, when a node needs to pull an image, first querying the image registry of the node's group service center for the distribution of the required image layers; if the image exists within the service center's group, selecting several nodes in the group and pulling the required image data files from them in parallel; if the image is absent from the service center's group but present in other groups, the service center selecting other service centers, pulling the required image data files from the selected centers in parallel and forwarding them to the nodes of its group;
the method also comprises the following step 3: the container mirror image construction file optimization, the step 3 includes:
step 3.1, judging whether the plurality of continuous instructions are instructions of the same command, if so, merging the plurality of continuous instructions of the same command into one instruction;
step 3.2, judging whether an ADD command is to be run, if so, judging whether the source address of the ADD command is a local compressed file, and if not, modifying the ADD command into a COPY command;
step 3.3, judging whether to run the CMD and/or ENRTYPEINT command, if so, representing the parameters of the CMD and/or ENRTYPEINT command by using an array mode;
the method further comprises the step 4: the container mirror image is preheated, and the step 4 comprises the following steps: preloading a kernel layer, an operating system layer, a common component layer and a development language layer on a node;
when the container mirror image needs to be deployed, only a language layer, a development framework layer and an application service layer which are related to specific services are loaded;
and when the language layer and the development framework layer and the application service layer related to the specific service are loaded, if the local node has the language layer and the development framework layer and the application service layer related to the specific service, loading the development framework layer and the application service layer from the local node, and if not, executing the step 2.
2. The method according to claim 1, characterized in that: the public mirror layer comprises a kernel layer, an operating system layer, a public component layer, a development language layer and a development framework layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911162941.4A CN111125003B (en) | 2019-11-25 | 2019-11-25 | Container mirror image lightweight and quick distribution method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111125003A CN111125003A (en) | 2020-05-08 |
CN111125003B true CN111125003B (en) | 2024-01-26 |
Family
ID=70496517
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911162941.4A Active CN111125003B (en) | 2019-11-25 | 2019-11-25 | Container mirror image lightweight and quick distribution method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111125003B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111432006B (en) * | 2020-03-30 | 2023-03-31 | 中科九度(北京)空间信息技术有限责任公司 | Lightweight resource virtualization and distribution method |
CN112231052A (en) * | 2020-09-29 | 2021-01-15 | 中山大学 | High-performance distributed container mirror image distribution system and method |
CN114327754B (en) * | 2021-12-15 | 2022-10-04 | 中电信数智科技有限公司 | Mirror image exporting and assembling method based on container layering technology |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729020A (en) * | 2017-10-11 | 2018-02-23 | 北京航空航天大学 | A kind of method for realizing extensive container rapid deployment |
CN108021608A (en) * | 2017-10-31 | 2018-05-11 | 赛尔网络有限公司 | A kind of lightweight website dispositions method based on Docker |
CN110673923A (en) * | 2019-09-06 | 2020-01-10 | 中国平安财产保险股份有限公司 | XWIKI system configuration method, system and computer equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10614117B2 (en) * | 2017-03-21 | 2020-04-07 | International Business Machines Corporation | Sharing container images between mulitple hosts through container orchestration |
-
2019
- 2019-11-25 CN CN201911162941.4A patent/CN111125003B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111125003A (en) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111125003B (en) | Container mirror image lightweight and quick distribution method | |
US8176008B2 (en) | Apparatus and method for replicating data in file system | |
CN110502507A (en) | A kind of management system of distributed data base, method, equipment and storage medium | |
JP3526474B2 (en) | Distribution information management system in network | |
US7657609B2 (en) | Data transfer in a multi-environment document management system access | |
CN103765379A (en) | Cloud-based build service | |
KR101991537B1 (en) | Autonomous network streaming | |
CN102937918B (en) | A kind of HDFS runtime data block balance method | |
CN101710281B (en) | Dynamic integrated system and method of development platform based on Agent | |
KR20190116565A (en) | Management of multiple clusters of distributed file systems | |
US11809428B2 (en) | Scalable query processing | |
CN104487951A (en) | Distributed data management device and distributed data operation device | |
CN112882726B (en) | Hadoop and Docker-based deployment method of environment system | |
CN111190547A (en) | Distributed container mirror image storage and distribution system and method | |
CN104539730A (en) | Load balancing method of facing video in HDFS | |
CN116737363A (en) | Data set cache acceleration method, system, equipment and medium of deep learning platform | |
US11514000B2 (en) | Data mesh parallel file system replication | |
CN101236570A (en) | Method and system for coordinating access to locally and remotely exported file systems | |
CN108200211A (en) | Method, node and the inquiry server that image file is downloaded in cluster | |
US20210374136A1 (en) | Checkpoints in batch file processing | |
CN111432006A (en) | Lightweight resource virtualization and distribution method | |
CN1317662C (en) | Distribution type file access method | |
US7363210B2 (en) | Method and communications system for managing, supplying and retrieving data | |
CN115357375A (en) | Server less parallel computing method and system facing MPI | |
CN1744593B (en) | Transmission link selecting method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||