CN111125003A - Lightweight and rapid distribution method for container images - Google Patents

Lightweight and rapid distribution method for container images

Info

Publication number
CN111125003A
CN111125003A (application CN201911162941.4A)
Authority
CN
China
Prior art keywords
image
layer
container
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911162941.4A
Other languages
Chinese (zh)
Other versions
CN111125003B (en)
Inventor
李新明
刘斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Edge Intelligence Of Cas Co ltd
Original Assignee
Edge Intelligence Of Cas Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Edge Intelligence Of Cas Co ltd
Priority to CN201911162941.4A
Publication of CN111125003A
Application granted
Publication of CN111125003B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/188 Virtual file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/11 File system administration, e.g. details of archiving or snapshots
    • G06F 16/128 Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion

Abstract

The invention discloses a lightweight and rapid distribution method for container images, comprising the following steps: step 1, layering the container image, and step 2, distributing the container image, wherein: in step 1 the container image is divided into two major levels, application service and public image; step 2 comprises pulling container image files from a service center. The invention effectively reduces redundancy and saves network and storage resources.

Description

Lightweight and rapid distribution method for container images
Technical Field
The invention relates to the technical field of computer virtualization, and in particular to a lightweight and rapid distribution method for container images.
Background
In container technology, the generated images are stored in layered form. A Docker container image is a virtual file system formed by union-mounting multiple file-system layers; from the user's point of view, only the combined image resulting from the union mount is visible. Each file-system layer of the union mount can also be called an image layer. When current container technology generates a container from an image, all layers of the image are union-mounted according to the image's inheritance relationships, so a large amount of redundancy often appears when images are packaged and transmitted, which consumes network and storage resources and can even become the performance bottleneck of the whole system.
Therefore, the invention provides a lightweight and rapid distribution method for container images, which effectively reduces redundancy and saves network and storage resources.
Disclosure of Invention
In order to achieve the purpose of the invention, the following technical scheme is adopted:
A lightweight and rapid distribution method for container images, comprising: step 1, layering the container image, and step 2, distributing the container image, wherein: in step 1 the container image is divided into two major levels, application service and public image; step 2 comprises pulling container image files from a service center.
The method as described, wherein: the public image layer comprises a kernel layer, an operating system layer, a common component layer, a development language layer and a development framework layer.
The method as described, wherein step 2 comprises: each node in the group uploads its container image metadata to the image registry of the service center.
The method as described, wherein step 2 comprises: step 2.1, an image registry is set up in the service center, an image loader is deployed on each node of the cluster where the service center is located, and the image loader reports the image information held by the local machine to the image registry of the service center;
and step 2.2, when a node needs to pull an image, the node's image loader first queries the image registry of the service center of its group for the distribution of the required image layers, then selects several nodes, and pulls the required image data files from the selected nodes in parallel.
The method as described, wherein step 2 comprises: step 2.1, each service center sends the container image metadata stored in its image registry to the other service centers, and each service center stores the container image metadata of the other service centers in its own image registry;
step 2.2, when a node needs to pull an image, the image registry of the service center of the node's group is first queried for the distribution of the required image layers; if the group of the service center holds the image, several nodes in the group are selected and the required image data files are pulled from the selected nodes in parallel; if the group of the service center does not hold the image while other groups do, the service center selects other service centers, pulls the required image data files from the selected service centers in parallel, and forwards them to the nodes of its own group.
The method as described, wherein: a service-center image hub is established among the plurality of service centers, and the transmission of container image files between service centers is completed through this service-center image hub.
The method further comprises the following step 3: optimizing the container image build file.
The method, wherein step 3 comprises:
step 3.1, judging whether several consecutive instructions are instructions of the same command and, if so, merging the several adjacent instructions of the same command into one instruction;
step 3.2, judging whether an ADD command is to be run and, if so, first judging whether the source address of the ADD command is a local compressed file and, if not, changing the ADD command to a COPY command;
step 3.3, judging whether CMD and/or ENTRYPOINT commands are to be run and, if so, expressing the parameters of the CMD and/or ENTRYPOINT commands in array form.
The method further comprises the following step 4: preheating the container image.
The method, wherein step 4 comprises: preloading a kernel layer, an operating system layer, a common component layer and a development language layer on a node.
The method as described, wherein: when the container image needs to be deployed, only the language layer, the development framework layer and the application service layer related to the specific service are loaded.
The method as described, wherein: when the language layer, the development framework layer and the application service layer related to the specific service are loaded, if the local node holds them they are loaded from the local node, and if the local node does not hold them, step 2 is executed.
Drawings
FIG. 1 is a schematic diagram of the image layering mechanism;
FIG. 2 is a diagram of the image preloading strategy;
FIG. 3 is a schematic diagram of P2P image data distribution within a service center;
FIG. 4 is a schematic diagram of P2P image data distribution between service centers.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
As shown in FIGS. 1-4, the lightweight and rapid container image distribution method of the present invention comprises four steps: step 1, container image layering; step 2, optimization of the container image build file; step 3, container image preheating; and step 4, container image distribution.
Step 1. Container image layering
For the container layering mechanism, the invention proposes the following scheme: the container image is divided into two major levels, an application service layer and a public image, where the application service layer is the business program written by the developer (e.g. source code, compiled executables, etc.), and the public image layer is divided, from bottom to top, into five sub-layers: a kernel layer (e.g. bootfs), an operating system layer (e.g. CentOS, Ubuntu, etc.), a common component layer (e.g. ssh, wget, etc.), a development language layer (e.g. Java, Python, etc.), and a development framework layer (e.g. Spring, Django, Flask, etc.).
After all images are built according to this hierarchy, building a new application image later only requires referencing a node of an existing image layer and adding the business logic program related to the application. Combined with the incremental upload and download mechanism of the image registry, the new application can be uploaded quickly to the container image registry and then distributed quickly to other nodes. This greatly speeds up image building and reduces management cost.
As shown in FIG. 1, if there already exists an image layered according to the above principle containing Bootfs, CentOS, the common component layer, Python, Django and Service2, then when the Service1 image is created only the common component layer needs to be referenced, and three unique layers are added on top of it: Java, Spring and the Service1 application files.
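As a concrete illustration, the build file for the Service1 image in FIG. 1 could look roughly like the sketch below. The base image name common-components:1.0, the package names and the file paths are assumptions used only to stand for the layers described above; they are not part of the patent.

    # Hypothetical Dockerfile sketch for the Service1 image in FIG. 1:
    # the shared kernel / OS / common-component layers are reused via FROM,
    # and only the three Service1-specific layers are added on top.
    FROM common-components:1.0                 # reused public image layers
    RUN yum install -y java-1.8.0-openjdk      # development language layer (Java)
    COPY spring-libs/ /opt/spring/             # development framework layer (Spring)
    COPY service1.jar /opt/app/service1.jar    # application service layer (Service1)
    CMD ["java", "-jar", "/opt/app/service1.jar"]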
Step 2. Optimization of the container image build file
An image is usually built with Dockerfile syntax, which includes instructions such as RUN, ENV, COPY and ADD; each instruction that is run generates one image layer.
In the public image layers, i.e. the kernel layer, operating system layer, common component layer, development language layer and development framework layer, at most one instruction is usually enough for each, but the application service layer usually needs several instructions, i.e. it generates several sub-layers.
The native Dockerfile syntax is simple and flexible, and many commands can be written in several ways. If the user is not familiar with the details of Dockerfile syntax, the image generated from the Dockerfile may have too many layers, occupy too much space, and take a long time to build.
By studying the image generation process and a Docker syntax optimization strategy based on Dockerfile syntax, the invention optimizes and simplifies the structure of the application service layer and reduces the size of the image file with the following method:
Step 2.1: judge whether several consecutive instructions are instructions of the same command; if so, merge the adjacent instructions of the same command into one instruction. For example, several RUN commands can be combined into one RUN command using the && operator, and several ENV commands can be combined into one ENV command. Since each command generates one image layer, this reduces the number of image layers in the application service layer, as in the sketch below.
Step 2.2: judge whether an ADD command is to be run; if so, first judge whether the source address of the ADD command is a local compressed file, and if not, change the ADD command to a COPY command.
The reason is that both the ADD and COPY commands in a Dockerfile copy the directory or file at the source address to the destination location in the image file system, but the ADD command additionally decompresses archives; if the goal is just to copy a local file to the destination location in the container, the more lightweight COPY command should be used.
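A small illustrative fragment (the archive and file names are placeholders, not from the patent):

    # Local compressed file: ADD is kept because it extracts the archive at the destination
    ADD app-config.tar.gz /etc/app/
    # Plain file copy: the more lightweight COPY is used instead of ADD
    COPY service1.jar /opt/app/service1.jar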
Step 2.3: judge whether CMD and/or ENTRYPOINT commands are to be run; if so, express the parameters of the CMD and/or ENTRYPOINT commands in array form.
The reason is that in Dockerfile syntax there are two ways to specify the parameters of the CMD and ENTRYPOINT commands: the first appends the parameters to the command separated by spaces, and the second uses an array. When the first way is used, Docker prepends /bin/sh -c to the command specified by the user, which can cause unpredictable errors, so the parameters of the CMD and ENTRYPOINT commands are specified uniformly in array form.
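The two forms look as follows (the command itself is a placeholder):

    # Space-separated (shell) form: Docker actually runs /bin/sh -c "java -jar /opt/app/service1.jar"
    CMD java -jar /opt/app/service1.jar
    # Array (exec) form, used uniformly by the method: the process is started directly
    CMD ["java", "-jar", "/opt/app/service1.jar"]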
Step 3. Container image preheating
By analyzing the management information of the image layers together with information such as node hardware characteristics and task missions, the necessary basic public image layers are preloaded, which improves image distribution efficiency and reduces the amount of network transmission. For example, when an application is to be deployed, a node preloads the development language layer, common components and other image base layers locally, based on rules and preset requirements, before the task is executed; then, when the application service is deployed, only the missing and unique content is transmitted based on the image layering mechanism, which greatly reduces the amount of data that must be transferred.
As shown in FIG. 2, a kernel layer, an operating system layer, a common component layer and a development language layer are preloaded on a node according to the rules, to preheat the container image;
when the container images represented by Service1, Service2 and Service3 need to be deployed, only the missing Java language layer, the development framework layer and the application service layer related to the specific service are loaded. This greatly reduces the time and resources consumed by image transmission and loading.
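The effect of preheating can be written as a simple set difference between the layers an image needs and the layers already preloaded on the node. The sketch below is only an illustration; the layer names are assumed and do not come from the patent.

    # Python sketch: which layers still have to be transferred after preheating
    PRELOADED = {"bootfs", "centos", "common-components", "python"}

    def layers_to_transfer(required_layers, preloaded=PRELOADED):
        """Return the required image layers that are missing locally, in order."""
        return [layer for layer in required_layers if layer not in preloaded]

    # Service1 from FIG. 1 needs Java and Spring on top of the shared base,
    # so only the three layers absent from the node are transmitted.
    service1 = ["bootfs", "centos", "common-components", "java", "spring", "service1-app"]
    print(layers_to_transfer(service1))   # ['java', 'spring', 'service1-app']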
Step 4. Container image distribution
When loading the unique application service layer (i.e. application service files not stored on the node), the image layers not held locally must be obtained over the network. The invention builds an intelligent P2P-based image distribution system, which solves the problems of low efficiency, low success rate and wasted network bandwidth during image file distribution and download, relieves the download pressure on the image registry, and breaks the performance bottleneck.
The usage scenarios of P2P image distribution are discussed in two cases: one is P2P image distribution between physical nodes within a service center, i.e. within the same cluster; the other is image distribution between service centers and between service centers and nodes, i.e. image distribution between weakly connected nodes.
For P2P image distribution within a service center, the method comprises the following steps:
Step 4.1.1: a single image registry is set up in the service center, an image loader is deployed on each physical node of the cluster where the container engine runs, and the image loader reports the image metadata it holds to the image registry, so that the image registry has the distribution data of all image layers in the cluster.
Step 4.1.2: when the container engine needs to pull an image, the image loader first queries the image registry of the service center of its group for the distribution of the required image layers, then selects several nodes, and the container engine pulls the required image layers from the selected nodes in parallel. Compared with pulling all layers of the image sequentially from the image registry, this greatly speeds up image acquisition.
As shown in FIG. 3, host A holds container image layers 1 and 2, host B holds container image layer 3, and both hosts send the image information they hold to the image registry through their own image loaders. When host C needs to load image layers 1, 2 and 3, it queries the image registry for the locations of all the image layers; the image registry provides host C with a pull plan, namely pulling image layers 1 and 2 from host A and image layer 3 from host B in parallel, and host C pulls the image files in parallel according to this plan, which speeds up image acquisition.
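A minimal sketch of the pull-plan logic implied by FIG. 3 is given below. The data structures, function names and host identifiers are assumptions used for illustration, not the patent's implementation.

    # Python sketch of the in-cluster P2P pull plan from FIG. 3.
    # The registry keeps, for each image layer, the hosts that already hold it.
    from concurrent.futures import ThreadPoolExecutor

    layer_index = {
        "layer1": ["hostA"],
        "layer2": ["hostA"],
        "layer3": ["hostB"],
    }

    def plan_pull(required_layers, index=layer_index):
        """Map each required layer to one host that holds it (here: the first listed)."""
        return {layer: index[layer][0] for layer in required_layers}

    def pull_layer(layer, host):
        # Placeholder for the actual transfer of one layer from a peer node.
        return f"pulled {layer} from {host}"

    def pull_parallel(required_layers):
        plan = plan_pull(required_layers)
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(pull_layer, layer, host) for layer, host in plan.items()]
            return [f.result() for f in futures]

    print(pull_parallel(["layer1", "layer2", "layer3"]))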
For container image distribution between service centers, the method comprises the following steps:
Step 4.2.1: the image loader of each service center obtains container image metadata from the image registry of its own service center and sends it to the other service centers, and the image loader of each service center stores the container image metadata of the other service centers in the image registry of its own service center;
Step 4.2.2: when a node's container engine needs to pull an image, the image loader first queries the image registry of the service center of its group for the distribution of the required image layers; if the group of the service center holds the image, several nodes in the group are selected and the required image layers are pulled from the selected nodes in parallel; if the group of the service center does not hold the image while other groups do, the service center selects other service centers, and the image loader of the service center pulls the required image data files from the selected service centers in parallel and transmits them to the nodes of its own group.
As shown in FIG. 4, if service center A holds image layers 1 and 3 and service center B holds image layer 2, the image loaders of A, B and the other centers inform one another of their own image layer data. When a node in service center C needs to load image layers 1, 2 and 3 but does not hold them itself, the corresponding image layers are pulled from A and B in parallel.
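The lookup order in step 4.2.2 (first the local group, then remote service centers) can be sketched as below; the index structures and center names are illustrative assumptions, not part of the patent.

    # Python sketch of the inter-center fallback in FIG. 4.
    # local_index: layer -> in-group node holding it; remote_index: center -> advertised layers.
    def locate_layer(layer, local_index, remote_index):
        """Prefer an in-group node; otherwise fall back to a remote service center."""
        if layer in local_index:
            return ("local", local_index[layer])
        for center, layers in remote_index.items():
            if layer in layers:
                return ("remote", center)
        raise LookupError(f"image layer {layer} not found in any center")

    remote_index = {"centerA": {"layer1", "layer3"}, "centerB": {"layer2"}}
    for layer in ["layer1", "layer2", "layer3"]:
        print(layer, locate_layer(layer, {}, remote_index))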
In FIG. 4, the image registry of each service center corresponds to the image registry of the service center in FIG. 3.
In the scenario of image distribution between service centers, the image loaders of the centers can exchange information in two ways: either every pair of center image loaders communicates directly, or a more centralized image hub is established to store the image information held by each center.
When the number of service centers is small, pairwise communication between service centers is undoubtedly the fastest; but as the number of service centers grows, pairwise communication makes the network complicated. In that case a centralized service-center image hub is established to simplify the network structure, and the transmission of image files between service centers is then completed through this service-center image hub.
In conclusion, by planning the image layering reasonably, the invention realizes management and transmission based on image layering and changes the situation in which base image layers had to be transmitted redundantly; through image preheating, it reduces the number of image layers that actually need to be transmitted and the amount of transfer required for image download; through P2P distribution, it increases the image distribution speed, reduces the pressure on the image registry, and further reduces the network resources consumed by image download; and by optimizing the image build file, it improves the image composition process, simplifying the image layering and reducing the size of the image file.

Claims (3)

1. A lightweight and rapid distribution method for container images, comprising: step 1, layering the container image, and step 2, distributing the container image, characterized in that: in step 1 the container image is divided into two major levels, application service and public image; step 2 comprises pulling container image files from a service center.
2. The method of claim 1, wherein: the public image layer comprises a kernel layer, an operating system layer, a common component layer, a development language layer and a development framework layer.
3. The method of claim 1, wherein: step 2 comprises: each node in the group uploads its container image metadata to the image registry of the service center.
CN201911162941.4A 2019-11-25 2019-11-25 Lightweight and rapid distribution method for container images Active CN111125003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911162941.4A CN111125003B (en) 2019-11-25 2019-11-25 Lightweight and rapid distribution method for container images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911162941.4A CN111125003B (en) 2019-11-25 2019-11-25 Lightweight and rapid distribution method for container images

Publications (2)

Publication Number Publication Date
CN111125003A (en) 2020-05-08
CN111125003B (en) 2024-01-26

Family

ID=70496517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911162941.4A Active CN111125003B (en) 2019-11-25 2019-11-25 Lightweight and rapid distribution method for container images

Country Status (1)

Country Link
CN (1) CN111125003B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180276215A1 (en) * 2017-03-21 2018-09-27 International Business Machines Corporation Sharing container images between mulitple hosts through container orchestration
CN107729020A (en) * 2017-10-11 2018-02-23 北京航空航天大学 A kind of method for realizing extensive container rapid deployment
CN108021608A (en) * 2017-10-31 2018-05-11 赛尔网络有限公司 A kind of lightweight website dispositions method based on Docker
CN110673923A (en) * 2019-09-06 2020-01-10 中国平安财产保险股份有限公司 XWIKI system configuration method, system and computer equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432006A (en) * 2020-03-30 2020-07-17 中科九度(北京)空间信息技术有限责任公司 Lightweight resource virtualization and distribution method
CN112231052A (en) * 2020-09-29 2021-01-15 中山大学 High-performance distributed container mirror image distribution system and method
CN114327754A (en) * 2021-12-15 2022-04-12 中电信数智科技有限公司 Mirror image exporting and assembling method based on container layering technology
CN114327754B (en) * 2021-12-15 2022-10-04 中电信数智科技有限公司 Mirror image exporting and assembling method based on container layering technology

Also Published As

Publication number Publication date
CN111125003B (en) 2024-01-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant