CN111432006A - Lightweight resource virtualization and distribution method - Google Patents


Info

Publication number
CN111432006A
CN111432006A
Authority
CN
China
Prior art keywords
image
container
layer
basic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010234317.7A
Other languages
Chinese (zh)
Other versions
CN111432006B (en)
Inventor
李新明
刘斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Original Assignee
Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Jiudu Beijing Spatial Information Technology Co ltd filed Critical Zhongke Jiudu Beijing Spatial Information Technology Co ltd
Priority to CN202010234317.7A priority Critical patent/CN111432006B/en
Publication of CN111432006A publication Critical patent/CN111432006A/en
Application granted granted Critical
Publication of CN111432006B publication Critical patent/CN111432006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H04L67/1074 Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a lightweight resource virtualization and distribution method comprising the following steps: (a) dividing the public container images in a service center into different image layers and acquiring image-layer management information; (b) selecting the image of any node in the public container image hierarchy as a base image, developing an application service image on top of it, and dividing out the necessary common base image layers accordingly; (c) analyzing the image-layer management information and preloading the necessary common base image layers according to each node's hardware characteristics and mission tasks; (d) using the container engine to pull image-layer data block by block within the same service center, or establishing a P2P network for image data transmission between different service centers. The method thus optimizes resource utilization within the container stack, increases image distribution and loading speed, and ultimately starts services as fast as possible while keeping system resource consumption as low as possible.

Description

Lightweight resource virtualization and distribution method
Technical Field
The invention belongs to the technical field of networks, and relates to a lightweight resource virtualization and distribution method.
Background
In 2013, lightweight virtualization technology represented by Docker attracted wide attention, and container technology along with it (Docker is a high-level container engine open-sourced by the PaaS provider dotCloud and originally based on LXC). Meanwhile, Docker Inc. has been actively developing management tools around the Docker container: Docker Machine is a tool that installs the Docker engine directly from the command line; Docker Swarm is a clustering and scheduling tool that can self-optimize a distributed application infrastructure based on the requirements of the application's life cycle and on container usage and performance; and Docker Compose can build multi-container applications running on Swarm.
Heuristic algorithms, with characteristics such as random search, fast learning mechanisms and adaptivity, are well suited to the field of resource allocation, and in recent years scholars abroad have obtained notable results. Professor Rajkumar Buyya in Australia proposed an allocation algorithm based on user task completion time and service cost, which guides users toward allocation schemes with a high cost-performance ratio. Ke Liu further proposed an allocation algorithm that balances time against expense: taking the final task completion deadline and the user's cost as its parameters, it reduces the expected completion time while still satisfying the user's cost constraint. Subsequent work along these lines has further improved the probability and efficiency of heuristic task allocation.
However, in mobile environments, conditions such as highly mobile service centers, limited service-center resources and poor network connectivity impose new requirements on resource virtualization and allocation if services are to remain reliable and efficient and resources are to be fully utilized.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a lightweight resource virtualization and allocation method.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows: a lightweight resource virtualization and allocation method comprising the following steps:
(a) dividing the public container images in a service center into different image layers and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer and a development framework layer;
(b) selecting the image of any node in the public container image hierarchy as a base image, developing an application service image on top of it, and dividing out the necessary common base image layers accordingly;
(c) analyzing the image-layer management information and preloading the necessary common base image layers according to each node's hardware characteristics and mission tasks;
(d) using the container engine to pull image-layer data block by block within the same service center, or establishing a P2P network for image data transmission between different service centers.
Preferably, in step (a), the generation of the public container image is regulated based on Dockerfile syntax.
Further, the Dockerfile syntax adjustment rules are:
(a1) merging consecutive adjacent RUN commands into a single RUN command using the && operator;
(a2) combining multiple ENV commands into one ENV command;
(a3) judging whether the source address of an ADD command is a local compressed file, and if it is not, changing the ADD command into a COPY command;
(a4) expressing the parameters of the CMD and ENTRYPOINT commands in array form.
Preferably, in step (a), the way Docker prepares the proc file system for a container is also replaced: a special mount is prepared at the /proc location inside the container instead of directly mounting the system's proc virtual file system.
Further, a capacity limit on the container's rootfs is also added to Docker.
Owing to the above technical scheme, the invention has the following advantages over the prior art: by dividing the public container images in a service center into different image layers and preloading the necessary common base layers, the lightweight resource virtualization and distribution method optimizes resource utilization within the container stack, increases image distribution and loading speed, and ultimately starts services as fast as possible while keeping system resource consumption as low as possible.
Drawings
FIG. 1 is a schematic diagram of the public container image hierarchy in the lightweight resource virtualization and allocation method of the present invention;
FIG. 2 is a diagram illustrating image preloading in the lightweight resource virtualization and allocation method of the present invention;
FIG. 3 is a schematic view of image distribution within the same service center in the lightweight resource virtualization and allocation method of the present invention;
FIG. 4 is a schematic view of image distribution among different service centers in the lightweight resource virtualization and allocation method of the present invention;
FIG. 5 is a schematic diagram of accessing the proc agent in the lightweight resource virtualization and allocation method of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention relates to a lightweight resource virtualization and distribution method, which comprises the following steps:
(a) dividing the public container images in a service center into different image layers and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer and a development framework layer (as shown in FIG. 1);
(b) selecting the image of any node in the public container image hierarchy as a base image, developing an application service image on top of it, and dividing out the necessary common base image layers accordingly; a developer can select the image of any node in FIG. 1 as a base image according to requirements and develop an application service image on top of it, so that the container layering characteristic is fully exploited and the utilization of public images is maximized;
(c) analyzing the image-layer management information and preloading the necessary common base image layers according to each node's hardware characteristics and mission tasks; by analyzing the image-layer management information together with information such as node hardware characteristics and mission tasks, the necessary common base image layers are preloaded, which improves image distribution efficiency and reduces network traffic. Analysis suggests that image preloading can save up to 90% of image transmission bandwidth in the best case. If an application uses the image preloading mechanism, its service center node loads base image layers such as the Python library and common components before executing a task, based on rules and preset requirements; when the application service is later deployed, only the missing content is transmitted thanks to the image layering mechanism, greatly reducing the amount of data that needs to be transferred (as shown in FIG. 2);
(d) using the container engine to pull image-layer data block by block within the same service center, or establishing a P2P network for image data transmission between different service centers. P2P image distribution further improves the efficiency of obtaining images, saves time and bandwidth, and improves the availability of the information platform in specific environments. P2P image distribution is used in two scenarios: one is P2P image distribution between physical nodes of the same cluster within a service center; the other is image distribution between service centers (i.e. mobile information service centers), and between service centers and nodes, i.e. between weakly connected nodes.
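The preloading decision in step (c) can be sketched in a few lines. The layer names, sizes, and the plan_preload helper below are illustrative assumptions for this sketch, not part of the patent:

```python
# Sketch of the preloading decision in step (c). Layer names, sizes and
# the helper are illustrative assumptions, not from the patent.
def plan_preload(app_layers, preloaded, layer_sizes):
    # Layers already preloaded on the node need not be transmitted again;
    # only the missing layers are pulled when the service is deployed.
    missing = [layer for layer in app_layers if layer not in preloaded]
    saved = sum(layer_sizes[l] for l in app_layers if l in preloaded)
    total = sum(layer_sizes[l] for l in app_layers)
    return missing, saved / total  # missing layers, fraction of bytes saved

# An application image stacked on common base layers (sizes in MB).
sizes = {"kernel": 5, "os": 120, "python": 350, "framework": 80, "app": 45}
app = ["kernel", "os", "python", "framework", "app"]
node = {"kernel", "os", "python", "framework"}  # preloaded per node mission/rules

missing, saved = plan_preload(app, node, sizes)
print(missing)  # -> ['app']
```

With the common base layers preloaded, only the small application-specific layer still has to cross the network at deployment time, which is the bandwidth saving the step above describes.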
Each service center (for example one with host A and host B) has a unique image repository, and every physical node with a container engine installed runs an image loader. The image loader reports the image metadata present on the local machine (i.e. the host) to the image repository, so the repository holds the distribution data of all image layers in the cluster. When the container engine needs to pull an image, the image loader first retrieves from the image repository the distribution of the required image layers, then selects several nodes and pulls the image data from them block by block in parallel (as shown in FIG. 3).
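The block-parallel pull inside a service center can be sketched as follows; the peer layout, block size and function names are hypothetical stand-ins for the real network fetch:

```python
# Illustrative sketch of the block-parallel pull within a service center:
# the layer is split into fixed-size blocks, each block is fetched from one
# of the peer nodes holding the layer, and the blocks are reassembled in order.
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4  # block size in bytes; a real system would use megabytes

def fetch_block(peers, layer, index):
    # Stand-in for a network fetch: round-robin over peers holding the layer.
    node = peers[index % len(peers)]
    return node[layer][index * BLOCK:(index + 1) * BLOCK]

def pull_layer(peers, layer, size):
    n_blocks = (size + BLOCK - 1) // BLOCK
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves block order even though fetches run in parallel.
        blocks = pool.map(lambda i: fetch_block(peers, layer, i), range(n_blocks))
    return b"".join(blocks)

# Two hosts in the same service center both hold the layer.
layer_data = b"layer-bytes-0123"
peers = [{"base": layer_data}, {"base": layer_data}]
assert pull_layer(peers, "base", len(layer_data)) == layer_data
```

Because different blocks come from different nodes concurrently, no single node's uplink limits the pull, which is the point of the block-parallel scheme.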
Image distribution between service centers relies on a central image loader in each service center. Each central image loader obtains image metadata from its own service center's image repository, establishes contact with the central image loaders of the other service centers, exchanges image metadata with them, and establishes a P2P network for image data transmission (as shown in FIG. 4).
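The metadata exchange between central image loaders might look like the following sketch; the center names and the exchange_metadata helper are assumptions for illustration only:

```python
# Hypothetical sketch: central loaders exchange the sets of layers their
# centers hold; each center then knows, for every layer it is missing,
# which peer centers can serve it over the P2P network.
def exchange_metadata(centers):
    catalog = {name: set(layers) for name, layers in centers.items()}
    plans = {}
    for name, layers in catalog.items():
        wanted = set().union(*catalog.values()) - layers
        plans[name] = {layer: sorted(c for c, held in catalog.items() if layer in held)
                       for layer in wanted}
    return plans

centers = {"A": ["kernel", "os"], "B": ["kernel", "python"]}
plans = exchange_metadata(centers)
print(plans["A"])  # -> {'python': ['B']}
```

Once every center holds such a plan, actual layer data can flow directly between centers over the P2P network instead of through any single repository.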
In the present application, in step (a), the generation of the public container image is regulated based on Dockerfile syntax: by studying the image generation process, a container syntax optimization strategy is developed on top of the Dockerfile syntax, and this optimization simplifies image layering and reduces the size of the image file.
Specifically, a Dockerfile syntax optimizer is designed and implemented to perform syntax optimization on the original Dockerfile; its optimization rules are as follows:
(a1) merging consecutive adjacent RUN commands into a single RUN command using the && operator; RUN is among the most important commands in a Dockerfile, a Dockerfile usually contains many RUN commands, and each command produces one image layer; merging consecutive adjacent RUN commands with && greatly reduces both the number of layers and the size of the final image;
(a2) combining multiple ENV commands into one ENV command; since each command produces one image layer, the optimizer merges all ENV commands in the Dockerfile into a single command to reduce the layer count;
(a3) judging whether the source address of an ADD command is a local compressed file, and if it is not, changing the ADD command into a COPY command; both ADD and COPY in a Dockerfile copy the directory or file at the source address to the destination location in the image file system, but ADD additionally decompresses archives, so when a local file is merely copied into the container the lighter COPY command should be used;
(a4) expressing the parameters of the CMD and ENTRYPOINT commands in array form; Dockerfile syntax offers two ways to specify the parameters of CMD and ENTRYPOINT: the first appends space-separated parameters to the command, the second uses an array. With the first form the container prepends /bin/sh -c to the user-specified command, which can cause unpredictable errors, so the parameters of CMD and ENTRYPOINT are uniformly specified in array form.
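Rule (a1) can be illustrated with a small sketch of one pass of a hypothetical Dockerfile optimizer; the merge_runs helper is an assumption, and a real optimizer would also have to handle line continuations and apply rules (a2) through (a4):

```python
# Sketch of rule (a1): consecutive RUN instructions are merged into one
# using &&, so the resulting image has fewer layers. The helper and the
# sample Dockerfile are illustrative, not from the patent.
def merge_runs(lines):
    out = []
    for line in lines:
        if line.startswith("RUN ") and out and out[-1].startswith("RUN "):
            out[-1] += " && " + line[len("RUN "):]  # fold into previous RUN
        else:
            out.append(line)
    return out

dockerfile = [
    "FROM ubuntu:18.04",
    "RUN apt-get update",
    "RUN apt-get install -y python3",
    "COPY app.py /srv/",
    "RUN chmod +x /srv/app.py",
]
print(merge_runs(dockerfile)[1])
# -> RUN apt-get update && apt-get install -y python3
```

Note that the final RUN is not merged because the intervening COPY breaks adjacency, exactly as rule (a1) requires of "consecutive adjacent" commands.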
Docker lacks isolation of the container's resource view. Docker can restrict the resources used by processes in a container, but those processes cannot automatically perceive the restriction in their runtime environment. When Docker provides an independent file system environment for a container, it also mounts the proc file system at /proc so that processes in the container can interact with the system kernel through it. However, the total memory, CPU information, system start time and other data that a process in the container obtains through the proc file system are the same as on the host, and therefore inconsistent with the limits set when the container was started. For the large class of JVM-based Java applications, the startup script mainly relies on the system memory capacity to allocate heap and stack sizes for the JVM. An application in a container limited to 200 MB of memory but created on a 2 GB host will therefore believe it can use the whole 2 GB: the startup script reports 2 GB to the Java runtime, which sizes the heap and stack accordingly, completely out of line with the 200 MB actually available, and the application will certainly fail to start. Many applications likewise tune their performance and set their thread counts according to the number of CPU cores read from the proc file system; even if the user limits the container's available CPU set with startup parameters, the container still reports the host's core count, so the application may run in unexpected ways.
For this reason, the present application replaces the way Docker prepares the proc file system for a container: instead of directly mounting the system's proc virtual file system, a special mount is prepared at the /proc location inside the container, taking over the container processes' access to /proc and strengthening container isolation. The takeover of /proc is performed by a FUSE program running on the host, which may be called the proc agent; it implements a virtual file system with FUSE. When a process in the container accesses /proc, it actually accesses the virtual file system prepared by the proc agent, and the proc agent returns specially prepared data according to a policy that can be changed dynamically. Using the proc agent, the resource views of containers can be isolated to a certain extent, solving most of the problems caused by non-isolated resource views (as shown in FIG. 5). Docker also lacks a disk quota limit inside the container. When starting a container, Docker prepares a writable rootfs for it, in which all modifications made to the file system by processes in the container are saved, but it does not limit the rootfs capacity. The rootfs of all containers on a host share the host file system, so a process in any container can fill the file system and leave no space for the other containers. The present application therefore adds a capacity limit on the container's rootfs to Docker, ensuring that a container administrator can limit the disk quota inside the container (both space and inode count), and allowing container-based resource management to take each container's disk usage into account.
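The proc agent's policy can be illustrated with a sketch of how a read of /proc/meminfo might be answered; the rewrite_meminfo helper and the values are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch of a proc-agent policy: when a container process reads
# /proc/meminfo, the agent substitutes the container's memory limit for the
# host's total memory before returning the file contents, so tools like JVM
# startup scripts see the container limit rather than host memory.
def rewrite_meminfo(host_meminfo, limit_kb):
    out = []
    for line in host_meminfo.splitlines():
        if line.startswith("MemTotal:"):
            out.append(f"MemTotal:       {limit_kb} kB")  # container limit
        else:
            out.append(line)  # other fields passed through unchanged
    return "\n".join(out)

# A 2 GB host answering a container limited to 200 MB (204800 kB).
host = "MemTotal:       2097152 kB\nMemFree:         512000 kB"
print(rewrite_meminfo(host, 204800).splitlines()[0])
# -> MemTotal:       204800 kB
```

A real proc agent would serve this rewritten content through a FUSE file system mounted at the container's /proc, with the substitution policy loaded per container and changeable at run time.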
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (5)

1. A lightweight resource virtualization and allocation method, characterized by comprising the following steps:
(a) dividing the public container images in a service center into different image layers and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer and a development framework layer;
(b) selecting the image of any node in the public container image hierarchy as a base image, developing an application service image on top of it, and dividing out the necessary common base image layers accordingly;
(c) analyzing the image-layer management information and preloading the necessary common base image layers according to each node's hardware characteristics and mission tasks;
(d) using the container engine to pull image-layer data block by block within the same service center, or establishing a P2P network for image data transmission between different service centers.
2. The lightweight resource virtualization and allocation method according to claim 1, wherein: in step (a), the generation of the public container image is regulated based on Dockerfile syntax.
3. The lightweight resource virtualization and allocation method according to claim 2, wherein the Dockerfile syntax adjustment rules are:
(a1) merging consecutive adjacent RUN commands into a single RUN command using the && operator;
(a2) combining multiple ENV commands into one ENV command;
(a3) judging whether the source address of an ADD command is a local compressed file, and if it is not, changing the ADD command into a COPY command;
(a4) expressing the parameters of the CMD and ENTRYPOINT commands in array form.
4. The lightweight resource virtualization and allocation method according to claim 1, wherein: in step (a), the way Docker prepares the proc file system for the container is also replaced, so that a special mount is prepared at the /proc location inside the container instead of the proc virtual file system.
5. The lightweight resource virtualization and allocation method according to claim 4, wherein: a capacity limit on the container's rootfs is also added to Docker.
CN202010234317.7A 2020-03-30 2020-03-30 Lightweight resource virtualization and distribution method Active CN111432006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010234317.7A CN111432006B (en) 2020-03-30 2020-03-30 Lightweight resource virtualization and distribution method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010234317.7A CN111432006B (en) 2020-03-30 2020-03-30 Lightweight resource virtualization and distribution method

Publications (2)

Publication Number Publication Date
CN111432006A true CN111432006A (en) 2020-07-17
CN111432006B CN111432006B (en) 2023-03-31

Family

ID=71549857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010234317.7A Active CN111432006B (en) 2020-03-30 2020-03-30 Lightweight resource virtualization and distribution method

Country Status (1)

Country Link
CN (1) CN111432006B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114185641A (en) * 2021-11-11 2022-03-15 北京百度网讯科技有限公司 Virtual machine cold migration method and device, electronic equipment and storage medium
CN116204305A (en) * 2022-12-21 2023-06-02 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Method for limiting the number of Docker container inodes

Citations (13)

Publication number Priority date Publication date Assignee Title
CN101159596A (en) * 2006-10-02 2008-04-09 国际商业机器公司 Method and apparatus for deploying servers
CN106227579A (en) * 2016-07-12 2016-12-14 深圳市中润四方信息技术有限公司 A kind of Docker container construction method and Docker manage control station
WO2017092672A1 (en) * 2015-12-03 2017-06-08 华为技术有限公司 Method and device for operating docker container
CN107819802A (en) * 2016-09-13 2018-03-20 华为软件技术有限公司 A kind of mirror image acquisition methods, node device and server in node cluster
CN108021608A (en) * 2017-10-31 2018-05-11 赛尔网络有限公司 A kind of lightweight website dispositions method based on Docker
CN108446166A (en) * 2018-03-26 2018-08-24 中科边缘智慧信息科技(苏州)有限公司 Quick virtual machine starts method
CN108616419A (en) * 2018-03-30 2018-10-02 武汉虹旭信息技术有限责任公司 A kind of packet capture analysis system and its method based on Docker
CN110096333A (en) * 2019-04-18 2019-08-06 华中科技大学 A kind of container performance accelerated method based on nonvolatile memory
US20190245949A1 (en) * 2018-02-06 2019-08-08 Nicira, Inc. Packet handling based on virtual network configuration information in software-defined networking (sdn) environments
CN110119377A (en) * 2019-04-24 2019-08-13 华中科技大学 Online migratory system towards Docker container is realized and optimization method
CN110673923A (en) * 2019-09-06 2020-01-10 中国平安财产保险股份有限公司 XWIKI system configuration method, system and computer equipment
CN110674043A (en) * 2019-09-24 2020-01-10 聚好看科技股份有限公司 Application debugging processing method and server
CN111125003A (en) * 2019-11-25 2020-05-08 中科边缘智慧信息科技(苏州)有限公司 Container mirror image light weight and rapid distribution method

Patent Citations (13)

Publication number Priority date Publication date Assignee Title
CN101159596A (en) * 2006-10-02 2008-04-09 国际商业机器公司 Method and apparatus for deploying servers
WO2017092672A1 (en) * 2015-12-03 2017-06-08 华为技术有限公司 Method and device for operating docker container
CN106227579A (en) * 2016-07-12 2016-12-14 深圳市中润四方信息技术有限公司 A kind of Docker container construction method and Docker manage control station
CN107819802A (en) * 2016-09-13 2018-03-20 华为软件技术有限公司 A kind of mirror image acquisition methods, node device and server in node cluster
CN108021608A (en) * 2017-10-31 2018-05-11 赛尔网络有限公司 A kind of lightweight website dispositions method based on Docker
US20190245949A1 (en) * 2018-02-06 2019-08-08 Nicira, Inc. Packet handling based on virtual network configuration information in software-defined networking (sdn) environments
CN108446166A (en) * 2018-03-26 2018-08-24 中科边缘智慧信息科技(苏州)有限公司 Quick virtual machine starts method
CN108616419A (en) * 2018-03-30 2018-10-02 武汉虹旭信息技术有限责任公司 A kind of packet capture analysis system and its method based on Docker
CN110096333A (en) * 2019-04-18 2019-08-06 华中科技大学 A kind of container performance accelerated method based on nonvolatile memory
CN110119377A (en) * 2019-04-24 2019-08-13 华中科技大学 Online migratory system towards Docker container is realized and optimization method
CN110673923A (en) * 2019-09-06 2020-01-10 中国平安财产保险股份有限公司 XWIKI system configuration method, system and computer equipment
CN110674043A (en) * 2019-09-24 2020-01-10 聚好看科技股份有限公司 Application debugging processing method and server
CN111125003A (en) * 2019-11-25 2020-05-08 中科边缘智慧信息科技(苏州)有限公司 Container mirror image light weight and rapid distribution method

Non-Patent Citations (3)

Title
NICOLAE SIRBU: ""Docker – the Solution for Isolated Environments"", 《HTTPS://DZONE.COM/ARTICLES/DOCKER-THE-SOLUTION-FOR-ISOLATED-ENVIRONMENTS》 *
WANG KANGJIN; YANG YONG; LI YING; LUO HANMEI; MA LIN: ""FID: A Faster Image Distribution System for Docker Platform"", 《2017 IEEE 2ND INTERNATIONAL WORKSHOPS ON FOUNDATIONS AND APPLICATIONS OF SELF* SYSTEMS (FAS*W)》 *
JIN Yongxia et al.: "Management and Maintenance of Virtual Machine Images on a Cloud Computing Experimental Platform", Experiment Technology and Management *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114185641A (en) * 2021-11-11 2022-03-15 北京百度网讯科技有限公司 Virtual machine cold migration method and device, electronic equipment and storage medium
CN114185641B (en) * 2021-11-11 2024-02-27 北京百度网讯科技有限公司 Virtual machine cold migration method and device, electronic equipment and storage medium
CN116204305A (en) * 2022-12-21 2023-06-02 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Method for limiting the number of Docker container inodes
CN116204305B (en) * 2022-12-21 2023-11-03 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Method for limiting the number of Docker container inodes

Also Published As

Publication number Publication date
CN111432006B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
US11487771B2 (en) Per-node custom code engine for distributed query processing
CN110663019B (en) File system for Shingled Magnetic Recording (SMR)
US8762480B2 (en) Client, brokerage server and method for providing cloud storage
KR20210019533A (en) Operating system customization in on-demand network code execution systems
US11449355B2 (en) Non-volatile memory (NVM) based method for performance acceleration of containers
JP4772854B2 (en) Computer system configuration management method, computer system, and configuration management program
US20110078681A1 (en) Method and system for running virtual machine image
US20170289059A1 (en) Container-based mobile code offloading support system in cloud environment and offloading method thereof
CN113296792B (en) Storage method, device, equipment, storage medium and system
US11048716B1 (en) Managed virtual warehouses for tasks
CN103077197A (en) Data storing method and device
CN111381928B (en) Virtual machine migration method, cloud computing management platform and storage medium
CN111432006A (en) Lightweight resource virtualization and distribution method
US20240004853A1 (en) Virtual data source manager of data virtualization-based architecture
CN113032099A (en) Cloud computing node, file management method and device
CN113918281A (en) Method for improving cloud resource expansion efficiency of container
CN116737363A (en) Data set cache acceleration method, system, equipment and medium of deep learning platform
CN116680040B (en) Container processing method, device, equipment, storage medium and program product
US11263026B2 (en) Software plugins of data virtualization-based architecture
CN111459668A (en) Lightweight resource virtualization method and device for server
US11916998B2 (en) Multi-cloud edge system
US20220405249A1 (en) Providing writable streams for external data sources
US20220383219A1 (en) Access processing method, device, storage medium and program product
US11868805B2 (en) Scheduling workloads on partitioned resources of a host system in a container-orchestration system
CN115794368A (en) Service system, memory management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant