CN110543311B - Mirror image construction method, device and storage medium - Google Patents

Mirror image construction method, device and storage medium

Info

Publication number
CN110543311B
CN110543311B (application CN201910838178.6A)
Authority
CN
China
Prior art keywords
hpc
image
mirror image
layer
application
Prior art date
Legal status
Active
Application number
CN201910838178.6A
Other languages
Chinese (zh)
Other versions
CN110543311A (en)
Inventor
解西国
韩孟之
翟健
孙建鹏
Current Assignee
Dawning Information Industry Beijing Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd
Priority to CN201910838178.6A
Publication of CN110543311A
Application granted
Publication of CN110543311B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/60 Software deployment
    • G06F 8/61 Installation
    • G06F 8/63 Image based installation; Cloning; Build to order
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of high-performance computing and provides an image construction method, an image construction apparatus, and a storage medium. The image construction method comprises the following step: building an HPC application image in accordance with the structure of the HPC software stack using the Singularity container engine. On the one hand, all files on which the HPC application depends are packaged into the image by this method, so containerized deployment of the HPC application can be realized and the deployment process is simplified. On the other hand, compared with traditional virtualization technology, the image file produced with Singularity is small, convenient to deploy and release, and the container starts and runs efficiently. Furthermore, long-term study by the inventors has found that adopting Singularity is a more suitable way than adopting Docker to achieve containerized deployment of HPC applications.

Description

Mirror image construction method, device and storage medium
Technical Field
The invention relates to the technical field of high-performance computing (High Performance Computing, HPC), and in particular to an image construction method, apparatus, and storage medium.
Background
Conventional HPC parallel computing programs are typically distributed as source code packages. To deploy such a program, the corresponding compiler, math library, and Message Passing Interface (MPI) library must first be installed in the HPC cluster; then compilation-environment detection, processor optimization, and underlying communication optimization are completed by means of compilation scripts, before the computing power of the HPC cluster can be used for acceleration. This process is excessively complex.
Disclosure of Invention
An objective of the embodiments of the present application is to provide an image construction method, apparatus, and storage medium, so as to solve the above technical problem.
In order to achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides an image construction method, including: building an HPC application image in accordance with the structure of the HPC software stack using the Singularity container engine.
In the method, first, the Singularity container technology is adopted to build an HPC application image according to the structure of the HPC software stack; that is, all files on which the HPC application depends at runtime are packaged into the image, so that once the image is deployed, a container can be started from it and the HPC application runs in the container as if it were running directly on a physical machine equipped with the corresponding runtime environment. The deployment process of the HPC application is therefore greatly simplified. Second, compared with traditional virtualization technologies (e.g., virtual machines), the image file produced with the Singularity container engine is small, which is convenient for application deployment and release, and the container starts and runs efficiently. Furthermore, long-term study by the inventors has found that Docker, the alternative container technology that is currently most common, is not well suited to containerized deployment of HPC applications, and that adopting the Singularity container technology is the desirable approach.
In one implementation manner of the first aspect, building an HPC application image according to the structure of the HPC software stack using the Singularity container engine includes: building the HPC application image in one pass, in order from the bottom layer to the top layer of the HPC software stack, using the Singularity container engine.
In one implementation manner of the first aspect, building an HPC application image according to the structure of the HPC software stack using the Singularity container engine includes: building the image of each layer in turn, from the bottom layer to the top layer of the HPC software stack, using the Singularity container engine, the top-layer image so built being the HPC application image; if the base image contains none of the target layer's files, building the target layer image from the base image and the target layer's files; if the base image contains some of the target layer's files, building the target layer image from the base image and those files of the target layer not contained in the base image; if the base image contains all of the target layer's files, skipping the construction of the target layer image; wherein the target layer is any layer of the HPC software stack other than the bottom layer, and the base image is the most recently built layer image before the target layer image is built.
The HPC software stack has a layered structure: the closer to the system kernel, the lower the layer; the closer to the application program, the higher the layer; and upper layers depend (though not absolutely) on lower layers. Building images in order from the bottom layer to the top layer of the HPC software stack therefore avoids the problem of upper-layer files lacking their dependency base when packaged. In one implementation, the HPC application image may be built directly in one pass; in another implementation, multiple layer images may be built corresponding to the structure of the HPC software stack, the top-layer image being the final HPC application image. For the latter implementation, the image of each layer may be built strictly according to the structure of the HPC software stack, or a more flexible manner may be adopted: for example, if the files of layer A were already packaged when the image of some layer below A was built, the construction of layer A's image may be skipped and the construction of the image for the layer above A (if any) continued.
In one implementation manner of the first aspect, the HPC software stack includes, in order from the bottom layer to the top layer: the system underlying libraries, the Infiniband network, the compiler, the message passing interface (MPI), the math libraries, and the HPC application.
The above implementation provides a specific HPC software stack hierarchy, it being understood that the HPC software stack may have other structures as well.
In one implementation manner of the first aspect, the manner of constructing the HPC application image includes: building based on an image in an image repository, building based on a local image, building based on a compressed package, or building based on a definition file.
There are thus various options for constructing an image with the Singularity container engine, which allows users to choose according to their own requirements.
In one implementation manner of the first aspect, the image repository includes a Docker image repository, and the local image includes a local Docker image.
Although the inventors found Singularity to be more suitable than Docker for containerized deployment of HPC applications, this does not exclude using Docker images as the basis for building Singularity images: many relevant files are already packaged in existing Docker images, saving the trouble of packaging them oneself, and since Docker is the most widely used container technology at present, ready-made Docker images are plentiful and easy to obtain.
In one implementation manner of the first aspect, the format of the HPC application image includes: the squashfs compressed format, the ext3 format, or the sandbox directory.
The Singularity container engine can build images in several formats, so users can choose according to their own requirements: ext3 and sandbox are editable image formats, convenient for modifying the image, while squashfs is a compressed format, so the resulting image is small and convenient to transfer.
In one implementation manner of the first aspect, the method further includes: deploying the HPC application image onto nodes in an HPC cluster.
In one implementation manner of the first aspect, deploying the HPC application image onto nodes in an HPC cluster includes: deploying the HPC application image onto an I/O node serving as shared storage in the HPC cluster.
The HPC application image may be deployed directly on the local storage of each computing node, or it may be deployed on the I/O node serving as shared storage in the HPC cluster, with each computing node mounting the shared storage; the effect is similar to the former deployment manner, yet only one image needs to be deployed per HPC cluster, which is very convenient. The specific deployment manner is not limited: after the image is produced, it may be transferred to the corresponding node over the network, copied via a USB flash disk, and so on. Since a Singularity image is not large, it is convenient to transfer or copy.
In a second aspect, an embodiment of the present application provides an image construction apparatus, including: a construction module configured to build an HPC application image in accordance with the structure of the HPC software stack using the Singularity container engine.
In a third aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon computer program instructions which, when read and executed by a processor, perform the steps of the method provided by the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide an electronic device, including: a memory and a processor, the memory having stored therein computer program instructions which, when read and executed by the processor, perform the steps of the method provided by the first aspect or any one of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a schematic diagram of an HPC cluster;
FIG. 2 shows a flowchart of an image construction method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an HPC software stack according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of image construction using the Singularity container engine;
FIG. 5 shows a functional block diagram of an image construction apparatus provided by an embodiment of the present application;
FIG. 6 shows a schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The image construction method provided by the embodiments of the present application is used to build an HPC application image. An HPC application is an application program that is deployed into an HPC cluster and executed by nodes of the cluster, including but not limited to parallel computing programs. The HPC cluster is briefly described below.
FIG. 1 illustrates a schematic diagram of an HPC cluster. The HPC cluster 100 in FIG. 1 includes a login management node 110, an I/O node 120, and computing nodes 130, interconnected by a management network 140 and an Infiniband ("infinite bandwidth") network 150. The management network 140 may be an Ethernet network and does not require a high transmission rate, while the Infiniband network 150 is a high-speed network used mainly for communication between nodes during computing tasks. When using the HPC cluster 100, a user does not log in to a computing node 130 directly but submits a computing job task on the login management node 110, and the job scheduling system deployed in the HPC cluster 100 is responsible for scheduling the computing task to execute on the computing nodes.
The job computing task described above is completed by the computing nodes 130 executing an HPC application, which may be deployed directly on the local storage of each computing node 130 or, in the more common manner, on the I/O node 120 serving as shared storage; each computing node 130 mounts the shared storage, so consistent access to the HPC application is achieved. In some implementations, the I/O node may also be replaced with a parallel storage system.
It should be noted that the structure of the HPC cluster in fig. 1 is only an example, and the HPC cluster may also include more types of nodes than in fig. 1 when implemented, and thus fig. 1 should not be construed as limiting the scope of protection of the present application.
In a comparative embodiment, before the HPC application is deployed, its runtime environment must first be built on the corresponding node (such as a computing node or an I/O node), a relatively complex process. After virtualization technology appeared, the deployment process could be simplified by means of virtual machines, but virtual machines are unsuitable for performance-sensitive HPC scenarios because of excessive image size, slow startup, high resource occupation, and similar problems.
In the image construction method provided by the embodiments of the present application, the HPC application image is built with the Singularity container engine, so containerized deployment of the HPC application can be realized, the deployment process is simplified, and the various problems of deploying with traditional virtualization technology (such as virtual machines) are avoided. Singularity is a container technology; the software tool realizing it is the Singularity container engine, originally developed at the Lawrence Berkeley National Laboratory in the United States.
Containerized deployment of an HPC application means packaging the runtime environment on which the HPC application depends (such as dynamic libraries and configuration files) into a container image, deploying the image onto the corresponding nodes in the HPC cluster, and having the nodes start containers from the image, the HPC application being executed as the container starts. Container technology is a lightweight virtualization technology. Compared with traditional virtualization, a container runs directly on the host kernel, with no performance loss from an intermediate virtualization layer, higher execution efficiency, and lower resource occupation, making it suitable for packaging performance-sensitive HPC applications. Furthermore, a container can achieve startup on the order of seconds, far better than a virtual machine (which typically needs several minutes to boot). Moreover, compared with the huge image files of virtual machines (generally several GB or more), container image files are smaller, generally only a few hundred MB, and thus better suited to application deployment and release.
Furthermore, since Singularity is not the only container technology, the inventors also compared Singularity with Docker, another mainstream container technology, before finally selecting the Singularity container engine as the image-building tool, and concluded that Singularity is better suited than Docker for containerized deployment of HPC applications. This is briefly explained as follows:
Docker, a common container technology in cloud computing and other fields, has seen increasing use in recent years. However, the inventors' long-term study found that packaging HPC applications in Docker containers causes problems at runtime and exhibits poor compatibility with the HPC software stack. The main problems are as follows:
(1) Docker uses the cgroups and namespaces provided by the Linux kernel to realize resource limitation and isolation, whereas HPC applications do the opposite, integrating computing resources through MPI and the like to realize massive parallel computation. Excessive resource isolation makes inter-process communication very complex when Docker runs an HPC application.
(2) Docker's configuration for cross-physical-machine communication is complex and unsuited to the parallel communication of HPC applications. Because Docker enables network isolation by default (the container has a network stack independent of the host), complex virtual network interfaces (veth) and iptables rules must be configured, or communication must go through vxlan and the like, when containers communicate across physical hosts. HPC applications usually require large-scale cross-node parallel computing, so cross-node container communication must be configured before the HPC application can run, increasing the difficulty of operating the application.
(3) The startup mode of Docker containers is poorly compatible with the parallel startup mode of traditional HPC applications. When a traditional HPC application runs, the process manager provided by MPI (typically mpirun) is responsible for launching application processes on the corresponding nodes, whereas Docker containers are started and managed by the docker run command, which fits poorly with the traditional HPC application startup mode.
(4) The Docker image storage approach is unsuitable for HPC clusters. Docker images adopt layered management and must be stored on a local disk (generally under the /var directory). As mentioned in the description of FIG. 1, in the more common HPC cluster implementation, applications are stored on the cluster's shared storage and all nodes mount that storage for consistent access. Images stored on local disks occupy considerable disk space; furthermore, when the required image is absent from the local disk, it must be downloaded from the image repository node. During large-scale parallel computation, a large number of computing nodes may download images from the image repository node simultaneously, which on the one hand makes application startup slow and on the other hand often crashes the image repository node under the large-scale access, causing application startup to fail.
Singularity, by contrast, is designed along different lines from Docker, does not suffer from Docker's problems, and is more suitable than Docker for HPC scenarios. Specifically, regarding (1), Singularity isolates fewer resources: by default only the mount points (the MNT namespace in the Linux kernel) are isolated and cgroups are not enabled, so inter-process communication is relatively simple when Singularity runs an HPC application. Regarding (2), Singularity does not isolate the network; the container directly shares the host network, so no special network configuration is needed before running the HPC application (e.g., no separate IP address or hostname need be configured). Regarding (3), the startup mode of a Singularity container is well compatible with the parallel startup mode of traditional HPC applications and requires no great change: startup can proceed through the mpirun provided by MPI or through srun in a job scheduling system (startup through srun is exemplified below). Regarding (4), Singularity images support being deployed directly on shared storage in the HPC cluster, matching the mainstream HPC cluster implementation, and containers start quickly, so no image-repository performance bottleneck arises.
It should be emphasized that the above drawbacks of Docker for containerized deployment of HPC applications and the above advantages of Singularity for the same purpose are all results obtained by the inventors through practice and careful study, and all constitute contributions made in the course of the invention.
How to build an HPC application image with the Singularity container engine is described in detail below. FIG. 2 shows a flowchart of the image construction method provided by an embodiment of the present application. The method may or may not be performed by a node in the HPC cluster; that is, the construction and deployment of the image need not be done on the same device. For example, the image may be built on a certain computing node and then transferred over the network to the I/O node to complete deployment, or it may be built on a device outside the HPC cluster and then copied to the I/O node via a USB flash disk to complete deployment, and so on. Referring to FIG. 2, the method includes:
step S200: HPC application images are built in accordance with the structure of the HPC software stack using a singulty container engine.
The Singularity container engine is released as a source code package, which can first be downloaded from the Singularity website and then compiled and installed; the compilation and installation of the Singularity container engine can follow existing practice and are not detailed here.
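Purely as an illustration, a typical autotools-based installation of the Singularity 2.x series (the version number and installation prefix below are assumptions) proceeds roughly as follows:

tar -xzf singularity-2.6.0.tar.gz    # unpack the downloaded source package
cd singularity-2.6.0
./configure --prefix=/usr/local      # detect the compilation environment
make                                 # compile
sudo make install                    # install
singularity --version                # verify the installation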
The HPC software stack refers to the stacked hierarchy of libraries on which an HPC application runs. FIG. 3 shows a schematic structural diagram of an HPC software stack provided by an embodiment of the present application. In FIG. 3, the HPC software stack includes the following layers, from the bottom up:
and (3) a kernel: the system comprises a processor (CPU in the figure), a memory (Mem in the figure), driving modules (eth 0 and mlx5 in the figure) corresponding to various hardware devices, and the like.
The system bottom layer library: including libc, librt, libm, pthread, hwloc and the like.
Infiniband network: including ibverbs, rdmacm, ibumad, daplofa, etc., for supporting high-speed communication between nodes.
A compiler: and a compiler such as gcc, intel, pgi is mainly used for compiling source codes.
MPI: including intelmpi, openmpi, mpich, mvapich, etc. MPI is a common programming interface in HPC, where there may be a large number of processes concurrently running parallel computation, message communication and data synchronization through MPI.
Mathematical library: the method comprises blas, lapack, saclapack, fftw and other library files, and is mainly used for providing mathematical operation support.
HPC application: the method comprises hpl, vasp, wrf, gromacs and other programs and is mainly used for realizing parallel computation based on HPC clusters.
In the HPC software stack, upper layers depend on lower layers (not limited to immediately adjacent layers): for example, the lower layers of MPI can use shared memory or the Infiniband network to realize efficient intra-node and inter-node communication; the source code package of an HPC application must be compiled by a compiler; and every layer above the system underlying libraries depends on the functions they provide. Conversely, the lower layers of the HPC software stack do not depend on the implementation of the upper layers.
Because a Singularity container shares the operating system kernel with the physical host, once the physical host has loaded a kernel module, applications in the container can use the corresponding device directly, without reloading that kernel module inside the container. Therefore, only the programs and libraries above the kernel (the part above the dotted line in FIG. 3) need to be packaged into the Singularity container image at build time; the kernel does not. To simplify the description, the kernel layer is omitted when the HPC software stack is discussed below.
Regarding FIG. 3, it should be noted that each layer may include more content than shown; for example, for the compiler layer, only three compilers common under Linux are listed in FIG. 3, and in practice more kinds of compiler may be included. It should also be noted that when the image is built, not all contents of each layer need to be packaged into it; what is packaged is selected according to actual requirements (what the HPC application will use). For example, although three compilers are listed for the compiler layer in FIG. 3, only the gcc and intel compilers might be packaged, according to actual requirements, when a particular HPC application image is built.
Furthermore, it should be understood that the hierarchy of the HPC software stack shown in FIG. 3 is only one possible partitioning and does not exclude others; for example, one partitioning might merge the Infiniband network and the system underlying libraries into one layer, and another might further split the system underlying libraries into multiple sub-layers. For simplicity, the following description is based mainly on the HPC software stack structure of FIG. 3; when the HPC software stack has a different structure, the method can be adjusted accordingly.
As mentioned above, in the HPC software stack the upper layers depend on the lower layers and not vice versa, so images can be built in order from the bottom layer to the top layer of the HPC software stack, ensuring that upper-layer files never lack their dependency base when packaged. For example, for the HPC software stack in FIG. 3, image construction proceeds in the order system underlying libraries, Infiniband network, compiler, MPI, math libraries, and HPC application, finally yielding the HPC application image.
In the first construction mode, the files of every layer can be packaged in one pass, following the structure of the HPC software stack in order from the bottom layer to the top layer, so that the HPC application image is built at once. This construction mode is simpler, more direct, and more efficient.
In the second construction mode, since the HPC software stack is layered, the image of each layer can be built in turn, in order from the bottom layer to the top layer of the HPC software stack, the top-layer image so built being the final HPC application image. This construction mode is more flexible.
In the second construction mode, the image corresponding to the bottom layer of the HPC software stack is built first; in the case of FIG. 3, this is the system underlying library image. Then, for every layer of the HPC software stack other than the bottom layer, the corresponding image is built in a similar manner, so any one of these layers may be taken for description; call it the target layer. Building the target layer image has three possible cases:
First: if the base image (explained below) contains none of the target layer's files, the target layer image is built from the base image and the target layer's files, and the resulting target layer image contains the content of the base image plus the target layer's files.
Second: if the base image contains some of the target layer's files, the target layer image is built from the base image and those files of the target layer not contained in the base image, and the resulting target layer image contains the content of the base image plus those files.
Third: if the base image contains all of the target layer's files, the construction of the target layer image is skipped and the construction of the next layer's image continues (if there is no next layer, construction ends).
The base image is the most recently built layer image before the target layer image is built.
For example, after the system underlying library image is built, the Infiniband network image is built next, since the Infiniband network is the layer immediately above the system underlying libraries; for the Infiniband network layer, the base image is therefore the system underlying library image. Since the system underlying library image contains no Infiniband network files, this is the first case: the Infiniband network image is built from the system underlying library image and the Infiniband network files (e.g., ibverbs, rdmacm in FIG. 3), and the resulting image contains the system underlying library image plus the newly packaged Infiniband network files.
For another example, some compilers, such as the Intel compiler, come with their own math libraries. Suppose that for some HPC application all the math libraries it depends on are already contained in the Intel compiler's own math library, and the Intel compiler was packaged into the compiler image; the MPI image built on top of it then also contains the math library files. After the MPI image is built, the math library image would be built next, since the math library is the layer above MPI, and for the math library layer the base image is the MPI image. Since the MPI image already contains the math library files, this is the third case: the construction of the math library image is skipped outright and the construction of the HPC application image begins.
For another example, although the Intel compiler comes with its own math library, the math libraries that a certain HPC application depends on may not all be contained in it. This is the second case: after the MPI image is built, the math library image is still built, but only those math library files not contained by the Intel compiler need to be packaged into it; the math libraries already contained by the Intel compiler need not be packaged again. After the math library image is built, the HPC application image is built in turn, and for the HPC application layer the base image is the math library image.
It will be appreciated that in some schemes a corresponding layer image may also be built for every layer of the HPC software stack, whether or not that layer's files are already contained in the images of the layers below. That is, for these schemes, the process of building the HPC application image is:
build the system underlying library image; build the Infiniband network image based on the system underlying library image and the Infiniband network files; build the compiler image based on the Infiniband network image and the compiler files; build the MPI image based on the compiler image and the MPI files; build the math library image based on the MPI image and the math library files; and build the HPC application image based on the math library image and the HPC application files.
In the Singularity container engine, the build command is used to build images (both HPC application images and layer images). FIG. 4 shows a schematic diagram of image construction with the Singularity container engine. Referring to FIG. 4, the manner of constructing the HPC application image includes: building based on an image repository, building based on a local image (local to the device building the image), building based on a compressed package, or building based on a definition file.
The image repository may be a Docker image repository, a Singularity image repository, and the like, and the local image may likewise be a local Docker image, a local Singularity image, and the like. Although it was pointed out above that Singularity is more suitable than Docker for containerized deployment of HPC applications, the image construction method of the present application does not exclude Docker images as the basis for building Singularity images: many relevant files are already packaged in existing Docker images, saving the trouble of packaging them oneself, and since Docker is the most widely used container technology at present, ready-made high-quality Docker images are plentiful and easy to obtain.
With continued reference to FIG. 4, the image formats produced by build include the squashfs compressed format, the ext3 format, and the sandbox directory. ext3 and sandbox are editable image formats, convenient for modifying the image; squashfs is a compressed format, and the resulting image is small, convenient for transmission and deployment. A squashfs image can be created directly or compressed from an image in either of the other two formats.
As can be seen from FIG. 4, building images with the Singularity container engine is flexible, with multiple choices of construction source and output image format; the user can adopt one construction manner and one image format according to actual requirements.
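For example, with the build command of the Singularity 2.x series (the file and directory names below are illustrative), the sources and formats above correspond to invocations such as:

singularity build hpl.img hpl.def                     # squashfs compressed image (default) from a definition file
singularity build --writable hpl.img hpl.def          # writable ext3-format image
singularity build --sandbox hpl_dir/ hpl.def          # sandbox directory
singularity build base.img docker://centos:7.4.1708   # build from an image in the Docker image repository
singularity build hpl.img hpl_dir/                    # compress an existing sandbox into a squashfs image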
Because the definition file records the whole process of image creation, the image construction process is described in detail below taking the definition-file manner as an example and adopting the second construction mode, with one definition file configured for the construction of each layer's image. A definition file includes the following typical fields:
BootStrap/From: the source of the base image;
%labels: descriptive information about the image (optional);
%files: the files specified here are copied into the image being built;
%post: the commands executed while the image is built, i.e., the specific construction operations;
%runscript: the program executed by default when the container starts;
%test: the test instructions executed after the image is built, used to check whether the image was built correctly;
%environment: environment variable settings.
The build command of the Singularity container engine reads the definition file and completes the image construction according to its content. The following illustration is based on the HPC software stack structure of FIG. 3:
step A: system bottom library image/Infiniband network image construction
Step a builds a system bottom library image based on the centros 7.4 system (a Linux distribution) and installs Mellanox Infiniband network drivers in the system bottom library image to build an Infiniband network image. The following is the record file used to construct the Infiniband network image.
The BootStrap/From field of the record file is appointed to use the Docker mirror image of centos 7.4.1708 as a basic mirror image, that is to say, the system bottom library mirror image directly uses the ready-made Docker mirror image, thus the realization is simpler, and the self-packing of the files of the system bottom library is also possible. Copy of the Infiniband drive file to the/tmp directory within the image is specified in the% file field. The primary operation specified in the% post field is to configure the driver yum source and then install the Infiniband driver user state related rpm package with yum, with some of the commands used such as cd, cat, rm being supported by the system underlying library. Ibstat is performed after completion of the image construction, designated in the% test field, and it is checked whether an Infiniband network can be used in the container. The file content is specifically as follows:
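A minimal definition file consistent with this description might read as follows (the Mellanox OFED version, archive name, and package list are illustrative assumptions, not the original file of the application):

BootStrap: docker
From: centos:7.4.1708

%files
    # Infiniband driver package staged on the build host (version assumed)
    MLNX_OFED_LINUX-4.4-2.0.7.0-rhel7.4-x86_64.tgz /tmp

%post
    cd /tmp
    tar -xzf MLNX_OFED_LINUX-4.4-2.0.7.0-rhel7.4-x86_64.tgz
    # configure a local yum source pointing at the extracted RPMS directory
    echo "[mlnx]" > /etc/yum.repos.d/mlnx.repo
    echo "name=Mellanox OFED" >> /etc/yum.repos.d/mlnx.repo
    echo "baseurl=file:///tmp/MLNX_OFED_LINUX-4.4-2.0.7.0-rhel7.4-x86_64/RPMS" >> /etc/yum.repos.d/mlnx.repo
    echo "enabled=1" >> /etc/yum.repos.d/mlnx.repo
    echo "gpgcheck=0" >> /etc/yum.repos.d/mlnx.repo
    # install only the user-space packages; kernel modules are supplied by the host
    yum install -y libibverbs libibverbs-utils libibverbs-devel librdmacm libibumad infiniband-diags
    rm -rf /tmp/MLNX_OFED_LINUX-*

%test
    # check that the Infiniband network is usable inside the container
    ibstat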
Saving the above definition file as ib_mlnx.def, the following command completes the image construction:

singularity build /public/singularity/images/ib_mlnx.img ib_mlnx.def

That is, the build command of the Singularity container engine constructs an image from the definition file and saves it as /public/singularity/images/ib_mlnx.img.
At this point, the Infiniband network image construction is complete. A few supplementary points: first, the definition file used above is merely exemplary, and a definition file used to construct an Infiniband network image need not be written exactly this way; second, an image created from a definition file defaults to the squashfs compressed format; third, the build command used to construct the image may be entered and executed manually by the user or executed automatically in a script, without limitation. These points apply equally to the image constructions below and are not repeated.
Step B: compiler image construction
Step B involves only two compilers, the GNU compiler and the Intel compiler. The GNU compiler is already contained in common Linux distributions (i.e., already packaged into the system underlying library image) and can be installed with the yum command. For the Intel compiler, the compiler can be installed on a device (namely the device building the image), the binaries and libraries related to the compiler can then be extracted and packed into a compressed file, and that compressed file packaged into the image. The Intel compiler comes with the MKL math library, which can be packaged into the image together with the compiler. The definition file used to build the compiler image is described below.
The BootStrap/From field of the definition file specifies a local image as the base image, namely the image ib_mlnx.img built in step A. The %files field specifies that the packed Intel compiler be copied into the image under the /opt directory. The %post field specifies executing the yum command to install the GNU compiler and unpacking the compressed package of the Intel compiler. The %test field outputs the version of the Intel compiler (icc is the Intel compiler); if the version is output successfully, the Intel compiler has been packaged into the image successfully. The %environment field configures the environment variables in the container, i.e., the compiler-related binaries, header files, and libraries are added to environment variables such as the system PATH, INCLUDE, LIBRARY_PATH, and LD_LIBRARY_PATH.
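A minimal definition file along these lines might read (the archive name and the Intel installation paths are illustrative assumptions):

BootStrap: localimage
From: /public/singularity/images/ib_mlnx.img

%files
    # archive of Intel compiler binaries and libraries packed on the build host
    intel.tar.gz /opt

%post
    # GNU compilers from the distribution repositories
    yum install -y gcc gcc-c++ gcc-gfortran make
    # unpack the Intel compiler (with its bundled MKL math library)
    cd /opt
    tar -xzf intel.tar.gz
    rm -f intel.tar.gz

%test
    # printing the version verifies that the Intel compiler was packaged
    /opt/intel/bin/icc --version

%environment
    export PATH=/opt/intel/bin:$PATH
    export INCLUDE=/opt/intel/include:$INCLUDE
    export LIBRARY_PATH=/opt/intel/lib/intel64:$LIBRARY_PATH
    export LD_LIBRARY_PATH=/opt/intel/lib/intel64:/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH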
Saving the above definition file as intel.def, the following command completes the image construction:
singularity build /public/singularity/images/intel.img intel.def
At this point, the compiler image construction is complete.
Step C: MPI image construction
The image built in step C packages openmpi (one implementation of MPI); building an image based on another MPI implementation is similar. The definition file used to construct the MPI image is described below.
The BootStrap/From field of the definition file specifies a local image as the base image, namely the image intel.img built in step B. The %files field specifies that the openmpi source code package be copied to the /mnt directory inside the image. The %post field specifies executing the compilation and installation process of openmpi. The %environment field configures the environment variables in the container.
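A minimal definition file along these lines might read (the openmpi version and installation prefix are illustrative assumptions):

BootStrap: localimage
From: /public/singularity/images/intel.img

%files
    openmpi-3.1.0.tar.gz /mnt

%post
    cd /mnt
    tar -xzf openmpi-3.1.0.tar.gz
    cd openmpi-3.1.0
    # build against the Infiniband user-space libraries already in the image
    ./configure --prefix=/opt/openmpi --with-verbs
    make -j 8
    make install
    rm -rf /mnt/openmpi-3.1.0*

%environment
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH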
Saving the above definition file as openmpi.def, the following command completes the image construction:

singularity build /public/singularity/images/openmpi.img openmpi.def

At this point, the MPI image construction is complete.
Step D: mathematic library image construction
HPC applications often require linear algebra calculations such as matrix multiplication, typically performed by calling a standard math library, such as the blas, lapack, scalapack, and fftw shown in FIG. 3. Since the Intel compiler packed in step B already contains its own math library, step D can be skipped, and step E performed directly, if the HPC application depends on no additional math library. For a clearer explanation of the present application, however, it is assumed here that the HPC application depends on the lapack library and that this library is not contained in the MPI image built above. The definition file used to construct the math library image is described below.
The BootStrap/From field of the definition file specifies a local image as the base image, namely the image openmpi.img built in step C. The %files field specifies that the lapack source code package be copied to the /mnt directory inside the image. The %post field specifies executing the compilation and installation process of lapack. The %environment field configures the environment variables in the container.
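A minimal definition file along these lines might read (the lapack version, build targets, and installation path are illustrative assumptions):

BootStrap: localimage
From: /public/singularity/images/openmpi.img

%files
    lapack-3.8.0.tar.gz /mnt

%post
    cd /mnt
    tar -xzf lapack-3.8.0.tar.gz
    cd lapack-3.8.0
    # the shipped example configuration uses gfortran, installed in step B
    cp make.inc.example make.inc
    make blaslib lapacklib
    mkdir -p /opt/lapack/lib
    cp librefblas.a liblapack.a /opt/lapack/lib
    rm -rf /mnt/lapack-3.8.0*

%environment
    export LIBRARY_PATH=/opt/lapack/lib:$LIBRARY_PATH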
Saving the above definition file as lapack.def, the following command completes the image construction:
singularity build /public/singularity/images/lapack.img lapack.def
At this point, the math library image construction is complete.
Step E: HPC application image construction
After steps A to D have been executed, the Singularity container image contains the packaged system underlying libraries, Infiniband network, compiler, MPI, math library, and other files, providing a complete environment for the HPC application to run in; the HPC application can now be installed and its image built. The definition file used to build the HPC application image is described below.
The BootStrap/From field of the definition file specifies a local image as the base image, namely the image lapack.img built in step D. The %files field specifies that the source code package hpl-2.2.tar.gz and the compilation configuration file Make.intel be copied to the /mnt directory inside the image; hpl is used to test the floating-point performance of an HPC system. The %post field specifies executing the compilation and installation process of hpl. The %environment field configures the environment variables in the container.
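A minimal definition file along these lines might read (the directory layout is an illustrative assumption; Make.intel is assumed to point hpl at the MPI and math libraries inside the image):

BootStrap: localimage
From: /public/singularity/images/lapack.img

%files
    hpl-2.2.tar.gz /mnt
    Make.intel /mnt

%post
    cd /mnt
    tar -xzf hpl-2.2.tar.gz
    cp Make.intel hpl-2.2/
    cd hpl-2.2
    # build the xhpl binary for the architecture named in Make.intel
    make arch=intel
    mkdir -p /opt/hpl
    cp bin/intel/xhpl /opt/hpl
    rm -rf /mnt/hpl-2.2*

%environment
    export PATH=/opt/hpl:$PATH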
Saving the definition file as hpl.def, the following command completes the image construction:
singularity build /public/singularity/images/hpl.img hpl.def
At this point, the HPC application image construction is complete.
It will be appreciated that if the first construction mode is adopted, only one definition file need be configured to build the HPC application image directly; its content is essentially the contents of the definition files of steps A to E merged together, and it is not described in detail here.
Step S210: the HPC application image is deployed onto nodes in the HPC cluster.
The constructed HPC application image can be deployed directly onto nodes in the HPC cluster by USB flash disk copy, network transmission, and the like, with no complex compile-and-install process and therefore no need to handle the various errors that process produces, which greatly simplifies the deployment of the HPC application. Moreover, the HPC application image is not large, so it is convenient to transfer or copy. The image can also be produced by a professional skilled in HPC, with compilation optimization performed during production, further improving the running efficiency of the HPC application. As mentioned above, depending on the implementation, the HPC application may be deployed on each computing node or, in the more common implementation, on the I/O node serving as shared storage.
In FIG. 2, step S210 is shown in a dashed box to indicate that this step is optional; that is, in some schemes the image built in step S200 may be left undeployed for the time being, e.g., stored locally on the device, or the built image may be uploaded to an image repository for storage.
After deployment, the exec command of the Singularity container engine can start a container based on the HPC application image and run the HPC application in the container. Taking the hpl image built in the example above, the statement that runs hpl is:

singularity exec /public/singularity/images/hpl.img xhpl

It will be appreciated that if a computing node is to run the HPC application, the Singularity container engine must at least be installed on that node first (or installed on the shared storage).
For an HPC cluster using a job scheduling system (taking the Slurm job scheduling system as an example), a user can submit the hpl computing task with a Slurm script. For example, the following Slurm script specifies 2 computing nodes, each running 24 processes, and completes the startup of the hpl processes with the srun command (the computing nodes actually still run the hpl program through the exec command of the Singularity container engine):
#!/bin/bash
#SBATCH -J hpl
#SBATCH -N 2
#SBATCH --ntasks-per-node=24
#SBATCH -p test
srun --mpi=pmi2 singularity exec /public/singularity/images/hpl.img xhpl
It can be seen that the startup mode of the Singularity container is compatible with the parallel startup mode of traditional HPC applications: a traditional HPC application may be started by the job scheduling system, and in the above example the container is likewise started by the job scheduling system. The situation when launching the HPC application in the mpirun manner is similar and is not analyzed specifically.
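As a sketch of the mpirun manner (the process count and host file below are illustrative), each MPI rank again runs the containerized program through the exec command:

mpirun -np 48 -hostfile hosts singularity exec /public/singularity/images/hpl.img xhpl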
Since all the files on which the HPC application depends at runtime are already packaged in the image, the Singularity container can be started directly, which is very simple. The container runs directly on the host kernel: unlike a traditional virtual machine, it has no performance loss from an intermediate virtualization layer, and it occupies few resources after startup, so it is well suited to encapsulating HPC applications, which demand very high computing efficiency. As evidence, the following table compares the efficiency of running hpl in a container against running hpl directly on a physical host:
TABLE 1
It can be seen that in the same hardware environment (same number of nodes and cores), running hpl in a Singularity container and running it directly on the physical host yield similar computed peak values; the performance loss is almost negligible.
FIG. 5 shows a functional block diagram of an image construction apparatus 300 provided by an embodiment of the present application. Referring to FIG. 5, the image construction apparatus 300 includes: a construction module 310 configured to build an HPC application image according to the structure of the HPC software stack using the Singularity container engine.
In one implementation of the image construction apparatus 300, the construction module 310 building an HPC application image according to the structure of the HPC software stack using the Singularity container engine includes: building the HPC application image in one pass, in order from the bottom layer to the top layer of the HPC software stack, using the Singularity container engine.
In one implementation of the image construction apparatus 300, the construction module 310 building an HPC application image according to the structure of the HPC software stack using the Singularity container engine includes: building the image of each layer in turn, from the bottom layer to the top layer of the HPC software stack, using the Singularity container engine, the top-layer image so built being the HPC application image; if the base image contains none of the target layer's files, building the target layer image from the base image and the target layer's files; if the base image contains some of the target layer's files, building the target layer image from the base image and those files of the target layer not contained in the base image; if the base image contains all of the target layer's files, skipping the construction of the target layer image; wherein the target layer is any layer of the HPC software stack other than the bottom layer, and the base image is the most recently built layer image before the target layer image is built.
In one implementation of the image construction apparatus 300, the HPC software stack includes, in order from the bottom layer to the top layer: the system underlying libraries, the Infiniband network, the compiler, the message passing interface (MPI), the math libraries, and the HPC application.
In one implementation of the image construction apparatus 300, the manner of constructing the HPC application image includes: building based on an image in an image repository, building based on a local image, building based on a compressed package, or building based on a definition file.
In one implementation of the image construction apparatus 300, the image repository includes a Docker image repository, and the local image includes a local Docker image.
In one implementation of the image construction apparatus 300, the format of the HPC application image includes: the squashfs compressed format, the ext3 format, or the sandbox directory.
In one implementation of the image construction apparatus 300, the apparatus further includes: a deployment module 320 (shown in a dashed box in FIG. 5 to indicate that it is optional) configured to deploy the HPC application image onto nodes in the HPC cluster.
In one implementation of the image construction apparatus 300, the deployment module 320 deploying the HPC application image onto nodes in the HPC cluster includes: deploying the HPC application image onto an I/O node serving as shared storage in the HPC cluster.
The image construction apparatus 300 of the embodiment of the present application has been described in the foregoing method embodiments; for brevity, where the apparatus embodiment is silent, reference may be made to the corresponding content of the method embodiments.
Fig. 6 shows one possible structure of an electronic device 400 provided in an embodiment of the present application. Referring to fig. 6, the electronic device 400 includes: processor 410, memory 420, and communication interface 430, which are interconnected and communicate with each other by a communication bus 440 and/or other forms of connection mechanisms (not shown).
The memory 420 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. The processor 410, as well as other possible components, may access the memory 420 to read and/or write data.
The processor 410 includes one or more processors (only one is shown), each of which may be an integrated circuit chip having signal processing capability. The processor 410 may be a general-purpose processor, including a central processing unit (CPU), a microcontroller unit (MCU), a network processor (NP), or another conventional processor; it may also be a special-purpose processor, including a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The communication interface 430 includes one or more interfaces (only one is shown) used to communicate directly or indirectly with other devices for data exchange. The communication interface 430 may be an Ethernet interface; a high-speed network interface (e.g., an InfiniBand interface); a mobile communication network interface, such as an interface for a 3G, 4G, or 5G network; or another type of interface with data transmission and reception capability.
One or more computer program instructions may be stored in the memory 420 and may be read and executed by the processor 410 to implement the steps of the image construction method provided by the embodiments of the present application, as well as other desired functions.
It is to be understood that the configuration shown in fig. 6 is merely illustrative, and that the electronic device 400 may include more or fewer components than those shown in fig. 6, or have a different configuration than that shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination of the two. In the embodiments of the present application, the electronic device 400 may or may not be a node in the HPC cluster; as mentioned previously, the construction of the image may, but need not, be performed by a node in the HPC cluster.
The present application also provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are read and executed by a processor of a computer, the steps of the image construction method provided in the embodiments of the present application are performed. The computer-readable storage medium may be implemented as, for example, the memory 420 in the electronic device 400 of fig. 6.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative: for example, the division into units is merely a logical functional division, and other divisions are possible in actual implementation; likewise, multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Further, units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the present application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application; various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (7)

1. An image construction method, comprising:
constructing an HPC application image according to the structure of a high-performance computing (HPC) software stack using the Singularity container engine;
wherein constructing the HPC application image according to the structure of the high-performance computing (HPC) software stack using the Singularity container engine comprises:
constructing each layer's image in order from the bottom layer to the top layer of the HPC software stack using the Singularity container engine, the constructed top-layer image being the HPC application image;
if none of the target layer's files are contained in the base image, constructing the target-layer image from the base image and the target layer's files;
if some of the target layer's files are contained in the base image, constructing the target-layer image from the base image and those files of the target layer not contained in the base image;
if all of the target layer's files are contained in the base image, skipping construction of the target-layer image;
wherein the target layer is any layer of the HPC software stack other than the bottom layer, and the base image is the layered image most recently built before the target-layer image is built;
the HPC software stack comprises, in order from the bottom layer to the top layer: system base libraries, an InfiniBand network, a compiler, a message passing interface (MPI), math libraries, and the HPC application; the InfiniBand network comprises library files supporting high-speed communication between nodes of the HPC cluster; the MPI comprises library files supporting message communication and data synchronization between the processes of a parallel computation;
the Singularity container and the physical host share an operating system kernel, and after a kernel module has been loaded by the physical host, an application in the container can use it directly.
2. The image construction method according to claim 1, wherein the HPC application image is constructed by: building from an image in an image repository, building from a local image, building from a compressed package, or building from an image file.
3. The image construction method according to claim 1, wherein the format of the HPC application image includes: a SquashFS compressed format, an ext3 format, or a sandbox directory.
4. The image construction method according to claim 1, wherein the method further comprises:
deploying the HPC application image onto a node in an HPC cluster.
5. The image construction method according to claim 4, wherein deploying the HPC application image onto a node in an HPC cluster comprises:
deploying the HPC application image onto an I/O node serving as shared storage in the HPC cluster.
6. An image construction apparatus, comprising:
a construction module configured to construct an HPC application image according to the structure of a high-performance computing (HPC) software stack using the Singularity container engine;
wherein constructing the HPC application image according to the structure of the high-performance computing (HPC) software stack using the Singularity container engine comprises:
constructing each layer's image in order from the bottom layer to the top layer of the HPC software stack using the Singularity container engine, the constructed top-layer image being the HPC application image;
if none of the target layer's files are contained in the base image, constructing the target-layer image from the base image and the target layer's files;
if some of the target layer's files are contained in the base image, constructing the target-layer image from the base image and those files of the target layer not contained in the base image;
if all of the target layer's files are contained in the base image, skipping construction of the target-layer image;
wherein the target layer is any layer of the HPC software stack other than the bottom layer, and the base image is the layered image most recently built before the target-layer image is built;
the HPC software stack comprises, in order from the bottom layer to the top layer: system base libraries, an InfiniBand network, a compiler, a message passing interface (MPI), math libraries, and the HPC application; the InfiniBand network comprises library files supporting high-speed communication between nodes of the HPC cluster; the MPI comprises library files supporting message communication and data synchronization between the processes of a parallel computation;
the Singularity container and the physical host share an operating system kernel, and after a kernel module has been loaded by the physical host, an application in the container can use it directly.
7. A computer-readable storage medium having stored thereon computer program instructions which, when read and executed by a processor, perform the steps of the method according to any one of claims 1 to 5.
CN201910838178.6A 2019-09-05 2019-09-05 Mirror image construction method, device and storage medium Active CN110543311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838178.6A CN110543311B (en) 2019-09-05 2019-09-05 Mirror image construction method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910838178.6A CN110543311B (en) 2019-09-05 2019-09-05 Mirror image construction method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110543311A CN110543311A (en) 2019-12-06
CN110543311B true CN110543311B (en) 2024-01-23

Family

ID=68712564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838178.6A Active CN110543311B (en) 2019-09-05 2019-09-05 Mirror image construction method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110543311B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221541A (en) * 2019-12-26 2020-06-02 曙光信息产业(北京)有限公司 Cluster parallel program deployment method and device
CN111142865A (en) * 2019-12-30 2020-05-12 北京百迈客生物科技有限公司 Method and system for deploying third-party software on biological cloud
CN111736956B (en) * 2020-06-29 2023-01-10 苏州浪潮智能科技有限公司 Container service deployment method, device, equipment and readable storage medium
CN113821219A (en) * 2020-11-16 2021-12-21 北京沃东天骏信息技术有限公司 Method and system for realizing application program containerization
CN113821228B (en) * 2021-09-30 2023-07-11 奥特酷智能科技(南京)有限公司 Method for constructing ROS or ROS-like project based on layered container mirror image
CN114217908B (en) * 2022-02-23 2022-07-15 广州趣丸网络科技有限公司 Container starting method, system, device and equipment
CN115686870B (en) * 2022-12-29 2023-05-16 深圳开鸿数字产业发展有限公司 Parallel computing method, terminal and computer readable storage medium
CN116107715B (en) * 2023-02-02 2023-09-26 北京天云融创软件技术有限公司 Method for running Docker container task and task scheduler
CN117972670A (en) * 2024-03-28 2024-05-03 北京大学 Cloud container mirror image building method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227579A (en) * 2016-07-12 2016-12-14 深圳市中润四方信息技术有限公司 A kind of Docker container construction method and Docker manage control station
WO2017045424A1 (en) * 2015-09-18 2017-03-23 乐视控股(北京)有限公司 Application program deployment system and deployment method
CN108694092A (en) * 2018-05-11 2018-10-23 华中科技大学 A kind of container communication means and system towards Parallel application

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10324696B2 (en) * 2016-03-28 2019-06-18 International Business Machines Corporation Dynamic container deployment with parallel conditional layers

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017045424A1 (en) * 2015-09-18 2017-03-23 乐视控股(北京)有限公司 Application program deployment system and deployment method
CN106227579A (en) * 2016-07-12 2016-12-14 深圳市中润四方信息技术有限公司 A kind of Docker container construction method and Docker manage control station
CN108694092A (en) * 2018-05-11 2018-10-23 华中科技大学 A kind of container communication means and system towards Parallel application

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KURTZER G M et al. Singularity: Scientific containers for mobility of compute. PLoS One, 2017-05-11, vol. 12, no. 5, pp. 1-20. *
HE Yufeng et al. Application of container technology in the mobile core network (容器技术在移动核心网的应用). Mobile Communications (《移动通信》), 2018, no. 3, pp. 27-32. *
LU Zhonghua et al. Design of a Slurm-based high-performance computing platform for deep learning and its scheduling implementation (基于Slurm的深度学习高性能计算平台设计及其调度实现技术). E-Science Technology & Application (《科研信息化技术与应用》), 2018-03-20, vol. 9, no. 2, pp. 40-45. *
LI Shengqiang et al. Construction of a high-performance cluster computing system (高性能集群计算系统的构建). Earthquake (《地震》), vol. 32, no. 1, pp. 144-149. *

Also Published As

Publication number Publication date
CN110543311A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110543311B (en) Mirror image construction method, device and storage medium
US20210349706A1 (en) Release lifecycle management system for multi-node application
US9965307B2 (en) Building virtual appliances
CN112416524A (en) Implementation method and device of cross-platform CI/CD (compact disc/compact disc) based on docker and kubernets offline
US9928062B2 (en) ISA-ported container images
US8972979B2 (en) Configuration of virtual appliances
US8914785B2 (en) Providing virtual appliance system firmware images
US9170797B2 (en) Automated deployment of an application in a computing platform
US20200034167A1 (en) Automatic application migration across virtualization environments
US9665356B2 (en) Configuration of an application in a computing platform
US20100205604A1 (en) Systems and methods for efficiently running multiple instances of multiple applications
CN111221541A (en) Cluster parallel program deployment method and device
CN111492347A (en) System and method for updating containers
WO2022016848A1 (en) Method and apparatus for performing application deployment according to service role
CN110007980B (en) Method and device for realizing multi-service server
US20190250960A1 (en) Method, apparatus, and server for managing image across cloud servers
US10203976B2 (en) Virtual appliance management in a virtualized computing environment based on operational modes associated with virtual appliance
CN117112122A (en) Cluster deployment method and device
CN114090171A (en) Virtual machine creation method, migration method and computer readable medium
US9411569B1 (en) System and method for providing a climate data analytic services application programming interface distribution package
CN116382713A (en) Method, system, device and storage medium for constructing application mirror image
CN115016862A (en) Kubernetes cluster-based software starting method, device, server and storage medium
Sekigawa et al. Web Application-Based WebAssembly Container Platform for Extreme Edge Computing
CN113806015B (en) Virtual routing network construction method and device based on ARM architecture
Abeni et al. Running repeatable and controlled virtual routing experiments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant