CN113687915B - Container running method, device, equipment and storage medium - Google Patents
Container running method, device, equipment and storage medium
- Publication number
- CN113687915B (application CN202110934711.6A)
- Authority
- CN
- China
- Prior art keywords
- container
- image file
- file
- target image
- file cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45562—Creating, deleting, cloning virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45566—Nested virtual machines
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiment of the application discloses a container operation method, a device, equipment and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: starting a file cache container, wherein the file cache container is started by a first container; storing a target image file to the file cache container in response to a nested container build instruction of the first container, the nested container build instruction being for instructing building of a second container in the first container, the target image file being for supporting operation of the second container; and responding to the running instruction of the second container, acquiring the target image file from the file cache container, and running the second container based on the target image file. In the embodiment of the application, the target image file can be directly acquired in the file cache container, so that the second container is operated based on the target image file, failure of operation of the second container caused by incapability of acquiring the target image file is avoided, and the success rate of operation of the container is improved.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for running a container.
Background
Currently, developers can run applications in any environment by packaging the applications together with their dependency packages into portable containers using the open-source application container engine (docker). Nested containers (docker-in-docker) refer to a technique in which a developer needs to build images or run other containers inside a given container during development.
When a container runs, it needs to run based on an image file. In the nested container scenario, the containers share the docker daemon (docker daemon) on the host, and due to container isolation, only the image files on the host where the docker daemon is located can be obtained.
However, during the running of a nested container, part of the image files in the nesting container need to be acquired; due to permission problems, these image files cannot be successfully acquired, so the nested container fails to run.
Disclosure of Invention
The embodiment of the application provides a container operation method, device, equipment and storage medium. The technical scheme is as follows:
in one aspect, embodiments of the present application provide a method of operating a container, the method comprising:
starting a file cache container, wherein the file cache container is started by a first container;
storing a target image file to the file cache container in response to a nested container build instruction of the first container, the nested container build instruction being for instructing building of a second container in the first container, the target image file being for supporting operation of the second container;
and responding to the running instruction of the second container, acquiring the target image file from the file cache container, and running the second container based on the target image file.
In another aspect, embodiments of the present application provide a container handling apparatus, the apparatus comprising:
the starting module is used for starting the file cache container, and the file cache container is started by the first container;
a storage module, configured to store a target image file to the file cache container in response to a nested container construction instruction of the first container, where the nested container construction instruction is configured to instruct construction of a second container in the first container, and the target image file is configured to support operation of the second container;
and the running module is used for responding to the running instruction of the second container, acquiring the target image file from the file cache container and running the second container based on the target image file.
In another aspect, embodiments of the present application provide a computer device comprising a processor and a memory; the memory stores at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement the container running method as described in the above aspect.
In another aspect, embodiments of the present application provide a computer readable storage medium having at least one computer program stored therein, the computer program being loaded and executed by a processor to implement a container running method as described in the above aspects.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to implement the container running method provided in various alternative implementations of the above aspects.
In the embodiment of the application, by starting the file cache container, when the second container is built in the first container, namely in the scene of the nested container, the target image file which is required by the second container to be operated and is positioned in the first container is stored in the file cache container, and when the second container is operated, the target image file can be directly acquired in the file cache container, so that the second container is operated based on the target image file, the failure of the operation of the second container caused by the failure of acquiring the target image file in the first container is avoided, and the success rate of the operation of the container is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an architecture diagram of a container deployment architecture provided in one exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method of operating a container provided in an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a method of operating a container provided in another exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a structure for obtaining a target image file according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a method of operating a container provided in another exemplary embodiment of the present application;
FIG. 6 is a flowchart of starting a file cache container provided in one exemplary embodiment of the present application;
FIG. 7 is a block diagram of a container handling apparatus according to an exemplary embodiment of the present application;
fig. 8 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As shown in fig. 1, docker is deployed in a client-server manner. The docker client 101 (docker client) is the docker command-line tool, which communicates with a docker daemon 102 (docker daemon) to send requests for container construction, running and the like to the designated docker daemon. The docker daemon 102 is used for receiving and processing requests from the docker client 101 and performing container management. When building an image, the docker client 101 sends an image build command (docker build) to the docker daemon 102, and the docker daemon 102 builds the image; when pulling an image, the docker client 101 sends a pull image command (docker pull) to the docker daemon 102, and the docker daemon 102 pulls the image from the repository 103; when running a container, the docker client 101 sends a run container command (docker run) to the docker daemon 102, and the docker daemon 102 runs the container (container) based on the image.
In the related art, the construction and running of docker containers may need to be performed inside a docker container (container), that is, nested containers exist. The nesting container and the nested container share the docker daemon of the host, and the host's docker daemon can only access the file system of the host, while the inside of a container is not visible to other containers. Taking fig. 1 as an example, the docker daemon in the host can only access the image file 105 on the host, not the image files in the docker container 104. When a container is built in the docker container 104, a nested container is generated; when the container in the docker container 104 is run, an image file in the docker container 104 may need to be acquired, but due to permission problems, the docker daemon in the host cannot successfully acquire the image file of the docker container 104, which causes the container to fail to run.
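For illustration, the following minimal shell sketch reproduces the situation described above; it is not part of the original disclosure, and the image names (docker:cli, busybox) and paths used are assumptions.

```bash
# Sketch of the nested-container permission problem (assumed image names and paths).
# The first container shares the host's docker daemon via the docker socket:
docker run -d --name=firstContainer \
  -v /var/run/docker.sock:/var/run/docker.sock docker:cli sleep 3600

# A file created inside firstContainer exists only in that container's filesystem:
docker exec firstContainer sh -c 'mkdir -p /workspace && echo test > /workspace/test.txt'

# If firstContainer now asks the shared (host) daemon to run a second container that
# mounts /workspace/test.txt, the daemon looks for that path on the host, not inside
# firstContainer, so the file cannot be found and the run fails:
docker exec firstContainer docker run --rm \
  -v /workspace/test.txt:/data/test.txt busybox cat /data/test.txt
```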
FIG. 2 illustrates a flow chart of a method of operating a container provided in an exemplary embodiment of the present application. This embodiment is described by taking as an example that the method is applied to a computer device installed with docker, and the method includes the following steps:
in step 201, a file cache container is started, the file cache container being started by a first container.
In the embodiment of the application, the first container refers to a container which is nested, that is, the construction and operation of the container are performed in the first container.
In one possible implementation manner, a container start command is sent to the docker daemon of the host through the docker client in the first container, and the docker daemon starts the file cache container according to the container start command, where the file cache container is used for storing image files newly built by the first container.
In the related art, the inside of the first container is invisible to other containers, so the image files in the first container cannot be acquired. Because the file cache container is started by the first container, the file cache container can be set to be accessible to other containers during startup, so that the image files stored in the file cache container can be acquired.
Step 202, storing a target image file to a file cache container in response to a nested container construction instruction of the first container, wherein the nested container construction instruction is used for instructing construction of the second container in the first container, and the target image file is used for supporting operation of the second container.
Optionally, the nested container construction instruction is used to instruct to construct a second container in the first container, i.e. in the embodiment of the present application, the second container refers to a nested container, which is located within the first container. And the management operation on the first container and the second container is executed through a docker daemon in the host.
In order to enable the second container to acquire the target image file in the first container in the operation process, in the embodiment of the invention, the target image file required by the second container to operate is stored in the file cache container, so that the second container can acquire the target image file conveniently, wherein the target image file is the image file in the first container.
Optionally, the file cache container corresponds to the first container, i.e. the file cache container is only used to store image files of a unique first container. Illustratively, when the second container is built in the first container A1, the target image file in the first container A1 is stored into the file cache container A2, and the file cache container A2 is started by the first container A1; when the second container is built in the first container B1, the target image file in the first container B1 is stored into the file cache container B2, and the file cache container B2 is started by the first container B1.
In step 203, in response to the running instruction of the second container, the target image file is obtained in the file cache container, and the second container is run based on the target image file.
Optionally, a running instruction is sent to the docker daemon when the second container is to be run. In this process, because the target image file has already been stored in the file cache container and the file cache container is accessible, the target image file required by the second container is acquired from the file cache container, and then the second container is run based on the target image file.
Optionally, the target image file is a full image file or a partial image file required to run the second container.
In the embodiment of the application, by starting the file cache container, when the second container is built in the first container, namely in the scene of the nested container, the target image file which is required by the second container to be operated and is positioned in the first container is stored in the file cache container, and when the second container is operated, the target image file can be directly acquired in the file cache container, so that the second container is operated based on the target image file, the failure of the operation of the second container caused by the failure of acquiring the target image file in the first container is avoided, and the success rate of the operation of the container is improved.
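As a minimal sketch of steps 201 to 203 (not a definitive implementation; the names fileStore, /workspace, test.txt and myharbor.catfile.test follow the examples given later in this description):

```bash
# Step 201: the first container starts the file cache container with a fixed
# name and an open volume path.
docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200

# Step 202: the target image file created in the first container is stored
# under the file cache container's open path.
docker cp /workspace/test.txt fileStore:/workspace

# Step 203: the second container is run with the file cache container's volume
# attached, so the target image file can be read at /workspace/test.txt.
docker run --volumes-from=fileStore --rm -d --name=test1 myharbor.catfile.test
```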
To ensure that the target image file in the file cache container is successfully obtained, in one possible implementation, start parameters are set for the file cache container when the file cache container is started, so that these start parameters can be used later when obtaining the target image file from the file cache container. This is described below with an exemplary embodiment.
Referring to fig. 3, a flow chart of a method of operating a container according to an exemplary embodiment of the present application is shown. This embodiment is described by taking as an example that the method is applied to a computer device installed with docker, and the method includes the following steps:
in step 301, a file cache container is started in response to a container start instruction of the first container, where the container start instruction includes a target name and a target path, the target name is used to identify the file cache container, and the target path is an open path of the file cache container.
In the embodiment of the application, the file cache container is started by a first container, where the container start instruction is initiated by the docker client in the first container. Because there may be multiple containers, in order to successfully obtain the target image file in the file cache container, an identifier needs to be set for the file cache container. Optionally, a fixed container name may be set for the file cache container as its unique identifier, that is, the container start instruction includes the target name, and the target name is used as the container name of the file cache container.
Optionally, the container startup instruction further includes a target path, which is used to indicate that the target path of the file cache container is open, that is, the access right of the target path of the file cache container is in an open state, and other containers may read the file in the file cache container based on the target path.
Illustratively, the container start instruction may be docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200, which includes the target name fileStore and the target path /workspace.
In step 302, in response to the nested container build instruction, the target image file is stored into the target path of the file cache container.
After the nested container construction instruction is received and the target image file is created by the first container, the target image file is stored under the target path of the file cache container, so that the target image file can subsequently be read based on the target path.
Optionally, the target image file in the first container may be copied under the target path of the file cache container by a docker cp command.
In combination with the above example, after the first container creates the target image file /workspace/test.txt, /workspace/test.txt is stored into the target path /workspace of the file cache container fileStore, i.e., docker cp /workspace/test.txt fileStore:/workspace.
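As a hypothetical check, not prescribed by this description, the copied file can be listed inside the file cache container:

```bash
# Hypothetical verification step (assumes the fileStore container started above).
docker exec fileStore ls -l /workspace
# test.txt should appear in the listing if the docker cp above succeeded
```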
In step 303, the file cache container is validated based on the target name in response to the execution instruction of the second container.
Optionally, the running instruction of the second container includes the target name of the file cache container. When the running instruction of the second container is received, the docker daemon confirms the file cache container according to the target name in the running instruction, and acquires the target image file in the file cache container whose container name is the target name.
Illustratively, the running instruction includes docker run --volumes-from=fileStore …, which indicates that the target image file is to be obtained from the file cache container fileStore.
Step 304, the target image file is read from the file cache container based on the target path.
When the file cache container is determined, the target image file is read in the file cache container, and in the reading process, the target image file can be read in the file cache container based on the target path because the target path of the file cache container is in an open state.
Illustratively, as shown in fig. 4, a container start instruction is initiated by the docker client 401 in the first container. After receiving the container start instruction, the docker daemon 402 in the host starts the file cache container fileStore 403 and sets the target path of the file cache container fileStore 403 to be visible to other containers. When the first container creates the target image file required by the second container 404, the target image file is stored into the file cache container fileStore 403. When receiving the running instruction for running the second container 404 sent by the docker client 401, the docker daemon 402 obtains the target image file from the file cache container fileStore 403 and mounts the target image file into the second container 404, thereby completing the acquisition of the target image file.
Step 305, running a second container based on the target image file.
In the process of operating the second container based on the target image file, the image file of the second container needs to be constructed first, and the target image file can be all image files required by the second container or part of image files. When the target image file is all image files required by the second container, the image file of the second container can be directly constructed based on the target image file, and then the second container is operated based on the constructed image file of the second container.
When the target image file is only a part of the image files required by the second container, the other required image files can be obtained from the file system of the host or from a container repository. The image file of the second container is then constructed based on all the required image files, and the second container is run based on that image file.
Illustratively, the image file of the second container may be constructed by a docker build command, e.g., by the command docker build -t myharbor.catfile.test . After the image file of the second container is constructed, the second container is run based on the image file, i.e., docker run --volumes-from=fileStore --rm -d --name=test1 -v /var/run/docker.sock:/var/run/docker.sock myharbor.catfile.test.
Optionally, after the image file of the second container is constructed, the image file of the second container may be stored into a repository, so as to facilitate subsequent acquisition and use. Illustratively, the image myharbor.catfile.test is stored to the repository via the docker push myharbor.catfile.test command.
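A minimal, assumed build-and-push sequence is sketched below; the image name myharbor.catfile.test follows the example above, while the Dockerfile contents are purely illustrative and not part of this description.

```bash
# Illustrative Dockerfile for the second container's image: the target image
# file is expected to arrive at run time via --volumes-from, so it is not
# copied into the image here.
cat > Dockerfile <<'EOF'
FROM busybox
CMD ["cat", "/workspace/test.txt"]
EOF

docker build -t myharbor.catfile.test .   # build the second container's image
docker push myharbor.catfile.test         # store the image into the repository
```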
Step 306, obtaining a container image file, wherein the container image file is printed by the second container in the running process.
Because abnormal conditions may exist, such as the file cache container becoming abnormal or the target image file not being successfully stored into the file cache container, the target image file may not be successfully obtained when the second container runs, i.e., mounting of the target image file may fail. Thus, in one possible implementation, the container image file is obtained, and whether the target image file is successfully mounted is determined according to the container image file.
Optionally, the container image file is a file printed out by the second container while it runs based on the constructed image file, and this file contains all the image files used during the running of the second container.
In step 307, in response to the container image file containing the target image file, it is determined that the target image file is successfully mounted.
After the container image file is obtained, whether the container image file contains the target image file or not can be detected, if so, the target image file is successfully mounted, namely, the second container can normally operate.
And when the container image file does not contain the target image file, indicating that the target image file is failed to mount, and displaying error information for prompting that the target image file is not mounted successfully.
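One possible, hedged way to perform this check from the host side is sketched below; the description does not prescribe a specific command, and the log format is assumed.

```bash
# Assumed check: if the second container prints the image files it sees while
# running (the "container image file" above), grep its output for the target file.
if docker logs test1 2>&1 | grep -q "test.txt"; then
  echo "target image file mounted successfully"
else
  echo "error: target image file was not mounted" >&2
fi
```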
Step 308, stop and delete file cache container.
Because the file cache container continuously occupies CPU resources in the running process, in order to reduce the occupation of the resources, the file cache container stops running after the second container is run based on the target image file, so that the continuous occupation of the CPU resources is avoided.
Alternatively, the file cache container may be stopped by a docker stop command. In connection with the above example, the running of the file cache container fileStore can be stopped by sending a docker stop fileStore command to the docker daemon.
In addition, the file cache container occupies a certain amount of storage space, so in one possible implementation, after the file cache container stops running, the file cache container may be deleted, preventing the file cache container from being retained and wasting storage space. Alternatively, a target parameter may be set in the container start instruction to indicate that the file cache container is automatically deleted when it stops running. The target parameter may be the --rm parameter; that is, the container start instruction docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200 shown above includes the --rm parameter, so the file cache container is deleted when it stops running.
In one possible embodiment, the container operation flow is as shown in fig. 5:
step 501, a first container is run.
Step 502, a file cache container is started in response to a container start instruction of a first container.
Alternatively, the container start instruction may be docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200.
In step 503, the first container creates a target image file.
I.e., echo "test" >> /workspace/test.txt.
Step 504, storing the target image file in a file cache container.
I.e., docker cp /workspace/test.txt fileStore:/workspace.
In step 505, in response to the running instruction of the second container, the target image file is obtained in the file cache container, and the second container is run based on the target image file.
The running instruction may be: docker run --volumes-from=fileStore --rm -d --name=test1 -v /var/run/docker.sock:/var/run/docker.sock myharbor.catfile.test.
Step 506, determining whether the container image file contains the target image file, if yes, executing step 507, and if not, executing step 508.
In step 507, the object image file is successfully mounted.
Step 508, the target image file fails to mount.
Step 509, the file cache container is stopped.
I.e. docker stop fileStore.
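Putting steps 501 to 509 together, a minimal end-to-end sketch (run from inside the first container, names taken from the examples above, no error handling) could look like:

```bash
#!/bin/sh
docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200  # step 502
echo "test" >> /workspace/test.txt                                                 # step 503
docker cp /workspace/test.txt fileStore:/workspace                                 # step 504
docker run --volumes-from=fileStore --rm -d --name=test1 \
  -v /var/run/docker.sock:/var/run/docker.sock myharbor.catfile.test               # step 505
docker logs test1 2>&1 | grep -q "test.txt" \
  && echo "mount succeeded" || echo "mount failed"                                 # steps 506-508
docker stop fileStore                                                              # step 509
```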
In this embodiment, a fixed target name is set for the file cache container, and a target path of the file cache container is opened, and when the target image file is stored in the file cache container, the file cache container can be determined based on the target name, and the target image file is read from the file cache container based on the target path, so that successful mounting of the target image file is ensured.
In this embodiment, after the second container is operated based on the target image file, the operation of the file cache container is stopped, and the file cache container is further deleted, so that the continuous occupation of the file cache container to the CPU resource and the storage resource is avoided, unnecessary resource waste is caused, and the resource utilization rate is improved.
In one possible application scenario, the first container may be stopped unexpectedly; when this happens, the file cache container stays in a running state and keeps occupying CPU resources, causing resource waste. Thus, to avoid the file cache container running continuously due to an abnormal cause, in one possible implementation, a running time threshold is set for the file cache container when the file cache container is started; optionally, the running time threshold is set in the container start instruction.
Optionally, stopping and deleting the file cache container in response to the runtime of the file cache container reaching a runtime threshold. When the time reaches the running time threshold value, the running of the file cache container is stopped. After the file cache container stops, it will be further deleted.
Illustratively, the container start instruction docker run --rm -d --name=fileStore -v /workspace --entrypoint=sleep busybox 7200 sets the running time threshold to 7200 s, that is, after the running time of the file cache container fileStore reaches 2 hours, the file cache container fileStore stops running, and since the instruction contains the --rm parameter, the file cache container is deleted after it stops running.
In this embodiment, by setting the running time threshold for the file cache container, when the running time reaches the running time threshold, the running of the file cache container is automatically stopped, so that the file cache container is prevented from running continuously due to abnormality, and unnecessary resource waste is avoided.
In the related art, since the container name of the first container is set randomly and the first container is invisible to other containers, the target image file in the first container cannot be obtained. When the right to modify the start parameters of the first container is available, the container name of the first container can be modified, and a path of the first container can be opened and set to be visible to other containers. Therefore, before the file cache container is started, it is determined whether the start parameters of the first container can be modified, which may include the following steps:
step one, in response to a modification instruction of a first container starting parameter, modifying the name of the first container to be a target name and opening a target path of the first container.
Optionally, the first container starting parameter is a container name of the first container and an open path of the first container. When a modification instruction of the first container starting parameter is received, modifying the container name of the first container into a fixed target name; and opening the target path of the first container, so that other containers can read the target image file in the first container based on the target path.
And step two, starting the file cache container in response to failure of modification of the first container starting parameters.
When the modification right of the first container starting parameter is not provided, the first container starting parameter fails to be modified, namely the container name of the first container cannot be modified and the target path of the first container cannot be set to be visible to other containers, at this time, a file caching container is required to be started, and the second container is enabled to successfully mount the target image file by using the file caching container.
In this embodiment, when the modification of the first container starting parameter fails, the file cache container is started, and when the modification of the first container starting parameter succeeds, the target image file can be directly obtained in the first container, so that the file cache container is not required to be started, and the occupation of resources by the file cache container is reduced.
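A hedged sketch of the successful-modification case follows; the image name some-ci-image is an assumption, and the mechanism shown (an open volume consumed via --volumes-from) is only one way the opened target path could be exposed to other containers.

```bash
# If the first container's start parameters can be modified, it can itself be
# given a fixed name and an open volume path (assumed image name some-ci-image):
docker run -d --name=firstContainer -v /workspace \
  -v /var/run/docker.sock:/var/run/docker.sock some-ci-image

# The second container can then read the target image file directly from the
# first container, so no file cache container is started:
docker run --volumes-from=firstContainer --rm -d --name=test1 myharbor.catfile.test
```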
In the above embodiment, the first container and the second container share the docker daemon in the host; since the docker daemon in the host can only access the file system of the host and cannot access the image files in the first container, the file cache container is set up to obtain the target image file in the first container so as to support the running of the second container. In another possible implementation manner, a docker daemon may be built in the first container, and the docker daemon in the first container can directly obtain the target image file in the first container without building a file cache container. This is described below with exemplary embodiments. In combination with the above embodiment, starting the file cache container may include the following steps:
in step 601, the operating environment of the first container is determined.
Alternatively, the container may run in a Kubernetes cluster, or may run in a local environment. A Kubernetes (K8s) cluster is a portable, scalable open-source platform for managing containerized workloads and services, and multiple containers are deployed in a K8s cluster. The permissions of the first container differ between running in a K8s cluster and running in a local environment, and the influence of a newly built docker daemon on other containers also differs, so the running environment of the first container needs to be determined first, i.e., whether the first container runs in a local environment or in a K8s cluster.
In step 602, a file cache container is started in response to a first container running in a Kubernetes cluster.
When the container runs in a K8s cluster, if a docker daemon is newly built in the first container, it may affect other containers, and permissions need to be requested from the management nodes of the K8s cluster; this process is cumbersome, and when the permissions cannot be obtained, the docker daemon cannot be newly built, which affects the mounting of the target image file. Therefore, when the container runs in a K8s cluster, by starting the file cache container, the target image file in the first container can be stored into the file cache container, so that the second container can successfully mount the target image file.
In response to the first container running locally, a container daemon is established in the first container for obtaining the target image file in the first container, step 603.
When the first container runs locally, there is no influence on other containers, and a docker daemon can be directly built in the first container through docker run --privileged. When a nested container construction instruction of the first container is received, the docker daemon in the first container constructs the second container in the first container; when a running instruction of the second container is received, the target image file is read from the first container, and the second container is run based on the target image file, so that a file cache container does not need to be constructed.
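A hedged sketch of the local case is given below; the docker:dind image is one common way to run a daemon inside a container and is an assumption here, not something named by this description.

```bash
# The first container is started privileged and runs its own docker daemon:
docker run -d --privileged --name=firstContainer docker:dind

# Commands issued inside firstContainer go to its own daemon, which can see the
# first container's filesystem, so the second container runs without a file
# cache container (the inner image is pulled by the inner daemon):
docker exec firstContainer docker run --rm busybox echo "second container runs on the inner daemon"
```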
Fig. 7 is a block diagram of a container handling apparatus according to an exemplary embodiment of the present application, the apparatus comprising:
a starting module 701, configured to start a file cache container, where the file cache container is started by a first container;
a storage module 702, configured to store, in response to a nested container construction instruction of the first container, a target image file to the file cache container, where the nested container construction instruction is configured to instruct construction of a second container in the first container, and the target image file is configured to support operation of the second container;
and the running module 703 is configured to obtain the target image file from the file cache container in response to a running instruction of the second container, and run the second container based on the target image file.
Optionally, the starting module 701 is further configured to:
the method comprises the steps of responding to a container starting instruction of a first container, starting the file cache container, wherein the container starting instruction comprises a target name and a target path, the target name is used for identifying the file cache container, and the target path is an open path of the file cache container;
the storage module 702 is further configured to:
and responding to the nested container construction instruction, and storing the target image file into the target path of the file cache container.
Optionally, the running module 703 includes:
a confirmation unit configured to confirm the file cache container based on the target name in response to an operation instruction of the second container;
the reading unit is used for reading the target mirror image file from the file cache container based on the target path;
and the operation unit is used for operating the second container based on the target image file.
Optionally, the apparatus further includes:
the acquisition module is used for acquiring a container mirror image file, wherein the container mirror image file is printed by the second container in the operation process;
and the determining module is used for determining that the target image file is successfully mounted in response to the fact that the target image file is contained in the container image file.
Optionally, the apparatus further includes:
and the first deleting module is used for stopping and deleting the file cache container.
Optionally, the container start instruction includes a runtime threshold;
the apparatus further comprises:
and the second deleting module is used for stopping and deleting the file cache container in response to the running time of the file cache container reaching the running time threshold.
Optionally, the apparatus further includes:
the modification module is used for responding to a modification instruction of a first container starting parameter, modifying the name of the first container into a target name and opening a target path of the first container;
optionally, the starting module 701 is further configured to:
and starting the file cache container in response to the failure of the modification of the first container starting parameter.
Optionally, the starting module 701 includes:
a determining unit configured to determine an operating environment of the first container;
the starting unit is used for responding to the first container to run in the Kubernetes cluster and starting the file cache container;
optionally, the apparatus further includes:
and the establishing module is used for responding to the first container running locally and establishing a container daemon in the first container, wherein the container daemon is used for acquiring the target image file in the first container.
In the embodiment of the application, by starting the file cache container, when the second container is built in the first container, namely in the scene of the nested container, the target image file which is required by the second container to be operated and is positioned in the first container is stored in the file cache container, and when the second container is operated, the target image file can be directly acquired in the file cache container, so that the second container is operated based on the target image file, the failure of the operation of the second container caused by the failure of acquiring the target image file is avoided, and the success rate of the operation of the container is improved.
It should be noted that: the apparatus provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the method embodiments are described in the method embodiments, which are not repeated herein.
Referring to fig. 8, a schematic structural diagram of a computer device according to an exemplary embodiment of the present application is shown. Specifically, the computer device 800 includes a central processing unit (Central Processing Unit, CPU) 801, a system memory 804 including a random access memory 802 and a read only memory 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The computer device 800 also includes a basic Input/Output system (I/O system) 806 for facilitating the transfer of information between the various devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse, keyboard, or the like, for user input of information. Wherein the display 808 and the input device 809 are connected to the central processing unit 801 via an input output controller 810 connected to the system bus 805. The basic input/output system 806 can also include an input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the computer device 800. That is, the mass storage device 807 may include a computer readable medium (not shown), such as a hard disk or drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes random access Memory (RAM, random Access Memory), read Only Memory (ROM), flash Memory or other solid state Memory technology, compact disk (CD-ROM), digital versatile disk (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the one described above. The system memory 804 and mass storage device 807 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 801, the one or more programs containing instructions for implementing the methods described above, the central processing unit 801 executing the one or more programs to implement the methods provided by the various method embodiments described above.
According to various embodiments of the present application, the computer device 800 may also operate by being connected to a remote computer on a network, such as the Internet. I.e., the computer device 800 may be connected to a network 812 through a network interface unit 811 connected to the system bus 805, or other types of networks or remote computer systems (not shown) may be connected to the system using the network interface unit 811.
The memory also includes one or more programs stored in the memory, the one or more programs including steps for performing the methods provided by the embodiments of the present application, as performed by the computer device.
Embodiments of the present application also provide a computer readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the container running method of the above embodiments.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the container running method provided in the various alternative implementations of the above aspects.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium. Computer-readable storage media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description of the preferred embodiments is merely exemplary in nature and is in no way intended to limit the invention, since it is intended that all modifications, equivalents, improvements, etc. that fall within the spirit and scope of the invention.
Claims (11)
1. A method of operating a container, the method comprising:
starting a file cache container, wherein the file cache container is started by a first container, and the file cache container is set to be accessible by other containers in the starting process;
storing a target image file in the first container to the file cache container in response to a nested container construction instruction of the first container, wherein the nested container construction instruction is used for instructing construction of a second container in the first container, and the target image file is used for supporting operation of the second container;
and responding to an operation instruction for operating the second container sent by the first container, acquiring the target image file from the file cache container, and operating the second container based on the target image file.
2. The method of claim 1, wherein the enabling a file cache container comprises:
the method comprises the steps of responding to a container starting instruction of a first container, starting the file cache container, wherein the container starting instruction comprises a target name and a target path, the target name is used for identifying the file cache container, and the target path is an open path of the file cache container;
the storing, in response to the nested container construction instruction of the first container, the target image file in the first container to the file cache container includes:
and responding to the nested container construction instruction, and storing the target image file in the first container into the target path of the file cache container.
3. The method of claim 2, wherein the obtaining the target image file in the file cache container and running the second container based on the target image file in response to the running instruction sent by the first container to run the second container comprises:
responding to an operation instruction sent by the first container for operating the second container, and confirming the file cache container based on the target name;
reading the target image file from the file cache container based on the target path;
and operating the second container based on the target image file.
4. A method according to any one of claims 1 to 3, wherein, in response to the execution instruction sent by the first container to execute the second container, after the target image file is obtained in the file cache container and the second container is executed based on the target image file, the method further comprises:
obtaining a container image file, wherein the container image file is printed by the second container in the operation process;
and determining that the target image file is successfully mounted in response to the target image file contained in the container image file.
5. A method according to any one of claims 1 to 3, wherein, in response to the execution instruction sent by the first container to execute the second container, after the target image file is obtained in the file cache container and the second container is executed based on the target image file, the method further comprises:
and stopping and deleting the file cache container.
6. The method of claim 2, wherein the container start-up instruction includes a runtime threshold;
the method further comprises the steps of:
and stopping and deleting the file cache container in response to the running time of the file cache container reaching the running time threshold.
7. A method according to any one of claims 1 to 3, wherein prior to said enabling of the file cache container, the method further comprises:
responding to a modification instruction of a first container starting parameter, modifying the name of the first container as a target name and opening a target path of the first container;
the starting file cache container comprises:
and starting the file cache container in response to the failure of the modification of the first container starting parameter.
8. A method according to any one of claims 1 to 3, wherein the enabling of the file cache container comprises:
determining an operating environment of the first container;
starting the file cache container in response to the first container running in a Kubernetes cluster;
the method further comprises the steps of:
and in response to the first container running locally, establishing a container daemon in the first container, the container daemon being used for acquiring the target image file in the first container.
9. A container handling apparatus, the apparatus comprising:
the starting module is used for starting the file cache container, the file cache container is started by the first container, and the file cache container is set to be accessible by other containers in the starting process;
a storage module, configured to store, in response to a nested container construction instruction of the first container, a target image file in the first container to the file cache container, where the nested container construction instruction is configured to instruct construction of a second container in the first container, and the target image file is configured to support operation of the second container;
and the operation module is used for responding to an operation instruction sent by the first container for operating the second container, acquiring the target image file from the file cache container and operating the second container based on the target image file.
10. A computer device, the computer device comprising a processor and a memory; the memory stores at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement the container running method of any one of claims 1 to 8.
11. A computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement the container running method of any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110934711.6A CN113687915B (en) | 2021-08-16 | 2021-08-16 | Container running method, device, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110934711.6A CN113687915B (en) | 2021-08-16 | 2021-08-16 | Container running method, device, equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113687915A CN113687915A (en) | 2021-11-23 |
| CN113687915B true CN113687915B (en) | 2023-07-21 |
Family
ID=78580257
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110934711.6A Active CN113687915B (en) | 2021-08-16 | 2021-08-16 | Container running method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113687915B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120315812B (en) * | 2025-06-11 | 2025-12-09 | 北京火山引擎科技有限公司 | Container safety creation method, medium, equipment and product in large model scene |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103368807A (en) * | 2012-04-05 | 2013-10-23 | 思科技术公司 | System and method for migrating application virtual machines in a network environment |
| CN110688174A (en) * | 2019-09-30 | 2020-01-14 | 李福帮 | Container starting method, storage medium and electronic device |
| CN110716980A (en) * | 2018-06-27 | 2020-01-21 | 上海掌颐网络科技有限公司 | Virtual coverage management consensus block chain operating system based on nested container |
| US10749980B1 (en) * | 2016-12-21 | 2020-08-18 | EMC IP Holding Company LLC | Autonomous storage device and methods for distributing content |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10599463B2 (en) * | 2018-03-28 | 2020-03-24 | Nutanix, Inc. | System and method for creating virtual machines from containers |
-
2021
- 2021-08-16 CN CN202110934711.6A patent/CN113687915B/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103368807A (en) * | 2012-04-05 | 2013-10-23 | 思科技术公司 | System and method for migrating application virtual machines in a network environment |
| US10749980B1 (en) * | 2016-12-21 | 2020-08-18 | EMC IP Holding Company LLC | Autonomous storage device and methods for distributing content |
| CN110716980A (en) * | 2018-06-27 | 2020-01-21 | 上海掌颐网络科技有限公司 | Virtual coverage management consensus block chain operating system based on nested container |
| CN110688174A (en) * | 2019-09-30 | 2020-01-14 | 李福帮 | Container starting method, storage medium and electronic device |
Non-Patent Citations (2)
| Title |
|---|
| Docker Container Security in Cloud Computing; Kelly Brady et al.; 2020 10th Annual Computing and Communication Workshop and Conference (CCWC); pp. 975-980 * |
| 基于Docker-Swarm的微服务管理技术研究与实现 (Research and Implementation of Microservice Management Technology Based on Docker-Swarm); 吴杰楚; China Master's Theses Full-text Database, Information Science & Technology; pp. I139-162 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113687915A (en) | 2021-11-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3944082A1 (en) | Extending the kubernetes api in-process | |
| US8966318B1 (en) | Method to validate availability of applications within a backup image | |
| CN113296792B (en) | Storage method, device, equipment, storage medium and system | |
| US20150067167A1 (en) | Hot pluggable extensions for access management system | |
| US10310900B2 (en) | Operating programs on a computer cluster | |
| CN109614167B (en) | Method and system for managing plug-ins | |
| CN111464603B (en) | Server capacity expansion method and system | |
| CN111506388B (en) | Container performance detection method, container management platform and computer storage medium | |
| WO2016116013A1 (en) | Software upgrade method and system | |
| US8473702B2 (en) | Information processing apparatus, execution environment transferring method and program thereof | |
| CN117827365A (en) | Port allocation method, device, equipment, medium and product for application container | |
| CN119226094A (en) | Database monitoring automated deployment method, device, equipment and medium | |
| CN113687915B (en) | Container running method, device, equipment and storage medium | |
| US20240160425A1 (en) | Deployment of management features using containerized service on management device and application thereof | |
| CN111241540A (en) | Service processing method and device | |
| CN114356214B (en) | Method and system for providing local storage volume for kubernetes system | |
| CN116339920B (en) | Information processing method, device, equipment and medium based on cloud platform | |
| CN114816481B (en) | A method, device, equipment and storage medium for batch upgrading of firmware | |
| CN113448609B (en) | Container upgrading method, device, equipment and storage medium | |
| CN117032818A (en) | A basic input and output system BIOS option configuration method and device | |
| JPH09319720A (en) | Distributed process management system | |
| JPH11232233A (en) | Network computer management method and network computer system | |
| CN120762734B (en) | Application release control method, system and server | |
| CN113900765A (en) | Cloud application startup method, device, terminal, and computer-readable storage medium | |
| CN119621232B (en) | Non-container application arranging method and device based on Kubernetes CRI plug-in |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |