CN116719604A - Container migration method and device, storage medium and electronic equipment

Container migration method and device, storage medium and electronic equipment

Info

Publication number
CN116719604A
Authority
CN
China
Prior art keywords
migration
target container
target
memory
memory data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310610048.3A
Other languages
Chinese (zh)
Inventor
曹玲玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202310610048.3A
Publication of CN116719604A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45583 Memory management, e.g. access or allocation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides a container migration method and device, a storage medium and electronic equipment. The method comprises the following steps: in response to a migration request for migrating a target container group of a source end to a destination end, acquiring memory data of the target container group; storing the memory data of the target container group in the form of files to obtain memory data to be migrated; and synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism to complete migration of the target container group from the source end to the destination end, wherein in each i-th stage of the synchronous migration, whether a target migration operation is executed on the memory data to be migrated corresponding to the i-th stage is determined according to whether a target container in the target container group meets a preset migration stopping condition, where 1 ≤ i ≤ N and i is a positive integer. The application solves the technical problem in the related art that the shutdown migration time cannot be effectively reduced during container migration, which leads to a large number of unnecessary iterative migrations, thereby effectively improving container migration efficiency.

Description

Container migration method and device, storage medium and electronic equipment
Technical Field
Embodiments of the present application relate to the field of computers, and in particular, to a container migration method and apparatus, a storage medium, and an electronic device.
Background
Service migration is one of the most common operations of cloud service providers and plays an important role in host maintenance, load balancing, service upgrade, operation and maintenance, and the like. Currently, as containers are used more and more widely, they have gradually replaced virtual machines as the carriers for providing resources and services.
A container is a set of independent processes on a host with its own namespaces and system resources. Because the data in a running container's memory changes continuously, if file contents are modified, the corresponding files also need to be updated. It can be understood that, during migration, the pre-copy mechanism first transfers all memory pages, then iteratively copies the dirty pages modified in the previous iteration, transfers the dirty page data to the target node, and rewrites the outdated memory pages to update the memory state. Although the pre-copy mechanism presets a memory page threshold for shutdown migration, so that once the dirty page data amount of an iteration of the application program reaches the preset threshold the amount of memory page data transferred during shutdown migration is limited within the threshold range, for applications whose memory changes rapidly the dirty page data may never reach the preset threshold, so the iteration process continues and can never stop. Therefore, how to reduce the shutdown migration time is one of the key points of container migration research.
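For illustration only (this sketch is not part of the application's solution; the helper names and the page threshold are assumptions), the basic pre-copy loop described above can be written as follows, which also shows why the iteration may never stop for rapidly changing memory:

```python
# Minimal sketch of a plain pre-copy loop (illustrative; not the method claimed here).
# get_all_pages() and get_dirty_pages() stand in for a real memory-tracking backend.

def naive_pre_copy(transfer, get_all_pages, get_dirty_pages, stop_threshold_pages=64):
    transfer(get_all_pages())            # round 1: copy the complete memory image
    while True:
        dirty = get_dirty_pages()        # pages modified during the previous round
        if len(dirty) < stop_threshold_pages:
            return dirty                 # small enough: stop the container, copy the rest
        transfer(dirty)                  # if pages are dirtied faster than they can be
                                         # copied, this branch is taken forever
```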
Disclosure of Invention
The embodiment of the application provides a container migration method and device, a storage medium and electronic equipment, which at least solve the technical problem in the related art that the shutdown migration time cannot be reduced during container migration, so that a large number of unnecessary iterative migrations are executed.
According to one embodiment of the present application, there is provided a container migration method, including: in response to a migration request for migrating a target container group of a source end to a destination end, acquiring memory data of at least one target container in the target container group; determining memory data to be migrated of the target container group based on the memory data of the at least one target container; and synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism, wherein the synchronous migration comprises a plurality of stages, and in each i-th stage, whether a target migration operation is executed on the memory data to be migrated corresponding to the i-th stage is determined according to whether at least one target container in the target container group meets a preset migration stopping condition, where 1 ≤ i ≤ N and i is a positive integer.
Optionally, obtaining memory data of at least one target container in the target container group includes: for each target container in the target container group, acquiring process information of at least one process group in the target container, wherein the process information comprises at least one of the following: a process group ID, a parent process ID, a container ID, and the process state of each child process; determining a process node of the target container based on the process information of the target container, and determining node information of the process node; constructing a process tree of the target container based on the process nodes and the node information, and performing a corresponding backup operation on the node information of the process nodes in the process tree to obtain memory backup data of the target container, wherein the memory backup data comprises at least one of the following: file description data recorded by a Pagemap file, memory mapping data recorded by a Smaps file and/or a map_files file, and kernel data recorded by the Ptrace SEIZE tool; and determining memory data of each target container in the target container group based on the memory backup data.
Optionally, determining the memory data to be migrated corresponding to the ith stage includes: when i is equal to 1, the memory data to be migrated comprises memory data of each target container of the source end before synchronous migration starts; when i is not equal to 1, the memory data to be migrated comprises the memory dirty page data generated by each target container in the migration process of the i-1 stage.
Optionally, the data size of the memory data to be migrated in the i-th stage of the migration from the source end to the destination end and the migration time decrease with the increase of the i value until i=n, and no new memory dirty page data is generated any more.
Optionally, the target migration operation includes a migration stopping operation or an iterative migration operation. Synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism to complete migration of the target container group from the source end to the destination end includes: in the case that the target container group includes only one target container, determining, according to whether the target container reaches a first migration stopping condition in the i-th stage, to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage, and completing migration of the target container group from the source end to the destination end, wherein the first migration stopping condition includes at least one of the following: the first data amount of the memory dirty page data of the target container in the i-th stage is smaller than a preset first amount threshold, and the first number of iterative migrations executed by the target container in the i-th stage reaches (is not smaller than) a preset first count threshold; in the case that the target container group includes a plurality of target containers, determining, according to whether each target container reaches a second migration stopping condition in the i-th stage, to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage, and completing migration of the target container group from the source end to the destination end, wherein the second migration stopping condition includes at least one of the following: the second data amount of the memory dirty page data of the plurality of target containers in the target container group in the i-th stage is smaller than a preset second amount threshold, and the first number of iterative migrations executed by each target container in the target container group in the i-th stage reaches (is not smaller than) the first count threshold.
Optionally, when only one target container is included in the target container group, determining, according to whether the target container reaches the first migration stopping condition in the ith stage, to execute the target migration operation on the memory data to be migrated corresponding to the ith stage, including: judging whether the target container reaches a first migration stopping condition in the ith stage, wherein when the target container reaches the first migration stopping condition in the ith stage, determining to execute migration stopping operation on memory data to be migrated corresponding to the ith stage; and when the target container does not reach the first migration stopping condition in the ith stage, determining to continue to execute the (i+1) th round of iterative migration on the memory data to be migrated corresponding to the ith stage until the target container reaches the migration stopping condition.
Optionally, when the target container group includes a plurality of target containers, determining, according to whether the plurality of target containers reach the second migration stopping condition in the i-th stage, to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage includes: judging whether the plurality of target containers in the target container group reach the second migration stopping condition in the i-th stage, wherein when the plurality of target containers in the target container group reach the second migration stopping condition in the i-th stage, it is determined to execute the migration stopping operation on the memory data to be migrated corresponding to the i-th stage; and when the plurality of target containers in the target container group do not reach the second migration stopping condition in the i-th stage, continuing to perform the (i+1)-th round of iterative migration on the memory data to be migrated corresponding to the i-th stage until each target container in the target container group reaches the migration stopping condition.
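As an illustration of the stop-condition checks described above (the dataclass, the thresholds, and the way dirty data is counted are assumptions for readability, not definitions from the application), a sketch in Python:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of the first and second migration stopping conditions.

@dataclass
class ContainerStageState:
    dirty_bytes: int       # amount of dirty memory data produced in stage i
    iterations_done: int   # iterative migrations already executed for this container

def single_container_should_stop(c: ContainerStageState,
                                 first_amount_threshold: int,
                                 first_count_threshold: int) -> bool:
    # First migration stopping condition: dirty data small enough,
    # or the maximum number of iterations has been reached.
    return (c.dirty_bytes < first_amount_threshold
            or c.iterations_done >= first_count_threshold)

def group_should_stop(group: List[ContainerStageState],
                      second_amount_threshold: int,
                      first_count_threshold: int) -> bool:
    # Second migration stopping condition: total dirty data of the group is
    # small enough, or every container has reached the iteration limit.
    total_dirty = sum(c.dirty_bytes for c in group)
    all_at_limit = all(c.iterations_done >= first_count_threshold for c in group)
    return total_dirty < second_amount_threshold or all_at_limit
```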
According to another embodiment of the present application, there is provided a container transfer device including: the acquisition module is used for responding to a migration request for migrating the target container group of the source end to the destination end and acquiring memory data of at least one target container in the target container group; the determining module is used for determining memory data to be migrated of the target container group based on the memory data of at least one target container; the migration module is used for synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism, and completing migration of the target container group from the source end to the destination end, wherein the synchronous migration comprises a plurality of stages, and in each ith stage, according to whether at least one target container in the target container group meets the preset migration stopping condition, the target migration operation is determined to be executed on the memory data to be migrated corresponding to the ith stage, i is more than or equal to 1 and less than or equal to N, and i is a positive integer.
According to a further embodiment of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above embodiments of the container migration method when run.
According to a further embodiment of the application there is also provided an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above embodiments of the container migration method.
In the above process, considering that the memory state of a container changes continuously while the container provides services, directly migrating all memory data to the target node of the destination end may cause a long downtime. Therefore, in the embodiment of the application, the memory data are stored in the form of files, which ensures normal operation of the container; in addition, the memory data to be migrated are synchronously migrated multiple times through the pre-copy mechanism, so that the target container group is migrated from the source end to the destination end. In this process, by checking the dirty data amount of the containers and the maximum iteration count after each round of iterative migration, the shutdown migration time is significantly reduced and an iteration process that never stops is avoided, which solves the technical problem in the related art that the shutdown migration time cannot be effectively reduced during container migration, so that a large number of unnecessary iterative migrations are executed, thereby effectively improving container migration efficiency.
Drawings
FIG. 1 is a schematic diagram of the hardware architecture of an alternative mobile terminal for a container migration method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative container migration method according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative memory data retrieval according to an embodiment of the application;
FIG. 4 is a flow chart of another alternative container migration method according to an embodiment of the present application;
FIG. 5 is a schematic illustration of an alternative container transfer device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims and drawings of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the related information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or institution, before acquiring the relevant information, the system needs to send an acquisition request to the user or institution through the interface, and acquire the relevant information after receiving the consent information fed back by the user or institution.
In order to better understand the embodiments of the present application, technical terms related to the embodiments of the present application are explained as follows:
a container: an operating system level virtualization approach, which implements isolation between containers by means of namespaces (namespaces) and control groups (cgroups) of Linux, provides an independent running environment. Compared with a virtual machine, the container has no independent OS and can directly run on a host machine; in addition, the container does not have a Hypervisor layer, and fewer abstraction layers. Therefore, compared with a virtual machine, the container has the advantages of quick start and stop, less memory occupation, high performance and the like, and is more suitable for serving as a service providing carrier of the existing cloud computing.
CRIU (Checkpoint/Restore in Userspace): it is a process checkpoint and restoration tool implemented in user space. By using the CRIU, a running process can be saved to disk and restarted when needed without affecting its state or data. This allows easy migration, troubleshooting, and debugging operations, and also improves system availability and flexibility.
Dock: an open-source containerized platform may assist developers in running and deploying applications on different operating systems. It uses lightweight, portable containers to package applications and all their dependencies and provides a unified interface to manage these containers. The goal of Docker is to achieve a cross-platform, fast build, efficient deployment, and easy-to-manage application environment.
Docker Daemon: is a daemon of the Docker and is responsible for managing and controlling the lifecycle of the Docker container. It works with the Docker Engine, receives instructions from the Docker CLI (command line interface), and creates, starts, stops, deletes containers, etc. based on these instructions. Meanwhile, the Docker Daemon is also responsible for maintaining a mirror image library and processing tasks such as network connection and the like. On the Linux system, the Docker Daemon runs in the background and provides services through Unix sockets (Unix sockets) or TCP/IP ports.
Smaps documents: a file provided by a Linux kernel records memory information used by a process. Each process has its own Smaps file containing details of the virtual address space, physical address space, and shared library used by the process. By looking at the Smaps file, the process memory occupation situation can be better known, and the program performance and debugging problems are optimized.
Pagemap: a data format for describing the structure of a web page document, which is typically represented in JSON. Pagemap contains location and attribute information about different parts of the web page (e.g., title, text, picture, etc.), which can help the search engine better understand the page content and provide more accurate search results.
Pod: the smallest deployable unit in Kubernetes (an open-source container orchestration platform); it contains one or more tightly associated containers. Each Pod has its own independent IP address and port space and can share storage and network resources. A Pod is commonly used to run an application or service and is managed, monitored, and scaled by Kubernetes.
Mmap (Memory Map): a mechanism for mapping a file into memory, so that read and write operations on the mapped memory region are equivalent to read and write operations on the file.
Ptrace SEIZE tool: a debugging tool for Linux system can make debugger obtain the control right of target process and make operation and observation at running time. The tool employs a "SEIZE" operation, i.e., a specified process is paused and transferred to the debugger for execution in order to perform various operations.
Example 1
During migration, the file system contents of the container have already been transferred, but the running container may modify file contents, so the previously transferred file contents may be outdated. In the same way, the memory data change continuously, and while the container is running, the memory data already transmitted to the destination server become outdated. Therefore, to achieve complete synchronization of the container state, the container to be migrated must be stopped and its remaining state copied, at which point the state of the container is restored on the target node. Although shutdown migration is unavoidable, the shutdown migration time can be compressed as much as possible. Therefore, how to reduce the shutdown migration time is one of the key points of container migration research.
In order to solve the above problems, an embodiment of the present application provides a container migration method, which optimizes a migration manner based on a memory, thereby solving the above problems. The container migration method will be specifically described below.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a container migration method according to an embodiment of the present application. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs and modules of application software, such as computer programs corresponding to the container migration method in the embodiment of the present application, and the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, that is, implement the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
In this embodiment, a container migration method running on the mobile terminal is provided, fig. 2 is a flowchart of the container migration method according to an embodiment of the present application, and as shown in fig. 2, the flowchart includes steps S202 to S206 as follows:
step S202, in response to a migration request for migrating the target container group of the source end to the destination end, obtaining the memory data of at least one target container in the target container group.
In the technical solution provided in step S202, the target container group includes one or more target containers; therefore, the migration of the target container group is effectively the migration of the individual target containers within the group, and obtaining the memory data of the target container group means obtaining the container memory data of each target container. In the embodiment of the application, only the memory data (namely, the content that needs to be migrated) are migrated instead of the whole target container, which effectively saves time and bandwidth; meanwhile, subsequent operations such as updating, backup and recovery only need to focus on the memory data, which improves management efficiency.
Step S204, obtaining the memory data to be migrated of the target container group based on the memory data of at least one target container.
In the technical solution provided in step S204, the memory data to be migrated of the target container group may be obtained according to the memory data of at least one target container in the target container group.
And S206, synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism, and completing migration of the target container group from the source end to the destination end.
In the technical scheme provided in the step S206, the pre-copy mechanism is adopted to synchronously migrate the memory data to be migrated, that is, some important files or directories in the container are copied before migration, then a new container is started, and the files or directories are copied from the source container of the source end to the new container of the destination end.
In addition, the synchronous migration process includes a plurality of stages. In each i-th stage, it may be determined, according to whether at least one target container in the target container group meets a preset migration stopping condition, whether to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage, where 1 ≤ i ≤ N and i is a positive integer. The preset migration stopping condition includes, but is not limited to: the amount of memory dirty page data meets the threshold condition, or the number of iterative migrations reaches the maximum. This can effectively reduce the shutdown migration time and avoid an unbounded iterative migration process.
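Putting the stages and the stopping conditions together, a hedged outline of the staged pre-copy migration might look like the following; every helper callable is an assumption supplied by the caller (for example, a CRIU-based backend), not an API defined by the application:

```python
# Illustrative outline of the staged pre-copy migration with stop conditions.

def migrate_container_group(group,
                            collect_memory_snapshot,   # full memory data, stored as files
                            collect_dirty_data,        # dirty pages produced in the last stage
                            transfer,                  # send data to the destination end
                            stop_condition_met,        # checks described in the text above
                            stop_and_snapshot,         # stop containers, grab residual state
                            max_stages=10):
    payload = collect_memory_snapshot(group)           # stage 1 payload
    for stage in range(1, max_stages + 1):
        transfer(payload)                               # synchronous migration of stage i
        if stop_condition_met(group, stage):            # dirty data small enough / iteration cap
            break
        payload = collect_dirty_data(group)             # becomes the stage i+1 payload
    transfer(stop_and_snapshot(group))                  # shutdown migration of the remainder
```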
Based on the schemes defined in steps S202 to S206 above, it can be seen that, in an embodiment, in response to a migration request for migrating a target container group of a source end to a destination end, memory data of at least one target container in the target container group are obtained; memory data to be migrated of the target container group are obtained based on the memory data of the at least one target container; and the memory data to be migrated are synchronously migrated by adopting a pre-copy mechanism, wherein the synchronous migration comprises a plurality of stages, and in each i-th stage, whether a target migration operation is executed on the memory data to be migrated corresponding to the i-th stage is determined according to whether at least one target container in the target container group meets a preset migration stopping condition, where 1 ≤ i ≤ N and i is a positive integer.
Therefore, through the technical scheme of the embodiment of the application, the pre-copy action of the CRIU tool is used to reduce the stop time of the migration task, achieving the technical effect of improving container migration efficiency and solving the technical problem in the related art that the shutdown migration time cannot be effectively reduced during container migration, so that a large number of unnecessary iterative migrations are executed.
The main execution body of the above steps may be a server, a terminal, or the like, but is not limited thereto.
The above-described method of this embodiment is further described below.
As an optional implementation manner, in the technical solution provided in step S202, the method may include:
For each target container in the target container group, process information of at least one process group in the target container is acquired, wherein the process information comprises at least one of the following: a process group ID, a parent process ID, a container ID, and the process state of each child process; a process node of the target container is determined based on the process information of the target container, and node information of the process node is determined; a process tree of the target container is constructed based on the process nodes and the node information, and a corresponding backup operation is performed on the node information of the process nodes in the process tree to obtain memory backup data of the target container, wherein the memory backup data comprises at least one of the following: file description data recorded by a Pagemap file, memory mapping data recorded by a Smaps file and/or a map_files file, and kernel data recorded by the Ptrace SEIZE tool; and memory data of each target container in the target container group are determined based on the memory backup data.
In the embodiment of the application, the memory data of a target container are collected in the same manner as the memory data of a process group: the process data of a process group can be collected and backed up through CRIU, so the collection and backup of the container memory data of the target container can be realized by means of CRIU, and the container memory data can be stored in the form of files. Each container comprises a plurality of processes; these processes run in the container, cooperate with each other, and form a process group.
Specifically, fig. 3 is a flowchart of an alternative process for acquiring memory data according to an embodiment of the present application. As shown in fig. 3, process information of the process group in each target container is first acquired, where the process information includes, but is not limited to: a process group ID (PGID, the ID of the process group to which a process belongs), a parent process ID (the PID of the parent process or container that started the container), a container ID (a unique identifier allocated to a container instance on the host, used for its runtime environment and network namespace), and the process state of each child process. Next, the process node of the Docker Daemon and its node information are determined based on the process information of the target container, where the node information includes, but is not limited to: the process state, priority, resource occupation and the like of the process. Then, a process tree of the target container is constructed based on the process nodes and the node information, so as to ensure that memory data changed while the container runs can be saved and transmitted in real time; in addition to constructing the process tree, information such as the namespace and the file system organization structure of the target container can also be collected. A corresponding backup operation is performed on the node information of the process nodes in the process tree to obtain the memory backup data of the target container: the mapping from virtual addresses to physical addresses can be recorded through the Smaps file, and/or the memory mapping data of files mapped into memory through mmap can be recorded through the map_files entries, and the file description data of the flag bits of each memory page can be recorded through Pagemap; the first two tools can be used to judge which memory pages need to be backed up and how to classify them; then, parasite code can be injected into the processes of the target container through the Ptrace SEIZE mechanism, and the target container is controlled to send the memory page data to the parasite process through a pipe and back it up to the corresponding memory image files. Finally, the memory data of each target container in the target container group are determined based on the memory backup data. By executing a checkpoint on all target containers in the target container group through the above steps and saving the memory state of each target container as an image file, fast backup of container memory data is realized, and the memory state of a target container can also be quickly restored when it is restarted later.
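To make the role of these /proc interfaces concrete, the following Python sketch builds a process tree and reads a few of the fields mentioned above; the paths follow the standard Linux procfs layout (the children file requires a kernel built with CONFIG_PROC_CHILDREN), and the helper names are illustrative assumptions rather than part of the application:

```python
import os

def child_pids(pid: int) -> list[int]:
    """Children of a process, read from /proc/<pid>/task/<tid>/children."""
    kids = []
    task_dir = f"/proc/{pid}/task"
    for tid in os.listdir(task_dir):
        with open(f"{task_dir}/{tid}/children") as f:
            kids += [int(c) for c in f.read().split()]
    return kids

def build_process_tree(init_pid: int) -> dict[int, list[int]]:
    """Map every PID under the container's init process to its child PIDs."""
    tree, stack = {}, [init_pid]
    while stack:
        pid = stack.pop()
        tree[pid] = child_pids(pid)
        stack += tree[pid]
    return tree

def process_info(pid: int) -> dict[str, str]:
    """A few fields the text above mentions, read from /proc/<pid>/status."""
    wanted = {"Name", "State", "PPid", "NSpid"}
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in wanted:
                info[key] = value.strip()
    return info

def rss_kib_from_smaps(pid: int) -> int:
    """Total resident memory of the process, summed from /proc/<pid>/smaps."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Rss:"):
                total += int(line.split()[1])   # value is reported in kB
    return total
```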
As an optional implementation manner, in the technical solution provided in step S206, the method determines memory data to be migrated corresponding to the ith stage, and includes: when i is equal to 1, the memory data to be migrated comprises memory data of each target container of the source end before the migration process starts; when i is not equal to 1, the memory data to be migrated comprises the memory dirty page data generated by each target container in the migration process of the i-1 stage.
In this embodiment, when the memory pre-copy mechanism is used to migrate a container, only the first migration transfers the complete memory data of each target container at the source end; each subsequent iterative migration only transfers the dirty memory page data modified during the previous iteration. In addition, compressing the container memory pages reduces the amount of data transmitted for the memory pages, which effectively improves the efficiency of online container migration and reduces the time consumed by the online migration process. If the memory pages of a container are changed during the copying process, these pages are said to be dirtied, and their data is referred to as memory dirty page data.
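One common way to obtain the dirty memory pages between two pre-copy rounds is the Linux soft-dirty mechanism; the sketch below follows the documented /proc interface (clear_refs and bit 55 of each pagemap entry) and is an assumption about one possible backend, not the application's mandated implementation:

```python
import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def reset_soft_dirty(pid: int) -> None:
    # Writing "4" to clear_refs clears the soft-dirty bits of all pages of the
    # process, marking the start of a new pre-copy round.
    with open(f"/proc/{pid}/clear_refs", "w") as f:
        f.write("4")

def dirty_pages(pid: int, start: int, length: int) -> list[int]:
    """Virtual addresses in [start, start+length) whose soft-dirty bit is set."""
    addrs = []
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        for off in range(0, length, PAGE_SIZE):
            vaddr = start + off
            f.seek((vaddr // PAGE_SIZE) * 8)        # 8 bytes per pagemap entry
            (entry,) = struct.unpack("<Q", f.read(8))
            if entry & (1 << 55):                   # soft-dirty: written since last reset
                addrs.append(vaddr)
    return addrs

# Typical use between two pre-copy rounds (illustrative):
#   reset_soft_dirty(pid)                                  # before round i starts
#   ... round i transfers data while the container runs ...
#   changed = dirty_pages(pid, region_start, region_len)   # payload for round i+1
```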
In the process of migrating the container by adopting the memory pre-copy mechanism, optionally, the data size of the memory data to be migrated in the ith stage of migrating from the source end to the destination end and the migration time decrease along with the increase of the i value, until i=n, no new dirty memory page data is generated.
That is, the data amount of the memory data to be migrated from the 1 st stage to the N-th stage of the source terminal to the destination terminal is gradually reduced, and the migration time of the i-th stage is reduced along with the increase of the i value until no new dirty page data of the memory is generated in the migration process of the N-th stage.
As another alternative embodiment, in the technical solution provided in step S206, the target migration operation includes: stopping the migration operation or iterating the migration operation, wherein the synchronous migration of the memory data to be migrated by adopting a pre-copy mechanism can be specifically discussed according to the number of target containers in the container group, wherein:
case one: under the condition that the target container group only comprises one target container, determining to execute target migration operation on memory data to be migrated corresponding to the ith stage according to whether the target container reaches a first migration stopping condition in the ith stage, and completing migration of the target container group from the source end to the destination end.
In the foregoing technical solution, specifically, in the case where only one target container is included in the target container group, it may be determined whether the target container reaches the first migration stopping condition in the i-th stage, where the first migration stopping condition includes at least one of the following: the first data amount of the memory dirty page data of the target container in the i-th stage is smaller than a preset first amount threshold, and the first number of iterative migrations executed by the target container in the i-th stage reaches (is not smaller than) a preset first count threshold; and it is determined, according to the result, to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage, thereby completing migration of the target container group from the source end to the destination end.
Optionally, judging whether the target container reaches a first migration stopping condition in the ith stage, wherein when the target container reaches the first migration stopping condition in the ith stage, determining to execute migration stopping operation on memory data to be migrated corresponding to the ith stage; and when the target container does not reach the first migration stopping condition in the ith stage, determining to continue to execute the (i+1) th round of iterative migration on the memory data to be migrated corresponding to the ith stage until the target container reaches the migration stopping condition.
That is, once any one of the first migration stopping conditions is satisfied, shutdown migration can be performed on the target container, which prevents the container memory data from being migrated endlessly and effectively improves container migration efficiency.
And a second case: under the condition that the target container group comprises a plurality of target containers, determining to execute target migration operation on memory data to be migrated corresponding to the ith stage according to whether each target container reaches a second migration stopping condition in the ith stage, and completing migration of the target container group from the source end to the destination end.
In the foregoing technical solution, specifically, in the case where the target container group includes a plurality of target containers, it may be determined whether the plurality of target containers reach the second migration stopping condition in the i-th stage, where the second migration stopping condition includes at least one of the following: the second data amount of the memory dirty page data of the plurality of target containers in the target container group in the i-th stage is smaller than a preset second amount threshold, and the first number of iterative migrations executed by each target container in the target container group in the i-th stage reaches (is not smaller than) the first count threshold; and it is determined, according to the result, to execute the target migration operation on the memory data to be migrated corresponding to the i-th stage, completing migration of the target container group from the source end to the destination end.
Optionally, judging whether the plurality of target containers reach a second migration stopping condition in the ith stage; when a plurality of target containers reach a second migration stopping condition in the ith stage, determining to execute migration stopping operation on memory data to be migrated corresponding to the ith stage; and when the plurality of target containers do not reach the second migration stopping condition in the ith stage, determining to continue to execute the (i+1) th round of iterative migration on the memory data to be migrated corresponding to the ith stage until each target container in the target container group reaches the migration stopping condition.
That is, once any one of the second migration stopping conditions is satisfied, shutdown migration can be performed on the target container group, which prevents the container memory data from being migrated endlessly and effectively improves container migration efficiency.
Specifically, fig. 4 is a flowchart of an optional migration for multiple containers according to an embodiment of the present application. As shown in fig. 4, it is first determined whether a shutdown migration operation is to be forcibly executed on the target container group; if so, the shutdown migration operation is directly executed on the memory files to be migrated of the target container group, regardless of the amount of memory dirty page data of each target container in the i-th stage, so as to synchronize the memory data. Otherwise, a checkpoint operation is executed on each target container in the target container group (i.e., one round of pre-copy iteration is executed), and it is then judged whether the amount of memory dirty page data of the target containers in the i-th stage is smaller than the second amount threshold; if so, the memory change of the target container group is small, and the migration stopping operation can be executed on the memory files to be migrated corresponding to the i-th stage. Otherwise, it is further judged whether the number of iterative migration operations executed by each target container in the target container group in the i-th stage has reached the first count threshold (i.e., the iteration count has reached the maximum); if so, the migration stopping operation is executed on the memory files to be migrated corresponding to the i-th stage; otherwise, the above steps continue to be executed.
As another optional implementation manner, whether to execute the shutdown migration operation on the memory data to be migrated corresponding to the i-th stage can be judged by comparing the first data amount of the memory dirty page data of the target container in the i-th stage with the preset first amount threshold, or it can be judged by calculating the dirty page rate of the memory pages of the target container. The technical effect achieved by this scheme is the same as that achieved by the scheme in which the amount of memory dirty page data is small enough: the transmission of redundant memory data can be significantly reduced.
Specifically, the dirty rate of a container's memory pages at run time is unknown, and if the dirty rate of the memory pages is greater than the transmission rate of the network carrying the memory data, the pre-copy iteration will proceed endlessly. Therefore, in the embodiment of the application, at the start of each stage's pre-copy iteration (from the second stage onward), the dirty rate of the current memory pages relative to the memory pages after the previous stage's pre-copy may first be determined. If the dirty rate is too high, i.e., greater than the transmission rate of the memory network, the next pre-copy iteration continues to be performed on the memory files to be migrated corresponding to this stage, and the above process is repeated; if the dirty rate is lower than a certain value, the current iteration continues in order to reduce the dirty pages to be transmitted to the destination end; and if this rate is lower than another, smaller value, the iterative execution of the pre-copy ends (i.e., the shutdown migration operation is performed). The dirty rate can be obtained by comparing the memory pages before the i-th stage's pre-copy with the memory pages after the (i-1)-th round of pre-copy to obtain the amount of change, and dividing it by the total number of memory pages after the (i-1)-th round of pre-copy.
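Expressed as a small Python sketch (the concrete thresholds, the rate units, and the behaviour in the middle range are assumptions; the text above only fixes how the dirty rate itself is computed):

```python
def dirty_rate(changed_pages: int, total_pages_prev_round: int) -> float:
    """Dirty rate of stage i: pages changed since the (i-1)-th pre-copy round,
    divided by the total number of pages after that round."""
    return changed_pages / max(total_pages_prev_round, 1)

def next_action(rate: float, net_rate: float,
                low: float = 0.10, very_low: float = 0.02) -> str:
    """Three-way decision described above; 'low' and 'very_low' are assumed values."""
    if rate > net_rate:       # dirtying faster than the network can ship pages
        return "start_next_precopy_round"
    if rate < very_low:       # almost nothing left to copy
        return "stop_and_migrate"
    if rate < low:            # keep draining dirty pages in the current round
        return "continue_current_round"
    return "start_next_precopy_round"
```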
Further, after the migration operation of the target container group is completed, the recovery operation of the container group at the destination end can be performed in the following manner: the destination machine container (namely, at the destination end) is started by specifying the container ID and the memory image file; when a localization judgment determines that the container to be restored is a container migrated from another host, the destination container can identify it by dynamically loading the configuration information and the underlying image information of the container to be restored into the image storage driver; and when the container recovery operation is executed successfully, a migration success message is generated and fed back to the source machine container, or, when the container recovery operation fails, a recovery failure message is generated and fed back to the source machine container so that the container service is restarted there.
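A hedged sketch of what such a restore step can look like when CRIU is driven directly; the directory layout, the notification callback, and the CRIU options are taken from CRIU's public documentation and are assumptions rather than part of the application:

```python
import subprocess
from pathlib import Path

def restore_container(images_dir: str) -> bool:
    """Restore a previously checkpointed process tree from its memory image files.
    Returns True on success so the caller can report back to the source machine."""
    if not Path(images_dir).is_dir():
        return False
    # Options follow CRIU's public documentation; verify against the installed version.
    cmd = ["criu", "restore",
           "--images-dir", images_dir,   # directory holding the dumped memory images
           "--restore-detached",         # detach once the tree is running again
           "--shell-job"]                # needed when the dumped tree had a controlling tty
    return subprocess.run(cmd, check=False).returncode == 0

def report_to_source(success: bool, notify) -> None:
    """Feed the result back to the source machine; `notify` is an assumed callback."""
    notify("migration success" if success else "recovery failure, restart service")
```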
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present application.
Example 2
Based on embodiment 1 of the present application, an embodiment of a container migration device is further provided, and the device is used to implement embodiment 1 and a preferred embodiment, which are not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a schematic diagram of an alternative container migration apparatus according to an embodiment of the present application, where, as shown in fig. 5, the container migration apparatus includes at least an obtaining module 51, a determining module 53, and a migration module 55, where:
an obtaining module 51, configured to obtain memory data of at least one target container in the target container group in response to a migration request for migrating the target container group at the source end to the destination end;
a determining module 53, configured to determine memory data to be migrated of the target container group based on the memory data of at least one target container;
the migration module 55 is configured to perform synchronous migration on memory data to be migrated by using a pre-copy mechanism, so as to complete migration of the target container group from the source end to the destination end, where the synchronous migration includes multiple stages, and in each ith stage, it is determined whether at least one target container in the target container group meets a preset migration stopping condition, and the target migration operation is performed on the memory data to be migrated corresponding to the ith stage, where i is greater than or equal to 1 and less than or equal to N, and i is a positive integer.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
According to another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the container migration method in embodiment 1 described above when running.
In one exemplary embodiment, a computer-readable storage medium implements the following steps by executing the program:
step S202, in response to a migration request for migrating a target container group of a source end to a destination end, obtaining memory data of at least one target container in the target container group;
step S204, obtaining memory data to be migrated of a target container group based on the memory data of at least one target container;
step S206, synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism, and completing migration of the target container group from the source end to the destination end, wherein the synchronous migration comprises a plurality of stages, and in each ith stage, according to whether at least one target container in the target container group meets the preset migration stopping condition, the target migration operation is determined to be executed on the memory data to be migrated corresponding to the ith stage, i is more than or equal to 1 and less than or equal to N, and i is a positive integer.
In another exemplary embodiment, the computer readable storage medium described above may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
Example 4
According to another aspect of the embodiments of the present application, there is also provided an electronic device. Fig. 6 is a schematic diagram of an alternative electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device includes one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method of running the programs, wherein the programs are configured to perform the container migration method in embodiment 1 described above when run.
In one exemplary embodiment, program runtime execution implements the steps of:
step S202, in response to a migration request for migrating a target container group of a source end to a destination end, obtaining memory data of at least one target container in the target container group;
step S204, obtaining memory data to be migrated of a target container group based on the memory data of at least one target container;
Step S206, synchronously migrating the memory data to be migrated by adopting a pre-copy mechanism, and completing migration of the target container group from the source end to the destination end, wherein the synchronous migration comprises a plurality of stages, and in each ith stage, according to whether at least one target container in the target container group meets the preset migration stopping condition, the target migration operation is determined to be executed on the memory data to be migrated corresponding to the ith stage, i is more than or equal to 1 and less than or equal to N, and i is a positive integer.
In another exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and exemplary implementations, and details are not repeated here.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented by a general-purpose computing device. They may be concentrated on a single computing device or distributed across a network of computing devices, and may be implemented by program code executable by computing devices, so that they may be stored in a storage device and executed by computing devices; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, the modules or steps may each be fabricated as an individual integrated circuit module, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method of container migration, comprising:
in response to a migration request for migrating a target container group at a source end to a destination end, acquiring memory data of at least one target container in the target container group;
determining memory data to be migrated of the target container group based on the memory data of the at least one target container;
and synchronously migrating the memory data to be migrated by using a pre-copy mechanism, so as to complete the migration of the target container group from the source end to the destination end, wherein the synchronous migration comprises N stages, and in each ith stage it is determined, according to whether the at least one target container in the target container group meets a preset migration stopping condition, which target migration operation is to be executed on the memory data to be migrated corresponding to the ith stage, where 1 ≤ i ≤ N and i is a positive integer.
2. The method of claim 1, wherein acquiring the memory data of at least one target container in the target container group comprises:
for each target container in the target container group, acquiring process information of at least one process group in the target container, wherein the process information comprises at least one of the following: a process group ID, a parent process ID, a container ID, and a process status of each child process;
determining a process node of the target container based on the process information of the target container, and determining node information of the process node;
constructing a process tree of the target container based on the process node and the node information, and performing a corresponding backup operation on the node information of the process nodes in the process tree to obtain memory backup data of the target container, wherein the memory backup data comprises at least one of the following: file description data recorded by a Pagemap file, memory mapping data recorded by a Smaps file and/or a map_file, and kernel data recorded by a Ptrace SEIZE tool;
and determining the memory data of each target container in the target container group based on the memory backup data.
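As a reading aid for claim 2, the following Python sketch shows one way the process tree and the per-process memory records it names could be gathered from the Linux /proc filesystem. It is an assumption-laden illustration rather than the claimed backup procedure: the choice of /proc/&lt;pid&gt;/status, smaps, map_files and pagemap, the amount of pagemap data read, and the returned dictionary layout are all introduced here for illustration.

```python
# Hypothetical sketch of building a per-container process tree and collecting
# the per-process memory records named in claim 2 from /proc.
# Paths read and the returned structures are illustrative assumptions.
import os
from collections import defaultdict

def read_status(pid):
    """Parse /proc/<pid>/status into a dict of its "Key: value" lines."""
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key] = value.strip()
    return info

def build_process_tree(pids):
    """pids: PIDs of the processes in one target container (e.g. taken from
    its cgroup). Returns per-PID node info plus a parent -> children map."""
    nodes, children = {}, defaultdict(list)
    for pid in pids:
        status = read_status(pid)
        nodes[pid] = {
            "name": status.get("Name"),
            "state": status.get("State"),
            "ppid": int(status.get("PPid", "0")),
        }
        children[nodes[pid]["ppid"]].append(pid)
    return nodes, children

def backup_memory_records(pid):
    """Collect raw memory records of the kind referenced in claim 2."""
    with open(f"/proc/{pid}/smaps") as f:
        smaps = f.read()                                  # memory-mapping details
    map_files = os.listdir(f"/proc/{pid}/map_files")      # file-backed mappings
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        pagemap_head = f.read(4096)                       # per-page flags, 8 B/page
    return {"smaps": smaps, "map_files": map_files, "pagemap": pagemap_head}
```

Reading map_files and pagemap typically requires elevated privileges (for example CAP_SYS_ADMIN), which is consistent with checkpoint tooling running on the host rather than inside the container.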
3. The method of claim 1, wherein determining the memory data to be migrated corresponding to the ith stage comprises:
when i is equal to 1, the memory data to be migrated comprises the memory data of each target container at the source end before the synchronous migration starts;
when i is not equal to 1, the memory data to be migrated comprises the memory dirty page data generated by each target container during the migration process of the (i-1)-th stage.
4. The method according to claim 3, wherein the data size of the memory data to be migrated from the source end to the destination end in the ith stage decreases as i increases, until no new memory dirty page data is generated when i = N.
5. The method of claim 1, wherein the target migration operation comprises: a migration stopping operation or an iterative migration operation, and wherein synchronously migrating the memory data to be migrated by using a pre-copy mechanism to complete the migration of the target container group from the source end to the destination end comprises:
in a case where the target container group includes only one target container, determining, according to whether the target container reaches a first migration stopping condition in the ith stage, to execute the target migration operation on the memory data to be migrated corresponding to the ith stage, so as to complete the migration of the target container group from the source end to the destination end, wherein the first migration stopping condition comprises at least one of the following: a first data quantity of the memory dirty page data of the target container in the ith stage is smaller than a preset first quantity threshold, and a first number of synchronous iterations of the target container in the ith stage is smaller than a preset first number-of-times threshold;
in a case where the target container group includes a plurality of target containers, determining, according to whether each of the target containers reaches a second migration stopping condition in the ith stage, to execute the target migration operation on the memory data to be migrated corresponding to the ith stage, so as to complete the migration of the target container group from the source end to the destination end, wherein the second migration stopping condition comprises at least one of the following: a second data quantity of the memory dirty page data of the target containers in the target container group in the ith stage is smaller than a preset second quantity threshold, and the number of times that each target container in the target container group performs the iterative migration operation in the ith stage is smaller than the first number-of-times threshold.
6. The method according to claim 5, wherein, in the case where only one target container is included in the target container group, determining, according to whether the target container reaches the first migration stopping condition in the ith stage, to execute the target migration operation on the memory data to be migrated corresponding to the ith stage comprises:
determining whether the target container reaches the first migration stopping condition in the ith stage, wherein,
when the target container reaches the first migration stopping condition in the ith stage, it is determined to execute the migration stopping operation on the memory data to be migrated corresponding to the ith stage;
and when the target container does not reach the first migration stopping condition in the ith stage, it is determined to continue to execute the (i+1)-th round of iterative migration on the memory data to be migrated corresponding to the ith stage, until the target container reaches the first migration stopping condition.
7. The method according to claim 5, wherein, in the case where the target container group includes a plurality of target containers, determining, according to whether the plurality of target containers each reach the second migration stopping condition in the ith stage, to execute the target migration operation on the memory data to be migrated corresponding to the ith stage comprises:
determining whether the plurality of target containers in the target container group reach the second migration stopping condition in the ith stage, wherein,
when the plurality of target containers in the target container group all reach the second migration stopping condition in the ith stage, it is determined to execute the migration stopping operation on the memory data to be migrated corresponding to the ith stage;
and when the plurality of target containers in the target container group do not all reach the second migration stopping condition in the ith stage, continuing to perform the (i+1)-th round of iterative migration on the memory data to be migrated corresponding to the ith stage, until each target container in the target container group reaches the second migration stopping condition.
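Claims 5 to 7 together define a per-stage decision: a single container is checked against the first migration stopping condition, while a group of containers stops only when every container reaches the second one. The sketch below restates that decision in Python. It is a loose illustration: the statistics structure and threshold names are assumptions, and the comparison follows the usual pre-copy convention of stopping once the remaining dirty data is small enough or the round budget is used up, rather than quoting the thresholds of the claims verbatim.

```python
# Hypothetical sketch of the per-stage decision of claims 5-7.
# The stats structure and threshold names are illustrative assumptions.

STOP_MIGRATION = "stop_migration"            # final stop-and-copy for stage i
ITERATIVE_MIGRATION = "iterative_migration"  # run round i + 1

def reached_stop_condition(stats, dirty_threshold_bytes, max_rounds):
    # A container has reached its stopping condition when its dirty data in
    # this stage is below the threshold or its round budget is exhausted.
    return (stats["dirty_bytes"] < dirty_threshold_bytes
            or stats["iterations"] >= max_rounds)

def decide_target_operation(stage_stats, dirty_threshold_bytes, max_rounds):
    """stage_stats: one dict per target container with the dirty-page bytes
    and iteration count observed in the current stage i."""
    if len(stage_stats) == 1:
        # Claims 5/6: single container, first migration stopping condition.
        reached = reached_stop_condition(stage_stats[0],
                                         dirty_threshold_bytes, max_rounds)
    else:
        # Claims 5/7: multiple containers, second migration stopping
        # condition; every container must have reached it before stopping.
        reached = all(reached_stop_condition(s, dirty_threshold_bytes, max_rounds)
                      for s in stage_stats)
    return STOP_MIGRATION if reached else ITERATIVE_MIGRATION

# Example with hypothetical numbers: two containers, one still dirtying memory.
op = decide_target_operation(
    [{"dirty_bytes": 512 << 10, "iterations": 2},
     {"dirty_bytes": 8 << 20, "iterations": 2}],
    dirty_threshold_bytes=1 << 20, max_rounds=5)
# op == "iterative_migration": the second container has not reached the condition.
```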
8. A container migration device, comprising:
the acquisition module is used for responding to a migration request for migrating a target container group of a source end to a destination end and acquiring memory data of at least one target container in the target container group;
the storage module is used for determining memory data to be migrated of the target container group based on the memory data of at least one target container;
and the migration module is used for synchronously migrating the memory data to be migrated by using a pre-copy mechanism, so as to complete the migration of the target container group from the source end to the destination end, wherein the synchronous migration comprises N stages, and in each ith stage it is determined, according to whether the at least one target container in the target container group meets a preset migration stopping condition, which target migration operation is to be executed on the memory data to be migrated corresponding to the ith stage, where 1 ≤ i ≤ N and i is a positive integer.
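Purely as an illustration of the three-module decomposition in claim 8, the sketch below maps each module to a small Python class; the class and method names and the pass-through behaviour shown are assumptions, not the claimed device.

```python
# Hypothetical decomposition of the container migration device of claim 8.
# Class and method names are illustrative assumptions only.

class AcquisitionModule:
    def acquire(self, target_container_group):
        # In response to a migration request, obtain the memory data of at
        # least one target container in the target container group.
        return {c: c.snapshot_memory() for c in target_container_group}

class StorageModule:
    def memory_to_migrate(self, per_container_memory):
        # Determine the memory data to be migrated for the whole group
        # from the per-container memory data.
        return per_container_memory

class MigrationModule:
    def migrate(self, to_migrate, destination):
        # Synchronously migrate with a pre-copy mechanism, deciding in each
        # stage whether to stop or to run another iterative round
        # (see the precopy_migrate sketch earlier in this document).
        raise NotImplementedError
```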
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the container migration method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the container migration method according to any one of claims 1 to 7.
CN202310610048.3A 2023-05-26 2023-05-26 Container migration method and device, storage medium and electronic equipment Pending CN116719604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310610048.3A CN116719604A (en) 2023-05-26 2023-05-26 Container migration method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310610048.3A CN116719604A (en) 2023-05-26 2023-05-26 Container migration method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116719604A (en) 2023-09-08

Family

ID=87868915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310610048.3A Pending CN116719604A (en) 2023-05-26 2023-05-26 Container migration method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116719604A (en)

Similar Documents

Publication Publication Date Title
US11327799B2 (en) Dynamic allocation of worker nodes for distributed replication
US11010240B2 (en) Tracking status and restarting distributed replication
US20200348852A1 (en) Distributed object replication architecture
CN111966305B (en) Persistent volume allocation method and device, computer equipment and storage medium
US11349915B2 (en) Distributed replication and deduplication of an object from a source site to a destination site
CN107515776B (en) Method for upgrading service continuously, node to be upgraded and readable storage medium
US11226847B2 (en) Implementing an application manifest in a node-specific manner using an intent-based orchestrator
CN109062655B (en) Containerized cloud platform and server
CN107111533B (en) Virtual machine cluster backup
CN110088733A (en) The layout based on accumulation layer of virtual machine (vm) migration
US11347684B2 (en) Rolling back KUBERNETES applications including custom resources
US10620871B1 (en) Storage scheme for a distributed storage system
US20190272224A1 (en) Establishing and monitoring programming environments
CN111684437B (en) Staggered update key-value storage system ordered by time sequence
US20230315584A1 (en) Backing up data for a namespace assigned to a tenant
Terneborg et al. Application agnostic container migration and failover
US20230376357A1 (en) Scaling virtualization resource units of applications
US10635523B2 (en) Fast recovery from failures in a chronologically ordered log-structured key-value storage system
US11461131B2 (en) Hosting virtual machines on a secondary storage system
CN116719604A (en) Container migration method and device, storage medium and electronic equipment
CN117389713B (en) Storage system application service data migration method, device, equipment and medium
CN116501552B (en) Data backup method, device, system and storage medium
GB2542585A (en) Task scheduler and task scheduling process
CN116094897A (en) Webpage configuration method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination