CN110990133A - Edge computing service migration method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN110990133A
CN110990133A (application CN201911120829.4A; granted as CN110990133B)
Authority
CN
China
Prior art keywords: page, key, migration, migrated, application service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911120829.4A
Other languages
Chinese (zh)
Other versions
CN110990133B (en)
Inventor
孙广宇 (Sun Guangyu)
周哲 (Zhou Zhe)
李欣桐 (Li Xintong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN201911120829.4A priority Critical patent/CN110990133B/en
Publication of CN110990133A publication Critical patent/CN110990133A/en
Application granted granted Critical
Publication of CN110990133B publication Critical patent/CN110990133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/48: Indexing scheme relating to G06F 9/48
    • G06F 2209/482: Application


Abstract

The application provides an edge computing service migration method and apparatus, an electronic device, and a computer-readable medium. The method comprises the following steps: dividing the memory pages of an application service to be migrated into normal pages and key pages according to modification frequency, where the application service to be migrated is running in an original container on a primary edge server; the modification frequency of a key page is less than or equal to a preset frequency, and the modification frequency of a normal page is greater than the preset frequency. After a migration request is received, the key pages are sent to a page migration accelerator, which compresses them and transmits the compressed key pages to a destination edge server. The page migration accelerator is independent of the CPU (central processing unit) of the primary edge server and performs the compression of the transmitted data on its own. The scheme suits edge computing scenarios: it achieves short downtime and a short re-establishment time on the destination machine, realizing seamless migration of edge computing services.

Description

Edge computing service migration method and device, electronic equipment and medium
Technical Field
The present application relates to the field of edge computing technologies, and in particular, to an edge computing service migration method and apparatus, an electronic device, and a computer-readable medium.
Background
Edge computing refers to an open platform that integrates network, computing, storage, and application capabilities at the side close to the object or data source, providing services nearby. By giving network edge devices certain computing and storage capability, edge computing forms a three-layer architecture of intelligent terminal, edge server, and cloud data center, and provides communication and IT services, storage, and computing resources at the network edge, so as to reduce application processing latency and use the mobile network more effectively.
In the field of edge computing, applications often need to be deployed using container technology. When a single edge server is overloaded or a user moves far away, an application whose service is deployed in a container needs to be migrated dynamically (seamlessly) across servers. The effectiveness of live migration is affected by many factors, such as the total amount of data transmitted during migration and the transmission frequency; the freeze time of the service; the downtime of the service; the setup time of the service on the new server; and the impact of the migration process on normal CPU and memory operation.
Most existing container platforms support live migration of containers, but they are not suited to the particular constraints of edge computing: bandwidth is limited in a wide area network (WAN) environment, so the amount of transmitted data must be compressed as much as possible; and the computing resources of edge nodes are limited, so CPU and memory usage during migration must be kept as low as possible to avoid disturbing the normal operation of other services.
However, some common applications occupy a large amount of memory, and live-migrating such applications requires transmitting a large number of memory pages, which leads to excessive transmission time and excessive downtime. To reduce transmission time and downtime, existing optimizations fall into two main categories: the compression method (compressing pages) and the pre-copy method (iterative memory copying).
The compression method compresses the original pages before migrating them, shortening transmission time by reducing the data volume. However, because compression algorithms are compute-intensive, compressing a large number of pages occupies CPU resources and interferes with the normal operation of other, unrelated applications.
The Pre-copy method further reduces downtime on top of page compression, but increases the overall transmission time. It dumps the application's memory in multiple iterations, so the application keeps running normally on the host until the last dump finishes. After each dump, the source host transmits pages to the destination host: only the first dump transmits all pages, while every later dump transmits only the pages that changed since the previous one, i.e., the delta pages between two dumps. Each dump starts after the previous round of transmission ends, so the time interval between two dumps is proportional to the size of the previous round's delta pages. For applications that do not modify memory frequently, the delta pages transmitted by the Pre-copy method converge quickly: the application stops only while the last dump's delta pages are transmitted, which greatly reduces downtime. However, most common applications modify memory quickly; only a few pages related to libraries and model parameters stay unchanged between adjacent dumps, and the rest never converge no matter how many dumps are made. For example, the YOLO-v3 object-detection application can modify up to 1 GB of memory within 100 ms, so the Pre-copy method fails for live migration of such applications.
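The convergence behavior described above can be sketched numerically. The function below and all its rates are illustrative assumptions, not measurements from the patent; it models each round as transmitting the pages dirtied during the previous round's transfer:

```python
# Illustrative sketch: Pre-copy converges only when the page-dirty rate stays
# below the transfer rate. All numbers here are made up for illustration.

def precopy_rounds(total_pages, dirty_rate, transfer_rate,
                   max_rounds=30, stop_threshold=10):
    """Return per-round delta-page counts for an idealized Pre-copy migration.

    dirty_rate:    pages modified per unit time by the running application
    transfer_rate: pages transmitted per unit time over the WAN
    """
    to_send = total_pages            # the first dump transfers everything
    rounds = []
    for _ in range(max_rounds):
        rounds.append(to_send)
        transfer_time = to_send / transfer_rate
        # pages dirtied while the previous round was being transmitted
        to_send = min(total_pages, int(dirty_rate * transfer_time))
        if to_send <= stop_threshold:    # small enough: stop-and-copy final round
            rounds.append(to_send)
            break
    return rounds

# Slowly-mutating app: deltas shrink geometrically and converge.
print(precopy_rounds(100_000, dirty_rate=1_000, transfer_rate=10_000))
# → [100000, 10000, 1000, 100, 10]
# Fast-mutating app (YOLO-like): deltas never shrink; Pre-copy fails.
print(len(set(precopy_rounds(100_000, dirty_rate=20_000, transfer_rate=10_000))))
# → 1  (every round retransmits the full working set)
```

The second call shows the failure mode the passage describes: when memory is dirtied faster than it can be transmitted, every iteration re-sends the same amount of data.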
Whether the migration process is fast determines service quality and user experience, and can even affect user safety.
Disclosure of Invention
The application aims to provide an edge computing service migration method and device, an electronic device and a computer readable medium.
A first aspect of the present application provides an edge computing service migration method, including:
dividing a memory page of an application service to be migrated into a normal page and a key page according to a modification frequency, wherein the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency;
after receiving a migration request, sending the key page to a page migration accelerator so that the page migration accelerator compresses the key page and transmits the compressed key page to a target edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
A second aspect of the present application provides an edge computing service migration apparatus, including:
the page dividing module is used for dividing a memory page of the application service to be migrated into a normal page and a key page according to the modification frequency, wherein the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency;
the sending module is used for sending the key page to a page migration accelerator after receiving the migration request, so that the page migration accelerator compresses the key page and transmits the compressed key page to a target edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
A third aspect of the present application provides an electronic device comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to perform the method of the first aspect of the application.
A fourth aspect of the present application provides a computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of the first aspect of the present application.
Compared with the prior art, the edge computing service migration method and apparatus, electronic device, and medium divide the memory pages of the application service to be migrated into normal pages and key pages according to modification frequency, where the application service to be migrated is running in an original container on the primary edge server; the modification frequency of a key page is less than or equal to a preset frequency, and that of a normal page is greater than the preset frequency. After a migration request is received, the key pages are sent to a page migration accelerator, which compresses them and transmits the compressed key pages to a destination edge server. The page migration accelerator is independent of the primary edge server's CPU (central processing unit) and performs the compression of the transmitted data on its own. Because re-creating the frequently modified normal pages takes less time than compressing and transmitting them directly, live migration of the edge computing service transmits only the rarely modified key pages, which are further compressed by the independently provided page migration accelerator. Short downtime and a short re-establishment time on the destination machine can thus be achieved even when CPU resources are limited, WAN bandwidth is limited, and a large application modifies memory quickly, realizing seamless migration.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic view of an application scenario of the present application;
FIG. 2 illustrates a flow chart of a method of edge computing service migration provided by some embodiments of the present application;
FIG. 3 illustrates a schematic diagram of memory page partitioning provided by some embodiments of the present application;
FIG. 4 illustrates a schematic diagram of a page migration accelerator provided by some embodiments of the present application;
FIG. 5 illustrates a schematic diagram of code segment partitioning provided by some embodiments of the present application;
FIG. 6 illustrates a schematic diagram of an edge computing service migration apparatus provided in some embodiments of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
In addition, the terms "first" and "second", etc. are used to distinguish different objects, rather than to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the application provides an edge computing service migration method and device, an electronic device and a computer readable medium, which are described below with reference to the accompanying drawings.
First, an application scenario of the present application will be described.
Please refer to fig. 1, which shows a schematic diagram of an application scenario of the present application. When a single edge server is overloaded or a user moves far away, the application deployed through the container needs to be dynamically (seamlessly) migrated across servers, as shown in fig. 1, the application needs to be migrated from an original container on a primary edge server to a new container on a destination edge server.
Referring to fig. 2, a flowchart of an edge computing service migration method according to some embodiments of the present application is shown, where as shown, the edge computing service migration method may include the following steps:
step S101: dividing a memory page of an application service to be migrated into a normal page and a key page according to a modification frequency, wherein the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency.
In practical applications, pages fall into two types: pages that are modified quickly by the program and do not need to be migrated to the destination machine are called normal pages, and pages that are kept by the program for a long time and modified infrequently are called key pages (crucial pages).
Specifically, in the application, the application's memory pages are divided into normal pages and key pages by modification frequency. As shown in FIG. 3, the normal pages and the key pages are located in different address segments of the virtual page table; for example, the address segment VP0 to VP1025 holds normal pages, and the segment VP7071 to VP18096 holds key pages. When a checkpoint operation is performed, only the contents of the crucial pages are collected, and the contents of the normal pages are ignored.
The preset frequency can be set according to actual conditions, and the method is not limited in the application.
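As a hypothetical illustration of step S101 (the `Page` representation, the profiling window, and the threshold value are all assumptions, not the patent's data structures), the partition by modification frequency might look like:

```python
# Hypothetical sketch of step S101: split an application's memory pages into
# normal pages vs. key (crucial) pages by observed modification frequency.

from dataclasses import dataclass

@dataclass
class Page:
    virtual_addr: int
    modifications: int   # writes observed during a profiling window

def partition_pages(pages, window_seconds, preset_freq):
    """Key pages: modification frequency <= preset_freq; normal pages: above it."""
    key, normal = [], []
    for p in pages:
        freq = p.modifications / window_seconds
        (key if freq <= preset_freq else normal).append(p)
    return key, normal

# Five pages with assumed write counts over a 1-second window.
pages = [Page(0x1000 * i, mods) for i, mods in enumerate([0, 2, 500, 3, 900])]
key, normal = partition_pages(pages, window_seconds=1.0, preset_freq=10.0)
print(len(key), len(normal))   # → 3 2
```

Only the three rarely-written pages would be transmitted during migration; the two hot pages would be regenerated on the destination machine.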
It can be understood that live migration only concerns the key pages: only the pages that are not modified quickly are transmitted, while the quickly modified normal pages can be regenerated rapidly by re-running the application on the destination machine. This reduces the data transmitted during migration, so short downtime and a short re-establishment time on the destination machine can be achieved, realizing seamless migration.
Step S102: after receiving a migration request, sending the key page to a page migration accelerator so that the page migration accelerator compresses the key page and transmits the compressed key page to a target edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
Specifically, in order to avoid the compression algorithm of the compression method occupying CPU and memory resources, while still reducing the amount of transmitted data and the transmission time as much as possible, auxiliary migration hardware independent of the CPU is designed to perform comparison, deduplication, and compression of the transmitted data on its own, achieving better migration performance.
As shown in fig. 4, the page migration accelerator may consist of four main parts: a page cache, a delta-compression logic unit, a decompression logic unit, and a send & receive buffer. The page migration accelerator is attached to the PCIe bus and connected to the network card and physical memory through DMA (Direct Memory Access). When a page is to be sent, it is first looked up in the page cache. On a cache hit, the old and new versions of the page are sent together to the delta-compression logic unit, which compares them and compresses the delta, and the old page in the page cache is then updated. On a cache miss, the LRU (least recently used) replacement algorithm replaces some old page with this page, and the page is compressed normally and transmitted. When a page is received, the reverse process is performed.
Processing a page with the page migration accelerator comprises the following steps:
A1. Page cache lookup: using the page's physical address as the key, search the page cache for an earlier version of the page. On a cache hit, go to step A2a; on a cache miss, go to step A2b.
A2a. Delta compression after a page cache hit: take the earlier version of the page out of the page cache and send it, together with the new version, to the delta-compression logic unit with its state set to 1, indicating delta-based comparison and compression; send the compressed result to the send & receive buffer and go to step A3a.
A2b. Normal compression after a page cache miss: since the page cache holds no earlier version of the page, send the page directly to the delta-compression logic unit with its state set to 2, indicating normal (non-delta) compression; send the compressed result to the send & receive buffer and go to step A3b.
A3a. Page cache update after a hit: update the old version of the page found in the page cache to the new version.
A3b. Page cache replacement after a miss: using the LRU replacement policy, replace the least recently accessed old page with this page.
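Steps A1 through A3b can be sketched in software as a stand-in for the hardware accelerator. The XOR-plus-zlib delta scheme below is an illustrative assumption, not the patent's compression method:

```python
# Software stand-in (assumption) for the accelerator's send path, steps A1-A3b.
# Real hardware would do this off-CPU; delta = XOR-then-zlib is illustrative.

import random
import zlib
from collections import OrderedDict

PAGE_SIZE = 4096

class PageMigrationAccelerator:
    def __init__(self, cache_pages=1024):
        self.cache = OrderedDict()      # physical addr -> last-sent page bytes
        self.cache_pages = cache_pages

    def send(self, phys_addr, page):
        if phys_addr in self.cache:                      # A1: cache hit
            old = self.cache[phys_addr]
            delta = bytes(a ^ b for a, b in zip(old, page))
            payload = (1, zlib.compress(delta))          # A2a: delta compression
            self.cache.move_to_end(phys_addr)
            self.cache[phys_addr] = page                 # A3a: update cached copy
        else:                                            # A1: cache miss
            payload = (2, zlib.compress(page))           # A2b: plain compression
            if len(self.cache) >= self.cache_pages:      # A3b: LRU replacement
                self.cache.popitem(last=False)
            self.cache[phys_addr] = page
        return payload    # goes to the send & receive buffer

random.seed(0)   # deterministic "incompressible" page content
page_v1 = bytes(random.randrange(256) for _ in range(PAGE_SIZE))
page_v2 = page_v1[:-8] + bytes(8)                 # only 8 bytes changed

acc = PageMigrationAccelerator()
mode1, first = acc.send(0x2000, page_v1)          # miss: full compression
mode2, delta = acc.send(0x2000, page_v2)          # hit: delta is mostly zeros
print(mode1, mode2, len(delta) < len(first))      # → 2 1 True
```

The size comparison in the last line shows why delta compression pays off on a hit: the XOR of two near-identical pages is almost all zeros and compresses to a few bytes, while the full random page barely compresses at all.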
In some embodiments of the present application, the method further comprises:
after the transmission of the key page is finished, stopping the original container, and establishing a new container in the target edge server; and running the application service to be migrated in the new container to obtain the modification of the normal page, filling the key page into the new container, and resuming the running of the application service to be migrated to finish the service migration.
Compared with the prior art, the edge computing service migration method provided by this embodiment divides the memory pages of the application service to be migrated into normal pages and key pages according to modification frequency, where the application service to be migrated is running in an original container on the primary edge server; the modification frequency of a key page is less than or equal to a preset frequency, and that of a normal page is greater than the preset frequency. After a migration request is received, the key pages are sent to a page migration accelerator, which compresses them and transmits the compressed key pages to the destination edge server. The page migration accelerator is independent of the primary edge server's CPU (central processing unit) and performs the compression of the transmitted data on its own. Because re-creating the frequently modified normal pages takes less time than compressing and transmitting them directly, live migration of the edge computing service transmits only the rarely modified key pages, which are further compressed by the independently provided page migration accelerator. Short downtime and a short re-establishment time on the destination machine can thus be achieved even when CPU resources are limited, WAN bandwidth is limited, and a large application modifies memory quickly, realizing seamless migration.
To guarantee the correctness of the application service after it restarts on the destination edge server, the breakpoint of the application service must be rolled back to a point in time at which all of the previous round's normal pages have become invalid and the current round's normal pages have not yet begun to be generated.
To implement this rollback, borrowing the concept of transactions from non-volatile memory, on the basis of the above embodiment the method further includes, before step S102:
dividing the code of the application service to be migrated into transactional code segments and non-transactional code segments, where a transactional code segment is a code segment that is not allowed to create or modify key pages directly (it may only create or modify normal pages), and a non-transactional code segment is a code segment that is allowed to create or modify key pages directly;
and, after the current transactional operation of the application service to be migrated finishes, writing the result data produced by the transactional code segments of the application service into a key page.
In practice, an API is provided to guarantee that if a transaction fails or is interrupted, all the memory-page changes it involves are rolled back automatically (to the state before the transaction began).
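A hedged sketch of that rollback guarantee follows; the context-manager interface is illustrative, since the patent does not specify this API:

```python
# Illustrative sketch of the rollback guarantee: if a transaction fails or is
# interrupted, every memory-page change it made is undone automatically.
# The API names here are assumptions, not the patent's interface.

class PageStore:
    def __init__(self):
        self.pages = {}                 # addr -> page bytes

class Transaction:
    def __init__(self, store):
        self.store = store
    def __enter__(self):
        self.snapshot = dict(self.store.pages)   # state before the transaction
        return self.store
    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:                  # failure or interruption:
            self.store.pages = self.snapshot      # roll every page change back
            return True                           # suppress error after rollback
        return False                              # success: changes are kept

store = PageStore()
store.pages[0x1] = b"stable"
with Transaction(store) as s:
    s.pages[0x1] = b"half-written"
    raise RuntimeError("interrupted mid-transaction")
print(store.pages[0x1])   # → b'stable'  (rolled back)
```

A successful transaction, by contrast, keeps its page changes, giving the atomicity the passage requires.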
Specifically, FIG. 5 illustrates an example of the programming model in the hybrid-page model of the application, demonstrating how the pseudocode of the specific application YOLO-v3 is divided into transactional and non-transactional code segments. When an application performs object detection on multiple pictures, the flow is as follows:
B1. Configure the YOLO-v3 model and import the trained weight files.
B2. For each picture to undergo object detection, create a transaction; an API (application programming interface) is provided to create and maintain the transaction and to guarantee its ACID properties (atomicity, consistency, isolation, durability).
Specifically, a transaction is divided into three phases: start, execution, and write-back.
B2a. The start phase of the transaction: the contents of the registers are saved to a designated location in the crucial pages, and the contents of the cache are written back to memory. This phase is atomic with respect to checkpointing, i.e., if a checkpoint request is received during this phase, it is executed only after the phase finishes.
B2b. The execution phase of the transaction: the transaction is executed. Externally declared crucial pages are read-only during the execution phase; normal pages declared outside or inside the transaction are readable and writable (note that no guarantee is made about the initial values, at the start of the transaction, of normal pages declared outside it).
When the program requests to write to an external crucial page inside a transaction, a normal page is automatically allocated inside the transaction and acts in place of the original crucial page for the rest of the execution phase; read and write operations on the original crucial page are redirected to this normal page until the execution phase ends, and in the write-back phase the content of the automatically allocated normal page is written back to the original crucial page. When the program tries to write an entire normal page into a crucial page, the normal page requiring automatic allocation is allocated in a copy-on-write manner to save memory. When a keep-running checkpoint request is received during the execution phase, the page migration accelerator generates an image from the crucial pages at that moment and the register contents saved in the start phase, then deduplicates, compresses, and sends it while the original transaction continues executing as usual.
B2c. The write-back phase of the transaction: this phase performs the write operations to the external crucial pages. There are two kinds of write operation: the automatically allocated write-back and the deliberate write-back.
The automatically allocated write-back corresponds to the handling, described in B2b, of a program writing to an external crucial page inside the transaction: in this phase the data temporarily stored in the automatically allocated normal pages is written back to the original crucial pages, after which the automatically allocated normal pages are destroyed, so the whole process of writing crucial pages is transparent to the user.
For users who understand the programming model well and wish to avoid the memory waste inevitably caused by automatically allocated normal pages, a deliberate write-back operation in this final phase after the execution phase is provided; such users can schedule memory themselves so that writes to the external crucial pages are performed only at the end. Like the start phase B2a, this phase is atomic with respect to checkpointing, i.e., if a checkpoint request is received during this phase, it waits until the phase finishes.
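The automatic shadow-page mechanism described above can be sketched as follows; all names are assumptions made for illustration:

```python
# Illustrative sketch of the shadow-page mechanism: inside a transaction,
# a write to an external crucial page is redirected to an auto-allocated
# normal page; the write-back phase copies it to the crucial page and
# destroys the shadow. Names here are assumed, not the patent's API.

class HybridPages:
    def __init__(self, crucial):
        self.crucial = dict(crucial)    # externally declared crucial pages
        self.shadows = {}               # crucial addr -> temporary normal page

    def read(self, addr):
        # a shadow, if present, stands in for the original crucial page
        return self.shadows.get(addr, self.crucial.get(addr))

    def write(self, addr, data):
        # crucial pages are read-only during execution: redirect to a shadow
        self.shadows[addr] = data       # copy-on-write normal page

    def write_back(self):
        # B2c: install shadow contents into crucial pages, then free shadows
        self.crucial.update(self.shadows)
        self.shadows.clear()

mem = HybridPages({0x10: b"old-result"})
mem.write(0x10, b"new-result")          # execution phase: crucial page untouched
assert mem.crucial[0x10] == b"old-result"
mem.write_back()                         # write-back phase
print(mem.crucial[0x10])   # → b'new-result'
```

Because the crucial page itself stays untouched until write-back, a checkpoint taken mid-transaction captures a consistent pre-transaction state.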
The object detection of each picture is treated as one transaction, and the operations corresponding to its three phases are as follows:
B2a. The start phase of the transaction: this phase does not vary from transaction to transaction.
B2b. The execution phase of the transaction: first the picture data in the corresponding crucial page is read, then the relevant preprocessing is applied to the picture, and the picture is propagated forward through the model configured in B1. Apart from reading the model parameters held in crucial pages, this phase stores intermediate data only in normal pages. The result data obtained from forward propagation can be written to the external crucial pages using either the automatically allocated write-back or the deliberate write-back method.
With the automatically allocated write-back, the return value of model.forward(img) in the transactional code segment is assigned directly to a crucial page address. With the deliberate write-back, a normal page inside the transaction must be requested to temporarily hold the return value of model.forward(img).
B2c. The write-back phase of the transaction: with the automatically allocated write-back, no extra code is needed in this phase, as the automatically allocated normal pages are written back automatically; with the deliberate write-back, the return value of model.forward(img), temporarily stored in the normal pages the user requested, must be written back to the crucial page manually.
B3. If the picture processed in B2 was the last picture, the program ends; otherwise, jump to B2 and process the next picture.
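The B1-B3 flow can be sketched with a stub model; every name below is illustrative, and model.forward is stood in by a lambda rather than a real YOLO-v3 implementation:

```python
# Sketch of the B1-B3 flow: each picture's detection is one transaction whose
# result reaches a crucial page only in the write-back phase. The model is a
# stub; all names are illustrative, not the patent's code.

def configure_model():
    # B1: configure YOLO-v3 and import trained weights (crucial pages,
    # read-only inside transactions); stand-in for model.forward(img).
    return lambda img: f"boxes({img})"

def detect_all(pictures):
    forward = configure_model()
    results = {}                      # stands in for the result crucial pages
    for img in pictures:              # B3: loop until the last picture
        # B2a start: save registers, flush cache (elided in this sketch)
        temp = forward(img)           # B2b execute: writes only normal pages
        results[img] = temp           # B2c write-back to the crucial page
    return results

print(detect_all(["img0", "img1"]))
# → {'img0': 'boxes(img0)', 'img1': 'boxes(img1)'}
```

Each loop iteration maps onto one transaction, so a checkpoint taken between iterations sees only completed, written-back results.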
By the embodiment, the correctness of the application service after the restart on the target edge server can be ensured.
In the foregoing embodiment, an edge computing service migration method is provided, and correspondingly, an edge computing service migration apparatus is also provided in the present application. The edge computing service migration apparatus provided in the embodiment of the present application may implement the edge computing service migration method, and the edge computing service migration apparatus may be implemented by software, hardware, or a combination of software and hardware. For example, the edge computing service migration apparatus may include integrated or separate functional modules or units to perform the corresponding steps in the methods described above. Please refer to fig. 6, which illustrates a schematic diagram of an edge computing service migration apparatus according to some embodiments of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 6, the edge computing service migration apparatus 10 may include:
the page dividing module 101 is configured to divide a memory page of an application service to be migrated into a normal page and a key page according to a modification frequency, where the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency;
the sending module 102 is configured to send the key page to a page migration accelerator after receiving the migration request, so that the page migration accelerator compresses the key page and transmits the compressed key page to a destination edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
In some implementations of embodiments of the present application, the normal page and the key page are located in different address segments in a virtual page table.
In some implementations of embodiments of the present application, the apparatus 10 further comprises:
a code division module configured to:
divide the code of the application service to be migrated into a transactional code segment and a non-transactional code segment after the sending module receives the migration request and before the sending module sends the key page to the page migration accelerator, where the transactional code segment refers to a code segment in which a key page is not allowed to be directly created or modified, and the non-transactional code segment refers to a code segment in which a key page is allowed to be directly created or modified; and
write result data produced by running the transactional code segment of the application service to be migrated into a key page after the current transaction operation of the application service to be migrated ends.
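One illustrative way to realize this staging discipline (all names here are assumptions for the sketch, not the patent's interfaces): transactional code writes only scratch state in normal pages, and a commit step moves the result into a key page after the transaction ends:

```python
# Illustrative staging discipline (helper names are assumptions).
key_pages = {"result_log": []}  # low modification frequency; checkpointed
normal_pages = {}               # scratch state; never checkpointed

def transactional_segment(frame):
    # Transactional code: must not touch key_pages directly;
    # stage the result in a normal page instead.
    normal_pages["staged"] = f"detections({frame})"

def commit():
    # After the transaction ends: move the staged result into a key page.
    key_pages["result_log"].append(normal_pages.pop("staged"))

for frame in ("img0", "img1"):
    transactional_segment(frame)
    commit()

assert key_pages["result_log"] == ["detections(img0)", "detections(img1)"]
assert "staged" not in normal_pages
```

Because key pages only change at commit points, a checkpoint taken between transactions always sees a consistent key-page state.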
In some implementations of embodiments of the present application, the apparatus 10 further comprises:
a rebuild service module configured to:
establish a new container in the destination edge server after the transmission of the key page is finished; and
run the application service to be migrated in the new container to regenerate the normal pages, and fill the key pages into the new container.
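A toy sketch of the rebuild step (the Container class and its methods are hypothetical, not the patent's interfaces): the transmitted key pages are filled into the new container, and a fresh run of the service regenerates the normal pages locally so they never cross the network:

```python
# Toy rebuild on the destination edge server (class and method names
# are hypothetical, not the patent's actual interfaces).
class Container:
    def __init__(self):
        self.key_pages = {}
        self.normal_pages = {}

    def fill_key_pages(self, transmitted):
        # Key pages arrive from the source server; copy them in as-is.
        self.key_pages.update(transmitted)

    def run_service(self):
        # Re-running the service regenerates the normal (scratch) pages,
        # so they never need to be transmitted over the network.
        self.normal_pages["scratch"] = b"recomputed"

transmitted_key_pages = {"weights": b"model-weights"}
new_container = Container()        # establish a new container
new_container.fill_key_pages(transmitted_key_pages)
new_container.run_service()        # normal pages regenerated locally
assert new_container.key_pages["weights"] == b"model-weights"
assert new_container.normal_pages["scratch"] == b"recomputed"
```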
The edge computing service migration apparatus 10 provided in this embodiment of the present application shares the same inventive concept as the edge computing service migration method provided in the foregoing embodiments, and provides the same beneficial effects.
An embodiment of the present application further provides an electronic device corresponding to the edge computing service migration method provided in the foregoing embodiments, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, performs the edge computing service migration method provided in any of the foregoing embodiments of the present application.
The electronic device provided in this embodiment of the present application shares the same inventive concept as the edge computing service migration method provided in the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs, or implements.
An embodiment of the present application further provides a computer-readable medium corresponding to the edge computing service migration method provided in the foregoing embodiments, storing a computer program which, when executed by a processor, performs the edge computing service migration method provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided in the above embodiment of the present application shares the same inventive concept as the edge computing service migration method provided in the embodiments of the present application, and has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the scope of the technical solutions of the present application, which should be construed as being covered by the claims and the specification.

Claims (10)

1. An edge computing service migration method, comprising:
dividing a memory page of an application service to be migrated into a normal page and a key page according to a modification frequency, wherein the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency;
after receiving a migration request, sending the key page to a page migration accelerator so that the page migration accelerator compresses the key page and transmits the compressed key page to a target edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
2. The method of claim 1, wherein the normal page and the key page are located in different address segments in a virtual page table.
3. The method of claim 1, wherein after the migration request is received and before the key page is sent to the page migration accelerator, the method further comprises:
dividing the code of the application service to be migrated into a transactional code segment and a non-transactional code segment; the transactional code segment refers to a code segment which does not allow a key page to be directly created or modified, and the non-transactional code segment refers to a code segment which allows a key page to be directly created or modified;
and after the current transaction operation of the application service to be migrated is finished, writing result data of the operation of the transactional code segment in the application service to be migrated into a key page.
4. The method of claim 1, further comprising:
after the transmission of the key page is finished, stopping the original container, and establishing a new container in the target edge server;
and running the application service to be migrated in the new container to regenerate the normal pages, and filling the key pages into the new container.
5. An edge computing service migration apparatus, comprising:
the page dividing module is used for dividing a memory page of the application service to be migrated into a normal page and a key page according to the modification frequency, wherein the application service to be migrated is running in an original container of a primary edge server; the modification frequency of the key page is less than or equal to the preset frequency, and the modification frequency of the normal page is greater than the preset frequency;
the sending module is used for sending the key page to a page migration accelerator after receiving the migration request, so that the page migration accelerator compresses the key page and transmits the compressed key page to a target edge server;
the page migration accelerator is independent of a CPU (central processing unit) of the primary edge server and independently performs compression processing on transmission data.
6. The apparatus of claim 5, wherein the normal page and the key page are located in different address segments in a virtual page table.
7. The apparatus of claim 5, further comprising:
a code division module configured to:
divide the code of the application service to be migrated into a transactional code segment and a non-transactional code segment after the sending module receives the migration request and before the sending module sends the key page to the page migration accelerator, where the transactional code segment refers to a code segment in which a key page is not allowed to be directly created or modified, and the non-transactional code segment refers to a code segment in which a key page is allowed to be directly created or modified; and
write result data produced by running the transactional code segment of the application service to be migrated into a key page after the current transaction operation of the application service to be migrated ends.
8. The apparatus of claim 5, further comprising:
a rebuild service module configured to:
establish a new container in the target edge server after the transmission of the key page is finished; and
run the application service to be migrated in the new container to regenerate the normal pages, and fill the key pages into the new container.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor executes the computer program to implement the method according to any of claims 1 to 4.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 4.
CN201911120829.4A 2019-11-15 2019-11-15 Edge computing service migration method and device, electronic equipment and medium Active CN110990133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911120829.4A CN110990133B (en) 2019-11-15 2019-11-15 Edge computing service migration method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN110990133A true CN110990133A (en) 2020-04-10
CN110990133B CN110990133B (en) 2022-11-04

Family

ID=70084626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911120829.4A Active CN110990133B (en) 2019-11-15 2019-11-15 Edge computing service migration method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN110990133B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150261581A1 (en) * 2012-11-30 2015-09-17 Huawei Technologies Co., Ltd. Method, apparatus, and system for implementing hot migration of virtual machine
CN106101211A (en) * 2016-06-08 2016-11-09 西安电子科技大学 A kind of carrier wave emigration method rewriting probabilistic forecasting based on page
US9880870B1 (en) * 2015-09-24 2018-01-30 Amazon Technologies, Inc. Live migration of virtual machines using packet duplication
CN108279969A (en) * 2018-02-26 2018-07-13 中科边缘智慧信息科技(苏州)有限公司 Stateful service container thermomigration process based on memory compression transmission
CN110351336A (en) * 2019-06-10 2019-10-18 西安交通大学 A kind of edge service moving method based on docker container

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880430A (en) * 2020-08-27 2020-11-03 珠海格力电器股份有限公司 Control method and device for intelligent household equipment
WO2022061587A1 (en) * 2020-09-23 2022-03-31 西门子股份公司 Edge computing method and system, edge device, and control server
CN116349216A (en) * 2020-09-23 2023-06-27 西门子股份公司 Edge computing method and system, edge device and control server
US11803413B2 (en) 2020-12-03 2023-10-31 International Business Machines Corporation Migrating complex legacy applications
CN113556727A (en) * 2021-07-19 2021-10-26 中国联合网络通信集团有限公司 Data transmission method and system of cloud equipment based on mobile container
CN113556727B (en) * 2021-07-19 2022-08-23 中国联合网络通信集团有限公司 Data transmission method and system of cloud equipment based on mobile container

Also Published As

Publication number Publication date
CN110990133B (en) 2022-11-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200826

Address after: Room 101, building 1, block C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Weiming Information Technology Co.,Ltd.

Applicant after: Institute of Information Technology, Zhejiang Peking University

Address before: Room 288-1, 857 Xinbei Road, Ningwei Town, Xiaoshan District, Hangzhou City, Zhejiang Province

Applicant before: Institute of Information Technology, Zhejiang Peking University

Applicant before: Hangzhou Weiming Information Technology Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200410

Assignee: Zhejiang smart video security Innovation Center Co.,Ltd.

Assignor: Institute of Information Technology, Zhejiang Peking University

Contract record no.: X2022330000930

Denomination of invention: Edge computing service migration method, device, electronic equipment and media

Granted publication date: 20221104

License type: Common License

Record date: 20221229