CN113190242A - Method and system for accelerated pulling of image files - Google Patents

Method and system for accelerated pulling of image files

Info

Publication number
CN113190242A
CN113190242A (application CN202110634271.2A); granted publication CN113190242B
Authority
CN
China
Prior art keywords
image file, node, converter, image, format
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110634271.2A
Other languages
Chinese (zh)
Other versions
CN113190242B (en)
Inventor
蔡锡生
王玉虎
陈明恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Softcom Power Information Technology Co ltd
Original Assignee
Hangzhou Langche Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Langche Technology Co ltd filed Critical Hangzhou Langche Technology Co ltd
Priority to CN202110634271.2A (granted as CN113190242B)
Publication of CN113190242A
Application granted
Publication of CN113190242B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/60: Software deployment
    • G06F 8/61: Installation
    • G06F 8/63: Image based installation; Cloning; Build to order
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533: Hypervisors; Virtual machine monitors
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 2009/45575: Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45595: Network integration; Enabling network access in virtual machine instances


Abstract

The application relates to a method and a system for accelerated pulling of an image file. The method comprises the following steps: a converter intercepts an image-file pull request from a node, where the request carries information about a target image; the converter determines whether a second image file of the target image exists in a second repository, and if so, sends the second image file to the node; if not, the converter fetches the image file from a first repository according to the target-image information to obtain a first image file, converts the first image file into a second format to obtain a second image file, where the second format comprises an accelerated-download format or a lazy-loading format, stores the second image file in the second repository, and sends the second image file to the node.

Description

Method and system for accelerated pulling of image files
Technical Field
The application relates to the field of computer software, and in particular to a method and a system for accelerated pulling of image files.
Background
In a container orchestration engine (Kubernetes, k8s for short) cluster, starting a container includes pulling an image file from an image repository and deploying the image from that file. In practice, this start-up process has been found to take a long time.
No effective solution has yet been proposed for the long container start-up times in the related art.
Disclosure of Invention
The embodiments of the present application provide a method and a system for accelerated pulling of an image file, so as to at least solve the problem of long container start-up times in the related art.
In a first aspect, an embodiment of the present application provides a method for accelerated pulling of an image file, the method comprising:
a converter intercepts an image-file pull request from a node, where the request carries information about a target image;
the converter determines whether a second image file of the target image exists in a second repository, and if so, sends the second image file to the node;
if not, the converter fetches an image file from a first repository according to the target-image information to obtain a first image file and converts the first image file into a second format to obtain a second image file, where the second format comprises an accelerated-download format or a lazy-loading format;
the converter stores the second image file in the second repository and sends the second image file to the node.
In some of these embodiments, the method comprises:
before the converter intercepts the node's image-file pull request, the node invokes a container runtime, and the container runtime sends the image-file pull request to the first repository;
the converter sending the second image file to the node comprises: the converter sends the second image file to the container runtime of the node.
In some of these embodiments, where the second format is the lazy-loading format, the method includes:
when a node pulls a data block of the target image, the converter sends a notification to the trigger in each node of the cluster;
after receiving the notification, a trigger determines whether a container on its node uses the target image; if so, the trigger invokes the container runtime, and the container runtime synchronizes the data block into the target image on the trigger's node.
In some of these embodiments, when a node pulls a data block of the target image, the converter registers a delayed-pull record for the data block of the target image;
when a trigger starts for the first time, the trigger fetches the delayed-pull record and invokes the container runtime, and the container runtime synchronizes the data blocks in the delayed-pull record into the target image on the trigger's node.
In a second aspect, an embodiment of the present application provides a system for accelerated pulling of an image file, the system comprising a converter, wherein:
the converter intercepts an image-file pull request from a node, where the request carries information about a target image;
the converter determines whether a second image file of the target image exists in a second repository, and if so, sends the second image file to the node;
if not, the converter fetches an image file from a first repository according to the target-image information to obtain a first image file and converts the first image file into a second format to obtain a second image file, where the second format comprises an accelerated-download format or a lazy-loading format;
the converter stores the second image file in the second repository and sends the second image file to the node.
In some embodiments, before the converter intercepts the node's image-file pull request, the node invokes a container runtime, and the container runtime sends the image-file pull request to the first repository;
the converter sending the second image file to the node comprises: the converter sends the second image file to the container runtime of the node.
In some of these embodiments, where the second format is the lazy-loading format, the system further comprises a trigger;
when a node pulls a data block of the target image, the converter sends a notification to the trigger in each node of the cluster;
after receiving the notification, a trigger determines whether a container on its node uses the target image; if so, the trigger invokes the container runtime, and the container runtime synchronizes the data block into the target image on the trigger's node.
In some of these embodiments, when a node pulls a data block of the target image, the converter registers a delayed-pull record for the data block of the target image;
when a trigger starts for the first time, the trigger fetches the delayed-pull record and invokes the container runtime, and the container runtime synchronizes the data blocks in the delayed-pull record into the target image on the trigger's node.
In a third aspect, an embodiment of the present application provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the above method for accelerated pulling of an image file.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above method for accelerated pulling of an image file.
Compared with the related art, the method for accelerated pulling of an image file provided by the embodiments of the present application intercepts, via the converter, a node's image-file pull request carrying information about a target image; the converter determines whether a second image file of the target image exists in the second repository and, if so, sends it to the node; if not, the converter fetches the image file from the first repository according to the target-image information to obtain a first image file, converts it into a second format (an accelerated-download format or a lazy-loading format) to obtain a second image file, stores the second image file in the second repository, and sends it to the node.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of an application environment of the method for accelerated pulling of an image file according to an embodiment of the present application;
FIG. 2 is a flowchart of the method for accelerated pulling of an image file according to the first embodiment of the present application;
FIG. 3 is a schematic diagram of the method for accelerated pulling of an image file according to the first embodiment of the present application;
FIG. 4 is a flowchart of the method for accelerated pulling of an image file according to the second embodiment of the present application;
FIG. 5 is a schematic diagram of the method for accelerated pulling of an image file according to the second embodiment of the present application;
FIG. 6 is a block diagram of the system for accelerated pulling of an image file according to the third embodiment of the present application;
FIG. 7 is a diagram of the internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical and scientific terms used herein have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not limit quantity and may denote the singular or the plural. The terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus comprising a list of steps or modules (elements) is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to it. References to "connected," "coupled," and the like are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, A and B together, or B alone. The character "/" generally indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
The method for accelerated pulling of an image file provided by the present application can be applied in the environment shown in FIG. 1, which is a schematic diagram of an application environment of the method according to an embodiment of the present application. As shown in FIG. 1, a container orchestration engine (k8s) cluster includes a plurality of nodes, and the first repository is a container image repository from which each node can pull image files. For example, after a user initiates a request on the k8s cluster to deploy a workload from an image, the cluster receives the request and selects one of the nodes according to a scheduling algorithm, yielding a target node. The cluster then sends the target node a request to start a container of the target image; the target node receives the request and invokes a container runtime, and the container runtime initiates a request to the first repository to pull the image file of the target image. Once the target node has obtained the target image's file through the container runtime, it can start the container and complete the image-based deployment of the workload.
FIG. 2 is a flowchart of the method for accelerated pulling of an image file according to the first embodiment of the present application. As shown in FIG. 2, the flow includes the following steps:
Step S201: a converter intercepts an image-file pull request from a node, where the request carries information about a target image, for example the target image's name;
Step S202: the converter determines whether a second image file of the target image exists in a second repository; if so, the converter sends the second image file to the node, specifically to the container runtime of the node. The second repository may be a local repository, a remote repository, or a repository provided by a cloud service provider;
Step S203: if the second repository does not contain the second image file of the target image, the converter fetches the image file from the first repository according to the target-image information to obtain a first image file, converts the first image file into the second format to obtain the second image file, stores the second image file in the second repository, and sends it to the container runtime of the node.
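The dispatch logic of steps S201 to S203 can be sketched as follows. This is a minimal illustration only: the class and method names are invented for the example, and plain dictionaries stand in for the first (origin) and second (cache) repositories.

```python
# Sketch of the converter's dispatch logic in steps S201 to S203.
# All names are illustrative, not taken from the patent.

class Converter:
    def __init__(self, first_repo, second_repo):
        self.first_repo = first_repo      # image name -> original (first) image file
        self.second_repo = second_repo    # image name -> converted (second) image file

    def handle_pull(self, image_name):
        """Intercept a node's pull request and answer with a second-format file."""
        if image_name in self.second_repo:
            return self.second_repo[image_name]       # cache hit: send directly (S202)
        original = self.first_repo[image_name]        # cache miss: fetch first file (S203)
        converted = self.convert(original)            # convert to the second format
        self.second_repo[image_name] = converted      # store for later requests
        return converted

    def convert(self, image_file):
        # Placeholder for the real format conversion (e.g. to a lazy-loading layout).
        return {"format": "lazy-load", "payload": image_file}
```

A cache hit returns immediately; a miss pays the fetch-and-convert cost once, after which every node pulling the same image takes the fast path.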
Through steps S201 to S203, and in contrast to the long container start-up times of the prior art, this embodiment introduces a converter that intercepts a node's image-file pull request and checks whether a second image file of the target image exists in the second repository. If it does, the converter sends the second image file to the node directly. Because the second repository continually accumulates common and previously used images, most images already have a cached second-format file there, so users experience a clear acceleration. Since pulling the image file accounts for roughly 76% of container start-up time, accelerating the pull shortens container start-up, solving the long start-up problem of the related art and improving container start-up speed.
In addition, if the second repository lacks the second image file of the target image, the converter fetches the first image file from the first repository, converts it into the second format, stores the result in the second repository, and sends it to the node. In the prior art, a user who wants accelerated pulls must convert the image format manually, and image conversion and storage are complex operations that raise the user's learning cost. This scheme performs the format conversion inside the converter, without changing how the user works with other images, sparing the user the conversion work and making image-based deployment more convenient.
FIG. 3 is a schematic diagram of the method for accelerated pulling of an image file according to the first embodiment of the present application. As shown in FIG. 3, the method includes the following steps:
Step S301: a user initiates a request on the container orchestration engine cluster;
Step S302: the cluster selects a node via its scheduling algorithm; the node invokes its container runtime, the container runtime initiates a request to the first repository, and the converter intercepts the request;
Step S303: the converter determines whether a second image file of the target image exists in the second repository. If so, the converter sends the second image file to the node directly; if not, the converter requests the first image file of the target image from the first repository, and the first repository returns the first image file;
Step S304: the converter converts the first image file into the second format to obtain the second image file;
Step S305: the converter stores the second image file in the second repository and returns it to the container runtime of the node.
It should be noted that the second format may be any format that increases image-file pull speed; it may be, but is not limited to, an accelerated-download format or the lazy-loading format supported by the open-source containerd project. Because only about 6.4% of an image file's content is used during container start-up, while most of the content is read gradually as the container runs afterwards, the image file can be lazily loaded through a remote snapshot during the start-up phase. After start-up, data blocks are downloaded on demand as the service requires them, and blocks that are never needed are never downloaded. This is the main principle behind the lazy-loading format.
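The on-demand principle described above can be shown with a small hypothetical sketch: a data block is fetched from the repository only on first read, so blocks the container never touches are never downloaded.

```python
# Hypothetical sketch of lazy loading. In a real setup the fetch would
# typically be a ranged read against the registry via a remote snapshotter;
# a plain callable stands in for it here.

class LazyImage:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block    # callable: block id -> block contents
        self.cache = {}                   # blocks already downloaded

    def read(self, block_id):
        if block_id not in self.cache:                    # first access triggers the download
            self.cache[block_id] = self.fetch_block(block_id)
        return self.cache[block_id]                       # later reads are local
```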
In some embodiments, FIG. 4 is a flowchart of the method for accelerated pulling of an image file according to the second embodiment of the present application. As shown in FIG. 4, when the second format is the lazy-loading format, the flow includes the following steps:
Step S401: when a node pulls a data block of the target image, the converter notifies the trigger in each node of the cluster. For example, after a container has started, a node in the k8s cluster is triggered by its service and needs to pull data block B of the target image. The node invokes its container runtime, which initiates a request to the first repository to pull data block B; the converter intercepts the request, obtains data block B from the first repository, stores it in the second repository, and returns it to the node's container runtime. The converter then notifies the trigger in each node of the cluster that a node has pulled a data block of the target image;
Step S402: after a trigger receives the notification, it determines whether a container on its node uses the target image; if so, the trigger invokes the container runtime, and the container runtime synchronizes the data block into the target image on that node.
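The fan-out of steps S401 to S402 can be sketched as follows. All names are illustrative; appending a block id to a list stands in for the container runtime's actual synchronization of the block into the local image.

```python
# Sketch of the converter-to-trigger notification fan-out (steps S401-S402).

class Trigger:
    def __init__(self, node_images):
        self.node_images = node_images    # images used by containers on this node
        self.synced_blocks = []           # blocks synchronized onto this node

    def on_notify(self, image_name, block_id):
        if image_name in self.node_images:        # a local container uses the image
            self.synced_blocks.append(block_id)   # stand-in for the runtime sync

def notify_all(triggers, image_name, block_id):
    """Converter side: fan the pulled-block notification out to every node."""
    for trigger in triggers:
        trigger.on_notify(image_name, block_id)
```

Nodes whose containers do not use the image simply ignore the notification, so only relevant nodes pay the synchronization cost.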
Considering that, once one node has pulled data block B of the target image, the containers on all other nodes running the target image will also use data block B in the future, steps S401 to S402 place a trigger in each node of the cluster. Once any node pulls a data block of the target image, the converter actively notifies all triggers, so each trigger can synchronize the data block into its local copy of the target image. All containers running the target image thus receive data block B in advance and need not pull it when their services later require it, improving container operating efficiency.
Further, a node whose trigger starts for the first time may already host containers. For this reason, whenever a node in the cluster pulls a data block of the target image, the converter also registers a delayed-pull record for that data block. Specifically, a lightweight database, a built-in component of the converter, can store the delayed-pull records. When the trigger of a node starts for the first time, it fetches the delayed-pull records from the converter and invokes the container runtime; the container runtime obtains the recorded data blocks from the second repository and synchronizes them into the target image on the trigger's node.
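A minimal sketch of the delayed-pull record, with invented names: the converter appends every pulled block to a log, and a trigger starting for the first time filters the log down to the images used on its own node.

```python
# Sketch of the delayed-pull record kept by the converter. The real
# component would back this with a lightweight database; a list suffices
# to show the mechanism.

class DelayedPullLog:
    def __init__(self):
        self.records = []                 # (image name, block id) pairs, in pull order

    def register(self, image_name, block_id):
        self.records.append((image_name, block_id))

def blocks_to_sync(log, node_images):
    """Trigger side, on first start: pick the blocks this node must synchronize."""
    return [(img, blk) for img, blk in log.records if img in node_images]
```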
FIG. 5 is a schematic diagram of the method for accelerated pulling of an image file according to the second embodiment of the present application. As shown in FIG. 5, the cluster contains four nodes, A, B, C, and D, each provided with a trigger; the trigger of node D is starting for the first time. The method includes the following steps:
Step S501: after the trigger of node D starts, it fetches the delayed-pull records from the converter; the records show that data block a of image A has been pulled. The trigger determines that a container on node D uses image A, so it invokes node D's container runtime, which obtains data block a from the second repository and synchronizes it into image A on node D. Note that step S501 may also occur during the execution of any of steps S502 to S505;
Step S502: after node A is triggered by its service, it needs to pull data block B of image A. Node A invokes its container runtime, which initiates a request to pull data block B; the converter intercepts the request, obtains data block B from the first repository, stores it in the second repository, and sends it to node A's container runtime;
Step S503: the converter registers in the delayed-pull record that image A has pulled data block B;
Step S504: the converter notifies the triggers in the cluster's nodes that the data blocks of image A in the delayed-pull record have changed;
Step S505: after receiving the notification, each trigger determines whether a container on its node uses image A; if so, the trigger invokes the container runtime;
Step S506: the container runtime obtains image A's data block B from the second repository and synchronizes it into image A on the trigger's node.
FIG. 6 is a block diagram of the system for accelerated pulling of an image file according to the third embodiment of the present application. As shown in FIG. 6, the system includes a converter 61 and a trigger 62.
The converter 61 intercepts an image-file pull request from a node, where the request carries information about a target image. The converter 61 determines whether a second image file of the target image exists in the second repository; if so, it sends the second image file to the node. If not, the converter 61 fetches the image file from the first repository according to the target-image information to obtain a first image file, converts the first image file into a second format (which comprises a lazy-loading format) to obtain a second image file, stores the second image file in the second repository, and sends it to the node.
When a node is triggered by its service and pulls a data block of the target image, the converter 61 registers a delayed-pull record for the data block and notifies the trigger 62 in each node of the cluster. After receiving the notification, a trigger 62 determines whether a container on its node uses the target image; if so, it invokes the container runtime, and the container runtime synchronizes the data block into the target image on that node.
When a trigger 62 starts for the first time, it fetches the delayed-pull records and invokes the container runtime, and the container runtime synchronizes the recorded data blocks into the target image on the node where the trigger 62 resides.
In one embodiment, an electronic device is provided, which may be a server; Fig. 7 is a schematic diagram of its internal structure according to an embodiment of the present application. As shown in Fig. 7, the electronic device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the electronic device stores data. The network interface of the electronic device connects to and communicates with external terminals over a network. The computer program, when executed by the processor, implements the method for accelerating the pulling of an image file.
Those skilled in the art will appreciate that the architecture shown in Fig. 7 is a block diagram of only part of the architecture relevant to the present application and does not limit the electronic devices to which the present application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that the features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the features of the above embodiments are described; nevertheless, any combination of these features that is not contradictory should be considered within the scope of this disclosure.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for accelerating the pulling of an image file, the method comprising:
a converter intercepts an image file pull request of a node, wherein the request carries information of a target image;
the converter determines whether a second image file of the target image exists in a second repository, and if so, the converter sends the second image file to the node;
if not, the converter acquires an image file from a first repository according to the information of the target image to obtain a first image file, and converts the format of the first image file into a second format to obtain a second image file, wherein the second format comprises an accelerated download format or a lazy loading format; and
the converter stores the second image file in the second repository and sends the second image file to the node.
2. The method according to claim 1, characterized in that the method comprises:
before the converter intercepts the image file pull request of the node, the node invokes a container runtime, and the container runtime sends the image file pull request to the first repository; and
the converter sending the second image file to the node comprises: the converter sends the second image file to the container runtime of the node.
3. The method according to claim 2, wherein if the second format is a lazy loading format, the method comprises:
when a node pulls a data block of the target image, the converter sends notification information to a trigger in each node of the cluster; and
after receiving the notification information, the trigger determines whether a container on its node uses the target image, and if so, the trigger invokes the container runtime, and the container runtime synchronizes the data block to the target image on the node where the trigger is located.
4. The method according to claim 3, wherein when a node pulls a data block of the target image, the converter registers a deferred pull record for the data block of the target image; and
when the trigger is started for the first time, the trigger acquires the deferred pull record and invokes the container runtime, and the container runtime synchronizes the data block in the deferred pull record to the target image on the node where the trigger is located.
5. A system for accelerating the pulling of an image file, the system comprising a converter, wherein:
the converter intercepts an image file pull request of a node, wherein the request carries information of a target image;
the converter determines whether a second image file of the target image exists in a second repository, and if so, the converter sends the second image file to the node;
if not, the converter acquires an image file from a first repository according to the information of the target image to obtain a first image file, and converts the format of the first image file into a second format to obtain a second image file, wherein the second format comprises an accelerated download format or a lazy loading format; and
the converter stores the second image file in the second repository and sends the second image file to the node.
6. The system according to claim 5, wherein before the converter intercepts the image file pull request of the node, the node invokes a container runtime, and the container runtime sends the image file pull request to the first repository; and
the converter sending the second image file to the node comprises: the converter sends the second image file to the container runtime of the node.
7. The system according to claim 6, wherein in the case that the second format is a lazy loading format, the system further comprises a trigger, wherein:
when a node pulls a data block of the target image, the converter sends notification information to the trigger in each node of the cluster; and
after receiving the notification information, the trigger determines whether a container on its node uses the target image, and if so, the trigger invokes the container runtime, and the container runtime synchronizes the data block to the target image on the node where the trigger is located.
8. The system according to claim 7, wherein when a node pulls a data block of the target image, the converter registers a deferred pull record for the data block of the target image; and
when the trigger is started for the first time, the trigger acquires the deferred pull record and invokes the container runtime, and the container runtime synchronizes the data block in the deferred pull record to the target image on the node where the trigger is located.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for accelerating the pulling of an image file according to any one of claims 1 to 4.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for accelerating the pulling of an image file according to any one of claims 1 to 4.
CN202110634271.2A 2021-06-08 2021-06-08 Method and system for accelerating to pull mirror image file Active CN113190242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110634271.2A CN113190242B (en) 2021-06-08 2021-06-08 Method and system for accelerating to pull mirror image file

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110634271.2A CN113190242B (en) 2021-06-08 2021-06-08 Method and system for accelerating to pull mirror image file

Publications (2)

Publication Number Publication Date
CN113190242A true CN113190242A (en) 2021-07-30
CN113190242B CN113190242B (en) 2021-10-22

Family

ID=76976180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110634271.2A Active CN113190242B (en) 2021-06-08 2021-06-08 Method and system for accelerating to pull mirror image file

Country Status (1)

Country Link
CN (1) CN113190242B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109104451A (en) * 2017-06-21 2018-12-28 阿里巴巴集团控股有限公司 The pre-heating mean and node of the method for down loading and node of Docker mirror image, Docker mirror image
US20200042353A1 (en) * 2018-08-03 2020-02-06 Virtustream Ip Holding Company Llc Management of Unit-Based Virtual Accelerator Resources
CN112883006A (en) * 2021-03-12 2021-06-01 云知声智能科技股份有限公司 Enterprise-level container mirror image acceleration method and device, electronic equipment and storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883006A (en) * 2021-03-12 2021-06-01 云知声智能科技股份有限公司 Enterprise-level container mirror image acceleration method and device, electronic equipment and storage medium
CN112883006B (en) * 2021-03-12 2024-06-11 云知声智能科技股份有限公司 Enterprise-level container mirror image acceleration method and device, electronic equipment and storage medium
CN114860344A (en) * 2022-05-26 2022-08-05 中国工商银行股份有限公司 Container starting method and device, computer equipment and storage medium
CN116302209A (en) * 2023-05-15 2023-06-23 阿里云计算有限公司 Method for accelerating starting of application process, distributed system, node and storage medium
CN116302209B (en) * 2023-05-15 2023-08-04 阿里云计算有限公司 Method for accelerating starting of application process, distributed system, node and storage medium

Also Published As

Publication number Publication date
CN113190242B (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN113190242B (en) Method and system for accelerating to pull mirror image file
US9967205B2 (en) Resource downloading method and apparatus
US20170085419A1 (en) System and method for deploying an application
CN114125028B (en) Method, apparatus, device, storage medium and program product for operating micro-application
CN110968331B (en) Method and device for running application program
CN110417785A (en) A kind of installation method, system and the storage medium of cloud mobile phone games
CN114116056A (en) Page display method and device
RU2759330C2 (en) Postponing call requests for remote objects
CN112019643B (en) Docker mirror image downloading method and system
CN112035273A (en) Hardware data acquisition method and system and computer equipment
US9875119B2 (en) Extensibility framework
O'Sullivan et al. The cloud personal assistant for providing services to mobile clients
CN114937452A (en) Service calling method and system, calling device, target device and readable storage medium
CN106293790B (en) application program upgrading method and device based on Firefox operating system
CN112883006B (en) Enterprise-level container mirror image acceleration method and device, electronic equipment and storage medium
KR20140054073A (en) Application synchronization method and program
CN112148351A (en) Cross-version compatibility method and system for application software
US20160182605A1 (en) Dynamic Content Aggregation
CN116932241A (en) Service starting method and related device
CN112328598A (en) ID generation method, device, electronic equipment and storage medium
CN112764881A (en) Method, system, computer device and storage medium for pipeline deployment mirroring
CN111880895A (en) Data reading and writing method and device based on Kubernetes platform
WO2024093409A1 (en) Service component calling method and apparatus, and computer device and storage medium
CN114327667B (en) Dynamic resource loading method and system
CN112631152B (en) AR-based application program starting method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20220727
Address after: 100094 Room 502, floor 5, building 16, East District, yard 10, northwest Wangdong Road, Haidian District, Beijing
Patentee after: Softcom power information technology (Group) Co.,Ltd.
Address before: 311100 Room 802, building 12, 1818-2, Wenyi West Road, Yuhang street, Yuhang District, Hangzhou City, Zhejiang Province
Patentee before: HANGZHOU LANGCHE TECHNOLOGY Co.,Ltd.
TR01 Transfer of patent right
Effective date of registration: 20220829
Address after: 518000 floor 2-24, building a, Zhongshe Plaza, No.1028, Buji Road, Dongxiao street, Luohu District, Shenzhen City, Guangdong Province
Patentee after: Shenzhen Softcom Power Information Technology Co.,Ltd.
Address before: 100094 Room 502, floor 5, building 16, East District, yard 10, northwest Wangdong Road, Haidian District, Beijing
Patentee before: Softcom power information technology (Group) Co.,Ltd.