CN116192872A - Method, system, electronic device and storage medium for accelerating supply of edge environment container - Google Patents


Info

Publication number
CN116192872A
CN116192872A (Application CN202211631568.4A)
Authority
CN
China
Prior art keywords
request
edge
mirror image
layer
mirror
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211631568.4A
Other languages
Chinese (zh)
Inventor
方维维
张昊
王文光
路红英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN202211631568.4A priority Critical patent/CN116192872A/en
Publication of CN116192872A publication Critical patent/CN116192872A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The invention provides a method, a system, an electronic device, and a storage medium for accelerated container provisioning in an edge environment. The provisioning method comprises the following steps: S1, the Docker Daemon generates a download request; S2, the DownloadProxy intercepts the download request; S3, querying whether the requested file is an image manifest; if yes, the request is forwarded directly to the cloud image registry; if not, a request to query the location of the image layer file is sent to the central node according to the requested SHA256 value, the returned location is received, and the download request is rebuilt; S4, querying whether the image layer file is cached on an edge node; if not, the request is forwarded directly to the cloud image registry; if yes, the download request is forwarded to the corresponding edge node; S5, the download result is forwarded to the Docker Daemon. By pooling the storage resources of all edge nodes in a distributed cache, the scheme expands the range of content that can be cached and thereby speeds up image downloading.

Description

Method, system, electronic device and storage medium for accelerating supply of edge environment container
Technical Field
The invention belongs to the technical field of edge computing, and in particular relates to a method, a system, an electronic device, and a storage medium for accelerated container provisioning in an edge environment.
Background
Since its introduction, edge computing has received attention from both academia and industry as a new computing model. Because edge node resources are limited, container technology integrates naturally with edge computing; the high-cohesion, low-coupling nature of containers also suits edge environments well, which has made containerization a de facto standard for edge computing platforms. Combining edge computing with container technology offers the following advantages: 1) services can be rapidly deployed or terminated in the edge environment; 2) containers provide convenient service migration and service discovery mechanisms for managing services on edge devices; 3) containers improve service fault tolerance and enhance the availability and reliability of edge nodes; 4) the edge side can cache container images and required data, improving the overall performance of edge services.
However, while container technology provides a service deployment solution for edge computing, it also introduces new problems. A container is created from a corresponding image file, whose size ranges from tens of megabytes to more than a gigabyte. Because the network bandwidth of edge nodes is limited, downloading images incurs considerable delay, which degrades the Quality of Service (QoS).
Various schemes have been proposed to speed up container provisioning by accelerating the download of service images. The schemes proposed so far all require customized images or a source-level redesign of the download mechanism of the application container engine (Docker). For example, a mechanism named FogDocker improves image download speed by building the base files of the image layers into a special base layer and modifying Docker's container deployment process; but because it requires a customized modification of every image, it is hard to apply in edge environments, which are closer to end users than the public cloud. Another container deployment acceleration scheme, DockerPI, uses multithreading to speed up decompression after an image is downloaded and parallelizes the otherwise serial download, decompress, and write-to-disk stages of image acquisition; however, it modifies Docker at the source-code level and thus breaks Docker's integrity.
Such non-universal acceleration strategies are difficult to popularize in practice. In addition, conventional distributed storage strategies mainly target single files such as video and data, and do not account for the important property that image layers are reused across images. How to accelerate image downloading at the edge without changing the original organization of Docker and its images is therefore a key problem to be solved.
Disclosure of Invention
The invention provides a method, a system, an electronic device, and a storage medium for accelerated container provisioning in an edge environment, so that image downloading is accelerated at the edge without changing the original organization of Docker and its images.
To solve the above technical problem, a first aspect of the present invention provides an accelerated provisioning method for edge environment containers, comprising: S1, the application container engine's native daemon (Docker Daemon) generates a download request; S2, a download agent (DownloadProxy) intercepts the download request; S3, querying whether the file of the download request is an image's description list (manifest); if yes, the request is forwarded directly to the cloud image registry; if not, a request to query the location of the image layer file is sent to the central node according to the requested SHA256 value, the returned location is received, and the download request is rebuilt; S4, querying whether the image layer file is cached on an edge node; if not, the request is forwarded directly to the cloud image registry; if yes, the download request is forwarded to the corresponding edge node; S5, the download result is forwarded to the Docker Daemon.
In some exemplary embodiments, step S1 specifically includes: S101, after receiving a request to build a container, the Docker Daemon searches its own cache to determine whether the container's image manifest already exists locally; if yes, the Docker Daemon obtains the image manifest directly; if not, the Docker Daemon generates a GET request for the corresponding image's manifest and sends it to the cloud image registry to obtain the manifest. S102, after receiving the returned manifest, the Docker Daemon deserializes it and iterates over its fsLayers field; using the blobSum digest of each fsLayers entry as the SHA256 value that uniquely identifies an image layer, the Docker Daemon checks whether a reusable layer file already exists among the local image layer files; if yes, the Docker Daemon reuses the local layer file directly; if not, the Docker Daemon generates a download request for each layer file that does not exist locally and sends it to the cloud image registry.
In some exemplary embodiments, step S2 specifically includes: the DownloadProxy intercepts the download request, generates a corresponding cache-location query request from the SHA256 value carried in the download request, and forwards the query to the task processing module of the central node.
In some exemplary embodiments, step S3 specifically includes: S301, the task processing module of the central node receives the cache-location query request and forwards it to an open-source lightweight distributed key-value store (the ETCD database) to determine whether location information for the image layer exists; the ETCD database returns the result to the task processing module of the central node. The ETCD database stores, as key-value pairs, the IDs of all edge devices holding the image layer; the cache location of the image layer is stored under the database's location path. S302, the task processing module of the central node returns the result to the DownloadProxy; if location information for the image layer is found in the ETCD database, the Key-Value pair is returned to the DownloadProxy; if not, a null value is returned to the DownloadProxy.
In some exemplary embodiments, step S4 specifically includes: when the DownloadProxy of an edge node receives the response message, if the response is a null value, the DownloadProxy forwards the download request directly to the cloud image registry and delivers the obtained image layer file to the Docker Daemon; if the response is not empty, the DownloadProxy builds a new download request and sends it to the corresponding edge node to obtain the layer's tar.gz archive cached in the edge cluster.
In some exemplary embodiments, step S5 specifically includes: after receiving the tar.gz archive of the requested layer, the Docker Daemon decompresses it using Docker's original decompression mechanism and file system, mounts the layers at the same mount point, and finally assembles the complete image.
A second aspect of the invention provides an edge environment container accelerated provisioning system, comprising: a cloud storage layer, which serves as the centralized image registry of the whole cluster and stores all images in the edge system; a redirection layer, deployed on the central node, which stores the correspondence between edge devices and the cache layer in the edge cluster and centrally manages the placement of cached image layers; and a local cache layer, deployed on the cache nodes, which stores the actually cached files, intercepts Docker's image download requests during image downloading, and redirects those requests according to the image layer location information forwarded by the redirection layer.
In some exemplary embodiments, the redirection layer includes the ETCD, a cache placement module, and a download task processing module. The ETCD is deployed on the central node and serves as an open-source lightweight distributed key-value store holding the storage locations of image layers. The cache placement module embeds a cache placement algorithm that computes the cache locations of image layers: it calculates the download cost of cached files for the whole edge cluster from node demands and inter-node network conditions, and minimizes this cost by distributed solving. The download task processing module is communicatively connected to the ETCD; it receives download requests forwarded by edge nodes, reads the corresponding file cache location from the ETCD, and returns it to the requesting node. The local cache layer comprises the Docker Daemon, the DownloadProxy, and a local repository (Local Registry). The Docker Daemon is Docker's native daemon; it receives user requests, downloads the corresponding images, and builds the corresponding containers from them. The DownloadProxy is deployed on the edge nodes as a container; it intercepts the Docker Daemon's image download requests, asks the download task processing module for the storage location of the corresponding image layer, and redirects the download request to the cloud image registry or an edge cache node according to the returned result. The Local Registry serves as a small image registry storing distributed layer data on edge devices.
A third aspect of the present invention provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the edge environment container accelerated provisioning method described above.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the edge environment container accelerated provisioning method described above.
The technical scheme provided by the invention has at least the following advantages:
the invention provides a method, a system, an electronic device, and a storage medium for accelerated container provisioning in an edge environment. Compared with conventional single-node caching, the scheme pools the storage resources of all edge nodes through distributed caching, expanding the range of content that can be cached and thereby further accelerating image downloading.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, which are not to be construed as limiting the embodiments unless specifically indicated otherwise.
Fig. 1 is an architecture diagram of the Image Accelerated Download Scheme based on Distributed Cache (IADSDC) proposed in the present invention;
FIG. 2 is a flow chart of IADSDC processing an image download request according to the present invention;
FIG. 3 is a schematic diagram of the response flow of IADSDC processing an image download request according to the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
As noted in the background, in the prior art edge nodes are limited by network bandwidth and cannot quickly download the required images from the cloud image registry; moreover, the storage capacity of edge nodes is limited, so only a very small fraction of image files can be cached locally on a node.
To solve the slow image download problem described in the background while keeping the scheme universal, the inventors designed IADSDC, an image accelerated download scheme based on distributed caching, built on Docker's original image download flow. The scheme adopts a centralized design: edge nodes are divided into a central node and cache nodes, the cache nodes store images at layer granularity, and the central node manages the cache metadata. All components are deployed on edge nodes as containers, and Docker's code is not modified, achieving genuine high cohesion and low coupling with plug-and-play deployment. The scheme addresses both accelerating container image downloads in edge computing and placing image caches on edge clusters with limited storage space.
Embodiments of the present application are described in detail below with reference to the accompanying drawings. As those of ordinary skill in the art will appreciate, numerous technical details are set forth in the embodiments to aid understanding of the present application; the claimed technical solutions can nevertheless be implemented without these details, and with various changes and modifications, based on the following embodiments.
As shown in Figs. 1 and 3, a first aspect of the present invention provides an accelerated provisioning method for edge environment containers, comprising:
S1, the Docker Daemon generates a download request.
S2, the DownloadProxy intercepts the download request.
S3, querying whether the file of the download request is an image manifest;
if yes, the request is forwarded directly to the cloud image registry;
if not, a request to query the layer's location is sent to the central node according to the requested SHA256 value, the returned location is received, and the download request is rebuilt.
S4, querying whether the layer is cached on an edge node; if not, the request is forwarded directly to the cloud image registry; if yes, the download request is forwarded to the corresponding edge node. S5, the download result is forwarded to the Docker Daemon.
In some exemplary embodiments, step S1 specifically includes:
S101, after receiving a request to build a container, the Docker Daemon searches its own cache to determine whether the container's image manifest already exists locally;
if yes, the Docker Daemon obtains the image manifest directly;
if not, the Docker Daemon generates a GET request for the corresponding image's manifest and sends it to the cloud image registry to obtain the manifest.
S102, after receiving the returned manifest, the Docker Daemon deserializes it and iterates over its fsLayers field; using the blobSum digest of each fsLayers entry as the SHA256 value that uniquely identifies an image layer, the Docker Daemon checks whether a reusable layer file already exists among the local image layer files;
if yes, the Docker Daemon reuses the local layer file directly;
if not, the Docker Daemon generates a download request for each layer file that does not exist locally and sends it to the cloud image registry.
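The layer-reuse check in S101–S102 can be sketched as follows. This is an illustrative Python fragment, not the patent's implementation; the manifest shape follows Docker's image manifest schema 1, in which each `fsLayers` entry carries a `blobSum` digest:

```python
import json

def layers_to_download(manifest_json: str, local_digests: set) -> list:
    """Return digests of manifest layers that are not cached locally."""
    manifest = json.loads(manifest_json)        # deserialize the returned manifest
    needed = []
    for entry in manifest.get("fsLayers", []):  # iterate over the fsLayers field
        digest = entry["blobSum"]               # SHA256 digest identifying the layer
        if digest not in local_digests:         # reusable local copy? then skip it
            needed.append(digest)
    return needed

manifest = json.dumps({"fsLayers": [{"blobSum": "sha256:aaa"},
                                    {"blobSum": "sha256:bbb"}]})
print(layers_to_download(manifest, {"sha256:aaa"}))  # ['sha256:bbb']
```

Only the missing layer digests then give rise to download requests; locally present layers are reused as-is.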
In some exemplary embodiments, step S2 specifically includes:
the DownloadProxy intercepts the download request, generates a corresponding cache-location query request from the SHA256 value carried in the download request, and forwards the query to the task processing module of the central node.
In some exemplary embodiments, step S3 specifically includes:
S301, the task processing module of the central node receives the cache-location query request and forwards it to the ETCD database to determine whether location information for the image layer exists; the ETCD database returns the result to the task processing module of the central node. The ETCD database stores, as key-value pairs, the IDs of all edge devices holding the image layer; the cache location of the image layer is stored under the database's location path.
S302, the task processing module of the central node returns the result to the DownloadProxy;
if location information for the image layer is found in the ETCD database, the Key-Value pair is returned to the DownloadProxy;
if not, a null value is returned to the DownloadProxy.
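Steps S301–S302 amount to a key-value lookup. In the sketch below a plain dict stands in for the ETCD database, and the key layout (`/layers/<digest>` mapping to an edge-device ID) is a hypothetical example, not the patent's actual schema:

```python
# A dict stands in for the ETCD key-value store; keys and node IDs are made up.
etcd = {"/layers/sha256:aaa": "edge-node-2"}

def lookup_layer_location(digest: str):
    """Return the (key, value) pair if the layer's cache location is
    recorded, or None (the 'null value' of S302) if it is not."""
    key = "/layers/" + digest
    value = etcd.get(key)
    return (key, value) if value is not None else None

print(lookup_layer_location("sha256:aaa"))  # ('/layers/sha256:aaa', 'edge-node-2')
print(lookup_layer_location("sha256:zzz"))  # None
```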
In some exemplary embodiments, step S4 specifically includes:
when the DownloadProxy of an edge node receives the response message, if the response is a null value, the DownloadProxy forwards the download request directly to the cloud image registry and delivers the obtained image layer file to the Docker Daemon; if the response is not empty, the DownloadProxy builds a new download request and sends it to the corresponding edge node to obtain the layer's tar.gz archive cached in the edge cluster.
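The redirection decision of step S4 can be sketched as follows; the cloud registry URL, the repository name `app`, and the edge registry port are illustrative placeholders, not values from the patent:

```python
def redirect_download(digest, location,
                      cloud_url="https://registry.example.com", edge_port=5000):
    """Rebuild the download URL from the central node's answer:
    None means 'not cached at the edge', so fall back to the cloud registry;
    otherwise 'location' is the host/ID of the edge node holding the layer."""
    if location is None:
        return f"{cloud_url}/v2/app/blobs/{digest}"
    return f"http://{location}:{edge_port}/v2/app/blobs/{digest}"

print(redirect_download("sha256:aaa", None))
# https://registry.example.com/v2/app/blobs/sha256:aaa
print(redirect_download("sha256:aaa", "edge-node-2"))
# http://edge-node-2:5000/v2/app/blobs/sha256:aaa
```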
In some exemplary embodiments, step S5 specifically includes:
after receiving the tar.gz archive of the requested layer, the Docker Daemon decompresses it using Docker's original decompression mechanism and file system, mounts the layers at the same mount point, and finally assembles the complete image.
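Step S5's unpacking can be illustrated with Python's standard `tarfile` module. This is a minimal sketch of decompressing one layer archive into a directory, not Docker's actual layered-filesystem code:

```python
import io
import os
import tarfile
import tempfile

def unpack_layer(layer_targz: bytes, target_dir: str) -> list:
    """Unpack one layer's tar.gz into a directory (the daemon would then
    union-mount all unpacked layers at the same mount point)."""
    with tarfile.open(fileobj=io.BytesIO(layer_targz), mode="r:gz") as tar:
        tar.extractall(target_dir)
        return tar.getnames()

# Build a tiny in-memory layer archive and unpack it.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello\n"
    info = tarfile.TarInfo("etc/motd")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
with tempfile.TemporaryDirectory() as d:
    print(unpack_layer(buf.getvalue(), d))  # ['etc/motd']
```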
A second aspect of the invention provides an edge environment container accelerated provisioning system, comprising:
a cloud storage layer, which serves as the centralized image registry of the whole cluster and stores all images in the edge system. The registry exposes the same API as Docker Hub, so from the user's perspective downloading an image is indistinguishable from downloading from Docker Hub. Accordingly, in the cloud-edge environment a private image registry is built in the cloud with Harbor to store images for the whole edge cluster.
The redirection layer is deployed on the central node; it stores the correspondence between edge devices and the cache layer in the edge cluster and centrally manages the placement of cached image layers.
The local cache layer is deployed on the cache nodes; it stores the actually cached files, intercepts Docker's image download requests during image downloading, and redirects those requests according to the image layer location information forwarded by the redirection layer.
In some exemplary embodiments, referring to fig. 2, the redirection layer includes an ETCD, a cache placement module, and a download task processing module.
The ETCD is deployed on the central node and serves as an open-source lightweight distributed key-value store holding the storage locations of image layers.
The cache placement module embeds a cache placement algorithm that computes the cache locations of image layers: it calculates the download cost of cached files for the whole edge cluster from node demands and inter-node network conditions, and minimizes this cost by distributed solving.
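The patent does not disclose the placement algorithm itself; the sketch below only illustrates the cost term being minimized, where each requesting node downloads a layer from its cheapest source (the cloud or an edge node that caches the layer). Node names, sizes, and bandwidths are made-up numbers:

```python
def total_download_cost(layer_size_mb, demand_nodes, placement, bandwidth_mbps):
    """Sum over requesting nodes of the cheapest download time (seconds)
    for one layer. bandwidth_mbps[(src, dst)] is link bandwidth in Mbit/s;
    'cloud' is always available as a source."""
    cost = 0.0
    for node in demand_nodes:
        if node in placement:            # layer cached locally: zero cost
            continue
        sources = ["cloud"] + [n for n in placement if n != node]
        cost += min(layer_size_mb * 8 / bandwidth_mbps[(src, node)]
                    for src in sources)
    return cost

bw = {("cloud", "e1"): 10, ("cloud", "e2"): 10,
      ("e1", "e2"): 100, ("e2", "e1"): 100}
# Caching the 100 MB layer on e1 makes e2's download 10x cheaper than the cloud path.
print(total_download_cost(100, ["e1", "e2"], ["e1"], bw))  # 8.0
```

A placement algorithm would search over candidate placements to minimize this cost subject to each node's storage limit.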
The download task processing module is communicatively connected to the ETCD; it receives download requests forwarded by edge nodes, reads the corresponding file cache location from the ETCD, and returns it to the requesting node.
With continued reference to FIG. 2, the local cache layer includes the Docker Daemon, the DownloadProxy, and the Local Registry.
The Docker Daemon is the native daemon of Docker; it receives requests sent by the user, downloads the corresponding image, and constructs the corresponding container from that image. Specifically, the Daemon's main functions include image management, image building, the REST API, authentication, security, core networking, and orchestration.
The DownloadProxy is deployed on the edge node as a container; it intercepts the Docker Daemon's image download requests, asks the download task processing module for the storage location of the corresponding image layer, and redirects the download request to the cloud image repository or to an edge cache node according to the returned result.
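The DownloadProxy's routing rule described above can be sketched as follows; the hostnames, the port, and the lookup callable are illustrative assumptions:

```python
# Illustrative sketch of the DownloadProxy routing rule: manifest
# requests always go to the cloud repository; blob (layer) requests are
# resolved through the central lookup and redirected to an edge cache
# node when one exists, otherwise they fall back to the cloud.

CLOUD = "https://registry.cloud.example"  # assumed cloud repository host

def route_request(path, lookup):
    """Rewrite a Registry-v2 request path to the host that should serve it."""
    if "/manifests/" in path:
        return CLOUD + path              # manifests always come from the cloud
    digest = path.rsplit("/", 1)[-1]     # blob paths end with the sha256 digest
    nodes = lookup(digest)
    if not nodes:
        return CLOUD + path              # uncached layer: fall back to cloud
    return f"http://{nodes[0]}:5000" + path

lookup = {"sha256:abc": ["edge-1"]}.get  # simulated central lookup
```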
The Local Registry serves as a small image repository that stores the distributed edge-layer data on the edge devices. It adopts the Registry v2 image released officially by Docker, whose internal routing mechanism is the same as Docker Hub's; the component therefore exposes the same API interface as Docker Hub, and download requests need not be redesigned.
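Because the Local Registry exposes the same Registry v2 API as Docker Hub, a redirected request needs only a different host. The two v2 endpoint shapes involved can be sketched as URL builders (the repository and digest values are examples):

```python
# The Registry v2 API shared by Docker Hub and the local Registry uses
# two endpoints relevant here; since both ends implement the same
# routes, a cached request differs from a cloud request only in host.

def manifest_url(host, repo, reference):
    # GET /v2/<name>/manifests/<reference> fetches an image manifest.
    return f"{host}/v2/{repo}/manifests/{reference}"

def blob_url(host, repo, digest):
    # GET /v2/<name>/blobs/<digest> fetches a layer blob by digest.
    return f"{host}/v2/{repo}/blobs/{digest}"
```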
Referring to fig. 4, another embodiment of the present application provides an electronic device, including: at least one processor 110; and a memory 111 communicatively coupled to the at least one processor; the memory 111 stores instructions executable by the at least one processor 110, the instructions being executable by the at least one processor 110 to enable the at least one processor 110 to perform any one of the method embodiments described above.
Where the memory 111 and the processor 110 are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors 110 and the memory 111 together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 110 is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor 110.
The processor 110 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 111 may be used to store data used by processor 110 in performing operations.
Another embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, it will be understood by those skilled in the art that all or part of the steps of the methods in the embodiments described above may be implemented by a program stored in a storage medium, the program including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods in the embodiments described above. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of implementing the present application, and that various changes in form and detail may be made therein without departing from the spirit and scope of the present application; the scope of the invention shall be defined by the appended claims.

Claims (10)

1. An edge environment container accelerated supply method, characterized by comprising:
S1, an application container engine native daemon generates a download request;
S2, a download agent intercepts the download request;
S3, querying whether the file of the download request is a manifest of an image;
if yes, forwarding the request directly to a cloud image repository; if not, sending a request querying the image-layer file location to the central node according to the SHA256 value of the request, receiving the returned location, and reconstructing the download request;
S4, querying whether the image layer file is cached on an edge node; if not, forwarding the request directly to the cloud image repository; if yes, forwarding the download request to the corresponding edge node;
S5, forwarding the download result of the request to the application container engine native daemon.
2. The edge environment container accelerated supply method according to claim 1, wherein step S1 specifically comprises:
S101, after receiving a container construction request, the application container engine native daemon searches its own cache to find whether the image file of the container exists locally;
if yes, the application container engine native daemon directly obtains the manifest of the image;
if not, the application container engine native daemon generates a GET request for downloading the manifest of the corresponding image and sends it to the cloud image repository to obtain the manifest of the image;
S102, after receiving the returned manifest, the application container engine native daemon deserializes it and polls the fsLayers field therein;
the application container engine native daemon uses each fsLayers entry as the SHA256 value uniquely identifying an image layer, and searches the local image layer files for a reusable layer file;
if yes, the application container engine native daemon directly forwards the request to the cloud image repository;
if not, the application container engine native daemon generates a download request for each layer file that does not exist locally and sends the request to the cloud image repository.
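The layer-reuse check of S102 can be sketched against a schema-1-style Docker manifest, whose fsLayers field lists blobSum digests; the sample manifest data is invented:

```python
# Sketch of S102: deserialize the manifest, poll its fsLayers field, and
# generate download requests only for digests not already present
# locally. The manifest shape follows Docker's schema-1 manifests (an
# fsLayers list of blobSum entries); the digests are made up.
import json

def layers_to_download(manifest_json, local_layers):
    manifest = json.loads(manifest_json)
    needed = []
    for entry in manifest["fsLayers"]:        # poll the fsLayers field
        digest = entry["blobSum"]             # SHA256 uniquely IDs a layer
        if digest not in local_layers and digest not in needed:
            needed.append(digest)             # reuse local copies, dedup rest
    return needed

manifest = json.dumps({"fsLayers": [
    {"blobSum": "sha256:aaa"}, {"blobSum": "sha256:bbb"},
    {"blobSum": "sha256:aaa"},                # shared layers repeat in manifests
]})
todo = layers_to_download(manifest, local_layers={"sha256:bbb"})
```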
3. The edge environment container accelerated supply method according to claim 1, wherein step S2 specifically comprises:
the download agent intercepts the download request, generates a corresponding cache-location query request according to the SHA256 value carried in the download request, and forwards the query request to the task processing module of the central node.
4. The edge environment container accelerated supply method according to claim 1, wherein step S3 specifically comprises:
S301, the task processing module of the central node receives the cache-location query request and forwards it to the open-source lightweight distributed key-value store database to find whether location information of the corresponding image layer exists;
the open-source lightweight distributed key-value store database returns the corresponding result to the task processing module of the central node;
the open-source lightweight distributed key-value store database stores, as key-value pairs, the IDs of all edge devices caching an image layer, the cache position of the image layer being stored under the database's location path;
S302, the task processing module of the central node takes the result and returns it to the download agent;
if location information of the corresponding image layer is found in the open-source lightweight distributed key-value store database, the stored key-value pair data is returned to the download agent;
if location information of the corresponding image layer is not found in the open-source lightweight distributed key-value store database, a null value is returned to the download agent.
5. The edge environment container accelerated supply method according to claim 1, wherein step S4 specifically comprises:
when the download agent of an edge node receives the response message, if the response message is a null value, the download agent directly forwards the download request to the cloud image repository and delivers the obtained image layer file to the application container engine native daemon;
if the response message is not empty, the download agent constructs a new download request and sends it to the corresponding edge node, thereby obtaining the tar.gz compressed image layer file cached in the edge cluster.
6. The edge environment container accelerated supply method according to claim 1, wherein step S5 specifically comprises:
after receiving the tar.gz compressed file of the requested layer, the application container engine native daemon decompresses it in the engine's original decompression manner, mounts the file systems to the same mount point, and finally forms a complete image.
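The final assembly step can be approximated by extracting each tar.gz layer onto one directory, so that later layers overlay earlier ones; real engines use a union filesystem rather than flat extraction, so this is only an analogy:

```python
# Flat-extraction analogy for S5: each cached layer is a tar.gz archive;
# extracting them in order onto one directory lets later layers shadow
# earlier ones, approximating the union mount that forms the image.
# The in-memory layer builder exists only to keep the sketch runnable.
import io
import tarfile

def make_layer(files):
    """Build an in-memory tar.gz layer from a {name: bytes} mapping."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def assemble(layers, mount_point):
    """Decompress every layer, in order, onto the same mount point."""
    for blob in layers:
        with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
            tar.extractall(mount_point)
```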
7. An edge environment container accelerated supply system, characterized by comprising:
the cloud storage layer, serving as the centralized image repository of the whole cluster and storing all images in the edge system;
the redirection layer, deployed on the central node, for storing the correspondence between the edge devices and the cache layer in the edge cluster and centrally managing the placement of cached image layers;
the local cache layer, deployed on the cache nodes, for storing the actually cached files, intercepting image download requests when the application container engine downloads an image, and redirecting the engine's download requests according to the image-layer location information forwarded by the redirection layer.
8. The edge environment container accelerated supply system of claim 7, wherein the redirection layer comprises an open-source lightweight distributed key-value store database, a cache placement module, and a download task processing module;
the open-source lightweight distributed key-value store database is deployed on the central node and stores the storage-location information of the image layers;
the cache placement module has a built-in cache placement algorithm and is used for computing the cache position of each image layer: it calculates the cache-file download cost of the whole edge cluster from the node demands and the network conditions between nodes, and minimizes this cost by distributed solving;
the download task processing module is communicatively connected to the open-source lightweight distributed key-value store database; it receives download requests forwarded by edge nodes, reads the corresponding file cache location from the database according to each request, and returns that location to the requesting node;
the local cache layer comprises an application container engine native daemon, a download agent, and a local repository;
the application container engine native daemon is used for receiving requests sent by users, downloading the corresponding image, and constructing the corresponding container from the image;
the download agent is deployed on the edge node as a container and is used for intercepting the image download requests of the application container engine native daemon, requesting the storage location of the corresponding image layer from the download task processing module, and redirecting the download request to the cloud image repository or to an edge cache node according to the returned result;
the local repository serves as a small image repository storing the distributed edge-layer data on the edge devices.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the edge environment container accelerated supply method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the edge environment container accelerated supply method of any one of claims 1 to 7.
CN202211631568.4A 2022-12-19 2022-12-19 Method, system, electronic device and storage medium for accelerating supply of edge environment container Pending CN116192872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211631568.4A CN116192872A (en) 2022-12-19 2022-12-19 Method, system, electronic device and storage medium for accelerating supply of edge environment container

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211631568.4A CN116192872A (en) 2022-12-19 2022-12-19 Method, system, electronic device and storage medium for accelerating supply of edge environment container

Publications (1)

Publication Number Publication Date
CN116192872A true CN116192872A (en) 2023-05-30

Family

ID=86435455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211631568.4A Pending CN116192872A (en) 2022-12-19 2022-12-19 Method, system, electronic device and storage medium for accelerating supply of edge environment container

Country Status (1)

Country Link
CN (1) CN116192872A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117369953A (en) * 2023-12-08 2024-01-09 中电云计算技术有限公司 Mirror synchronization method, device, equipment and storage medium
CN117369953B (en) * 2023-12-08 2024-03-15 中电云计算技术有限公司 Mirror synchronization method, device, equipment and storage medium
CN117850980A (en) * 2023-12-25 2024-04-09 慧之安信息技术股份有限公司 Container mirror image construction method and system based on cloud edge cooperation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination