CN115687420A - Mirror image warehouse distributed caching method and device - Google Patents


Info

Publication number
CN115687420A
CN115687420A
Authority
CN
China
Prior art keywords
mirror image
data center
mirror
service
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211321362.1A
Other languages
Chinese (zh)
Inventor
王晓亮
缪俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Rivtower Technology Co Ltd
Original Assignee
Suzhou Changtong Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Changtong Internet Technology Co ltd
Priority to CN202211321362.1A
Publication of CN115687420A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of this specification disclose a mirror image warehouse distributed caching method and apparatus. The method comprises: deploying a mirror image warehouse that serves a plurality of data centers, and deploying and running a cache service for each data center; in response to a request instruction from a first data center to pull a first mirror image, acquiring the corresponding first mirror image metadata from the mirror image warehouse; querying whether the first mirror image block file corresponding to the first mirror image metadata is already stored in the cache service of the first data center; when the first mirror image block file is not stored, saving it to the cache service of the first data center and sending the first mirror image to a first cluster node of the first data center; and when it is stored, sending the first mirror image to a first cluster node of the first data center, where the first mirror image comprises the first mirror image metadata and the first mirror image block file held in the cache service of the first data center. The scheme of the invention can significantly reduce traffic pressure on the mirror image warehouse, increase the mirror image pull speed, and lower network communication costs.

Description

Mirror image warehouse distributed caching method and device
Technical Field
The present disclosure relates to the field of computer software technologies, and in particular, to a distributed caching method and apparatus for a mirror repository, an electronic device, and a storage medium.
Background
The mirror image warehouse provides two key functions: storage and distribution of mirror images. Storage means pushing a mirror image into the warehouse; distribution means delivering a mirror image from the warehouse to the machines that run it. The traffic borne by distribution is far greater than that of storage, and it poses the larger challenge in real enterprise applications. The first challenge for a mirror image warehouse is coping with the pressure of massive mirror image distribution; the second is improving distribution performance at a reasonable cost. In a cross-data-center scenario, the mirror image warehouse is usually deployed globally across data centers, so how to build a highly available mirror image caching mechanism that relieves pressure on the warehouse, improves distribution performance, reduces the load on the central mirror image warehouse and the network communication cost, and speeds up mirror image pulls is an urgent technical problem to be solved.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a mirror image warehouse distributed caching method and apparatus, an electronic device, and a storage medium, in order to solve the above problem.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
in a first aspect, a mirror image warehouse distributed caching method is provided. A mirror image warehouse serving multiple data centers and providing centralized mirror image storage and distribution is deployed, where each data center includes at least one cluster node and a mirror image consists of metadata and block files. A cache service for storing block files is deployed and run for each data center. Applied to a first data center, the method comprises the following steps:
responding to a request instruction for pulling a first mirror image, and acquiring first mirror image metadata corresponding to the request instruction in the mirror image warehouse;
inquiring whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the cache service of the first data center;
when the first mirror image block file is not stored, storing the first mirror image block file to the cache service of the first data center, and sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file;
and when the first mirror image is saved, sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
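The pull flow of the first aspect can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation: the `DataCenterCache` class, the in-memory dictionaries, and the registry interface are all assumptions made for the example. The key point it demonstrates is that metadata is always resolved at the central warehouse while block files are served from the per-data-center cache.

```python
class DataCenterCache:
    """Per-data-center cache service that stores only block files (layers)."""

    def __init__(self, registry):
        self.registry = registry   # the central mirror image warehouse
        self.blocks = {}           # digest -> cached block file bytes

    def pull(self, image_name):
        # Step 1: metadata is always fetched fresh from the central warehouse,
        # so a re-pushed tag resolves to the latest version.
        metadata = self.registry.get_metadata(image_name)
        blocks = []
        for digest in metadata["layers"]:
            # Step 2: query whether the block file is already cached locally.
            if digest not in self.blocks:
                # Step 3: on a miss, fetch from the warehouse and cache it.
                self.blocks[digest] = self.registry.get_block(digest)
            blocks.append(self.blocks[digest])
        # Step 4: the mirror image sent to the cluster node is the
        # combination of metadata and block files.
        return {"metadata": metadata, "blocks": blocks}
```

A second pull of the same image touches the warehouse only for metadata; every block file is served from the data center's own cache.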
Further, deploying and running a cache service for storing block files for each data center includes setting the cache size and the cache validity period.
Further, when the first cluster node is a Kubernetes node, a container Pod that injects the address of the cache service into the system hosts file runs in the first cluster node.
Further, multiple cache service instances are deployed to the first data center using a Kubernetes Deployment, and the cache service instances are exposed through a Kubernetes Service.
Further, the first mirror image block file is distributed to the plurality of cache service instances of the first data center in a sharing mode.
Further, according to how frequently the mirror image block files in the cache service are used, infrequently used block files stored in the cache service are evicted using an LRU algorithm.
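The LRU eviction described above can be sketched with an ordered dictionary. This is a minimal sketch under the assumption of a byte-size limit; the class and method names are illustrative, not part of the patent.

```python
from collections import OrderedDict

class LRUBlockCache:
    """Evicts the least recently used block files once the configured
    cache size limit (in bytes) is exceeded."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()   # digest -> block file bytes

    def get(self, digest):
        block = self.entries.get(digest)
        if block is not None:
            self.entries.move_to_end(digest)  # mark as recently used
        return block

    def put(self, digest, block):
        if digest in self.entries:
            self.used -= len(self.entries.pop(digest))
        self.entries[digest] = block
        self.used += len(block)
        while self.used > self.max_bytes:
            # popitem(last=False) removes the least recently used entry
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)
```

Keeping the size bounded this way is what keeps the cache service in the "healthy state" the specification mentions, while frequently pulled block files stay resident.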
In a second aspect, a mirror warehouse distributed caching apparatus is provided, including:
a first module capable of deploying an image repository serving a plurality of data centers for providing image centralized storage and distribution, the data centers including at least 1 cluster node, the image including metadata and block files;
a second module capable of deploying and running, for each data center, a cache service for storing block files; applied to the first data center, the apparatus further includes:
the third module can respond to a request instruction for pulling a first mirror image, and acquire first mirror image metadata corresponding to the request instruction in the mirror image warehouse;
a fourth module capable of querying whether a first mirror block file corresponding to the first mirror metadata has been saved in the cache service of the first data center;
a fifth module, configured to, when not saved, save the first mirror image block file to the cache service of the first data center, and send the first mirror image to a first cluster node of the first data center, where the first mirror image includes the first mirror image metadata and the first mirror image block file;
a sixth module, configured to send the first mirror image to a first cluster node of the first data center when the first mirror image is saved, where the first mirror image includes the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
Further, deploying and running a cache service for storing block files for each data center includes setting the cache size and the cache validity period.
Further, when the first cluster node is a Kubernetes node, a container Pod that injects the address of the cache service into the system hosts file runs in the first cluster node.
Further, multiple cache service instances are deployed to the first data center using a Kubernetes Deployment, and the cache service instances are exposed through a Kubernetes Service.
Further, the first image block file is distributed to the plurality of cache service instances of the first data center in a sharing mode.
Further, according to how frequently the mirror image block files in the cache service are used, infrequently used block files stored in the cache service are evicted using an LRU algorithm.
In a third aspect, an electronic device is provided, which includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the mirror image warehouse distributed caching method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, which is characterized by storing one or more programs, and when the one or more programs are executed by an electronic device including a plurality of application programs, the electronic device is caused to execute the image repository distributed caching method according to the first aspect.
The specification can achieve at least the following technical effects:
according to the scheme of the invention, a block file caching service is added in a distributed mode in different data centers to cache the recently used mirror image block files, so that the mirror image is pulled from an original centralized warehouse to be pulled from the block file caching service, the network communication cost is greatly reduced, the mirror image pulling speed is increased, and the pressure on a central mirror image warehouse is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the description below are only some embodiments described in the present specification, and for those skilled in the art, other drawings may be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a mirror warehouse distributed caching scheme system according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a distributed caching method for a mirror warehouse according to an embodiment of the present disclosure.
Fig. 3 is a second schematic diagram of a distributed caching method for a mirror warehouse according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a mirror warehouse distributed caching apparatus according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without making any creative effort shall fall within the protection scope of the present specification.
Key terms
Mirror image warehouse: one of the core components of Docker technology, responsible for storing and distributing mirror image content. By scope of use, warehouses are divided into public and private: a public mirror warehouse can be used by anyone, while a private mirror warehouse is deployed inside a company or organization to store and distribute the Docker mirror images of its own applications. When building a company's internal automated release system, for security reasons the packaged application mirror images are generally stored only in a private mirror image warehouse, and the stages of the CI/CD pipeline are connected through the operations of pushing mirror images to, and pulling them from, that private warehouse.
Caching service: a technology or service that stores frequently accessed network content in a system that is closer to the user and faster to access, so as to speed up access to that content. A distributed cache can read data with high performance, dynamically scale its cache nodes, automatically discover and fail over faulty nodes, automatically rebalance data partitions, and provide a graphical management interface, making it easy to deploy and maintain. Distributed caching is widely used in the distributed systems and cloud computing fields. The cache hit rate is one of the important factors in judging how effectively access is accelerated.
A detailed description of a mirror warehouse distributed caching scheme referred to in this specification is provided below by way of specific examples.
Example one
The invention focuses on how to build a highly available mirror image caching mechanism that relieves pressure on the mirror image warehouse, improves distribution performance, reduces the load on the central warehouse and the network communication cost, and speeds up mirror image pulls. A mirror image file (Image) consists of metadata and block files. The Docker container mirror image is designed to completely separate mirror image metadata from block file storage. The mirror image metadata uses three levels, from top to bottom: repository, image, and layer. The repository level stores the mirror image's name, tag, and corresponding ID; the image level contains the mirror image's architecture, operating system, creation time, history, rootfs, and so on; the layer level corresponds to the physical block files of the mirror image layers. Under this design, developers often push to the mirror image warehouse with the same tag during actual development. If the cache service held a complete mirror image, then after a new mirror image is pushed, pulling the same tag again would not fetch the latest mirror image from the warehouse; the mirror image finally obtained would be the old version held in the cache server. It follows that if the cache service caches only the mirror image block files, and not the mirror image metadata, the latest version of the mirror image file can be obtained accurately.
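The distinction drawn above, mutable tag metadata versus immutable block files, can be made concrete: block files are content-addressed, so a cache keyed on the digest of the bytes can never serve stale data, whereas a tag is a mutable pointer that must be resolved at the warehouse on every pull. A minimal sketch (the helper name is illustrative):

```python
import hashlib

def layer_digest(block: bytes) -> str:
    # Block files are content-addressed: the cache key is a digest of the
    # bytes themselves, so a cached block can never be stale. Tags, by
    # contrast, live in the mutable metadata levels, which is why the
    # scheme caches block files but always resolves metadata at the
    # warehouse.
    return "sha256:" + hashlib.sha256(block).hexdigest()
```

Identical bytes always produce the identical cache key; re-pushing a tag changes which digests the metadata references, never what a given digest means.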
Therefore, the technical scheme of the embodiment of the invention introduces a block file cache for the mirror image warehouse. When block files are stored in the cache, the validity period and the size of the block file cache must be considered, and an LRU mechanism is used to evict part of the cached mirror image blocks. This keeps the cache service in a healthy state while avoiding the cache invalidation and cache jitter that a low cache hit rate would cause.
Referring to fig. 1, a system configuration diagram according to an embodiment of the present invention is shown. In an application scenario composed of multiple data centers, the mirror image warehouse is generally deployed globally across data centers, so the topology of the cache service must be considered first. If the cache service were also deployed globally, like the mirror image warehouse, then when a mirror image is pulled, the cluster in each data center would interact with that global cache service, and any block file not yet present would be refreshed into it. With such a topology of a global mirror image warehouse plus a global cache service, if a block file to be pulled is in the cache service but not in the same data center, fetching the mirror image may perform no better than pulling the block file directly from the warehouse. This topology has a further drawback: because the mirror image files used globally differ widely, the cache service enters a frequently refreshed state and the cache hit rate drops. Therefore, the cache service is instead deployed in each data center in a distributed manner; this narrows the search range for cached block files and avoids pulling mirror image block files across data centers, thereby improving the cache hit rate.
Fig. 2 is a schematic diagram illustrating a distributed caching method for a mirror warehouse according to an embodiment of the present invention. The method comprises the following steps:
s1: a mirror repository is deployed for providing mirror centralized storage and distribution that serves a plurality of data centers, the data centers including at least 1 cluster node, the mirror including metadata and block files.
Optionally, a caching service for saving the block files is deployed and operated corresponding to each data center, including setting the size of the caching service and the validity period of the caching service.
S2: deploying and operating a caching service for storing the block files corresponding to each data center; and the number of the first and second groups,
for any of the data centers, comprising:
s3: and responding to a request instruction for pulling the first mirror image, and acquiring first mirror image metadata corresponding to the request instruction in the mirror image warehouse.
S4: and inquiring whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the cache service of the first data center.
S5: and when the first mirror image block file is not stored, storing the first mirror image block file to the cache service of the first data center, and sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file.
Optionally, when the first cluster node is a Kubernetes node, a container Pod that injects the address of the cache service into the system hosts file runs in the first cluster node.
Specifically, as shown in fig. 3, the mirror image cache service runs inside each cluster, and its service address is an internal address of the Kubernetes cluster. By default, when the container runtime pulls a mirror image, it accesses the mirror image warehouse rather than the cache service. For the runtime to pull mirror images from the in-cluster cache service, traffic destined for the mirror image warehouse must be intercepted inside the cluster and redirected to the cache service. To this end, a Pod that injects the cache service into the hosts file can be run on each Kubernetes node. After the Pod starts, the cluster address of the cache service is injected into the hosts file, and all subsequent mirror image pulls on that host go through the cache service. Managing this Pod with a Kubernetes DaemonSet ensures that one instance runs on every node.
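The core of what such an injection Pod does to a hosts file can be sketched as a pure function. This is a simplified sketch: the hostnames and IP addresses below are invented for the example, and a real DaemonSet Pod would additionally mount and watch the node's /etc/hosts.

```python
def inject_cache_address(hosts_text: str, registry_host: str, cache_ip: str) -> str:
    """Rewrite a hosts file so the mirror image warehouse's hostname
    resolves to the in-cluster cache service address."""
    # Drop any existing mapping for the warehouse hostname.
    kept = [line for line in hosts_text.splitlines()
            if registry_host not in line.split()]
    # Append the cache service's cluster address for that hostname.
    kept.append(f"{cache_ip} {registry_host}")
    return "\n".join(kept) + "\n"
```

After this rewrite, every pull addressed to the warehouse hostname on that node lands on the cache service instead, with no change to the container runtime's configuration.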
Optionally, multiple cache service instances are deployed to the first data center using a Kubernetes Deployment and exposed through a Kubernetes Service. Specifically, every block file held by the cache service is obtained from the mirror image warehouse, so losing any one instance loses only part of the cached block files and does not affect the mirror image pull speed.
Optionally, the first image block file is distributed to the plurality of cache service instances of the first data center in a sharing manner. In particular, this approach enables each newly cached chunk file to be shared to all cache instances, thereby providing stable cache service performance.
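The sharing mode above can be sketched as replicating each newly cached block file to every instance, so that a request landing on any instance finds the block. The class below is an illustrative sketch with in-memory stores, not the patented implementation.

```python
class CacheInstancePool:
    """Shares each newly cached block file with every cache instance in
    the data center, so any instance can serve any cached block."""

    def __init__(self, n_instances: int):
        self.instances = [{} for _ in range(n_instances)]

    def add_block(self, digest: str, block: bytes) -> None:
        # Replicate the new block file to all instances (the sharing mode).
        for store in self.instances:
            store[digest] = block

    def get(self, instance_idx: int, digest: str):
        # A pull may land on any instance; all of them hold the block.
        return self.instances[instance_idx].get(digest)
```

The design trade-off is replication cost against uniform hit rate: without sharing, a request routed to an instance that has not yet cached the block degrades to a warehouse fetch.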
S6: and when the first mirror image is saved, sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
Optionally, according to how frequently the mirror image block files in the cache service are used, infrequently used block files stored in the cache service are evicted using an LRU algorithm. Specifically, since the block files in the cache service have an eviction mechanism and each cache instance is stateless, the cache service instances can be deployed through a Kubernetes Deployment and exposed through a Kubernetes Service. Running multiple cache service instances does, however, introduce a small problem: because each instance serves different mirror image requests, the block files cached by each instance differ, so when a cluster pulls a mirror image the request may land on a random instance that has not yet cached that mirror image's block files. Such an instance performs worse than one that has already cached them.
Example two
Fig. 4 is a schematic structural diagram of a mirror-image warehouse distributed caching apparatus 400 according to an embodiment of the present disclosure. Referring to fig. 4, in one embodiment, a mirror warehouse distributed caching apparatus 400 includes:
a first module 401 capable of deploying an image repository serving a plurality of data centers for providing image centralized storage and distribution, the data centers comprising at least 1 cluster node, the image comprising metadata and block files;
a second module 402, capable of deploying and running, for each data center, a cache service for storing block files; applied to the first data center, the apparatus includes:
a third module 403, configured to, in response to a request instruction for pulling a first mirror image, obtain first mirror image metadata in the mirror image warehouse, where the first mirror image metadata corresponds to the request instruction;
a fourth module 404, configured to query whether a first image block file corresponding to the first image metadata is already stored in the cache service of the first data center;
a fifth module 405, configured to save the first mirror image block file to the cache service of the first data center and send the first mirror image to a first cluster node of the first data center when the first mirror image block file is not saved, where the first mirror image includes the first mirror image metadata and the first mirror image block file;
a sixth module 406, configured to send the first mirror image to a first cluster node of the first data center when the first mirror image is saved, where the first mirror image includes the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
It should be understood that, in the embodiment of the present description, the mirror warehouse distributed cache apparatus may further perform the method performed by the mirror warehouse distributed cache apparatus (or device) in fig. 1 to 3, and implement the functions of the mirror warehouse distributed cache apparatus (or device) in the examples shown in fig. 1 to 3, which are not described herein again.
EXAMPLE III
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present specification. Referring to fig. 5, at the hardware level the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include volatile memory, such as Random-Access Memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include the hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into memory and runs it, forming the mirror image warehouse distributed caching apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
deploying an image repository serving a plurality of data centers for providing image centralized storage and distribution, the data centers including at least 1 cluster node, the image including metadata and block files;
deploying and operating a caching service for storing the block files corresponding to each data center;
responding to a request instruction for pulling a first mirror image from a first data center, and acquiring first mirror image metadata corresponding to the request instruction in the mirror image warehouse;
inquiring whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the cache service of the first data center;
when the first mirror image block file is not stored, storing the first mirror image block file to the cache service of the first data center, and sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file;
and when the first mirror image is saved, sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
The mirror image warehouse distributed caching method disclosed in the embodiments of fig. 1 to fig. 3 of this specification may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of this specification may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied directly as execution by a hardware decoding processor, or as execution by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
Of course, besides the software implementation, the electronic device of the embodiment of the present disclosure does not exclude other implementations, such as a logic device or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or a logic device.
Example four
Embodiments of the present specification also propose a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, are capable of causing the portable electronic device to perform the method of the embodiments shown in fig. 1 to 3, and in particular to perform the method of:
deploying an image repository serving a plurality of data centers for providing image centralized storage and distribution, the data centers including at least 1 cluster node, the image including metadata and block files;
deploying and operating a cache service for storing the block files corresponding to each data center;
responding to a request instruction for pulling a first mirror image from a first data center, and acquiring first mirror image metadata corresponding to the request instruction in the mirror image warehouse;
inquiring whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the cache service of the first data center;
when the first mirror image block file is not stored, storing the first mirror image block file to the cache service of the first data center, and sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file;
and when the first mirror image is saved, sending the first mirror image to a first cluster node of the first data center, wherein the first mirror image comprises the first mirror image metadata and the first mirror image block file saved in the cache service of the first data center.
In short, the above description is only a preferred embodiment of the present disclosure, and is not intended to limit the scope of the present disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present specification shall be included in the protection scope of the present specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and the relevant points can be found in the corresponding parts of the method embodiments.

Claims (14)

1. A mirror image warehouse distributed caching method, characterized in that a mirror image warehouse serving a plurality of data centers and providing centralized storage and distribution of mirror images is deployed, each data center comprises at least one cluster node, and a mirror image comprises metadata and block files; a caching service for storing the block files is deployed and run for each data center; the method comprises:
in response to a request instruction from a first data center to pull a first mirror image, obtaining from the mirror image warehouse the first mirror image metadata corresponding to the request instruction;
querying whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the caching service of the first data center;
when the first mirror image block file is not saved, saving it to the caching service of the first data center and sending the first mirror image to a first cluster node of the first data center, the first mirror image comprising the first mirror image metadata and the first mirror image block file;
and when it is saved, sending the first mirror image to a first cluster node of the first data center, the first mirror image comprising the first mirror image metadata and the first mirror image block file saved in the caching service of the first data center.
2. The mirror image warehouse distributed caching method of claim 1, wherein deploying and running a caching service for saving the block files for each data center comprises setting a size of the caching service and an expiration time for the caching service.
3. The mirror image warehouse distributed caching method of claim 2, wherein when the first cluster node is a Kubernetes node, a container Pod that injects the corresponding address of the caching service into the system hosts file is run in the first cluster node.
4. The mirror image warehouse distributed caching method of claim 3, wherein a plurality of caching service instances are deployed to the first data center using a Kubernetes Deployment, and the caching service instances are exposed using a Kubernetes Service.
5. The mirror image warehouse distributed caching method of claim 4, wherein the first mirror image block file is allocated among the plurality of caching service instances of the first data center in a shared manner.
6. The mirror image warehouse distributed caching method of any one of claims 1 to 5, wherein, according to the usage frequency of the mirror image block files in the caching service, the mirror image block files with low usage frequency stored in the caching service are evicted using an LRU algorithm.
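A minimal sketch of the caching policy in claims 2 and 6: a block-file cache with a configured size limit and entry expiration that evicts least-recently-used entries first. The `LRUBlockCache` class and its parameters are illustrative assumptions, not part of the claimed apparatus.

```python
import time
from collections import OrderedDict

class LRUBlockCache:
    """Block-file cache with a size cap and per-entry expiry (claims 2 and 6)."""
    def __init__(self, max_bytes, ttl_seconds, clock=time.monotonic):
        self.max_bytes = max_bytes
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = OrderedDict()  # digest -> (data, stored_at)
        self._size = 0

    def get(self, digest):
        entry = self._entries.get(digest)
        if entry is None:
            return None
        data, stored_at = entry
        if self.clock() - stored_at > self.ttl:   # expired: drop it
            self._evict(digest)
            return None
        self._entries.move_to_end(digest)         # mark as recently used
        return data

    def put(self, digest, data):
        if digest in self._entries:
            self._evict(digest)
        self._entries[digest] = (data, self.clock())
        self._size += len(data)
        # evict least recently used entries until under the size cap
        while self._size > self.max_bytes and len(self._entries) > 1:
            oldest, _ = next(iter(self._entries.items()))
            self._evict(oldest)

    def _evict(self, digest):
        data, _ = self._entries.pop(digest)
        self._size -= len(data)
```

The `OrderedDict` keeps entries in access order, so the least recently used block file is always at the front and can be evicted first when the configured cache size is exceeded.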
7. A mirror image warehouse distributed caching apparatus, comprising:
a first module capable of deploying a mirror image warehouse serving a plurality of data centers and providing centralized storage and distribution of mirror images, each data center comprising at least one cluster node, and a mirror image comprising metadata and block files; and
a second module capable of deploying and running, for each data center, a caching service for storing the block files; when applied to a first data center, the apparatus further comprises:
a third module capable of, in response to a request instruction to pull a first mirror image, obtaining from the mirror image warehouse the first mirror image metadata corresponding to the request instruction;
a fourth module capable of querying whether a first mirror image block file corresponding to the first mirror image metadata is already saved in the caching service of the first data center;
a fifth module configured to, when the first mirror image block file is not saved, save it to the caching service of the first data center and send the first mirror image to a first cluster node of the first data center, the first mirror image comprising the first mirror image metadata and the first mirror image block file; and
a sixth module configured to, when the first mirror image block file is saved, send the first mirror image to a first cluster node of the first data center, the first mirror image comprising the first mirror image metadata and the first mirror image block file saved in the caching service of the first data center.
8. The mirror image warehouse distributed caching apparatus of claim 7, wherein deploying and running a caching service for saving the block files for each data center comprises setting a size of the caching service and an expiration time for the caching service.
9. The mirror image warehouse distributed caching apparatus of claim 8, wherein when the first cluster node is a Kubernetes node, a container Pod that injects the corresponding address of the caching service into the system hosts file is run in the first cluster node.
10. The mirror image warehouse distributed caching apparatus of claim 9, wherein a plurality of caching service instances are deployed to the first data center using a Kubernetes Deployment, and the caching service instances are exposed using a Kubernetes Service.
11. The mirror image warehouse distributed caching apparatus of claim 10, wherein the first mirror image block file is allocated among the plurality of caching service instances of the first data center in a shared manner.
12. The mirror image warehouse distributed caching apparatus of any one of claims 7 to 11, wherein, according to the usage frequency of the mirror image block files in the caching service, the mirror image block files with low usage frequency stored in the caching service are evicted using an LRU algorithm.
13. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the mirror image warehouse distributed caching method of any one of claims 1 to 6.
14. A computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the mirror image warehouse distributed caching method of any one of claims 1 to 6.
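Claims 5 and 11 allocate a mirror image block file among several caching service instances. One common way to realize such shared allocation, sketched here as an assumption rather than the claimed mechanism, is to map each block digest to an instance with a stable hash, so that every node in the data center agrees on which instance holds a given block.

```python
import hashlib

def instance_for(digest, num_instances):
    """Stable digest -> cache-instance mapping.

    Every node computes the same answer for the same digest, so block
    files spread evenly across instances without any coordination.
    """
    h = int(hashlib.sha256(digest.encode()).hexdigest(), 16)
    return h % num_instances
```

A modulo hash like this rebalances many keys when `num_instances` changes; a consistent-hashing ring is the usual refinement when instances are added or removed frequently.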
CN202211321362.1A 2022-10-26 2022-10-26 Mirror image warehouse distributed caching method and device Pending CN115687420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211321362.1A CN115687420A (en) 2022-10-26 2022-10-26 Mirror image warehouse distributed caching method and device


Publications (1)

Publication Number Publication Date
CN115687420A true CN115687420A (en) 2023-02-03

Family

ID=85098865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211321362.1A Pending CN115687420A (en) 2022-10-26 2022-10-26 Mirror image warehouse distributed caching method and device

Country Status (1)

Country Link
CN (1) CN115687420A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230920

Address after: 10/F, Guotou Building, No. 398 Shaoxing Road, Gongshu District, Hangzhou City, Zhejiang Province, 310000

Applicant after: Hangzhou Xita Technology Co.,Ltd.

Address before: Room 301-4, Floor 3, Xinhuihu Building, No. 66, Lugang Street, High-speed Railway New Town, Xiangcheng District, Suzhou, Jiangsu Province, 215133

Applicant before: Suzhou Changtong Internet Technology Co.,Ltd.