CN114116538A - Mirror cache management method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114116538A
CN114116538A (application CN202111425307.2A)
Authority
CN
China
Prior art keywords
mirror image
layer
memory
mirror
image layer
Prior art date
Legal status
Pending
Application number
CN202111425307.2A
Other languages
Chinese (zh)
Inventor
包红强
Current Assignee
New H3C Big Data Technologies Co Ltd
Original Assignee
New H3C Big Data Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Big Data Technologies Co Ltd
Priority to CN202111425307.2A
Publication of CN114116538A
Legal status: Pending


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/0877 Cache access modes (under G06F12/00 Accessing, addressing or allocating within memory systems or architectures; G06F12/02 Addressing or allocation; Relocation; G06F12/08 hierarchically structured memory systems, e.g. virtual memory systems; G06F12/0802 memory levels requiring associative addressing means, e.g. caches)
    • G06F9/45558 Hypervisor-specific management and integration aspects (under G06F9/00 Arrangements for program control; G06F9/44 Arrangements for executing specific programs; G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation; G06F9/45533 Hypervisors; Virtual machine monitors)
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory (under G06F9/46 Multiprogramming arrangements; G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/45583 Memory management, e.g. access or allocation

Abstract

The application discloses a mirror cache management method, apparatus, device, and storage medium. The method includes: receiving an image to be cached and recording the layer name of each layer of image data; determining, according to those layer names, whether layers with the same name already exist in memory; and caching into memory the layers not currently present. With the mirror cache management method provided by the embodiments of the application, the image repository on the server side can cache in memory, in advance, the layers shared among multiple images; after receiving a download instruction initiated by a client, it sends the pre-cached layers to the client, greatly improving the client's image download rate.

Description

Mirror cache management method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technology, and in particular to a mirror cache management method, apparatus, device, and storage medium.
Background
With the growing maturity of cloud computing technology, private clouds, typified by government-affairs clouds, are developing rapidly with strong support from national policy. Traditional industries such as finance, healthcare, and industrial manufacturing are accelerating their migration to the cloud, and the private cloud market is drawing increasing attention from cloud service providers, system integrators, IDC (Internet Data Center) service providers, and users across industries.
As the private cloud market keeps growing, internal private cloud exercises iterate more and more frequently, and both field deployments and internal exercises require continuously customizing and delivering private cloud deployment packages, including container (Docker) image packages. At present, an image service provided on the cloud in a production line often needs to distribute an image to dozens or even hundreds of nodes. The distributed images are generally large, so distribution to that many nodes makes image downloads too slow, and users' demand for faster image downloads is increasingly urgent.
Disclosure of Invention
The embodiment of the application provides a mirror cache management method, a mirror cache management device, mirror cache management equipment and a mirror cache management storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a mirror cache management method, including:
receiving an image to be cached, and recording the layer name of each layer of image data;
determining, according to those layer names, whether layers with the same name already exist in memory; and
caching into memory the layers not currently present.
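The three steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `cache_image` and `memory_cache` are assumptions.

```python
def cache_image(memory_cache: dict, image_layers: dict) -> list:
    """Cache into memory only the layers of an image that are not
    already present, keyed by layer name.

    memory_cache: layer name -> layer data (the in-memory cache)
    image_layers: layer name -> layer data (the image to be cached)
    Returns the names of the layers that were newly cached.
    """
    newly_cached = []
    for name, data in image_layers.items():
        if name not in memory_cache:       # no same-named layer in memory
            memory_cache[name] = data      # cache the missing layer
            newly_cached.append(name)
    return newly_cached
```

For example, if memory already holds layers A and B of one image, caching a second image consisting of layers A, B, and E stores only layer E.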
In an optional embodiment, after caching the layers not present in memory, the method further includes:
incrementing by 1 the access count of each layer already present in memory, and updating that layer's access time.
In an optional embodiment, caching the layers not present in memory includes:
determining whether memory is sufficient;
if memory is insufficient, selecting as the layer to be replaced the layer with the earliest access time among the layers with the fewest accesses currently in memory, deleting the layer to be replaced, and caching the new layer into memory; and
if memory is sufficient, caching the new layer into memory directly.
In an optional embodiment, selecting and deleting the layer to be replaced and caching the new layer into memory includes:
obtaining the data size of the layer to be cached; and
selecting, earliest access time first, at least one layer from the one or more layers with the fewest accesses as layers to be replaced, until the total size of the selected layers is greater than or equal to the size of the layer to be cached; then deleting the selected layers and caching the new layer into memory.
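The selection rule above (fewest accesses first, then earliest access time, repeated until enough space is freed) can be sketched as follows; the function and field names are illustrative assumptions, not from the patent.

```python
def pick_victims(layers: dict, needed_size: int) -> list:
    """Choose layers to evict to make room for a new layer.

    layers: layer name -> (access_count, access_time, size)
    Ordering: lowest access count first, then earliest access time,
    matching the replacement rule described in the text.
    """
    victims, freed = [], 0
    ordered = sorted(layers.items(),
                     key=lambda kv: (kv[1][0], kv[1][1]))
    for name, (count, atime, size) in ordered:
        if freed >= needed_size:           # enough space already freed
            break
        victims.append(name)
        freed += size
    return victims
```

A layer with a high access count is only ever chosen after every less-used layer has already been selected, which is what keeps shared layers in memory.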
In an optional embodiment, after caching the layers not present in memory, the method further includes:
obtaining an image to be deleted; and
deleting each layer of that image whose access count is 1, and decrementing by 1 the access count of each layer whose count is greater than 1.
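The deletion rule above amounts to reference counting on layers; a hedged sketch (names are illustrative):

```python
def delete_image(cache: dict, image_layer_names: list) -> None:
    """Delete an image from the cache, layer by layer.

    cache: layer name -> access count.
    A layer with count 1 is referenced only by this image and is removed;
    a layer with a higher count is shared, so only its count drops by 1.
    """
    for name in image_layer_names:
        if name not in cache:
            continue
        if cache[name] <= 1:
            del cache[name]        # last image referencing this layer
        else:
            cache[name] -= 1       # still shared by other images
```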
In an optional embodiment, the method further includes:
receiving a download instruction uploaded by a client, and determining from it the target image to be downloaded; and
determining whether each layer of the target image is in memory, and sending the layers that are in memory to the client.
In an optional embodiment, if one or more layers of the target image are not in memory, the method further includes:
reading the missing layers from the hard disk, sending them to the client, and caching them into memory.
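The download path just described (serve from memory on a hit, fall back to disk on a miss, then cache the miss) can be sketched as follows; `disk_read` is a stand-in for the server's hard-disk lookup and is an assumption.

```python
def serve_image(memory_cache: dict, disk_read, target_layers: list) -> dict:
    """Return the layer data sent to the client, preferring memory.

    memory_cache: layer name -> layer data (fast path)
    disk_read:    callable layer name -> layer data (slow path)
    """
    sent = {}
    for name in target_layers:
        if name in memory_cache:            # cache hit: read from memory
            sent[name] = memory_cache[name]
        else:                               # cache miss: read from disk
            data = disk_read(name)
            sent[name] = data
            memory_cache[name] = data       # cache for the next client
    return sent
```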
In a second aspect, an embodiment of the present application provides a mirrored cache management apparatus, including:
a receiving module, configured to receive an image to be cached and record the layer name of each layer of image data;
a judging module, configured to determine, according to those layer names, whether layers with the same name already exist in memory; and
a caching module, configured to cache into memory the layers not currently present.
In a third aspect, an embodiment of the present application provides a mirrored cache management device, including a processor and a memory storing program instructions, where the processor is configured to perform the mirror cache management method provided in the above embodiments when executing the program instructions.
In a fourth aspect, the present application provides a computer-readable medium storing computer-readable instructions which, when executed by a processor, implement the mirror cache management method provided in the above embodiments.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the mirror cache management method provided by the embodiment of the application, the mirror layer shared among a plurality of mirrors can be cached in the memory in advance by the mirror warehouse of the server, the mirror layer with more use times can be stored in the memory as much as possible, the mirror layer with more access times can not be deleted from the memory, and the cache hit rate is improved. After the local client initiates a downloading instruction, the mirror image data is read from the server, and the pre-cached mirror image layer can be directly sent to the client, so that the speed of downloading the mirror image by the client is greatly improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating a mirrored cache management method in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a hierarchical mirror image in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a mirror caching method in accordance with an illustrative embodiment;
FIG. 4 is a diagram illustrating a mirror caching method in accordance with an illustrative embodiment;
FIG. 5 is a diagram illustrating a mirror caching method in accordance with an illustrative embodiment;
FIG. 6 is a schematic diagram illustrating a hierarchical mirror image in accordance with an exemplary embodiment;
FIG. 7 is a diagram illustrating a mirror caching method in accordance with an illustrative embodiment;
FIG. 8 is a diagram illustrating a mirror caching method in accordance with an illustrative embodiment;
FIG. 9 is a block diagram illustrating a mirrored cache management apparatus in accordance with an illustrative embodiment;
FIG. 10 is a block diagram illustrating a mirrored cache management device in accordance with an illustrative embodiment;
FIG. 11 is a schematic diagram illustrating a computer storage medium in accordance with an exemplary embodiment.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of systems and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In current schemes, an image service provided on the cloud usually needs to distribute an image to dozens or even hundreds of nodes. The distributed images are generally large, up to 100 MB, so distribution to that many nodes makes image downloads too slow, and users' demand for faster image downloads is increasingly urgent.
To solve this problem in the prior art, embodiments of the present application provide a cache-based high-speed image download method. When an image is downloaded from the server, a hit in the server image repository's cache means the image is read from the server's memory; a miss means it is read from the server's hard disk. Since hard disk reads are much slower than memory reads, the layers shared among multiple images are cached in memory, frequently used layers are kept in memory as far as possible, and layers with high access counts are not deleted from memory. This improves the cache hit rate, so that data is read from memory as often as possible and the client's download rate is greatly increased.
The following describes a mirror cache management method according to an embodiment of the present application in detail with reference to the accompanying drawings. Referring to fig. 1, the method specifically includes the following steps.
S101, receiving an image to be cached and recording the layer name of each layer of image data.
A mirror is a form of redundancy in which data on one disk has an identical copy, the mirror, on another disk. In one possible implementation, the image repository on the server stores image data on the server's hard disk. However, clients download images from the hard disk at a slower rate; to raise the client download rate, the image repository on the server may cache received images in memory.
After receiving an image to be cached, the server records the layer name of each layer of its data. Image data is typically stored in layers, and different images contain identical layers. As shown in Fig. 2, image1 consists of the four layers A, B, C, and D; image2 of the three layers A, B, and E; and image3 of the three layers A, F, and G. Image1 and image2 share the two layers A and B, and image1, image2, and image3 all share layer A.
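The layer sharing in the Fig. 2 example can be checked with plain set operations; a sketch using the layer names from the figure:

```python
# Layer sets of the three example images from Fig. 2.
image1 = {"A", "B", "C", "D"}
image2 = {"A", "B", "E"}
image3 = {"A", "F", "G"}

shared_12 = image1 & image2             # layers shared by image1 and image2
shared_all = image1 & image2 & image3   # layers shared by all three images
```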
S102, determining, according to the layer names of each layer of image data, whether layers with the same name already exist in memory.
In one possible implementation, because the server's memory is limited and can hold only a limited number of images, the layers shared among multiple images are cached in memory so that frequently used layers are kept in memory as far as possible.
Specifically, after receiving a new image to be cached, the server obtains the layer name of each of its layers and determines from those names whether same-named layers already exist in memory.
S103, caching into memory the layers not currently present.
Specifically, each layer not present in the current memory is treated as a layer to be cached and is cached into memory; for each layer already present, the access count is incremented by 1 and the access time is updated.
In an optional embodiment, when caching a layer to be cached, the server also determines whether its memory is sufficient; if so, the layer is cached into memory directly.
If memory is insufficient, the layer with the earliest access time among the layers with the fewest accesses currently in memory is selected as the layer to be replaced; it is deleted and the new layer is cached into memory.
Specifically, the layers with the smallest access count are found from the access counts of the layers in memory. If there are several such layers, the one with the earliest access time among them is selected as the layer to be replaced.
Replacement also takes data size into account. The size of the layer to be cached is obtained, and at least one layer is selected from the one or more least-accessed layers, earliest access time first, as layers to be replaced, until the total size of the selected layers is greater than or equal to the size of the layer to be cached; the selected layers are then deleted and the new layer is cached into memory.
Specifically, the least-accessed, earliest-accessed layer is first chosen as the layer to be replaced, and its size is compared with that of the layer to be cached. If the layer to be cached is no larger, the chosen layer is deleted and the new layer is cached. If the layer to be cached is larger, deleting only that one layer does not free enough memory, so further layers are chosen from the least-accessed layers, earliest access time first, until the total size of the chosen layers is greater than or equal to the size of the layer to be cached; the chosen layers are then deleted and the new layer is cached into memory.
In an exemplary scenario, the server caches image1 and image2 in memory and, as shown in Fig. 3, stores the image data in the arrays key1, key2, and key3. Since image1 and image2 share layers A and B, the cache needs to hold only the five layers A, B, C, D, and E, with layers A and B each having an access count of 2. In Fig. 3, A(2,5) indicates that layer A's latest access count is 2 and its current access time is 5.
Next, the server caches image3 in memory. As shown in Fig. 4, image3 consists of the three layers A, F, and G, and the five layers A, B, C, D, and E already exist in memory. Layer A of image3 is therefore already cached, so its access count is incremented by 1 and its access time is updated, changing its identifier to A(3,8); layers F and G, which are not in memory, are cached. As shown in Fig. 4, to cache layer F its data size is obtained and the server's free memory is checked against it; since space is available, layer F is cached directly.
Then layer G is cached, as shown in Fig. 5. Its data size is obtained and the server's free memory is checked; memory is insufficient, so the layer with the earliest access time among the least-accessed layers in the current memory is selected for replacement. As shown in Fig. 5, the least-accessed layers in memory are C, D, E, and F, and layer C is selected as the layer to be replaced. The sizes of layers G and C are compared, and since layer G is no larger than layer C, layer C is simply replaced by layer G.
Next, the server caches image4 in memory. As shown in Fig. 6, image4 consists of layers A, B, and H, and layers A and B are already in memory, so their access counts are incremented by 1 and their access times are updated. As shown in Fig. 7, the identifier of A becomes A(4,11) and that of B becomes B(3,11), and layer H, not present in memory, is cached. When adding layer H to the cache, memory is found to be insufficient. Layer D, the earliest-accessed of the least-accessed layers in memory, is selected for replacement; since layer H is larger than layer D, layer E, the next earliest-accessed of the least-accessed layers, is selected as well. Because layer H is smaller than the combined size of layers D and E, both are chosen as layers to be replaced and deleted, and layer H is cached into memory.
Following the above steps, shared layers with high access counts are kept in memory while rarely used layers are evicted, improving the hit rate of subsequent image downloads.
In one possible implementation, since the server's memory is limited, useless images with low access counts and early access times can be deleted. To delete an image, the image to be deleted is first obtained; each of its layers whose access count is 1 is deleted, and the access count of each layer whose count is greater than 1 is decremented by 1. In this way, useful layers inside a useless image are preserved.
As shown in Fig. 8, image3 is to be deleted. Image3 consists of the three layers A, F, and G, and layers F and G each have an access count of 1, so their cached data is deleted, while the access count of layer A is decremented by 1.
In an optional embodiment, the method further includes: receiving a download instruction uploaded by a client, determining from it the target image to be downloaded, determining whether each layer of the target image is in memory, and sending the layers that are in memory to the client.
Specifically, the local client initiates an image download instruction, and the server determines from the received instruction the target image the client wants to download. The server checks whether each layer of the target image is cached in memory; cached layers are returned to the client, which thus reads them from memory.
In one possible implementation, if one or more layers of the target image are not in memory, the method further includes reading those layers from the hard disk and sending them to the client, so that the client reads the remaining layers from the hard disk; the layers that were not in memory are then cached into memory.
The embodiments of the application exploit the layered storage of images, in which identical layer data is shared between images, and propose a variant of the LFU (Least Frequently Used) caching algorithm: the access count, data size, and access time of each image layer are recorded; when the reserved cache is insufficient, the layer with the lowest access count and the earliest access time is replaced first, and when the new layer is larger than the layer to be replaced, further lowest-count, earliest-time layers are replaced in sequence. When an image is deleted, only layers whose cached access count is 1 are deleted, and the access count of layers greater than 1 is decremented by 1.
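The complete policy described above can be sketched as a small cache class. The class and attribute names are assumptions for illustration, and the access time is a simple logical clock rather than a real timestamp.

```python
class LayerCache:
    """Layer-level LFU-with-timestamp cache: per layer it tracks
    (access count, last access time, size, data) and, when capacity
    is short, evicts least-accessed, earliest-accessed layers first."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.layers = {}   # name -> [count, time, size, data]
        self.clock = 0     # logical clock standing in for access time

    def _tick(self) -> int:
        self.clock += 1
        return self.clock

    def add_image(self, image: dict) -> None:
        """image: layer name -> data bytes."""
        for name, data in image.items():
            now = self._tick()
            if name in self.layers:            # shared layer: bump count
                self.layers[name][0] += 1
                self.layers[name][1] = now
            else:                              # new layer: evict, then cache
                self._evict_for(len(data))
                self.layers[name] = [1, now, len(data), data]
                self.used += len(data)

    def _evict_for(self, size: int) -> None:
        while self.used + size > self.capacity and self.layers:
            # least access count first, then earliest access time
            victim = min(self.layers,
                         key=lambda n: (self.layers[n][0], self.layers[n][1]))
            self.used -= self.layers[victim][2]
            del self.layers[victim]

    def delete_image(self, image_names: list) -> None:
        for name in image_names:
            if name in self.layers:
                if self.layers[name][0] <= 1:  # only this image uses it
                    self.used -= self.layers[name][2]
                    del self.layers[name]
                else:                          # shared: drop the count
                    self.layers[name][0] -= 1
```

With a 3-byte capacity and 1-byte layers, adding an image {A, B, C} and then {A, D} bumps A's count to 2 and evicts B (count 1, earliest access) to make room for D, mirroring the eviction order in the figures.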
With the mirror cache management method provided by the embodiments of the application, the image repository on the server side can cache in memory, in advance, the layers shared among multiple images, keeping frequently used layers in memory as far as possible; layers with high access counts are not deleted from memory, which improves the cache hit rate. After a local client initiates a download instruction, image data is read from the server; if the requested data is in the server's memory, the pre-cached layers can be sent to the client directly, greatly improving the client's download rate and solving the problem of over-slow image downloads in the prior art.
An embodiment of the present application further provides a mirror image cache management apparatus, where the apparatus is configured to execute the mirror image cache management method provided in the foregoing embodiment, and as shown in fig. 9, the apparatus includes: a receiving module 901, a judging module 902 and a caching module 903.
A receiving module 901, configured to receive a mirror image to be cached, and record a layer name of each layer of data of the mirror image;
a judging module 902, configured to judge whether there is a mirror image layer with the same name in the memory according to the layer name of each layer of data of the mirror image;
the caching module 903 is configured to cache, into the memory, each mirror layer that is not present in the current memory.
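The receive/judge/cache flow of the three modules above can be sketched as a single function. This is an illustrative Python sketch, not the patent's implementation; the dictionary-based memory model and all names are assumptions.

```python
def cache_image(memory, image_layers):
    """Record each layer name of the received image (receiving module),
    check whether a layer with the same name already exists in memory
    (judging module), and cache only the missing layers (caching module)."""
    cached_now = []
    for name, data in image_layers.items():
        if name in memory:      # same-name layer already cached: skip it
            continue
        memory[name] = data     # cache the layer that is not yet in memory
        cached_now.append(name)
    return cached_now
```

Because layers are deduplicated by name, an image sharing most of its layers with already-cached images adds only its unique layers to memory.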
It should be noted that when the mirror cache management apparatus provided in the foregoing embodiment executes the mirror cache management method, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the mirror cache management apparatus provided in the foregoing embodiment and the mirror cache management method embodiment belong to the same concept; details of the implementation process are described in the method embodiment and are not repeated here.
The embodiment of the present application further provides an electronic device corresponding to the mirror cache management method provided in the foregoing embodiment, so as to execute the mirror cache management method.
Referring to fig. 10, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 10, the electronic device includes: a processor 1000, a memory 1001, a bus 1002, and a communication interface 1003, where the processor 1000, the communication interface 1003, and the memory 1001 are connected through the bus 1002. The memory 1001 stores a computer program executable on the processor 1000, and when executing the computer program, the processor 1000 performs the mirror cache management method provided in any of the foregoing embodiments.
The memory 1001 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk storage. The communication connection between a network element of the system and at least one other network element is implemented through at least one communication interface 1003 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 1002 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The memory 1001 is used to store a program; the processor 1000 executes the program after receiving an execution instruction. The mirror cache management method disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 1000.
The processor 1000 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1000 or by instructions in the form of software. The processor 1000 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 1001, and the processor 1000 reads the information in the memory 1001 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the present application and the mirror image cache management method provided by the embodiment of the present application have the same inventive concept and have the same beneficial effects as the method adopted, operated or implemented by the electronic device.
Referring to fig. 11, a computer-readable storage medium is shown as an optical disc 1100 on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the mirror cache management method provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiment of the present application shares the same inventive concept with the mirror cache management method provided by the embodiments of the present application, and the program stored on it has the same beneficial effects as the method it adopts, runs, or implements.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples show only some embodiments of the present invention and are described in relative detail, but they should not therefore be construed as limiting the scope of the present invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A mirror image cache management method is characterized by comprising the following steps:
receiving a mirror image to be cached, and recording layer names of data of each layer of the mirror image;
judging whether mirror image layers with the same name exist in the memory according to the layer names of the mirror image data of each layer;
and caching the mirror image layer which is not in the current memory into the memory.
2. The method of claim 1, wherein caching the mirror layer that is not currently in the memory into the memory further comprises:
and adding 1 to the access times of the existing mirror image layer in the current memory, and updating the access time of the existing mirror image layer.
3. The method of claim 1, wherein caching mirror layers not currently available in memory into memory comprises:
judging whether the memory is sufficient;
if the memory is insufficient, determining the image layer with the earliest access time in the image layer with the least access times in the current memory as a to-be-replaced image layer, deleting the to-be-replaced image layer, and caching the to-be-cached image layer into the memory;
and if the memory is sufficient, directly caching the mirror image layer to be cached into the memory.
4. The method according to claim 3, wherein determining a mirror layer with the earliest access time among mirror layers with the least number of accesses in a current memory as a mirror layer to be replaced, deleting the mirror layer to be replaced, and caching the mirror layer to be cached in the memory comprises:
acquiring the size of data of a mirror image layer to be cached;
selecting at least one mirror image layer from one or more mirror image layers with the least access times as a mirror image layer to be replaced according to the earliest access time principle until the sum of the data of the determined mirror image layer to be replaced is more than or equal to the data of the mirror image layer to be cached, deleting the determined mirror image layer to be replaced, and caching the mirror image layer to be cached in a memory.
5. The method of claim 1, wherein after caching the mirror layer that is not currently in the memory into the memory, the method further comprises:
acquiring a mirror image to be deleted;
deleting the mirror image layer with the access frequency of 1 in the mirror image to be deleted, and subtracting 1 from the access frequency of the mirror image layer with the access frequency of more than 1.
6. The method of claim 1, further comprising:
receiving a downloading instruction uploaded by a client, and determining a target mirror image to be downloaded according to the downloading instruction;
and judging whether each mirror image layer of the target mirror image is positioned in the memory, and sending the mirror image layer positioned in the memory to the client.
7. The method of claim 6, wherein if one or more mirror layers of the target image are not located in memory, further comprising:
and acquiring one or more mirror image layers which are not positioned in the memory from the hard disk, sending the mirror image layers to the client, and caching the mirror image layers into the memory.
8. A mirrored cache management apparatus, comprising:
the receiving module is used for receiving the mirror image to be cached and recording the layer name of each layer of data of the mirror image;
the judging module is used for judging whether mirror image layers with the same name exist in the memory according to the layer names of the mirror image data of all layers;
and the cache module is used for caching the mirror image layer which is not in the current memory into the memory.
9. A mirrored cache management device comprising a processor and a memory storing program instructions, the processor being configured to perform the mirrored cache management method of any one of claims 1 to 7 when executing the program instructions.
10. A computer readable medium having computer readable instructions stored thereon which are executed by a processor to implement a mirrored cache management method according to any one of claims 1 to 7.
CN202111425307.2A 2021-11-26 2021-11-26 Mirror cache management method, device, equipment and storage medium Pending CN114116538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111425307.2A CN114116538A (en) 2021-11-26 2021-11-26 Mirror cache management method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114116538A true CN114116538A (en) 2022-03-01

Family

ID=80370625

Country Status (1)

Country Link
CN (1) CN114116538A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785770A (en) * 2022-04-01 2022-07-22 京东科技信息技术有限公司 Mirror layer file sending method and device, electronic equipment and computer readable medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination