WO2021232289A1 - Image pulling method and related products - Google Patents

Image pulling method and related products

Info

Publication number
WO2021232289A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cache
preset
target
warehouse
Prior art date
Application number
PCT/CN2020/091316
Other languages
English (en)
French (fr)
Inventor
徐进
Original Assignee
深圳市欢太科技有限公司
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市欢太科技有限公司, Oppo广东移动通信有限公司 filed Critical 深圳市欢太科技有限公司
Priority to PCT/CN2020/091316 priority Critical patent/WO2021232289A1/zh
Priority to CN202080099553.0A priority patent/CN115380269A/zh
Publication of WO2021232289A1 publication Critical patent/WO2021232289A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/61 Installation

Definitions

  • This application relates to the computer field, and in particular to an image pulling method and related products.
  • Harbor's policy-based Docker image replication function can synchronize images between different data centers and different operating environments, and it provides a friendly management interface, which greatly simplifies image management in day-to-day operation and maintenance.
  • However, Harbor's image replication function cannot perceive how application images are used in the computer room where it is located, which causes meaningless image replication and wastes network bandwidth and disk space.
  • The embodiments of this application provide an image pulling method and related products that can perceive the usage of application images in the computer room and pull images according to that usage, thereby saving network bandwidth and disk space.
  • In a first aspect, an embodiment of this application provides an image pulling method, applied to a cache proxy warehouse, including:
  • receiving an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster;
  • detecting whether the image pull request hits a preset cache;
  • when the image pull request hits the preset cache, pulling, from the preset cache, the content that the image pull request needs to pull;
  • when the image pull request misses the preset cache, pulling, from the central image warehouse, the content that the image pull request needs to pull, and saving it in the preset cache.
  • In a second aspect, an embodiment of this application provides an image pulling device, applied to a cache proxy warehouse, where the device includes a receiving unit, a detection unit, and an image pulling unit, where:
  • the receiving unit is configured to receive an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster;
  • the detection unit is configured to detect whether the image pull request hits a preset cache;
  • the image pulling unit is configured to, when the image pull request hits the preset cache, pull, from the preset cache, the content that the image pull request needs to pull;
  • the image pulling unit is further configured to, when the image pull request misses the preset cache, pull, from the central image warehouse, the content that the image pull request needs to pull, and save it in the preset cache.
  • In a third aspect, an embodiment of this application provides a server, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps in the first aspect of the embodiments of this application.
  • In a fourth aspect, an embodiment of this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute some or all of the steps described in the first aspect of the embodiments of this application.
  • In a fifth aspect, an embodiment of this application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of this application. The computer program product may be a software installation package.
  • FIG. 1A is a schematic structural diagram of a server provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of the architecture for implementing the image pull method provided by the embodiment of the present application.
  • FIG. 1C is a schematic flowchart of an image pulling method disclosed in an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of another image pulling method disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of another server disclosed in an embodiment of the present application.
  • FIG. 4A is a schematic structural diagram of an image pulling device disclosed in an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of another image pulling device disclosed in an embodiment of the present application.
  • the cache proxy warehouse involved in the embodiments of the present application may be a server or may be set on the server.
  • FIG. 1A is a schematic structural diagram of a server disclosed in an embodiment of the present application.
  • the server 100 may include a control circuit, and the control circuit may include a storage and processing circuit 110.
  • The storage and processing circuit 110 may include memory, such as hard disk drive memory, non-volatile memory (such as flash memory or other electronically programmable read-only memory used to form a solid-state drive, etc.), and volatile memory (such as static or dynamic random access memory, etc.), which are not limited in the embodiments of this application.
  • the processing circuit in the storage and processing circuit 110 can be used to control the operation of the server 100.
  • the processing circuit can be implemented based on one or more microprocessors, microcontrollers, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and so on.
  • The storage and processing circuit 110 may be used to run software in the server 100, such as Internet browsing applications, voice over internet protocol (VOIP) phone call applications, email applications, media playback applications, operating system functions, and so on.
  • This software may be used to perform control operations such as camera-based image capture, ambient-light measurement based on an ambient light sensor, proximity measurement based on a proximity sensor, information display functions based on status indicators such as light-emitting diode status indicators, touch event detection based on a touch sensor, functions associated with displaying information on multiple (for example, layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the server 100, which are not limited in the embodiments of this application.
  • the server 100 may also include an input-output circuit 150.
  • the input-output circuit 150 may be used to enable the server 100 to input and output data, that is, to allow the server 100 to receive data from an external device and also allow the server 100 to output data from the server 100 to an external device.
  • the input-output circuit 150 may further include a sensor 170.
  • The sensor 170 may include an ambient light sensor, a light- and capacitance-based proximity sensor, a touch sensor (for example, a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, a gravity sensor, and other sensors.
  • the input-output circuit 150 may also include one or more displays, such as the display 130.
  • the display 130 may include one or a combination of a liquid crystal display, an organic light emitting diode display, an electronic ink display, a plasma display, and a display using other display technologies.
  • the display 130 may include a touch sensor array (ie, the display 130 may be a touch display screen).
  • The touch sensor may be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (such as indium tin oxide (ITO) electrodes), or may be a touch sensor formed using other touch technologies, such as acoustic-wave touch, pressure-sensitive touch, resistive touch, optical touch, and so on, which are not limited in the embodiments of this application.
  • the audio component 140 may be used to provide audio input and output functions for the server 100.
  • the audio component 140 in the server 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sounds.
  • the communication circuit 120 may be used to provide the server 100 with the ability to communicate with external devices.
  • the communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals.
  • the wireless communication circuit in the communication circuit 120 may include a radio frequency transceiver circuit, a power amplifier circuit, a low noise amplifier, a switch, a filter, and an antenna.
  • the wireless communication circuit in the communication circuit 120 may include a circuit for supporting near field communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals.
  • the communication circuit 120 may include a near field communication antenna and a near field communication transceiver.
  • the communication circuit 120 may also include a cellular phone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so on.
  • the server 100 may further include a battery, a power management circuit, and other input-output units 160.
  • the input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators.
  • the user can control the operation of the server 100 by inputting commands through the input-output circuit 150, and can use the output data of the input-output circuit 150 to realize receiving status information and other output from the server 100.
  • Based on this, FIG. 1B provides a system architecture for implementing the methods involved in the embodiments of this application.
  • The methods described in the embodiments of this application can be applied to a cache proxy warehouse, which can be set on a server.
  • The cache proxy warehouse is located in the image distribution system.
  • The system can include one central image warehouse, one message center, and N computer rooms, where N is a positive integer, and a cache proxy warehouse is deployed in each computer room.
  • The central image warehouse can be used to store all application images. When a client pushes an application image to the central image warehouse, the warehouse notifies the message center of the image. The image name contains the application name.
  • The message center can be used to save image push records and supports a real-time monitoring interface, through which a client can receive the event of a newly pushed image in real time.
  • The cache proxy warehouse serves image clients in the same computer room and caches the application images that are pulled on demand. The cache proxy warehouse also supports an application-aware image preheating mechanism (image preheating module) and an image elimination mechanism (image elimination module), and it can additionally include a registry module.
  • The cache proxy warehouse, the central image warehouse, and the message center can be set on different servers or platforms, and both the cache proxy warehouse and the central image warehouse can have the Docker image replication function.
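  • To make the push-and-notify interaction between the central image warehouse and the message center more concrete, the following is a minimal, illustrative Python sketch and not part of the patent disclosure; the class and method names (MessageCenter, push_image, subscribe) and the example image name are hypothetical stand-ins for whatever registry and messaging components an actual deployment would use.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PushEvent:
    """A record of an image being pushed to the central warehouse."""
    image_name: str  # e.g. "shop-frontend:1.4.2" (the image name contains the application name)


class MessageCenter:
    """Stores image push records and notifies real-time listeners."""

    def __init__(self) -> None:
        self.push_records: List[PushEvent] = []
        self.listeners: List[Callable[[PushEvent], None]] = []

    def subscribe(self, listener: Callable[[PushEvent], None]) -> None:
        # Cache proxy warehouses register here to receive push events in real time.
        self.listeners.append(listener)

    def publish(self, event: PushEvent) -> None:
        self.push_records.append(event)
        for listener in self.listeners:
            listener(event)


class CentralImageWarehouse:
    """Holds all application images and notifies the message center on each push."""

    def __init__(self, message_center: MessageCenter) -> None:
        self.images: Dict[str, bytes] = {}
        self.message_center = message_center

    def push_image(self, image_name: str, content: bytes) -> None:
        self.images[image_name] = content
        self.message_center.publish(PushEvent(image_name))

    def pull_image(self, image_name: str) -> bytes:
        return self.images[image_name]
```

  • In this sketch, a client push ends with a publish call on the message center, which is the hook that the per-computer-room preheating modules described below rely on.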
  • Based on the system framework shown in FIG. 1B, the following method can be implemented and applied to any cache proxy warehouse:
  • receiving an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster;
  • detecting whether the image pull request hits a preset cache;
  • when the image pull request hits the preset cache, pulling, from the preset cache, the content that the image pull request needs to pull;
  • when the image pull request misses the preset cache, pulling, from the central image warehouse, the content that the image pull request needs to pull, and saving it in the preset cache.
  • It can be seen that the image pulling method described in this embodiment of the application is applied to a cache proxy warehouse: an image pull request initiated by a target cluster is received; the cache proxy warehouse is located in an image distribution system that further includes a central image warehouse and a message center; the cache proxy warehouse is located in a target computer room that further includes the target cluster; whether the image pull request hits a preset cache is detected; when the request hits the preset cache, the requested content is pulled from the preset cache; and when the request misses the preset cache, the requested content is pulled from the central image warehouse and saved in the preset cache. In this way, the usage of application images in the computer room can be perceived and images can be pulled according to that usage, so network bandwidth and disk space can be saved.
  • FIG. 1C is a schematic flowchart of an image pulling method provided by an embodiment of this application. The image pulling method described in this embodiment is applied to the server shown in FIG. 1A or the system architecture shown in FIG. 1B, and includes the following steps:
  • 101. Receive an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster.
  • The target cluster in this embodiment of the application may be any type of cluster; for example, the target cluster may be a K8S cluster.
  • The image distribution system may include a central image warehouse, a message center, and multiple computer rooms, and each computer room may include a cache proxy warehouse and a cluster (for example, a K8S cluster). Both the cache proxy warehouse and the central image warehouse can be Harbor image warehouses.
  • In specific implementation, the cache proxy warehouse and the target cluster may be set on the same server or on different servers.
  • The cache proxy warehouse can receive an image pull request initiated by the target cluster.
  • 102. Detect whether the image pull request hits a preset cache.
  • The preset cache may be a cache list or a cache area, and may be set in the cache proxy warehouse or on a local disk (cache disk).
  • The cache proxy warehouse can detect whether the image pull request hits the preset cache; the preset cache may be preset by the user or set by system default.
  • In a possible example, the cache proxy warehouse includes an image preheating module, and the foregoing step 102 of detecting whether the image pull request hits the preset cache may include the following steps:
  • 21. Monitor, through the image preheating module, an image push event of the message center, where the image push event includes an image name;
  • 22. Parse the image name to obtain a target application name;
  • 23. Detect whether the application corresponding to the target application name has been deployed in the target cluster of the target computer room;
  • 24. If the application corresponding to the target application name has been deployed in the cluster of the target computer room, confirm that the image pull request hits the preset cache;
  • 25. If the application corresponding to the target application name has not been deployed in the cluster of the target computer room, confirm that the image pull request misses the preset cache.
  • The cache proxy warehouse may include an image preheating module, which is mainly used to monitor image push events of the message center.
  • In specific implementation, the cache proxy warehouse can monitor image push events of the message center through the image preheating module. An image push event can include an image name; the image name can then be parsed to obtain a target application name, and whether the application corresponding to the target application name has been deployed in the target cluster of the target computer room can be detected. If it has been deployed, it can be confirmed that the image pull request hits the preset cache; otherwise, it can be confirmed that the image pull request misses the preset cache.
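  • Purely as an illustration of steps 22 to 25, the deployment check could be sketched as follows in Python; the helper names (parse_application_name, is_deployed) and the "registry/name:tag" naming convention are assumptions made for the example rather than details given by the patent.

```python
from typing import Set


def parse_application_name(image_name: str) -> str:
    """The image name contains the application name; strip the registry path and tag.

    Assumed convention: "registry.example.com/shop-frontend:1.4.2" -> "shop-frontend".
    """
    repository = image_name.split("/")[-1]
    return repository.split(":")[0]


def is_deployed(application_name: str, deployed_applications: Set[str]) -> bool:
    """Whether the application is already running in the local (target) cluster."""
    return application_name in deployed_applications


def hits_preset_cache(image_name: str, deployed_applications: Set[str]) -> bool:
    """Steps 22-25: parse the image name and decide hit/miss from local deployment."""
    return is_deployed(parse_application_name(image_name), deployed_applications)


# Example: an event for "shop-frontend:2.0.0" counts as a hit only if
# "shop-frontend" is already deployed in the computer room's cluster.
print(hits_preset_cache("registry.example.com/shop-frontend:2.0.0", {"shop-frontend"}))
```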
  • 103. When the image pull request hits the preset cache, pull, from the preset cache, the content that the image pull request needs to pull. In specific implementation, when the image pull request hits the preset cache, the cache proxy warehouse can pull the requested content from the preset cache.
  • 104. When the image pull request misses the preset cache, pull, from the central image warehouse, the content that the image pull request needs to pull, and save it in the preset cache. In specific implementation, when the image pull request misses the preset cache, the cache proxy warehouse can pull the requested content from the central image warehouse and save it in the preset cache.
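  • The serve-from-cache-or-fall-back behaviour of steps 101 to 104 can be illustrated with the short Python sketch below; it is a simplified model under the assumption that the preset cache is a local dictionary and that the central warehouse exposes a pull_image call, not a description of any real registry API.

```python
from typing import Any, Dict


class CacheProxyWarehouse:
    """Serves image pull requests from clusters in the same computer room."""

    def __init__(self, central_warehouse: Any) -> None:
        # central_warehouse is assumed to expose pull_image(image_name) -> bytes.
        self.preset_cache: Dict[str, bytes] = {}  # image name -> image content
        self.central_warehouse = central_warehouse

    def handle_pull_request(self, image_name: str) -> bytes:
        # Step 102: detect whether the pull request hits the preset cache.
        if image_name in self.preset_cache:
            # Step 103: hit -- serve the requested content from the preset cache.
            return self.preset_cache[image_name]
        # Step 104: miss -- pull from the central image warehouse and save the
        # content in the preset cache so later requests are served locally.
        content = self.central_warehouse.pull_image(image_name)
        self.preset_cache[image_name] = content
        return content
```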
  • In this embodiment of the application, multi-computer-room image synchronization can be implemented through Harbor image warehouses; Harbor supports an image replication function and can actively copy images to the Harbor instances of other computer rooms.
  • In a possible example, the cache proxy warehouse further includes an image elimination module, and before or after any of the foregoing steps 101 to 104, the following steps may also be included:
  • A1. Acquire a first cache disk usage rate;
  • A2. When the first cache disk usage rate is greater than a preset threshold, execute an image cleaning task.
  • The preset threshold can be set by the user or defaulted by the system. In specific implementation, the cache proxy warehouse can obtain the first cache disk usage rate of the cache disk. When the first cache disk usage rate is greater than the preset threshold, the disk usage is too high and needs to be cleaned up, so the image cleaning task is executed; conversely, when the first cache disk usage rate is less than or equal to the preset threshold, the disk cache space is sufficient and the image cleaning task can be ended.
  • In a possible example, the foregoing step A2 of executing the image cleaning task may include the following steps:
  • A21. Detect whether an application associated with image i in the preset cache is deployed in the target cluster, where image i is any image in the preset cache;
  • A22. If so, keep image i; if not, delete image i.
  • In specific implementation, taking image i as an example, where image i is any image in the preset cache, the cache proxy warehouse can detect whether an application associated with image i is deployed in the target cluster. If so, image i is kept; if not, image i is deleted. In this way, disk usage can be reduced.
  • Further, in a possible example, after the foregoing step A2, the following steps may also be included:
  • A3. Acquire a second cache disk usage rate;
  • A4. When the second cache disk usage rate is less than or equal to the preset threshold, end the image cleaning task.
  • In specific implementation, after executing the image cleaning task, the cache proxy warehouse can obtain the second cache disk usage rate of the cache disk. When the second cache disk usage rate is less than or equal to the preset threshold, the disk cache space is sufficient and the image cleaning task can be ended.
  • Further, in a possible example, after the foregoing step A3, the following steps may also be included:
  • A4. Acquire an application list when the second cache disk usage rate is greater than the preset threshold;
  • A5. Keep the images in the application list that are currently in use;
  • A6. Among the images in the application list that are not in use, delete the images whose versions are lower than a preset version.
  • The preset version can be set by the user or defaulted by the system. In specific implementation, when the second cache disk usage rate is greater than the preset threshold, the cache proxy warehouse can obtain the application list, keep the images in the application list that are currently in use, and delete, among the images that are not in use, those whose versions are lower than the preset version. In this way, among images not currently in use, only those at or above the preset version are retained.
  • Further, in a possible example, after the foregoing step A6, the following steps may also be included:
  • A7. Acquire a third cache disk usage rate;
  • A8. When the third cache disk usage rate is greater than the preset threshold, delete all images in the application list that are not in use.
  • In specific implementation, the cache proxy warehouse can obtain the third cache disk usage rate of the cache disk. When the third cache disk usage rate is greater than the preset threshold, all unused images in the application list can be deleted to free up as much disk space as possible.
  • Further, in a possible example, after the foregoing step A7, the following steps may also be included:
  • A9. When the third cache disk usage rate is less than or equal to the preset threshold, end the execution of the image cleaning task.
  • In specific implementation, when the third cache disk usage rate is less than or equal to the preset threshold, the disk space is sufficient and the cache proxy warehouse can end the execution of the image cleaning task.
  • Further, in a possible example, after the foregoing step A8, the following steps may also be included:
  • A10. Acquire a fourth cache disk usage rate;
  • A11. When the fourth cache disk usage rate is greater than the preset threshold, trigger an alarm operation.
  • In specific implementation, the cache proxy warehouse can obtain the fourth cache disk usage rate of the cache disk and trigger an alarm operation when the fourth cache disk usage rate is greater than the preset threshold. The alarm operation may specifically be at least one of the following: a voice prompt, a vibration prompt, sending information to a specified device, and so on, which are not limited here. Conversely, when the fourth cache disk usage rate is less than or equal to the preset threshold, the disk space is sufficient and the image cleaning task can be ended.
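  • The threshold-driven cleanup described in steps A1 to A11 can be summarised by the following illustrative Python sketch; the data model (an ImageEntry with application, version, and in_use fields) and the numeric version comparison are simplifying assumptions made for the example, not details specified by the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set


@dataclass
class ImageEntry:
    application: str
    version: int
    in_use: bool  # whether a running workload currently uses this image


def clean_images(cache: Dict[str, ImageEntry],
                 deployed_applications: Set[str],
                 disk_usage: Callable[[], float],
                 preset_version: int,
                 threshold: float = 0.8) -> str:
    """Runs the staged cleanup and returns how the task finished."""
    # A1/A2: only start cleaning when usage exceeds the preset threshold.
    if disk_usage() <= threshold:
        return "disk usage below threshold, nothing to do"

    # Round 1 (A21/A22): drop images whose application is not deployed locally.
    for name, entry in list(cache.items()):
        if entry.application not in deployed_applications:
            del cache[name]
    if disk_usage() <= threshold:          # A3/A4
        return "finished after round 1"

    # Round 2 (A4-A6): keep in-use images, drop old versions of unused ones.
    for name, entry in list(cache.items()):
        if not entry.in_use and entry.version < preset_version:
            del cache[name]
    if disk_usage() <= threshold:          # A7/A9
        return "finished after round 2"

    # Round 3 (A8): delete every image that is not in use.
    for name, entry in list(cache.items()):
        if not entry.in_use:
            del cache[name]
    if disk_usage() <= threshold:          # A10
        return "finished after round 3"

    # A11: still above the threshold -- trigger an alarm for manual handling.
    return "alarm triggered"
```

  • A real elimination module would compute disk_usage from the cache disk and query the cluster for the deployed applications; here both are injected so the control flow of the rounds stays visible.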
  • The foregoing embodiments of this application provide a cross-computer-room image pulling method, a cache acceleration method, an image preheating method, and an image elimination mechanism.
  • Across computer rooms, and especially for overseas computer rooms, pulling images is often slow because the network bandwidth between computer rooms is limited, particularly under highly concurrent image pulls.
  • Through the application-aware image distribution system, an image cache proxy warehouse is deployed in each computer room. The image client of each computer room pulls images from the nearby proxy warehouse of its own computer room, and if an image does not exist there, the proxy falls back to the central warehouse to pull it.
  • Combined with the application-aware image preheating mechanism, this can effectively solve the problem of image pull efficiency under limited bandwidth across multiple computer rooms, thereby effectively improving the efficiency of application publishing.
  • In specific implementation, the cache proxy warehouse can support pulling images on demand, effectively sharing the load of the central warehouse and improving image pull efficiency. In addition, the application-aware image preheating method can improve the efficiency of the first pull of an image, and the application-aware image elimination mechanism keeps disk usage under control while ensuring a high cache hit rate.
  • For example, the image preheating module can monitor image push events of the message center in real time. An image push event can include the name of the newly pushed image. When an image push event is received, the application name can be obtained by parsing the image name, and whether the application is deployed in the K8S cluster of the local computer room can be queried. If it is deployed, the corresponding content of the image is pulled into the cache in advance; otherwise, the event is ignored.
  • In addition, the image elimination module can periodically check the cache disk usage rate, and when the usage rate reaches a preset threshold (for example, 80%), it can start an image cleaning task to release disk space. For example, in the first round of scanning, each image in the cache is checked to see whether an application associated with the image is deployed in the K8S cluster of the local computer room; if so, the image is kept, otherwise it is deleted. Then, if the disk usage is below the preset threshold, the cleaning task ends; otherwise a second round of scanning is performed, in which the list of all applications deployed in the local computer room is queried through the K8S API and, for each application, at most a predetermined number N (for example, N=3) of the latest image versions are kept, with in-use images kept first and the remaining versions deleted. If the disk usage is still not below the preset threshold, a third round of scanning deletes all images that are not in use; if the disk usage is still not below the preset threshold after that, an alarm is triggered so that the problem can be handled manually.
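  • The second-round retention rule in the example above (keep at most N of the newest versions per application, preferring images that are in use) might look like the following Python sketch; the tuple data layout and the N=3 default are assumptions made purely for illustration.

```python
from typing import List, Tuple


def select_versions_to_keep(images: List[Tuple[str, int, bool]],
                            keep_latest: int = 3) -> List[Tuple[str, int, bool]]:
    """images: (image_name, version, in_use) tuples for a single application.

    Keeps at most keep_latest entries, preferring in-use images first and
    then the newest versions, mirroring the second scanning round.
    """
    ranked = sorted(images, key=lambda item: (item[2], item[1]), reverse=True)
    return ranked[:keep_latest]


# Example: five cached versions of one application, one of them in use.
cached = [("app:1", 1, False), ("app:2", 2, False), ("app:3", 3, True),
          ("app:4", 4, False), ("app:5", 5, False)]
print(select_versions_to_keep(cached))
# Keeps the in-use image plus the newest remaining versions (app:3, app:5, app:4).
```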
  • In a possible example, between step 101 and step 102, the following steps may also be included:
  • B1. Acquire a target physiological state parameter of the user;
  • B2. Determine a target emotion type corresponding to the target physiological state parameter;
  • B3. When the target emotion type is a preset emotion type, execute the step of detecting whether the image pull request hits the preset cache.
  • In this embodiment of the application, the physiological state parameter may be any parameter used to reflect the user's physiological functions, and may be at least one of the following: heart rate, blood pressure, blood temperature, blood lipid level, blood glucose level, thyroxine level, adrenaline level, platelet count, blood oxygen level, and so on, which are not limited here.
  • The preset emotion type can be set by the user or defaulted by the system, and may be at least one of the following: dull, crying, calm, irritable, excited, depressed, and so on, which are not limited here.
  • In specific implementation, the cache proxy warehouse can obtain the user's target physiological state parameter through a wearable device that is communicatively connected to the cache proxy warehouse. Different physiological state parameters reflect the user's emotion type, and a mapping relationship between physiological state parameters and emotion types can be stored in the cache proxy warehouse in advance; the target emotion type corresponding to the target physiological state parameter can then be determined according to this mapping relationship. When the target emotion type is the preset emotion type, step 102 can be executed; otherwise, step 102 may not be executed.
  • In a possible example, when the target physiological state parameter is a heart rate variation curve within a specified time period, the foregoing step B1 of determining the target emotion type corresponding to the target physiological state parameter can be implemented in the following manner:
  • B11. Sample the heart rate variation curve to obtain multiple heart rate values;
  • B12. Perform a mean operation on the multiple heart rate values to obtain an average heart rate value;
  • B13. Determine a target heart rate level corresponding to the average heart rate value;
  • B14. Determine, according to a preset mapping relationship between heart rate levels and first emotion values, a target first emotion value corresponding to the target heart rate level;
  • B15. Perform a mean square error operation on the multiple heart rate values to obtain a target mean square error;
  • B16. Determine, according to a preset mapping relationship between mean square errors and second emotion values, a target second emotion value corresponding to the target mean square error;
  • B17. Determine, according to a preset mapping relationship between heart rate levels and weight value pairs, a target weight value pair corresponding to the target heart rate level, where a weight value pair includes a first weight value and a second weight value, the first weight value is the weight value corresponding to the first emotion value, and the second weight value is the weight value corresponding to the second emotion value;
  • B18. Perform a weighted operation based on the target first emotion value, the target second emotion value, and the target weight value pair to obtain a final emotion value;
  • B19. Determine, according to a preset mapping relationship between emotion values and emotion types, the target emotion type corresponding to the final emotion value.
  • The specified time period can be set by the user or defaulted by the system. The cache proxy warehouse can pre-store the preset mapping relationship between heart rate levels and first emotion values, the preset mapping relationship between mean square errors and second emotion values, the preset mapping relationship between heart rate levels and weight value pairs, and the preset mapping relationship between emotion values and emotion types. A weight value pair includes a first weight value and a second weight value, where the first weight value is the weight value corresponding to the first emotion value and the second weight value is the weight value corresponding to the second emotion value; the sum of the first weight value and the second weight value can be 1, and both take values in the range 0 to 1.
  • In this embodiment of the application, emotion can be evaluated through the heart rate variation curve.
  • In specific implementation, the cache proxy warehouse can sample the heart rate variation curve, for example by uniform sampling or random sampling, to obtain multiple heart rate values, and can perform a mean operation on the multiple heart rate values to obtain the average heart rate value. The cache proxy warehouse can pre-store a mapping relationship between heart rate values and heart rate levels, determine the target heart rate level corresponding to the average heart rate value according to this mapping relationship, and then determine the target first emotion value corresponding to the target heart rate level according to the preset mapping relationship between heart rate levels and first emotion values. A mean square error operation can also be performed on the multiple heart rate values to obtain the target mean square error, and the target second emotion value corresponding to the target mean square error can be determined according to the preset mapping relationship between mean square errors and second emotion values.
  • Further, the cache proxy warehouse can determine the target weight value pair corresponding to the target heart rate level according to the preset mapping relationship between heart rate levels and weight value pairs, where the target weight value pair includes a target first weight value and a target second weight value, the target first weight value is the weight value corresponding to the target first emotion value, and the target second weight value is the weight value corresponding to the target second emotion value. The cache proxy warehouse can then perform a weighted operation based on the target first emotion value, the target second emotion value, the target first weight value, and the target second weight value to obtain the final emotion value.
  • The specific calculation formula is as follows: final emotion value = target first emotion value × target first weight value + target second emotion value × target second weight value.
  • The target emotion type corresponding to the final emotion value can then be determined according to the foregoing preset mapping relationship between emotion values and emotion types.
  • The average heart rate reflects the user's heart rate level, and the mean square error of the heart rate reflects the stability of the heart rate; reflecting the user's emotion through these two dimensions makes it possible to accurately determine the user's emotion type.
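  • Purely as an illustration of the weighted calculation above, the following Python sketch computes the final emotion value from sampled heart rate values; the concrete level boundaries, emotion values, and weight pairs are invented placeholders, since the patent only states that such mappings are preset.

```python
import statistics
from typing import Dict, List, Tuple


def heart_rate_level(average: float) -> str:
    # Hypothetical level boundaries; the patent leaves this mapping preset/user-defined.
    if average < 70:
        return "low"
    if average < 100:
        return "medium"
    return "high"


# Hypothetical preset mappings for B14, B16, and B17.
FIRST_EMOTION_VALUE: Dict[str, float] = {"low": 0.2, "medium": 0.5, "high": 0.9}
WEIGHT_PAIR: Dict[str, Tuple[float, float]] = {
    "low": (0.7, 0.3), "medium": (0.6, 0.4), "high": (0.5, 0.5),
}


def second_emotion_value(std_dev: float) -> float:
    # Larger heart-rate variability maps to a larger second emotion value (assumed).
    return min(1.0, std_dev / 20.0)


def final_emotion_value(heart_rates: List[float]) -> float:
    average = statistics.mean(heart_rates)                                # B12
    level = heart_rate_level(average)                                     # B13
    first_value = FIRST_EMOTION_VALUE[level]                              # B14
    second_value = second_emotion_value(statistics.pstdev(heart_rates))   # B15/B16
    first_weight, second_weight = WEIGHT_PAIR[level]                      # B17, weights sum to 1
    # B18: final emotion value = first value * first weight + second value * second weight.
    return first_value * first_weight + second_value * second_weight


print(final_emotion_value([72, 75, 80, 78, 74]))
```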
  • It can be seen that the image pulling method described in this embodiment of the application is applied to a cache proxy warehouse: an image pull request initiated by a target cluster is received; the cache proxy warehouse is located in an image distribution system that further includes a central image warehouse and a message center; the cache proxy warehouse is located in a target computer room that further includes the target cluster; whether the image pull request hits a preset cache is detected; when the request hits the preset cache, the requested content is pulled from the preset cache; and when the request misses the preset cache, the requested content is pulled from the central image warehouse and saved in the preset cache. In this way, the usage of application images in the computer room can be perceived and images can be pulled according to that usage, so network bandwidth and disk space can be saved.
  • Consistent with the above, FIG. 2 is a schematic flowchart of another image pulling method provided in an embodiment of this application. The image pulling method described in this embodiment is applied to the server shown in FIG. 1A or the system architecture shown in FIG. 1B, and may include the following steps:
  • 201. Receive an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster.
  • Steps 202 to 215 correspond to the cache detection, image pulling, and multi-round image cleaning operations described above; for their specific description, reference can be made to the image pulling method shown in FIG. 1C, and details are not repeated here.
  • It can be seen that the image pulling method described in this embodiment of the application is applied to a cache proxy warehouse. On the one hand, it can perceive the usage of application images in the computer room and pull images according to that usage, and can therefore save network bandwidth and disk space. On the other hand, through the application-aware image distribution system, an image cache proxy warehouse is deployed in each computer room, the image client of each computer room pulls images from the nearby proxy warehouse of its own computer room, and if an image does not exist there, the proxy falls back to the central warehouse to pull it. Combined with the application-aware image preheating mechanism, this can effectively solve the problem of image pull efficiency under limited bandwidth across multiple computer rooms, thereby effectively improving the efficiency of application publishing.
  • Consistent with the above, FIG. 3 shows a server provided by an embodiment of this application, including a processor and a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the cache proxy warehouse is set on the server, and the programs include instructions for executing the following steps:
  • receiving an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster;
  • detecting whether the image pull request hits a preset cache;
  • when the image pull request hits the preset cache, pulling, from the preset cache, the content that the image pull request needs to pull;
  • when the image pull request misses the preset cache, pulling, from the central image warehouse, the content that the image pull request needs to pull, and saving it in the preset cache.
  • It can be seen that the server described in this embodiment of the application includes a cache proxy warehouse and receives an image pull request initiated by a target cluster; the cache proxy warehouse is located in an image distribution system that further includes a central image warehouse and a message center; the cache proxy warehouse is located in a target computer room that further includes the target cluster; whether the image pull request hits a preset cache is detected; when the request hits the preset cache, the requested content is pulled from the preset cache; and when the request misses the preset cache, the requested content is pulled from the central image warehouse and saved in the preset cache. In this way, the usage of application images in the computer room can be perceived and images can be pulled according to that usage, so network bandwidth and disk space can be saved.
  • In a possible example, the cache proxy warehouse includes an image preheating module, and in terms of detecting whether the image pull request hits the preset cache, the programs include instructions for executing the steps of: monitoring, through the image preheating module, an image push event of the message center, where the image push event includes an image name; parsing the image name to obtain a target application name; detecting whether the application corresponding to the target application name has been deployed in the target cluster of the target computer room; if it has been deployed, confirming that the image pull request hits the preset cache; and if it has not been deployed, confirming that the image pull request misses the preset cache.
  • In a possible example, the cache proxy warehouse further includes an image elimination module, and the programs further include instructions for executing the steps of: acquiring a first cache disk usage rate; and executing an image cleaning task when the first cache disk usage rate is greater than a preset threshold.
  • In a possible example, in terms of executing the image cleaning task, the programs include instructions for executing the steps of: detecting whether an application associated with image i in the preset cache is deployed in the target cluster, where image i is any image in the preset cache; and if so, keeping image i, otherwise deleting image i.
  • In a possible example, the programs further include instructions for executing the steps of: acquiring a second cache disk usage rate; and ending the image cleaning task when the second cache disk usage rate is less than or equal to the preset threshold.
  • In a possible example, the programs further include instructions for executing the steps of: acquiring an application list when the second cache disk usage rate is greater than the preset threshold; keeping the images in the application list that are in use; and deleting, among the images in the application list that are not in use, the images whose versions are lower than a preset version.
  • In a possible example, the programs further include instructions for executing the steps of: acquiring a third cache disk usage rate; and deleting all unused images in the application list when the third cache disk usage rate is greater than the preset threshold.
  • In a possible example, the programs further include instructions for executing the step of: ending the execution of the image cleaning task when the third cache disk usage rate is less than or equal to the preset threshold.
  • In a possible example, the programs further include instructions for executing the steps of: acquiring a fourth cache disk usage rate; and triggering an alarm operation when the fourth cache disk usage rate is greater than the preset threshold.
  • FIG. 4A is a schematic structural diagram of an image pulling device provided in this embodiment. The image pulling device is applied to the server shown in FIG. 1A or the system architecture shown in FIG. 1B, and is applied to a cache proxy warehouse; the device includes a receiving unit 401, a detection unit 402, and an image pulling unit 403, where:
  • the receiving unit 401 is configured to receive an image pull request, where the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further includes a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further includes the target cluster;
  • the detection unit 402 is configured to detect whether the image pull request hits a preset cache;
  • the image pulling unit 403 is configured to, when the image pull request hits the preset cache, pull, from the preset cache, the content that the image pull request needs to pull;
  • the image pulling unit 403 is further configured to, when the image pull request misses the preset cache, pull, from the central image warehouse, the content that the image pull request needs to pull, and save it in the preset cache.
  • It can be seen that the image pulling device described in this embodiment of the application is applied to a cache proxy warehouse: an image pull request initiated by a target cluster is received; the cache proxy warehouse is located in an image distribution system that further includes a central image warehouse and a message center; the cache proxy warehouse is located in a target computer room that further includes the target cluster; whether the image pull request hits a preset cache is detected; when the request hits the preset cache, the requested content is pulled from the preset cache; and when the request misses the preset cache, the requested content is pulled from the central image warehouse and saved in the preset cache. In this way, the usage of application images in the computer room can be perceived and images can be pulled according to that usage, so network bandwidth and disk space can be saved.
  • In a possible example, the cache proxy warehouse includes an image preheating module, and in terms of detecting whether the image pull request hits the preset cache, the detection unit 402 is specifically configured to: monitor, through the image preheating module, an image push event of the message center, where the image push event includes an image name; parse the image name to obtain a target application name; detect whether the application corresponding to the target application name has been deployed in the target cluster of the target computer room; if it has been deployed, confirm that the image pull request hits the preset cache; and if it has not been deployed, confirm that the image pull request misses the preset cache.
  • In a possible example, the cache proxy warehouse further includes an image elimination module. As shown in FIG. 4B, FIG. 4B is a further variant of the image pulling device shown in FIG. 4A; compared with FIG. 4A, it may further include an acquiring unit 404 and a cleaning unit 405, where:
  • the acquiring unit 404 is configured to acquire the usage rate of the first cache disk
  • the cleaning unit 405 is configured to perform an image cleaning task when the usage rate of the first cache disk is greater than a preset threshold.
  • In a possible example, in terms of executing the image cleaning task, the cleaning unit 405 is specifically configured to: detect whether an application associated with image i in the preset cache is deployed in the target cluster, where image i is any image in the preset cache; and if so, keep image i, otherwise delete image i.
  • the acquiring unit 404 is further configured to acquire the usage rate of the second cache disk
  • the cleaning unit 405 is further configured to end the image cleaning task when the usage rate of the second cache disk is less than or equal to the preset threshold.
  • the obtaining unit 404 is further configured to obtain an application list when the usage rate of the second cache disk is greater than the preset threshold;
  • the cleaning unit 405 is further configured to keep the images in the application list that are in use, and to delete, among the images in the application list that are not in use, the images whose versions are lower than a preset version.
  • the acquiring unit 404 is configured to acquire the usage rate of the third cache disk
  • the cleaning unit 405 is further configured to delete all unused images in the application list when the usage rate of the third cache disk is greater than the preset threshold.
  • the cleaning unit 405 is further configured to end the execution of the image cleaning task when the usage rate of the third cache disk is less than or equal to the preset threshold.
  • the acquiring unit 404 is further configured to acquire the usage rate of the fourth cache disk
  • the cleaning unit 405 is further configured to trigger an alarm operation when the usage rate of the fourth cache disk is greater than the preset threshold.
  • It can be understood that the functions of each program module of the image pulling device of this embodiment can be implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference can be made to the related description of the foregoing method embodiments, and details are not repeated here.
  • An embodiment of this application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any image pulling method described in the foregoing method embodiments.
  • An embodiment of this application also provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any image pulling method described in the foregoing method embodiments.
  • the disclosed device may be implemented in other ways.
  • The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of software program modules.
  • If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer-readable memory.
  • Based on such an understanding, the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of this application.
  • The foregoing memory includes media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the foregoing embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable memory, and the memory can include a flash disk, ROM, RAM, a magnetic disk, an optical disc, and so on.

Abstract

An image pulling method and related products, applied to a cache proxy warehouse. The method comprises: receiving an image pull request, wherein the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further comprises a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further comprises the target cluster (101); detecting whether the image pull request hits a preset cache (102); when the image pull request hits the preset cache, pulling, from the preset cache, the content that the image pull request needs to pull (103); and when the image pull request misses the preset cache, pulling, from the central image warehouse, the content that the image pull request needs to pull, and saving it in the preset cache (104). With the above method, image pulling can be implemented according to the usage of application images in the computer room.

Description

镜像拉取方法及相关产品 技术领域
本申请涉及计算机领域,具体涉及一种镜像拉取方法及相关产品。
背景技术
Harbor基于策略的Docker镜像复制功能,可在不同的数据中心、不同的运行环境之间同步镜像,并提供友好的管理界面,大大简化了实际运维中的镜像管理工作。但是,Harbor的镜像复制功能无法感知所在机房内的应用镜像使用情况,会造成无意义的镜像复制,浪费网络带宽及磁盘空间。
发明内容
本申请实施例提供了一种镜像拉取方法及相关产品,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,能够节省网络带宽以及磁盘空间。
第一方面,本申请实施例一种镜像拉取方法,应用于缓存代理仓库,包括:
接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群;
检测所述镜像拉取请求是否命中预设缓存;
在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容;
在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
第二方面,本申请实施例提供了一种镜像拉取装置,应用于缓存代理仓库,所述装置包括:接收单元、检测单元和镜像拉取单元,其中,
所述接收单元,用于接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群;
所述检测单元,用于检测所述镜像拉取请求是否命中预设缓存;
所述镜像拉取单元,用于在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容;
所述镜像拉取单元,还用于在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
第三方面,本申请实施例提供一种服务器,包括处理器、存储器、通信接口,以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行本申请实施例第一方面中的步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
附图说明
下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的一种服务器的结构示意图;
图1B本申请实施例提供的实施镜像拉取方法的架构示意图;
图1C是本申请实施例公开的一种镜像拉取方法的流程示意图;
图2是本申请实施例公开的另一种镜像拉取方法的流程示意图;
图3是本申请实施例公开的另一种服务器的结构示意图;
图4A是本申请实施例公开的一种镜像拉取装置的结构示意图;
图4B是本申请实施例公开的另一种镜像拉取装置的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本申请实施例所涉及到的缓存代理仓库可以为服务器或者可以设置于服务器。
下面对本申请实施例进行详细介绍。
请参阅图1A,图1A是本申请实施例公开的一种服务器的结构示意图,服务器100可以包括控制电路,该控制电路可以包括存储和处理电路110。该存储和处理电路110可以存储器,例如硬盘驱动存储器,非易失性存储器(例如闪存或用于形成固态驱动器的其它电子可编程只读存储器等),易失性存储器(例如静态或动态随机存取存储器等)等,本申请实施例不作限制。存储和处理电路110中的处理电路可以用于控制服务器100的运转。该处理电路可以基于一个或多个微处理器,微控制器,基带处理器,功率管理单元,音频编解码器芯片,专用集成电路,显示驱动器集成电路等来实现。
存储和处理电路110可用于运行服务器100中的软件,例如互联网浏览应用程序,互联网协议语音(voice over internet protocol,VOIP)电话呼叫应用程序,电子邮件应用程序,媒体播放应用程序,操作系统功能等。这些软件可以用于执行一些控制操作,例如,基于照相机的图像采集,基于环境光传感器的环境光测量,基于接近传感器的接近传感器测量,基于诸如发光二极管的状态指示灯等状态指示器实现的信息显示功能,基于触摸传感器的触摸事件检测,与在多个(例如分层的)显示器上显示信息相关联的功能,与执行无线通 信功能相关联的操作,与收集和产生音频信号相关联的操作,与收集和处理按钮按压事件数据相关联的控制操作,以及服务器100中的其它功能等,本申请实施例不作限制。
服务器100还可以包括输入-输出电路150。输入-输出电路150可用于使服务器100实现数据的输入和输出,即允许服务器100从外部设备接收数据和也允许服务器100将数据从服务器100输出至外部设备。输入-输出电路150可以进一步包括传感器170。传感器170可以包括环境光传感器,基于光和电容的接近传感器,触摸传感器(例如,基于光触摸传感器和/或电容式触摸传感器,其中,触摸传感器可以是触控显示屏的一部分,也可以作为一个触摸传感器结构独立使用),加速度传感器,重力传感器,和其它传感器等。
输入-输出电路150还可以包括一个或多个显示器,例如显示器130。显示器130可以包括液晶显示器,有机发光二极管显示器,电子墨水显示器,等离子显示器,使用其它显示技术的显示器中一种或者几种的组合。显示器130可以包括触摸传感器阵列(即,显示器130可以是触控显示屏)。触摸传感器可以是由透明的触摸传感器电极(例如氧化铟锡(ITO)电极)阵列形成的电容式触摸传感器,或者可以是使用其它触摸技术形成的触摸传感器,例如音波触控,压敏触摸,电阻触摸,光学触摸等,本申请实施例不作限制。
音频组件140可以用于为服务器100提供音频输入和输出功能。服务器100中的音频组件140可以包括扬声器,麦克风,蜂鸣器,音调发生器以及其它用于产生和检测声音的组件。
通信电路120可以用于为服务器100提供与外部设备通信的能力。通信电路120可以包括模拟和数字输入-输出接口电路,和基于射频信号和/或光信号的无线通信电路。通信电路120中的无线通信电路可以包括射频收发器电路、功率放大器电路、低噪声放大器、开关、滤波器和天线。举例来说,通信电路120中的无线通信电路可以包括用于通过发射和接收近场耦合电磁信号来支持近场通信(near field communication,NFC)的电路。例如,通信电路120可以包括近场通信天线和近场通信收发器。通信电路120还可以包括蜂窝电话收发器和天线,无线局域网收发器电路和天线等。
服务器100还可以进一步包括电池,电力管理电路和其它输入-输出单元160。输入-输出单元160可以包括按钮,操纵杆,点击轮,滚动轮,触摸板,小键盘,键盘,照相机,发光二极管和其它状态指示器等。
用户可以通过输入-输出电路150输入命令来控制服务器100的操作,并且可以使用输入-输出电路150的输出数据以实现接收来自服务器100的状态信息和其它输出。
基于此,请参阅图1B,图1B提供了实施本申请实施例所涉及的方法的系统架构,本申请实施例所所及的方法可以应用于缓存代理仓库,该缓存代理仓库可以设置于服务器,缓存代理仓库位于缓存分发系统,该系统可以包含一个中心镜像仓库,一个消息中心,N个机房,N为正整数,每个机房内部署一个缓存代理仓库。其中,中心镜像仓库可以用于保存所有的应用镜像,当客户端将一个应用的镜像推送到中心镜像仓库后,仓库会将该镜像通知到消息中心。镜像名称中包含应用名称;消息中心可以用于保存镜像推送记录,支持实时监听接口,通过该接口,客户端可以实时收到推送新镜像的事件。缓存代理仓库则可以服务于同机房内的镜像客户端,缓存着按需拉取的应用镜像。同时缓存代理仓库还支持应用感知的镜像预热机制(镜像预热模块)及镜像淘汰机制(镜像淘汰模块),当然,缓存代理仓库还可以包括注册表模块。上述缓存代理仓库、中心镜像仓库和消息中心可以设置于不同的服务器或者平台。上述缓存代理仓库、中心镜像仓库均可以具备Docker镜像复制功能。
基于图1B所示的系统框架,可以实现如下方法,该方法应用于任一缓存代理仓库,具体如下:
接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群;
检测所述镜像拉取请求是否命中预设缓存;
在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容;
在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
可以看出,上述本申请实施例所描述的镜像拉取方法,应用于缓存代理仓库,接收镜像拉取请求,镜像拉取请求由目标集群发起,缓存代理仓库位于镜像分发系统,镜像分布系统还包括中心镜像仓库和消息中心,缓存代理仓库位于目标机房,目标机房还包括目标集群,检测镜像拉取请求是否命中预设缓存,在镜像拉取请求命中预设缓存时,从预设缓存中拉取镜像拉取请求需要拉取的内容,在镜像拉取请求未命中预设缓存时,从中心镜像仓库拉取镜像拉取请求需要拉取的内容,并保存在预设缓存中,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,因此,能够节省网络带宽以及磁盘空间。
请参阅图1C,图1C是本申请实施例提供的一种镜像拉取方法的流程示意图,本实施例中所描述的镜像拉取方法,应用于如图1A的服务器或者图1B所示的系统架构,该镜像拉取方法包括:
101、接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群。
本申请实施例中目标集群可以为任一类型的集群,例如,目标集群可以为K8S集群。镜像分发系统可以包括中心镜像仓库、消息中心和多个机房,每个机房可以包括一个缓存代理仓库和一个集群(例如,K8S集群)。缓存代理仓库、中心镜像仓库均可以为Harbor镜像仓库。
具体实现中,缓存代理仓库和目标集群可以设置于同一服务器,或者,可以设置于不同的服务器。缓存代理仓库可以接收镜像拉取请求,该镜像拉取请求由目标集群发起。
102、检测所述镜像拉取请求是否命中预设缓存。
其中,预设缓存可以为一个缓存列表,或者为一个缓存区域,例如,预设缓存可以设置于缓存代理仓库或者可以设置于本地磁盘(缓存磁盘)。缓存代理仓库可以检测镜像拉取请求是否命中预设缓存,该预设缓存可以由用户预先设置或者系统默认。
在一个可能的示例中,所述缓存代理仓库包括镜像预热模块;上述步骤102,检测所述镜像拉取请求是否命中预设缓存,可以包括如下步骤:
21、通过所述镜像预热模块监听所述消息中心的镜像推送事件,所述镜像推送事件中包括镜像名称;
22、对所述镜像名称进行解析,得到目标应用名称;
23、检测所述目标应用名称对应的应用是否已在所述目标机房的目标集群中部署;
24、在所述目标应用名称对应的应用已在所述目标机房的集群中进行部署,确认所述镜像拉取请求命中所述预设缓存;
25、在所述目标应用名称对应的应用未在所述目标机房的集群中进行部署,确认所述镜像拉取请求未命中所述预设缓存。
其中,缓存代理仓库可以包括镜像预热模块,镜像预热模块主要用于监听消息中心的 镜像推送事件。
具体实现中,缓存代理仓库可以通过镜像预热模块监听消息中心的镜像推送事件,该镜像推送事件中可以包括镜像名称,进而,可以对镜像名称进行解析,得到目标应用名称,可以检测目标应用名称对应的应用是否已在目标机房的目标集群中部署,倘若在目标应用名称对应的应用已在目标机房的集群中进行部署,则可以确认镜像拉取请求命中预设缓存,反之,倘若在目标应用名称对应的应用未在目标机房的集群中进行部署,则可以确认镜像拉取请求未命中预设缓存。
103、在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容。
其中,具体实现中,缓存代理仓库可以在镜像拉取请求命中预设缓存时,可以从预设缓存中拉取该镜像拉取请求需要拉取的内容。
104、在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
其中,具体实现中,缓存代理仓库在镜像拉取请求未命中预设缓存时,则可以从中心镜像仓库拉取镜像拉取请求需要拉取的内容,并保存在预设缓存中。
本申请实施例,可以通过Harbor镜像仓库实现多机房镜像同步,Harbor支持镜像复制功能,可以将镜像主动复制到其他机房的Harbor。
在一个可能的示中,所述缓存代理仓库还包括镜像淘汰模块,上述步骤101-步骤104中任一步骤之前或者之后,还可以包括如下步骤:
A1、获取第一缓存磁盘使用率;
A2、在所述第一缓存磁盘使用率大于预设阈值时,执行镜像清理任务。
其中,上述预设阈值可以由用户自行设置或者系统默认,具体实现中,缓存代理仓库可以获取缓存磁盘的第一缓存磁盘使用率,则说明磁盘使用率过高,需要对其进行清理,则可以在第一缓存磁盘使用率大于预设阈值时,执行镜像清理任务,反之,在第一缓存磁盘使用率小于或等于预设阈值时,则说明磁盘缓存空间充足,则可以结束镜像清理任务。
在一个可能的示例中,上述步骤A2,执行镜像清理任务,可以包括如下步骤:
A21、检测所述预设缓存中的镜像i是否在所述目标集群中部署与该镜像i相关联的应用,所述镜像i为所述预设缓存中的任一镜像;
A22、若是,保留所述镜像i,若否,删除所述镜像i。
具体实现中,以镜像i为例,镜像i为预设缓存中的任一镜像,缓存代理仓库可以检测预设缓存中的镜像i是否在目标集群中部署与该镜像i相关联的应用,若是,则可以保留镜像i,若否,删除镜像i,如此,可以降低磁盘使用率。
进一步地,在一个可能的示中,上述步骤A2之后,还可以包括如下步骤:
A3、获取第二缓存磁盘使用率;
A4、在所述第二缓存磁盘使用率小于或等于所述预设阈值时,结束所述镜像清理任务。
其中,在执行镜像清理任务之后,缓存代理仓库则可以获取缓存磁盘的第二缓存磁盘使用率,在第二缓存磁盘使用率小于或等于预设阈值时,则说明磁盘缓存空间充足,则可以结束镜像清理任务。
进一步地,在一个可能的示例中,上述步骤A3之后,还可以包括如下步骤:
A4、在所述第二缓存磁盘使用率大于所述预设阈值时,获取应用列表;
A5、保留所述应用列表中被正在使用的镜像;
A6、删除所述应用列表中未被正在使用的镜像中版本低于预设版本的镜像。
具体实现中,预设版本可以由用户自行设置或者系统默认。缓存代理仓库可以在第二缓存磁盘使用率大于所述预设阈值时,获取应用列表,进而,可以保留应用列表中被正在 使用的镜像,而是,删除应用列表中未被正在使用的镜像中版本低于预设版本的镜像,如此,可以保留用户未被正在使用且低版本的镜像。
进一步地,在一个可能的示例中,上述步骤A6之后,还可以包括如下步骤:
A7、获取第三缓存磁盘使用率;
A8、在所述第三缓存磁盘使用率大于所述预设阈值时,删除所述应用列表中所有未被正在使用的镜像。
具体实现中,缓存代理仓库可以获取缓存磁盘的第三缓存磁盘使用率,在第三缓存磁盘使用率大于预设阈值时,可以删除应用列表中所有未被正在使用的镜像,如此,尽可能腾出较多空闲磁盘空间。
进一步地,在一个可能的示例中,上述步骤A7之后,还可以包括如下步骤:
A9、在所述第三缓存磁盘使用率小于或等于所述预设阈值时,结束执行所述镜像清理任务。
具体实现中,缓存代理仓库可以在第三缓存磁盘使用率小于或等于预设阈值时,则说明磁盘空间充足,则可以结束执行镜像清理任务。
进一步地,在一个可能的示例中,上述步骤A8之后,还可以包括如下步骤
A10、获取第四缓存磁盘使用率;
A11、在所述第四缓存磁盘使用率大于所述预设阈值时,触发告警操作。
具体实现中,缓存代理仓库可以获取缓存磁盘的第四缓存磁盘使用率,在第四缓存磁盘使用率大于预设阈值时,则可以触发告警操作,告警操作具体可以为以下至少一种:语音提示、振动提示、向指定设备发送信息等等,在此不做限定,反之,在第四缓存磁盘使用率小于或等于预设阈值时,则说明磁盘空间充足,则可以结束执行镜像清理任务。
上述本申请实施例中,提供了一种跨机房镜像拉取方法,缓存加速方法,镜像预热方法及镜像淘汰机制。跨机房尤其是海外机房,受制于机房之间有限的网络带宽,拉取镜像往往会很慢,尤其是高并发拉取镜像的情况下。通过应用感知的镜像分发系统,在各个机房部署镜像缓存代理仓库,各机房的镜像客户端就近从所属机房的代理仓库拉取镜像,如果不存在则由代理回源到中心仓库拉取。结合应用感知的镜像预热机制,可以有效解决多机房带宽受限的镜像拉取效率问题,从而有效提高应用发布效率。
具体实现中,缓存代理仓库可以支持按需拉取镜像,有效分摊中心仓库负载,提高镜像拉取效率,另外,还可以基于应用感知的镜像预热方法,提高镜像首次拉取效率,以及基于应用感知的镜像淘汰机制,确保磁盘使用使用率的同时,保证较高的缓存命中率。
举例说明下,镜像预热模块可以实时监听消息中心的镜像推送事件,该镜像推送事件中可以包含新推送的镜像名称,当监听到镜像推送事件后,可以通过解析镜像名称,得出应用名称,并查询该应用是否在本机房的K8S集群中有部署,如果有部署,则提前拉取该镜像相应的内容到缓存,否则忽略该事件。
另外,镜像淘汰模块可以周期性的检查缓存磁盘使用率,当使用率达到预设阈值(例如,80%)时,则可以启动镜像清理任务以释放磁盘空间。例如,第一轮扫描,可以检查缓存中的每一个镜像,是否在本机房的k8s集群中有部署该镜像关联的应用。如果是,则保留该镜像,否则删除。进一步地,检查磁盘空间是否低于预设阈值,如果是,则结束本次清理任务;否则,进行第二轮扫描,第二轮扫描,则可以通过K8S的API查询本机房中部署的所有应用列表,针对每个应用,最多保留预定的N(例如,N=3)个最新版本的镜像,优先保留正在使用的镜像。删除其余版本的镜像,检查磁盘空间是否低于预设阈值,如果是,则结束本次清理任务;否则进行第三轮扫描,第三轮扫描,删除所有未被正在使用的镜像,检查磁盘空间是否低于预设阈值,如果是,则结束本次清理任务。否则触发告警,进而,可以由人工介入处理。
在一个可能的示例中,步骤101-步骤102之间,还可以包括如下步骤:
B1、获取用户的目标生理状态参数;
B2、确定所述目标生理状态参数对应的目标情绪类型;
B3、在所述目标情绪类型为预设情绪类型时,执行所述检测所述镜像拉取请求是否命中预设缓存的步骤。
其中,本申请实施例中,生理状态参数可以为用于反映用户生理机能的各种参数,生理状态参数可以为以下至少一种:心率、血压、血温、血脂含量、血糖含量、甲状腺素含量、肾上腺素含量、血小板含量、血氧含量等等,在此不做限定。预设情绪类型可以由用户自行设置或者系统默认。预设情绪类型可以为以下至少一种:沉闷、哭泣、平静、暴躁、兴奋、郁闷等等,在此不做限定。
具体实现中,缓存代理仓库可以通过可该缓存代理仓库进行通信连接的可穿戴设获取用户的目标生理状态参数,不同的生理状态参数反映了用户的情绪类型,缓存代理仓库中可以预先存储生理状态参数与情绪类型之间的映射关系,进而,可以依据该映射关系确定目标生理状态参数对应的目标情绪类型,进而,可以在目标情绪类型为预设情绪类型时,执行步骤102,否则,则可以不执行步骤102。
在一个可能的示例中,在所述目标生理状态参数为指定时间段内的心率变化曲线时,上述步骤B1,确定所述目标生理状态参数对应的目标情绪类型,可以按照如下方式实施:
B11、对所述心率变化曲线进行采样,得到多个心率值;
B12、依据所述多个心率值进行均值运算,得到平均心率值;
B13、确定所述平均心率值对应的目标心率等级;
B14、按照预设的心率等级与第一情绪值之间的映射关系,确定所述目标心率等级对应的目标第一情绪值;
B15、依据所述多个心率值进行均方差运算,得到目标均方差;
B16、按照预设的均方差与第二情绪值之间的映射关系,确定所述目标均方差对应的目标第二情绪值;
B17、按照预设的心率等级与权值对之间的映射关系,确定所述目标心率等级对应的目标权值对,所述权值对包括第一权值和第二权值,所述第一权值为所述第一情绪值对应的权值,所述第二权值为所述第二情绪值对应的权值;
B18、依据所述目标第一情绪值、所述目标第二情绪值和所述目标权值对进行加权运算,得到最终情绪值;
B19、按照预设的情绪值与情绪类型之间的映射关系,确定所述目标情绪值对应的所述目标情绪类型。
其中,指定时间段可以由用户自行设置或者系统默认,缓存代理仓库中可以预先存储预设的心率等级与第一情绪值之间的映射关系,以及预设的均方差与第二情绪值之间的映射关系,以及预设的心率等级与权值对之间的映射关系,以及预设的情绪值与情绪类型之间的映射关系,上述权值对可以包括第一权值和第二权值,第一权值为第一情绪值对应的权值,第二权值为第二情绪值对应的权值,其中,第一权值与第二权值之和可以为1,且第一权值、第二权值的取值范围均为0~1。本申请实施例中,可以通过心率变化曲线来评估情绪。
具体实现中,缓存代理仓库可以对心率变化曲线进行采样,具体采样方式可以为:均匀采样或者随机采样,得到多个心率值,并且可以依据多个心率值进行均值运算,得到平均心率值,缓存代理仓库中可以预先存储心率值与心率等级之间的映射关系,进而,可以依据该映射关系确定平均心率值对应的目标心率等级,进而,可以按照上述预设的心率等级与第一情绪值之间的映射关系,确定目标心率等级对应的目标第一情绪值,进而,还可 以依据多个心率值进行均方差运算,得到目标均方差,并且可以按照预设的均方差与第二情绪值之间的映射关系,确定该目标均方差对应的目标第二情绪值。
进一步地,缓存代理仓库还可以按照上述预设的心率等级与权值对之间的映射关系,确定目标心率等级对应的目标权值对,该目标权值对可以包括目标第一权值和目标第一权值,目标第一权值为目标第一情绪值对应的权值,目标第二权值为目标第二情绪值对应的权值,进而,缓存代理仓库可以依据目标第一情绪值、目标第二情绪值、目标第一权值和目标第二权值进行加权运算,得到最终情绪值,具体计算公式如下:
最终情绪值=目标第一情绪值*目标第一权值+目标第二情绪值*目标第二权值
进而,可以按照上述预设的情绪值与情绪类型之间的映射关系,确定目标情绪值对应的目标情绪类型。其中,上述平均心率反映了用户的心率值,心率的均方差反映了心率稳定性,通过平均心率和均方差两个维度反映了用户的情绪,能够精准确定用户的情绪类型。
可以看出,上述本申请实施例所描述的镜像拉取方法,应用于缓存代理仓库,接收镜像拉取请求,镜像拉取请求由目标集群发起,缓存代理仓库位于镜像分发系统,镜像分布系统还包括中心镜像仓库和消息中心,缓存代理仓库位于目标机房,目标机房还包括目标集群,检测镜像拉取请求是否命中预设缓存,在镜像拉取请求命中预设缓存时,从预设缓存中拉取镜像拉取请求需要拉取的内容,在镜像拉取请求未命中预设缓存时,从中心镜像仓库拉取镜像拉取请求需要拉取的内容,并保存在预设缓存中,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,因此,能够节省网络带宽以及磁盘空间。
与上述一致地,请参阅图2,图2是本申请实施例提供的另一种镜像拉取方法的流程示意图,本实施例中所描述的镜像拉取方法,应用于如图1A的服务器或者图1B所示的系统架构,该方法可包括以下步骤:
201、接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群。
202、检测所述镜像拉取请求是否命中预设缓存。
203、在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容。
204、在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
205、获取第一缓存磁盘使用率。
206、在所述第一缓存磁盘使用率大于预设阈值时,执行镜像清理任务。
207、获取第二缓存磁盘使用率。
208、在所述第二缓存磁盘使用率小于或等于所述预设阈值时,结束所述镜像清理任务。
209、在所述第二缓存磁盘使用率大于所述预设阈值时,获取应用列表。
210、保留所述应用列表中被正在使用的镜像,删除所述应用列表中未被正在使用的镜像中版本低于预设版本的镜像。
211、获取第三缓存磁盘使用率。
212、在所述第三缓存磁盘使用率大于所述预设阈值时,删除所述应用列表中所有未被正在使用。
213、在所述第三缓存磁盘使用率小于或等于所述预设阈值时,结束执行所述镜像清理任务。
214、获取第四缓存磁盘使用率。
215、在所述第四缓存磁盘使用率大于所述预设阈值时,触发告警操作。
其中,上述步骤201-步骤215的具体描述可以参照图1C所示的镜像拉取方法,在此不再赘述。
可以看出,上述本申请实施例所描述的镜像拉取方法,应用于缓存代理仓库,一方面,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,因此,能够节省网络带宽以及磁盘空间,另一方面,通过应用感知的镜像分发系统,在各个机房部署镜像缓存代理仓库,各机房的镜像客户端就近从所属机房的代理仓库拉取镜像,如果不存在则由代理回源到中心仓库拉取。结合应用感知的镜像预热机制,可以有效解决多机房带宽受限的镜像拉取效率问题,从而有效提高应用发布效率。
以下是实施上述镜像拉取方法的装置,具体如下:
与上述一致地,请参阅图3,图3是本申请实施例提供的一种服务器,包括:处理器和存储器;以及一个或多个程序,所述一个或多个程序被存储在所述存储器中,并且被配置成由所述处理器执行,缓存代理仓库设置于服务器,所述程序包括用于执行以下步骤的指令:
接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群;
检测所述镜像拉取请求是否命中预设缓存;
在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容;
在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
可以看出,上述本申请实施例所描述的服务器,该服务器包括缓存代理仓库,接收镜像拉取请求,镜像拉取请求由目标集群发起,缓存代理仓库位于镜像分发系统,镜像分布系统还包括中心镜像仓库和消息中心,缓存代理仓库位于目标机房,目标机房还包括目标集群,检测镜像拉取请求是否命中预设缓存,在镜像拉取请求命中预设缓存时,从预设缓存中拉取镜像拉取请求需要拉取的内容,在镜像拉取请求未命中预设缓存时,从中心镜像仓库拉取镜像拉取请求需要拉取的内容,并保存在预设缓存中,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,因此,能够节省网络带宽以及磁盘空间。
在一个可能的示例中,所述缓存代理仓库包括镜像预热模块;在所述检测所述镜像拉取请求是否命中预设缓存方面,所述程序包括用于执行以下步骤的指令:
通过所述镜像预热模块监听所述消息中心的镜像推送事件,所述镜像推送事件中包括镜像名称;
对所述镜像名称进行解析,得到目标应用名称;
检测所述目标应用名称对应的应用是否已在所述目标机房的目标集群中部署;
在所述目标应用名称对应的应用已在所述目标机房的集群中进行部署,确认所述镜像拉取请求命中所述预设缓存;
在所述目标应用名称对应的应用未在所述目标机房的集群中进行部署,确认所述镜像拉取请求未命中所述预设缓存。
在一个可能的示例中,所述缓存代理仓库还包括镜像淘汰模块,所述程序还包括用于执行以下步骤的指令:
获取第一缓存磁盘使用率;
在所述第一缓存磁盘使用率大于预设阈值时,执行镜像清理任务。
在一个可能的示例中,在所述执行镜像清理任务方面,所述程序包括用于执行以下步骤的指令:
检测所述预设缓存中的镜像i是否在所述目标集群中部署与该镜像i相关联的应用,所述镜像i为所述预设缓存中的任一镜像;
若是,保留所述镜像i,若否,删除所述镜像i。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
获取第二缓存磁盘使用率;
在所述第二缓存磁盘使用率小于或等于所述预设阈值时,结束所述镜像清理任务。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
在所述第二缓存磁盘使用率大于所述预设阈值时,获取应用列表;
保留所述应用列表中被正在使用的镜像;
删除所述应用列表中未被正在使用的镜像中版本低于预设版本的镜像。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
获取第三缓存磁盘使用率;
在所述第三缓存磁盘使用率大于所述预设阈值时,删除所述应用列表中所有未被正在使用的镜像。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
在所述第三缓存磁盘使用率小于或等于所述预设阈值时,结束执行所述镜像清理任务。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
获取第四缓存磁盘使用率;
在所述第四缓存磁盘使用率大于所述预设阈值时,触发告警操作。
请参阅图4A,图4A是本实施例提供的一种镜像拉取装置的结构示意图。该镜像拉取装置应用于如图1A所示的服务器或者图1B所示的系统架构,应用于缓存代理仓库,所述装置包括:接收单元401、检测单元402和镜像拉取单元403,其中,
所述接收单元401,用于接收镜像拉取请求,所述镜像拉取请求由目标集群发起,所述缓存代理仓库位于镜像分发系统,所述镜像分布系统还包括中心镜像仓库和消息中心,所述缓存代理仓库位于目标机房,所述目标机房还包括所述目标集群;
所述检测单元402,用于检测所述镜像拉取请求是否命中预设缓存;
所述镜像拉取单元403,用于在所述镜像拉取请求命中预设缓存时,从所述预设缓存中拉取所述镜像拉取请求需要拉取的内容;
所述镜像拉取单元403,还用于在所述镜像拉取请求未命中所述预设缓存时,从所述中心镜像仓库拉取所述镜像拉取请求需要拉取的内容,并保存在所述预设缓存中。
可以看出,上述本申请实施例所描述的镜像拉取装置,应用于缓存代理仓库,接收镜像拉取请求,镜像拉取请求由目标集群发起,缓存代理仓库位于镜像分发系统,镜像分布系统还包括中心镜像仓库和消息中心,缓存代理仓库位于目标机房,目标机房还包括目标集群,检测镜像拉取请求是否命中预设缓存,在镜像拉取请求命中预设缓存时,从预设缓存中拉取镜像拉取请求需要拉取的内容,在镜像拉取请求未命中预设缓存时,从中心镜像仓库拉取镜像拉取请求需要拉取的内容,并保存在预设缓存中,能够感知到机房内的应用镜像使用情况,并且依据机房内的应用镜像使用情况实现镜像拉取,因此,能够节省网络带宽以及磁盘空间。
In a possible example, the cache proxy warehouse includes an image preheating module; in terms of detecting whether the image pull request hits the preset cache, the detection unit 402 is specifically configured to:
listen, through the image preheating module, for an image push event of the message center, where the image push event includes an image name;
parse the image name to obtain a target application name;
detect whether the application corresponding to the target application name has been deployed in the target cluster of the target computer room;
when the application corresponding to the target application name has been deployed in the cluster of the target computer room, confirm that the image pull request hits the preset cache; and
when the application corresponding to the target application name has not been deployed in the cluster of the target computer room, confirm that the image pull request misses the preset cache.
In a possible example, the cache proxy warehouse further includes an image eviction module. As shown in FIG. 4B, FIG. 4B is a further variant of the image pulling apparatus shown in FIG. 4A; compared with FIG. 4A, it may further include: an obtaining unit 404 and a cleanup unit 405, where:
the obtaining unit 404 is configured to obtain a first cache disk usage rate;
the cleanup unit 405 is configured to, when the first cache disk usage rate is greater than a preset threshold, execute an image cleanup task.
In a possible example, in terms of executing the image cleanup task, the cleanup unit 405 is specifically configured to:
detect, for an image i in the preset cache, whether an application associated with the image i is deployed in the target cluster, where the image i is any image in the preset cache; and
if so, retain the image i; if not, delete the image i.
In a possible example,
the obtaining unit 404 is further configured to obtain a second cache disk usage rate;
the cleanup unit 405 is further configured to, when the second cache disk usage rate is less than or equal to the preset threshold, end the image cleanup task.
In a possible example,
the obtaining unit 404 is further configured to, when the second cache disk usage rate is greater than the preset threshold, obtain an application list;
the cleanup unit 405 is further configured to retain the images in the application list that are currently in use, and to delete, among the images in the application list that are not currently in use, those whose version is lower than a preset version.
In a possible example,
the obtaining unit 404 is further configured to obtain a third cache disk usage rate;
the cleanup unit 405 is further configured to, when the third cache disk usage rate is greater than the preset threshold, delete all images in the application list that are not currently in use.
In a possible example,
the cleanup unit 405 is further configured to, when the third cache disk usage rate is less than or equal to the preset threshold, end the image cleanup task.
In a possible example,
the obtaining unit 404 is further configured to obtain a fourth cache disk usage rate;
the cleanup unit 405 is further configured to, when the fourth cache disk usage rate is greater than the preset threshold, trigger an alarm operation.
It can be understood that the functions of the program modules of the image pulling apparatus in this embodiment may be specifically implemented according to the methods in the above method embodiments. For the specific implementation process, reference may be made to the relevant descriptions of the above method embodiments; details are not repeated here.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any image pulling method described in the above method embodiments.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any image pulling method described in the above method embodiments.
It should be noted that, for the sake of brevity, each of the foregoing method embodiments is described as a series of action combinations. However, those skilled in the art should appreciate that the present application is not limited by the described order of actions, because according to the present application, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing relevant hardware, and the program may be stored in a computer-readable memory, which may include a flash drive, a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

  1. An image pulling method, applied to a cache proxy warehouse, comprising:
    receiving an image pull request, wherein the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further comprises a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further comprises the target cluster;
    detecting whether the image pull request hits a preset cache;
    when the image pull request hits the preset cache, pulling content requested by the image pull request from the preset cache; and
    when the image pull request misses the preset cache, pulling the content requested by the image pull request from the central image warehouse and saving it in the preset cache.
  2. The method according to claim 1, wherein the cache proxy warehouse comprises an image preheating module, and the detecting whether the image pull request hits a preset cache comprises:
    listening, through the image preheating module, for an image push event of the message center, wherein the image push event comprises an image name;
    parsing the image name to obtain a target application name;
    detecting whether an application corresponding to the target application name has been deployed in the target cluster of the target computer room;
    when the application corresponding to the target application name has been deployed in the cluster of the target computer room, confirming that the image pull request hits the preset cache; and
    when the application corresponding to the target application name has not been deployed in the cluster of the target computer room, confirming that the image pull request misses the preset cache.
  3. The method according to claim 1 or 2, wherein the cache proxy warehouse further comprises an image eviction module, and the method further comprises:
    obtaining a first cache disk usage rate; and
    when the first cache disk usage rate is greater than a preset threshold, executing an image cleanup task.
  4. The method according to claim 3, wherein the executing an image cleanup task comprises:
    detecting, for an image i in the preset cache, whether an application associated with the image i is deployed in the target cluster, wherein the image i is any image in the preset cache; and
    if so, retaining the image i; if not, deleting the image i.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    obtaining a second cache disk usage rate; and
    when the second cache disk usage rate is less than or equal to the preset threshold, ending the image cleanup task.
  6. The method according to claim 5, wherein the method further comprises:
    when the second cache disk usage rate is greater than the preset threshold, obtaining an application list;
    retaining images in the application list that are currently in use; and
    deleting, among the images in the application list that are not currently in use, those whose version is lower than a preset version.
  7. The method according to claim 6, wherein the method further comprises:
    obtaining a third cache disk usage rate; and
    when the third cache disk usage rate is greater than the preset threshold, deleting all images in the application list that are not currently in use.
  8. The method according to claim 7, wherein the method further comprises:
    when the third cache disk usage rate is less than or equal to the preset threshold, ending the image cleanup task.
  9. The method according to claim 7, wherein the method further comprises:
    obtaining a fourth cache disk usage rate; and
    when the fourth cache disk usage rate is greater than the preset threshold, triggering an alarm operation.
  10. An image pulling apparatus, applied to a cache proxy warehouse, the apparatus comprising: a receiving unit, a detection unit, and an image pulling unit, wherein
    the receiving unit is configured to receive an image pull request, wherein the image pull request is initiated by a target cluster, the cache proxy warehouse is located in an image distribution system, the image distribution system further comprises a central image warehouse and a message center, the cache proxy warehouse is located in a target computer room, and the target computer room further comprises the target cluster;
    the detection unit is configured to detect whether the image pull request hits a preset cache;
    the image pulling unit is configured to, when the image pull request hits the preset cache, pull content requested by the image pull request from the preset cache; and
    the image pulling unit is further configured to, when the image pull request misses the preset cache, pull the content requested by the image pull request from the central image warehouse and save it in the preset cache.
  11. The apparatus according to claim 10, wherein the cache proxy warehouse comprises an image preheating module, and, in terms of detecting whether the image pull request hits the preset cache, the detection unit is specifically configured to:
    listen, through the image preheating module, for an image push event of the message center, wherein the image push event comprises an image name;
    parse the image name to obtain a target application name;
    detect whether an application corresponding to the target application name has been deployed in the target cluster of the target computer room;
    when the application corresponding to the target application name has been deployed in the cluster of the target computer room, confirm that the image pull request hits the preset cache; and
    when the application corresponding to the target application name has not been deployed in the cluster of the target computer room, confirm that the image pull request misses the preset cache.
  12. The apparatus according to claim 10 or 11, wherein the cache proxy warehouse further comprises an image eviction module, and the apparatus further comprises: an obtaining unit and a cleanup unit, wherein
    the obtaining unit is configured to obtain a first cache disk usage rate; and
    the cleanup unit is configured to, when the first cache disk usage rate is greater than a preset threshold, execute an image cleanup task.
  13. The apparatus according to claim 12, wherein, in terms of executing the image cleanup task, the cleanup unit is specifically configured to:
    detect, for an image i in the preset cache, whether an application associated with the image i is deployed in the target cluster, wherein the image i is any image in the preset cache; and
    if so, retain the image i; if not, delete the image i.
  14. The apparatus according to claim 12 or 13, wherein
    the obtaining unit is further configured to obtain a second cache disk usage rate; and
    the cleanup unit is further configured to, when the second cache disk usage rate is less than or equal to the preset threshold, end the image cleanup task.
  15. The apparatus according to claim 14, wherein
    the obtaining unit is further configured to, when the second cache disk usage rate is greater than the preset threshold, obtain an application list; and
    the cleanup unit is further configured to retain images in the application list that are currently in use, and to delete, among the images in the application list that are not currently in use, those whose version is lower than a preset version.
  16. The apparatus according to claim 15, wherein
    the obtaining unit is further configured to obtain a third cache disk usage rate; and
    the cleanup unit is further configured to, when the third cache disk usage rate is greater than the preset threshold, delete all images in the application list that are not currently in use.
  17. The apparatus according to claim 16, wherein
    the cleanup unit is further configured to, when the third cache disk usage rate is less than or equal to the preset threshold, end the image cleanup task.
  18. A server, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1 to 9.
  19. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 9.
  20. A computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the method according to any one of claims 1 to 9.
PCT/CN2020/091316 2020-05-20 2020-05-20 镜像拉取方法及相关产品 WO2021232289A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/091316 WO2021232289A1 (zh) 2020-05-20 2020-05-20 镜像拉取方法及相关产品
CN202080099553.0A CN115380269A (zh) 2020-05-20 2020-05-20 镜像拉取方法及相关产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/091316 WO2021232289A1 (zh) 2020-05-20 2020-05-20 镜像拉取方法及相关产品

Publications (1)

Publication Number Publication Date
WO2021232289A1 true WO2021232289A1 (zh) 2021-11-25

Family

ID=78709073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091316 WO2021232289A1 (zh) 2020-05-20 2020-05-20 镜像拉取方法及相关产品

Country Status (2)

Country Link
CN (1) CN115380269A (zh)
WO (1) WO2021232289A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115794139B (zh) * 2023-01-16 2023-04-28 腾讯科技(深圳)有限公司 镜像数据处理方法、装置、设备以及介质
CN116614517B (zh) * 2023-04-26 2023-09-29 江苏博云科技股份有限公司 一种针对边缘计算场景的容器镜像预热及分发方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110078681A1 (en) * 2009-09-30 2011-03-31 International Business Machines Corporation Method and system for running virtual machine image
CN107733977A (zh) * 2017-08-31 2018-02-23 北京百度网讯科技有限公司 一种基于Docker的集群管理方法及装置
CN110099076A (zh) * 2018-01-29 2019-08-06 中兴通讯股份有限公司 一种镜像拉取的方法及其系统
CN110908671A (zh) * 2018-09-18 2020-03-24 北京京东尚科信息技术有限公司 构建docker镜像的方法、装置及计算机可读存储介质
CN110096333A (zh) * 2019-04-18 2019-08-06 华中科技大学 一种基于非易失内存的容器性能加速方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785770A (zh) * 2022-04-01 2022-07-22 京东科技信息技术有限公司 镜像层文件发送方法、装置、电子设备和计算机可读介质
CN117033325A (zh) * 2023-10-08 2023-11-10 恒生电子股份有限公司 镜像文件的预热拉取方法及装置
CN117033325B (zh) * 2023-10-08 2023-12-26 恒生电子股份有限公司 镜像文件的预热拉取方法及装置
CN117369952A (zh) * 2023-12-08 2024-01-09 中电云计算技术有限公司 集群的处理方法、装置、设备及存储介质
CN117369952B (zh) * 2023-12-08 2024-03-15 中电云计算技术有限公司 集群的处理方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN115380269A (zh) 2022-11-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20936508

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 180423)

122 Ep: pct application non-entry in european phase

Ref document number: 20936508

Country of ref document: EP

Kind code of ref document: A1