CN115268949A - Mirror preheating method, device, equipment and storage medium - Google Patents

Mirror preheating method, device, equipment and storage medium

Info

Publication number
CN115268949A
CN115268949A · Application CN202210863190.4A
Authority
CN
China
Prior art keywords
preheating
mirror image
image
mirror
edge server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210863190.4A
Other languages
Chinese (zh)
Inventor
郭瑞英
阮兆银
胡建锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202210863190.4A priority Critical patent/CN115268949A/en
Publication of CN115268949A publication Critical patent/CN115268949A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/61 Installation
    • G06F 8/63 Image based installation; Cloning; Build to order
    • G06F 8/70 Software maintenance or management
    • G06F 8/71 Version control; Configuration management
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5055 Allocation of resources considering software capabilities, i.e. software resources associated or available to the machine
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Facsimiles In General (AREA)

Abstract

The application provides an image preheating method, apparatus, device, and storage medium, applied to a K8S cluster, relating to the field of cloud technology and in particular to cloud-native container orchestration. In the application, a cluster image preheating task orchestration is received, where the cluster image preheating task orchestration includes: a preheat image version, and a matching rule covering at least one edge server; according to the preheat image version and the matching rule, the latest node image preheating task orchestration corresponding to the at least one edge server is determined; and for any edge server, based on its corresponding latest node image preheating task orchestration, after a change to the node image preheating task orchestration is detected, the image is pulled according to the preheat image version in the latest node image preheating task orchestration. Because the image is pulled to the edge server in advance, the time consumed by image pulling is reduced, image construction time is shortened, and efficiency is improved when containers are created or upgraded in batches.

Description

Mirror preheating method, device, equipment and storage medium
Technical Field
The present application relates to the field of cloud technologies, and in particular, to a mirror preheating method, apparatus, device, and storage medium.
Background
As the Content Delivery Network (CDN) service market grows, the number of edge servers operated by CDN service providers increases. For ease of management, the edge servers are managed by a K8S (Kubernetes) cluster, with the large-scale fleet of edge servers accessing the cluster's master node.
At present, as the number of edge servers attached to the master node of a K8S cluster grows, creating or upgrading a container on an edge server takes too long, so that capacity expansion and distribution cannot keep pace with actual demand. On a single edge server, a Pod must go through creation, scheduling, volume mounting, network allocation, image pulling, and application startup. Image pulling accounts for most of that time; in a large-scale cluster in particular, even with techniques such as peer-to-peer distribution, a large image still takes a long time to pull, so image construction is slow and inefficient, which in turn reduces the efficiency of container creation and upgrade.
Therefore, how to reduce image pull time, shorten image construction time, and improve image construction efficiency when an edge server creates or upgrades containers, and thereby improve the efficiency of container creation and upgrade, is a problem to be solved.
Disclosure of Invention
The application provides an image preheating method, apparatus, device, and storage medium, which pull an image to an edge server in advance so that, when the edge server creates or upgrades containers in batches, the time consumed by image pulling is reduced, image construction time is shortened, and the efficiency of container creation and upgrade is improved.
In a first aspect, an embodiment of the present application provides an image preheating method, applied to a K8S cluster, the method including:
receiving a cluster image preheating task orchestration, where the cluster image preheating task orchestration includes: a preheat image version, and a matching rule covering at least one edge server;
determining, according to the preheat image version and the matching rule, the latest node image preheating task orchestration corresponding to the at least one edge server;
for any edge server, based on its corresponding latest node image preheating task orchestration, after a change to the node image preheating task orchestration is detected, pulling the image according to the preheat image version in the latest node image preheating task orchestration.
In a second aspect, an embodiment of the present application provides an image preheating apparatus, applied to a K8S cluster, the apparatus including:
a receiving unit, configured to receive a cluster image preheating task orchestration, where the cluster image preheating task orchestration includes: a preheat image version, and a matching rule covering at least one edge server;
a determining unit, configured to determine, according to the preheat image version and the matching rule, the latest node image preheating task orchestration corresponding to the at least one edge server;
a pulling unit, configured to, for any edge server, based on its corresponding latest node image preheating task orchestration, pull the image according to the preheat image version in the latest node image preheating task orchestration after a change to the node image preheating task orchestration is detected.
In a possible implementation manner, the receiving unit is specifically configured to:
receive, through the extension manager (Operator) of the K8S cluster, the cluster image preheating task orchestration issued by the image preheating system.
In one possible implementation, the at least one edge server is registered to the master node of the K8S cluster through a cloud edge channel.
In a possible implementation manner, the determining unit is specifically configured to:
write, through the extension manager of the K8S cluster and according to the matching rule, the preheat image version into the node image preheating task orchestration of each edge server whose name is matched, thereby determining the latest node image preheating task orchestration corresponding to the at least one edge server; or
write, through the extension manager of the K8S cluster and according to the matching rule, the preheat image version into the node image preheating task orchestration of each edge server whose function label is matched, thereby determining the latest node image preheating task orchestration corresponding to the at least one edge server.
In a possible implementation manner, the node image preheating task orchestration of an edge server is created and managed by the extension manager of the K8S cluster after the edge server has successfully registered to the master node of the K8S cluster.
In a possible implementation manner, the pulling unit is specifically configured to:
monitor, through the cloud management server of the K8S cluster, changes to the node image preheating task orchestrations in the master node of the K8S cluster, and after determining that a node image preheating task orchestration has changed, distribute the change information to the edge server through the cloud edge channel;
pull, through the edge server, the image according to the preheat image version in the latest node image preheating task orchestration.
In a possible implementation manner, the pulling unit is specifically configured to:
call a container runtime interface through the edge server, and pull the image corresponding to the preheat image version from a content delivery network;
where the image stored in the content delivery network is obtained from an image repository based on the preheat image version.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor, wherein the memory is used for storing computer instructions; and the processor is used for executing computer instructions to realize the steps of the image preheating method provided by the embodiment of the application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the image preheating method provided in the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, which includes computer instructions stored in a computer-readable storage medium; when the processor of the electronic device reads the computer instructions from the computer-readable storage medium, the processor executes the computer instructions, so that the electronic device executes the steps of the image preheating method provided by the embodiment of the application.
The beneficial effects of this application are as follows:
the embodiment of the application provides a mirror preheating method, a mirror preheating device, mirror preheating equipment and a mirror preheating storage medium, which are applied to a K8S cluster; firstly, receiving cluster mirror image preheating task arrangement, wherein the cluster mirror image preheating task arrangement comprises the following steps: preheating a mirror image version and including a matching rule of at least one edge server; then, according to the preheating mirror image version and the matching rule in the mirror image preheating task arrangement, determining the latest node mirror image preheating arrangement corresponding to at least one edge server; and finally, aiming at any edge server, based on corresponding latest node image preheating task arrangement, after monitoring that the node image preheating task arrangement changes, carrying out image pulling according to a preheating image version in the latest node image preheating task arrangement. The method has the advantages that the K8S cluster supports the capability of operating the mirror image, the problem of mirror image preheating of a large-scale edge server is solved, and the mirror image is pulled to the edge server in advance, so that the time consumed by pulling the mirror image is reduced, the construction time of the mirror image is shortened, the construction efficiency of the mirror image is improved, and further the construction and upgrading efficiency of the container is improved when the edge server establishes or upgrades containers in batches.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart of a mirror preheating method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an embodiment of mirror preheating provided in this application;
FIG. 4 is a schematic diagram of another embodiment of mirror preheating provided in an embodiment of the present application;
fig. 5 is a structural diagram of a mirror preheating device according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In order to facilitate a better understanding of the technical solutions of the present application for those skilled in the art, a part of the concepts related to the present application will be described below.
Image preheating: pulling an image to a designated edge server in advance.
Pod: the basic unit for all workload types in a K8S cluster; a Pod is a combination of one or more containers. These containers share storage, network, and namespaces, as well as a specification of how to run. Within a Pod, all containers are uniformly orchestrated and scheduled, and run in a shared context. For a particular application, a Pod is its logical host and contains multiple application containers that are business-related.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms "first" and "second" are used herein for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more features, and in the description of embodiments of this application, unless otherwise indicated, "plurality" means two or more.
The following briefly introduces the design concept of the embodiments of the present application:
as the size of CDN service markets increases, the number of edge servers for CDN service providers increases. For convenience of management, the edge servers are generally managed by the K8S cluster, but due to the excessive number of edge servers, performance bottleneck and single point problems may occur when a single K8S cluster manages all the edge servers. Secondly, since the edge servers are distributed in different regions and different operator network environments, problems such as network fluctuation easily occur, and the like, affect message transmission between the edge servers and the Master node (K8S Master) of the K8S cluster, that is, affect message transmission between the edge side and the center side, and further affect the stability of the service of the whole CDN system.
Therefore, with the rapid development of technologies such as cloud native and edge computing, the cloud edge channel provides a solution for connecting large-scale edge servers to the K8S Master. The principle of the cloud edge channel is as follows: a tunnel server is established in the cloud, and the large-scale edge servers connect to the cloud through the established tunnel server based on the WebSocket protocol or Quick UDP Internet Connections (QUIC); the edge server processes received messages and sends messages to the cloud through the tunnel, thereby realizing cloud-edge communication.
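The tunnel principle above can be sketched as a cloud-side server that keeps one logical session per registered edge node and forwards cloud-originated messages down the matching session. This is a hypothetical illustration only; the patent does not specify the server's interface, and the class and method names here are invented (in practice the transport would be WebSocket or QUIC rather than an in-memory queue).

```python
# Hypothetical sketch of the cloud-edge channel's message routing; not the
# patent's implementation. Transport details (WebSocket/QUIC) are elided.

class TunnelServer:
    def __init__(self):
        self.sessions = {}  # node name -> outbound message queue

    def register(self, node_name):
        # Called when an edge agent connects and registers its node.
        self.sessions[node_name] = []

    def send_to_edge(self, node_name, message):
        # Cloud side pushes a message down the tunnel to one edge node.
        if node_name not in self.sessions:
            raise KeyError(f"edge node {node_name!r} is not connected")
        self.sessions[node_name].append(message)

    def drain(self, node_name):
        # Edge-agent side: receive all pending messages for this node.
        pending, self.sessions[node_name] = self.sessions[node_name], []
        return pending

server = TunnelServer()
server.register("edge-node-1")
server.send_to_edge("edge-node-1", {"type": "NodeImageTaskChanged"})
```

The design point the sketch captures is that the edge initiates the connection, so the cloud can reach nodes behind NATs and unstable operator networks without inbound connectivity.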
At present, large-scale edge servers can access the K8S Master through the cloud edge channel. However, as the number of edge servers increases, the time consumed by edge servers to create or upgrade containers becomes too long, so that capacity expansion and distribution cannot keep pace with actual demand. On a single edge server, a Pod must go through creation, scheduling, volume mounting, network allocation, image pulling, and application startup. Image pulling accounts for most of that time; in a large-scale cluster in particular, even with techniques such as peer-to-peer distribution, a large service-program image still takes a long time to pull, so image construction is slow and inefficient, which in turn reduces the efficiency of container creation and upgrade.
Therefore, how to reduce image pull time, shorten image construction time, and improve image construction efficiency when an edge server creates or upgrades containers, and thereby improve the efficiency of container creation and upgrade, is a problem to be solved.
In view of this, embodiments of the present application provide an image preheating method, apparatus, device, and storage medium to solve the problem of image preheating for large-scale edge servers: the image is pulled to the edge server in advance, so as to reduce image pull time, shorten image construction time, improve image construction efficiency, and in turn improve container creation and upgrade efficiency when the edge server creates or upgrades containers.
Moreover, the native K8S cluster does not provide the capability to operate on images, and in the related art, image preheating is usually done with a message queue combined with Docker containers. That approach only addresses image pulls in small-scale clusters; it is not applicable to the edge computing scenario of the embodiments of the present application and cannot solve image preheating for large-scale edge servers. The embodiments of the present application adopt a cloud-native approach and realize image preheating for large-scale edge servers using a K8S cluster and the cloud edge channel of a cloud-edge architecture.
The K8S cluster is highly extensible, and Custom Resource Definitions (CRDs) are a common extension mechanism through which developers can add custom resources to a K8S cluster. Therefore, to realize image preheating for large-scale edge servers through the K8S cluster and the cloud edge channel of a cloud-edge architecture, the embodiments of the present application define two CRD resources through the K8S Application Programming Interface (API): a cluster-dimension cluster image preheating task (ClusterImageTask) resource and a node-dimension node image preheating task (NodeImageTask) resource. A custom Controller is started through the extension manager (Operator) of the K8S cluster, and the ClusterImageTask and NodeImageTask resources are maintained through reconciliation (Reconcile), so that the K8S cluster supports image preheating. The ClusterImageTask resource defines the image preheating rule in the cluster dimension and specifies the image version to be preheated and the range of edge servers to preheat; the NodeImageTask resource defines, in the node dimension, the image each edge server needs to pull and the corresponding image version.
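The two custom resources can be pictured as follows, written as plain dicts in the style of Kubernetes CRD objects. The API group, field names, label keys, and image references are all assumptions for illustration; the patent does not specify any of them.

```python
# Illustrative shapes of the ClusterImageTask (cluster dimension) and
# NodeImageTask (node dimension) resources. All names are hypothetical.

cluster_image_task = {
    "apiVersion": "example.io/v1",          # hypothetical CRD group/version
    "kind": "ClusterImageTask",
    "metadata": {"name": "preheat-nginx"},
    "spec": {
        # image version(s) to pre-pull under this cluster-wide rule
        "images": ["registry.example.com/nginx:1.23.4"],
        # which edge servers the rule covers: by node name or by label
        "match": {"nodeNames": ["edge-node-1"],
                  "nodeSelector": {"role": "cdn-edge"}},
    },
}

node_image_task = {
    "apiVersion": "example.io/v1",
    "kind": "NodeImageTask",
    "metadata": {"name": "edge-node-1"},    # one per registered edge server
    "spec": {"images": []},                 # filled in by the Operator
}
```

Splitting the rule (cluster dimension) from the per-node work list (node dimension) lets each edge agent watch only its own small object instead of the whole cluster rule set.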
In the embodiments of the present application, large-scale edge servers are managed through the cloud edge channel, and the edge servers are made to pull images through message distribution, thereby realizing the image preheating capability.
The Operator pattern is designed to simplify the management of complex stateful applications; it automatically creates, manages, and configures application instances through CRDs that extend the K8S API. It is essentially a tool that encodes stateful-service knowledge for a specific scenario or simplifies operation and maintenance for a complex application scenario, and is generally deployed into the K8S cluster as a Deployment. After the Operator is deployed, the cluster's configuration no longer needs to be managed by hand: only the CRD resources need to be created, and the Operator watches the resource objects, which greatly reduces the difficulty and cost of operation and maintenance.
In one possible implementation, first, a cluster image preheating task orchestration is received, where the cluster image preheating task orchestration includes: a preheat image version, and a matching rule covering at least one edge server. Then, according to the preheat image version and the matching rule, the latest node image preheating task orchestration corresponding to the at least one edge server is determined. Finally, for any edge server, based on its corresponding latest node image preheating task orchestration, after a change to the node image preheating task orchestration is detected, the image is pulled according to the preheat image version in the latest node image preheating task orchestration. This gives the K8S cluster the capability to operate on images and solves the problem of image preheating for large-scale edge servers: pulling the image to the edge server in advance reduces image pull time, shortens image construction time, and improves image construction efficiency, which in turn improves efficiency when the edge server creates or upgrades containers in batches.
The preferred embodiments of the present application will be described in conjunction with the drawings of the specification, it should be understood that the preferred embodiments described herein are only for illustrating and explaining the present application, and are not intended to limit the present application, and the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an embodiment of the present application. The application scenario includes a cloud 110, an edge server 120, and a terminal device 130; the cloud end 110 and the edge server 120 may communicate with each other through a cloud edge channel, and the edge server 120 and the terminal device 130 may communicate with each other through a communication network.
In an alternative embodiment, the communication network may be a wired network or a wireless network. Thus, the edge server 120 and the terminal device 130 may be directly or indirectly connected through wired or wireless communication. For example, the terminal device 130 may be indirectly connected to the edge server 120 through a wireless access point, or the terminal device 130 may be directly connected to the edge server 120 through the internet, which is not limited herein.
In the embodiment of the present application, the terminal device 130 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-book reader, an intelligent voice interaction device, an intelligent household appliance, a vehicle-mounted terminal, and other devices; various clients can be installed on the terminal device, and the clients can be application programs (such as browsers, game software and the like) and also can be web pages, applets and the like;
the edge server 120 may be a backend server corresponding to a client installed in the terminal device 130. The edge server 120 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, content distribution network, and a big data and artificial intelligence platform.
Based on the above application scenarios, the image preheating method provided by the exemplary embodiment of the present application is described below with reference to the above application scenarios and according to the accompanying drawings, it should be noted that the above application scenarios are only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in this respect.
Referring to fig. 2, fig. 2 exemplarily provides a mirror preheating method in the embodiment of the present application, which is applied to a K8S cluster, and the method includes:
step S200, receiving cluster mirror image preheating task arrangement, wherein the cluster mirror image preheating task arrangement comprises the following steps: a pre-heated mirrored version, and matching rules including at least one edge server.
Wherein, at least one edge server is registered to the K8S Master through a cloud edge channel.
In a possible implementation manner, the ClusterImageTask arrangement issued by the mirror preheating system is received through the K8S Operator.
Step S201, according to the preheating mirror image version and the matching rule, determining the latest node mirror image preheating task arrangement corresponding to at least one edge server.
In a possible implementation manner, according to the matching rule, the preheating mirror image version is written through the K8S Operator into the NodeImageTask arrangement of the edge server corresponding to each matched edge server name, and the latest NodeImageTask arrangement corresponding to the at least one edge server is determined.
In another possible implementation manner, according to the matching rule, the preheating mirror image version is written through the K8S Operator into the NodeImageTask arrangement of the at least one edge server corresponding to the matched function label, and the latest NodeImageTask arrangement corresponding to the at least one edge server is determined.
The NodeImageTask arrangement of an edge server is created for that edge server by the K8S Operator after the edge server successfully registers with the K8S Master, and is managed by the K8S Operator.
Step S202, aiming at any edge server, based on corresponding latest node image preheating task arrangement, after monitoring that the node image preheating task arrangement changes, carrying out image pulling according to a preheating image version in the latest node image preheating task arrangement.
In a possible implementation manner, a cloud management server (Cloud-Manager) of the K8S cluster monitors changes of the NodeImageTask arrangement in the K8S Master and, after determining that the NodeImageTask arrangement has changed, distributes the change information to the edge server through the cloud edge channel; the edge server calls the Container Runtime Interface (CRI) and pulls, from the CDN, the image corresponding to the preheating image version in the latest NodeImageTask arrangement; the image stored in the CDN is obtained from an image repository based on the preheating image version.
In one possible implementation, the cloud edge channel is formed by the Cloud-Manager and an edge agent (Edge-Agent) in the edge server.
According to the preheating image version and the matching rule in the image preheating task arrangement, the latest node image preheating task arrangement corresponding to at least one edge server can be determined; for any edge server, based on the corresponding latest node image preheating task arrangement, after it is monitored that the node image preheating task arrangement has changed, images are pulled according to the preheating image version in the latest arrangement. In this way, the K8S cluster supports the capability of operating images, which solves the problem of image preheating for large-scale edge servers. Because images are pulled to the edge servers in advance, the time consumed by image pulling is reduced, the image construction time is shortened, and image construction efficiency is improved; further, when the edge servers create or upgrade containers in batches, container creation and upgrade efficiency is improved.
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of mirror preheating provided in this application; as can be seen from fig. 3:
First, the K8S cluster receives, through the Controller in the Operator, at least one ClusterImageTask arrangement sent by the image preheating system; for example, the ClusterImageTask1 arrangement and the ClusterImageTask2 arrangement sent by the image preheating system. A ClusterImageTask arrangement is generated by the image preheating system according to a plurality of created image preheating rules.
In one possible implementation, the ClusterImageTask arrangement defines rules for image preheating in the cluster dimension, which specify the ranges of edge servers in which an image is to be preheated in bulk. A ClusterImageTask comprises: the image version that needs preheating (image), the matching rule for the edge servers to preheat (selector), and the image preheating strategy (strategy). The selector supports two matching modes: matching edge server names (nodenames) and matching function labels (nodelabels); the strategy includes a task timeout period (deadline), a retry number (retrynumber), and a retry timeout period (retryttl).
For example, the ClusterImageTask1 arrangement is defined as follows:
apiVersion: cdn.custom.io/v1
kind: ClusterImageTask
spec:
  image: nginx:1.15.11
  selector:
    nodenames:
      - node1
      - node2
  strategy:
    deadline: 1h
    retrynumber: 3
    retryttl: 30s
Then, after receiving at least one ClusterImageTask arrangement, the Controller in the Operator processes the ClusterImageTask arrangement, decomposes its contents, and writes them into the NodeImageTask arrangements of the matched edge servers, to guide large-scale edge servers in image preheating.
When the contents organized by the ClusterImageTask are decomposed and written into the NodeImageTask of the matched edge server, the mirror image version needing to be preheated and the corresponding mirror image preheating strategy are written into the NodeImageTask arrangement of the matched edge server mainly according to the matching rule of preheating the edge server in the ClusterImageTask arrangement.
The matching rule of the preheating edge server supports two matching modes, namely a matching edge server name and a matching function label; therefore, when the matched edge server is determined, the edge server is directly determined according to the name of the matched edge server arranged by the ClusterImageTask, or the edge server supporting the corresponding function is determined according to the matched function label arranged by the ClusterImageTask.
For example, when the contents of the above ClusterImageTask1 arrangement are decomposed, the image version that needs preheating (image: nginx:1.15.11) and the corresponding image preheating strategy (strategy: deadline: 1h, retrynumber: 3, retryttl: 30s) are written into the NodeImageTask1 arrangement of the corresponding edge server Node1 and the NodeImageTask2 arrangement of the edge server Node2, respectively.
In the same way, the contents of the ClusterImageTask1 arrangement and the ClusterImageTask2 arrangement are decomposed and written into the NodeImageTask arrangements of the corresponding edge servers, namely: NodeImageTask1, NodeImageTask2, and NodeImageTask3.
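Under the assumption of simple in-memory dictionaries standing in for the custom resources (function names and data shapes here are illustrative, not taken from the patent), the decomposition of a ClusterImageTask into per-node NodeImageTask arrangements can be sketched as:

```python
# Hypothetical in-memory sketch of the decomposition step; the real
# Controller operates on Kubernetes custom resources via the API-server.

def match_nodes(selector, nodes):
    """Return the names of edge servers matched either by name list
    (nodenames) or by function labels (nodelabels)."""
    if "nodenames" in selector:
        return [n["name"] for n in nodes if n["name"] in selector["nodenames"]]
    if "nodelabels" in selector:
        wanted = selector["nodelabels"]
        return [n["name"] for n in nodes
                if all(n["labels"].get(k) == v for k, v in wanted.items())]
    return []

def decompose(cluster_task, nodes, node_tasks):
    """Write the image version and its preheating strategy into the
    NodeImageTask of every matched edge server (idempotently)."""
    entry = {"image": cluster_task["image"],
             "strategy": cluster_task["strategy"]}
    for name in match_nodes(cluster_task["selector"], nodes):
        containers = node_tasks.setdefault(name, {"containers": []})["containers"]
        if entry not in containers:   # re-running the decomposition adds nothing
            containers.append(entry)
    return node_tasks
```

For the ClusterImageTask1 example above, `decompose` would add the `nginx:1.15.11` entry to the tasks of node1 and node2 and leave all other nodes untouched.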
Thus, the NodeImageTask arrangement proposed in the embodiment of the present application defines, in the node dimension, the images that the edge server needs to preheat and their corresponding versions. The NodeImageTask arrangement comprises: the set of images to be preheated (containers), each entry including an image version (image) and an image preheating strategy (strategy).
For example, the NodeImageTask orchestration is defined as follows:
apiVersion: cdn.custom.io/v1
kind: NodeImageTask
spec:
  containers:
    - image: nginx:1.15.11
      strategy:
        deadline: 1h
        retrynumber: 3
        retryttl: 30s
    - image: nginx:1.15.12
      strategy:
        deadline: 1h
        retrynumber: 3
        retryttl: 30s
It should be noted that, in the embodiment of the present application, each edge server corresponds to one NodeImageTask resource. The NodeImageTask resource is automatically created for the edge server by the Controller of the K8S Operator after the edge server successfully registers with the K8S Master through the cloud edge channel; when the number of edge servers at the edge end changes, the Controller of the K8S Operator correspondingly creates or deletes the corresponding NodeImageTask resources.
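The automatic creation and deletion described above can be sketched as a small reconcile step (a hypothetical in-memory model in Python; the actual Controller manipulates Kubernetes resources):

```python
def reconcile_node_tasks(registered_nodes, node_tasks):
    """Keep exactly one NodeImageTask per registered edge server:
    create an empty task for each newly registered server and delete
    the task of any server that is no longer registered."""
    for name in registered_nodes:
        node_tasks.setdefault(name, {"containers": []})   # auto-create on registration
    for name in list(node_tasks):
        if name not in registered_nodes:
            del node_tasks[name]                          # clean up departed servers
    return node_tasks
```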
Then, a Controller of the K8S Operator sends the latest NodeImageTask layout corresponding to each edge server obtained after the processing based on the ClusterImageTask layout to the corresponding edge servers respectively.
In a possible implementation manner, the Controller of the K8S Operator submits the determined latest NodeImageTask arrangement to the interface service (API-server) of the K8S Master. The Cloud-Manager monitors the NodeImageTask arrangement changes of the API-server in real time, that is, monitors whether the NodeImageTask arrangement corresponding to each edge server has changed. After detecting that the NodeImageTask arrangement of an edge server has changed, the Cloud-Manager distributes the change information to the edge server through the cloud edge channel: for example, it sends the latest NodeImageTask arrangement to the corresponding edge server through the cloud edge channel, or sends the changed image version and the corresponding image preheating strategy to the corresponding edge server through the cloud edge channel.
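The monitoring step can be sketched as a comparison of two snapshots (a hypothetical model; a real Cloud-Manager would use the API-server's watch mechanism rather than polling full snapshots):

```python
def detect_changes(previous, current):
    """Compare two snapshots of per-node NodeImageTask arrangements and
    return, per edge server, the arrangements that changed (or are new)
    and must be dispatched over the cloud edge channel."""
    changed = {}
    for name, task in current.items():
        if previous.get(name) != task:
            changed[name] = task
    return changed
```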
And finally, after receiving the corresponding latest NodeImageTask arrangement or the changed mirror image version and the corresponding mirror image preheating strategy, the Edge server preheats the mirror image through the corresponding Edge-Agent.
In a possible implementation mode, the Edge-Agent calls CRI (e.g. docker) to pull a corresponding mirror image according to a mirror image version and a mirror image preheating strategy in the latest NodeImageTask arrangement; or calling the CRI to pull the corresponding mirror according to the received changed mirror version and the corresponding mirror preheating strategy.
And in the process of pulling the mirror image, the Edge-Agent sends pulling information carrying the mirror image version to the CDN.
And if the CDN detects that the mirror image corresponding to the mirror image version is stored in the CDN, the mirror image is directly fed back to the Edge-Agent.
If the CDN does not detect that the CDN stores the mirror image corresponding to the mirror image version, the CDN sends reading information carrying the mirror image version to the mirror image warehouse so as to read the corresponding mirror image from the mirror image warehouse and store the mirror image.
Pulling images in this CDN-accelerated manner reduces the pressure on the image repository.
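The cache-or-fallback behavior of the CDN described above can be sketched as follows (a hypothetical in-memory model; `repo_reads` simply counts repository accesses to illustrate the reduced pressure):

```python
class CDN:
    """Hypothetical CDN cache sitting in front of the image repository."""

    def __init__(self, repository):
        self.repository = repository   # image version -> image blob
        self.cache = {}
        self.repo_reads = 0            # pressure on the image repository

    def pull(self, version):
        """Serve the image from cache; on a miss, read it once from the
        repository, store it, and serve subsequent pulls from the cache."""
        if version not in self.cache:
            self.repo_reads += 1
            self.cache[version] = self.repository[version]
        return self.cache[version]
```

With many edge servers pulling the same version, the repository is read only once per version, however many edge servers request it.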
In the process of pulling an image, the pull is performed according to the image preheating strategy; specifically, within the task timeout period (deadline), failed pulls are retried up to the retry number (retrynumber), and each pull attempt must not exceed the retry timeout period (retryttl).
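The strategy semantics described here can be sketched as a retry loop (a Python sketch under the stated assumptions; the `pull` callable stands in for the actual CRI call, and enforcing the per-attempt timeout is left to it):

```python
import time

def pull_with_strategy(pull, version, deadline_s, retry_number, retry_ttl_s,
                       now=time.monotonic):
    """Attempt an image pull under the preheating strategy: up to
    retry_number attempts within the overall task deadline, each attempt
    bounded by retry_ttl_s (passed through to the pull callable)."""
    start = now()
    last_error = None
    for _ in range(retry_number):
        if now() - start > deadline_s:
            break                                 # task deadline exceeded
        try:
            return pull(version, timeout=retry_ttl_s)
        except Exception as e:                    # failed attempt: retry
            last_error = e
    raise RuntimeError(f"preheat of {version} failed") from last_error
```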
Referring to fig. 4, fig. 4 exemplarily provides a schematic diagram of another mirror preheating embodiment in the embodiment of the present application, and it can be known from fig. 4 that:
the K8S Operator registers the ClusterImageTask resource and the NodeImageTask resource with the API-server of the K8S Master; image preheating rules are specified through the ClusterImageTask, and the images each edge server needs to pull are specified through the NodeImageTask;
the K8S Operator starts a Controller, which maintains the ClusterImageTask resources and NodeImageTask resources through its Reconcile loop;
the K8S Edge server is registered on the K8S Master through a Cloud Edge channel formed by the Cloud-Manager and the Edge-Agent;
after a K8S edge server registers successfully, the Reconcile of the Controller creates a NodeImageTask for each edge server, used to initialize the image versions that edge server needs to pull;
a user executes the operation of mirror image preheating through the mirror image preheating system, and the mirror image preheating system firstly persists the mirror image preheating task to a database so as to be convenient for subsequently inquiring and modifying the mirror image preheating task;
after the mirror preheating system persists the mirror preheating task, the information such as the preheated mirror version, the appointed edge server range and the like is written into the custom resource arrangement of the ClusterImageTask resource, and the ClusterImageTask arrangement is issued to the K8S cluster and received by the K8S Operator;
the Controller in the Operator reconciles according to the information in the ClusterImageTask arrangement: according to the image version (image) and matching rule (selector) in the image preheating rule (spec), it writes the specified image and image version information into the NodeImageTask arrangements of the specified edge servers, updates the node-dimension NodeImageTask arrangements, and submits them to the API-server of the K8S Master;
the Cloud-Manager at the Cloud end can monitor the NodeImageTask layout change condition of the API-server in real time and identify whether the NodeImageTask layout of the edge server changes or not;
distributing the NodeImageTask layout change message to Edge-agents of Edge ends by the Cloud-manager through a Cloud Edge channel;
the Edge-Agent at the edge end receives the arrangement change message and calls the CRI to pull the image; during the pulling process, CDN acceleration is used to reduce the pressure on the image repository.
In the present application, image preheating rules and a pulling mode are defined based on a declarative API, and Controller logic is defined through a K8S Operator, so that the K8S cluster supports the capability of operating images. This solves the problem of image preheating for large-scale edge servers in the scenario where the cloud edge channel of a cloud edge framework manages large-scale edge servers: image pull messages are distributed to the edge servers through the cloud edge channel, the edge servers pull images in advance, and CDN acceleration reduces the pressure on the image repository during pulling. Furthermore, when the edge servers create or upgrade containers in batches, the time consumed by pulling images is reduced, the image construction time is shortened, image construction efficiency is improved, and container creation and upgrade efficiency is improved.
Based on the same inventive concept as the method embodiment, the embodiment of the present application further provides an image preheating apparatus. Since the principle by which the apparatus solves the problem is similar to that of the method of the embodiment, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 5, fig. 5 exemplarily provides a mirror preheating device 500 applied to a K8S cluster according to an embodiment of the present application, where the mirror preheating device 500 includes:
a receiving unit 501, configured to receive a cluster mirror preheating task arrangement, where the cluster mirror preheating task arrangement includes: preheating a mirror image version and including a matching rule of at least one edge server;
a determining unit 502, configured to determine, according to the preheat mirror version and the matching rule, a preheat task arrangement of a latest node mirror corresponding to at least one edge server;
a pulling unit 503, configured to pull, for any edge server, a mirror image according to a preheat mirror image version in the corresponding latest node mirror image preheat task arrangement after monitoring that the node mirror image preheat task arrangement changes based on the corresponding latest node mirror image preheat task arrangement.
In a possible implementation manner, the receiving unit 501 is specifically configured to:
and receiving the cluster mirror image preheating task arrangement issued by the mirror image preheating system through the expansion manager of the K8S cluster.
In one possible implementation, at least one edge server is registered to the master node of the K8S cluster through a cloud edge channel.
In a possible implementation manner, the determining unit 502 is specifically configured to:
writing, through the extension manager of the K8S cluster according to the matching rule, the preheating mirror image version into the node mirror image preheating task arrangement of the edge server corresponding to the at least one matched edge server name, and determining the latest node mirror image preheating task arrangement corresponding to the at least one edge server; or
writing, through the extension manager of the K8S cluster according to the matching rule, the preheating mirror image version into the node mirror image preheating task arrangement of the at least one edge server corresponding to the matched function label, and determining the latest node mirror image preheating task arrangement corresponding to the at least one edge server.
In a possible implementation manner, the node mirror image preheating task arrangement of an edge server is created and managed by the extension manager of the K8S cluster after the edge server successfully registers with the master node of the K8S cluster.
In a possible implementation manner, the pulling unit 503 is specifically configured to:
monitoring the node mirror image preheating task arrangement change condition in a main node of the K8S cluster through a cloud management server of the K8S cluster, and distributing change information to an edge server through a cloud edge channel after determining that the node mirror image preheating task arrangement changes;
and (4) carrying out mirror image pulling according to the preheating mirror image version in the latest node mirror image preheating task arrangement through the edge server.
In a possible implementation manner, the pulling unit 503 is specifically configured to:
calling a container runtime interface through an edge server, and pulling a mirror image corresponding to the preheating mirror image version from a content distribution network;
wherein, the mirror image stored in the content distribution network is obtained from the mirror image warehouse based on the preheating mirror image version.
For convenience of description, the above parts are separately described as units (or modules) according to functional division. Of course, the functionality of the various elements (or modules) may be implemented in the same one or more pieces of software or hardware in the practice of the present application.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
Having described the mirror image warm-up method and apparatus of the exemplary embodiments of the present application, next, an electronic device for mirror image warm-up according to another exemplary embodiment of the present application will be described.
The embodiment of the method is based on the same inventive concept, and the embodiment of the application also provides an electronic device which can be a server. In this embodiment, the electronic device may be configured as shown in fig. 6, and include a memory 601, a communication module 603, and one or more processors 602.
A memory 601 for storing computer programs executed by the processor 602. The memory 601 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 601 may be a volatile memory, such as a random-access memory (RAM); the memory 601 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 601 may be any other medium that can be used to carry or store a desired computer program in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 601 may also be a combination of the above memories.
The processor 602 may include one or more Central Processing Units (CPUs), a digital processing unit, and the like. A processor 602 for implementing the above-mentioned mirror preheating method when calling the computer program stored in the memory 601.
The communication module 603 is used for communicating with the terminal device and other servers.
The embodiment of the present application does not limit the specific connection medium among the memory 601, the communication module 603, and the processor 602. In the embodiment of the present application, the memory 601 and the processor 602 are connected through the bus 604 in fig. 6, where the bus 604 is depicted by a thick line; the connection manner between other components is merely illustrative and not limiting. The bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of description, only one thick line is depicted in fig. 6, but this does not mean that there is only one bus or only one type of bus.
The memory 601 stores a computer storage medium, and the computer storage medium stores computer-executable instructions for implementing the image preheating method according to the embodiment of the present application. The processor 602 is configured to perform the mirror preheating method described above.
In some possible embodiments, the aspects of the image pre-heating method provided by the present application may also be implemented in the form of a program product, which includes a computer program for causing an electronic device to perform the steps in the image pre-heating method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on the electronic device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include a computer program, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
The computer program embodied on the readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided into and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having a computer-usable computer program embodied therein.
While the preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A mirror preheating method is applied to a K8S cluster, and comprises the following steps:
receiving cluster mirror image preheating task arrangement, wherein the cluster mirror image preheating task arrangement comprises the following steps: preheating a mirror image version and including a matching rule of at least one edge server;
according to the preheating mirror image version and the matching rule, determining the latest node mirror image preheating task arrangement corresponding to the at least one edge server;
aiming at any edge server, based on corresponding latest node image preheating task arrangement, after monitoring that the node image preheating task arrangement changes, carrying out image pulling according to a preheating image version in the latest node image preheating task arrangement.
2. The method of claim 1, wherein the receiving cluster image preheat task orchestration comprises:
and receiving the cluster mirror image preheating task arrangement sent by the mirror image preheating system through the expansion manager of the K8S cluster.
3. The method of claim 1, wherein the at least one edge server is registered on the master node of the K8S cluster through a cloud-edge channel.
4. The method of claim 1, wherein the determining a most recent node image pre-heating task orchestration for the at least one edge server based on the pre-heating image version and the matching rules comprises:
writing the preheating mirror image version into a node mirror image preheating task arrangement of an edge server corresponding to at least one matched edge server name through an extension manager of the K8S cluster according to the matching rule, and determining the latest node mirror image preheating task arrangement corresponding to at least one edge server; or
And writing the preheating mirror image version into the node mirror image preheating task arrangement of at least one edge server corresponding to the matched functional label through the expansion manager of the K8S cluster according to the matching rule, and determining the latest node mirror image preheating task arrangement corresponding to the at least one edge server.
5. The method of claim 4, wherein the node mirroring pre-heat tasking of the edge server is created and managed by an extension manager of the K8S cluster after the edge server successfully registers with a master node of the K8S cluster.
6. The method of claim 1, wherein after monitoring that the node image preheating task arrangement changes, performing image pull according to a preheating image version in the latest node image preheating task arrangement, comprises:
monitoring the node mirror image preheating task arrangement change condition in the main node of the K8S cluster through the cloud management server of the K8S cluster, and distributing change information to the edge server through a cloud edge channel after determining that the node mirror image preheating task arrangement changes;
and carrying out mirror image pulling according to the preheating mirror image version in the latest node mirror image preheating task arrangement through the edge server.
7. The method of claim 1, wherein the performing image pull based on the preheat image version in the most recent node image preheat task schedule comprises:
calling a container runtime interface through the edge server, and pulling a mirror image corresponding to the preheating mirror image version from a content distribution network;
wherein the image stored in the content distribution network is obtained from an image repository based on a pre-heated image version.
8. A mirror preheating device, applied to a K8S cluster, the device comprising:
a receiving unit, configured to receive a cluster mirror preheating task arrangement, where the cluster mirror preheating task arrangement includes: preheating a mirror image version and including a matching rule of at least one edge server;
a determining unit, configured to determine, according to the preheating mirror version and the matching rule, a latest node mirror preheating task arrangement corresponding to the at least one edge server;
and the pulling unit is used for pulling the mirror image according to the preheating mirror image version in the latest node mirror image preheating task arrangement after monitoring that the node mirror image preheating task arrangement is changed based on the corresponding latest node mirror image preheating task arrangement aiming at any edge server.
9. An electronic device, comprising: a memory and a processor, wherein:
the memory for storing a computer program;
the processor, configured to execute the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210863190.4A 2022-07-20 2022-07-20 Mirror preheating method, device, equipment and storage medium Pending CN115268949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210863190.4A CN115268949A (en) 2022-07-20 2022-07-20 Mirror preheating method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210863190.4A CN115268949A (en) 2022-07-20 2022-07-20 Mirror preheating method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115268949A true CN115268949A (en) 2022-11-01

Family

ID=83767863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210863190.4A Pending CN115268949A (en) 2022-07-20 2022-07-20 Mirror preheating method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115268949A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115964070A (en) * 2022-12-02 2023-04-14 北京凌云雀科技有限公司 Cloud native online incremental upgrading method and device and cloud native platform
CN117033325A (en) * 2023-10-08 2023-11-10 恒生电子股份有限公司 Mirror image file preheating and pulling method and device
CN117033325B (en) * 2023-10-08 2023-12-26 恒生电子股份有限公司 Mirror image file preheating and pulling method and device

Similar Documents

Publication Publication Date Title
CN109976667B (en) Mirror image management method, device and system
CN115268949A (en) Mirror preheating method, device, equipment and storage medium
US20160261693A1 (en) Cloud-based data backup and operation method and system
CN112799825A (en) Task processing method and network equipment
CN111245634B (en) Virtualization management method and device
US20170033980A1 (en) Agent manager for distributed transaction monitoring system
CN109522055B (en) Connection preheating method and system based on distributed service calling
CN109104368B (en) Connection request method, device, server and computer readable storage medium
CN107800779B (en) Method and system for optimizing load balance
CN113079098B (en) Method, device, equipment and computer readable medium for updating route
CN114296953A (en) Multi-cloud heterogeneous system and task processing method
CN112187916B (en) Cross-system data synchronization method and device
CN111061723A (en) Workflow implementation method and device
CN115562887A (en) Inter-core data communication method, system, device and medium based on data package
CN114979286A (en) Access control method, device and equipment for container service and computer storage medium
WO2021022947A1 (en) Method for deploying virtual machine and related device
CN110365839B (en) Shutdown method, shutdown device, shutdown medium and electronic equipment
CN114625479A (en) Cloud edge collaborative application management method in edge computing and corresponding device
CN111008043A (en) Server starting method of cloud platform and terminal
CN113296968A (en) Address list updating method, device, medium and electronic equipment
CN113704187B (en) Method, apparatus, server and computer readable medium for generating file
CN114884956B (en) Method and device for realizing multi-cluster architecture and multi-cluster architecture system
CN114936098B (en) Data transfer method, device, back-end equipment and storage medium
CN115604333B (en) Distributed big data analysis service scheduling method and system based on dubbo
CN117493027B (en) Thermal upgrading method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination