CN115248721A - Method for starting a container application, image management method, and related devices

Info

Publication number: CN115248721A
Application number: CN202210302434.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 卞盛伟, 张森, 齐飞
Assignee (current and original): Huawei Cloud Computing Technologies Co., Ltd.
Related application: PCT/CN2022/083545 (published as WO2022206722A1)
Legal status: Pending
Prior art keywords: data, container application, container, image, data access

Classifications

    • G06F9/45558 — Hypervisor-specific management and integration aspects (under G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines)
    • G06F8/63 — Image based installation; Cloning; Build to order (under G06F8/60 Software deployment)
    • G06F9/44505 — Configuring for program initiating, e.g. using registry, configuration files (under G06F9/445 Program loading or initiating)
    • G06F2009/45575 — Starting, stopping, suspending or resuming virtual machine instances

Abstract

The present application provides a method for starting a container application. The method includes: obtaining, from an image repository, a shell image corresponding to the container application and a data access characteristic model of the container application; creating an instance of the container application from the shell image; obtaining target image data from the image data of the native container image according to the data access characteristic model; and providing, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application. The data access characteristic model performs an in-depth analysis of the data access path and thus predicts it accurately: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched. This raises the cache hit rate, reduces the number of remote data fetches, and improves the cold-start efficiency of the container application.

Description

Method for starting a container application, image management method, and related devices
The present application claims priority to Chinese Patent Application No. 202110355627.9, entitled "Method for starting a container application, method for image management, and related apparatus for managing container applications", filed with the China National Intellectual Property Administration on April 1, 2021, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of cloud computing, and in particular to a method for starting a container application, an image management method, and a corresponding apparatus, system, device, computer-readable storage medium, and computer program product.
Background
A container is a virtualization technology for packaging applications. Containers on the same host share the host's operating system (OS), while applications in different containers are isolated and resource-limited by lightweight kernel features. Compared with virtual machines, containers therefore greatly reduce the host's resource overhead and allow a higher deployment density.
A container application can be started by a cold start, a hot start, and so on. A cold start means that no image exists locally: the image must first be downloaded from an image repository to the node, and a container instance is then started from that image on the node. A hot start means that the node already holds the image locally and the container instance is started directly from it. Deploying a new container application to a node therefore triggers a cold start.
Downloading an image from the image repository takes a significant amount of time and greatly limits the cold-start efficiency of the container application. To address this, the industry has proposed a cold-start approach based on data prefetching and on-demand loading (lazy loading). Specifically, the native container image is rebuilt into a new, content-addressable image format, and the rebuilt image is pre-run once to record the data the container application may read during a cold start. At cold start, prefetching and on-demand loading are then driven by this pre-run record. The prefetched data raises the cache hit rate of on-demand loading, reduces the number of remote data fetches, and improves the cold-start efficiency of the container application.
However, when the execution path at cold start differs from the execution path of the pre-run, the prefetching policy can fail: the prefetched data is not the data actually needed. This not only hurts cold-start efficiency but also wastes network resources on prefetching unnecessary data.
Disclosure of Invention
The present application provides a method for starting a container application. The method uses a data access characteristic model to analyze the data access path in depth and predict it accurately: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched, which raises the cache hit rate, reduces the number of remote data fetches, improves cold-start efficiency, and avoids the resource waste caused by fetching unnecessary data. The application also provides a related image management method and a corresponding apparatus, system, device, computer-readable storage medium, and computer program product.
In a first aspect, the present application provides a method for starting a container application. The method may be applied to a management system for container applications (hereinafter simply the management system). The management system includes an image repository and a container running node. The image repository is a repository device that manages images; the container running node is a node that runs container applications.
Specifically, the container running node may obtain, from the image repository, a shell image corresponding to the container application, create an instance of the container application from the shell image, obtain the data access characteristic model of the container application from the image repository, and obtain target image data from the image data of the native container image according to the data access characteristic model. The target image data is the data the container application is expected to access during startup. The container running node then provides, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
By analyzing the data access path in depth with the data access characteristic model, the method predicts the access path accurately: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched, which raises the cache hit rate, reduces the number of remote data fetches, and improves cold-start efficiency.
In addition, the method prefetches only the data the instance of the container application actually needs rather than the complete container image. This shortens cold-start time, reduces both the bandwidth used by the cold start and the storage consumed on the deployment node, improves resource utilization, and avoids waste.
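For illustration only, the cold-start flow described in this aspect might be sketched in Go as follows. The interfaces and type names (RepoClient, AccessModel, Runtime, and so on) are hypothetical stand-ins for the repository, model, and runtime interactions described above, not an interface defined by this application.
```go
package coldstart

// Hypothetical interfaces standing in for the components described above.
type RepoClient interface {
	PullShellImage(app string) (ShellImage, error)   // shell image: metadata plus an index to image data
	PullAccessModel(app string) (AccessModel, error)  // data access characteristic model
	FetchImageData(ids []string) (map[string][]byte, error)
}

type AccessModel interface {
	// Predict returns the identifiers of the target image data expected to be
	// accessed during startup, given the data access impact factors.
	Predict(factors ImpactFactors) []string
}

type ShellImage struct{ Metadata map[string]string }
type ImpactFactors struct{ Env, State, Request map[string]string }

type Runtime interface {
	CreateInstance(app string, img ShellImage) (Instance, error)
	Run(inst Instance, data map[string][]byte) error
}
type Instance struct{ ID string }

// ColdStart sketches the first-aspect method: pull the shell image, create the
// instance, prefetch the predicted target image data, and run the instance.
func ColdStart(repo RepoClient, rt Runtime, app string, factors ImpactFactors) error {
	shell, err := repo.PullShellImage(app)
	if err != nil {
		return err
	}
	inst, err := rt.CreateInstance(app, shell) // a "blank" instance at this point
	if err != nil {
		return err
	}
	model, err := repo.PullAccessModel(app)
	if err != nil {
		return err
	}
	ids := model.Predict(factors)           // expected access data for this startup
	target, err := repo.FetchImageData(ids) // prefetch only what is expected
	if err != nil {
		return err
	}
	return rt.Run(inst, target) // serve the prefetched data to the running instance
}
```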
In some possible implementations, the container running node may obtain a data access impact factor of the container application. A data access impact factor is a factor that affects which data the container application accesses; it may include, for example, at least one of environment information, running-state information, and external-request information.
The environment information includes the execution environment of the container application, which may cover both the hardware environment and the software environment. The hardware environment describes at least one of the computing, storage, and network devices, for example their type, specification, and load. The software environment describes the operating system, application software, and the like. The running-state information includes the running state of the container application, which may be characterized by its execution progress.
The container running node then determines, through the data access characteristic model and according to the data access impact factor, the identifiers of the target image data within the image data of the native container image, and obtains the target image data according to those identifiers.
Because the method takes into account how the runtime environment, the running state of the container application (for example, characterized by execution progress), and external requests influence the data the application accesses, it can predict the identifiers of the target image data accurately and fetch exactly the data the startup really needs, which raises the cache hit rate, reduces the number of remote data fetches, and improves cold-start efficiency.
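A minimal sketch of how the impact factors might drive the prediction, reusing the map-based ImpactFactors type from the previous sketch; the map keys and the table-driven stand-in model are illustrative assumptions, since this application does not prescribe a particular model structure.
```go
package coldstart

// Example impact factors for one startup (values are illustrative).
var exampleFactors = ImpactFactors{
	Env:     map[string]string{"cpu_arch": "amd64", "os": "linux", "mem_mb": "4096"},
	State:   map[string]string{"progress": "init"},
	Request: map[string]string{"kind": "http"},
}

// TableModel is a toy stand-in for the data access characteristic model: it
// maps (processor architecture, execution progress) to the identifiers of the
// image data expected to be read under those conditions.
type TableModel struct {
	Rules map[[2]string][]string
}

func (m TableModel) Predict(f ImpactFactors) []string {
	key := [2]string{f.Env["cpu_arch"], f.State["progress"]}
	return m.Rules[key]
}

// TableModel satisfies the AccessModel interface from the earlier sketch,
// so it can drive ColdStart directly.
var _ AccessModel = TableModel{}
```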
In some possible implementations, after the container application has started, the container running node may further record the data actually accessed by the container application during startup and send the data access impact factor of the container application together with that actual access data to the image repository. The image repository then updates the data access characteristic model accordingly, which further improves the model's accuracy.
In some possible implementations, the container running node may also cache the target image data. When the instance of the container application requests data, the container running node first looks for it locally, for example in a built-in cache device or an attached storage device; if the lookup succeeds, the data is returned directly to the instance, and if it fails, the data is downloaded from the image repository.
Looking locally first effectively reduces the number of remote data fetches, improves response time, and reduces the use of network resources.
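A sketch of the local-cache-first read path, reusing the hypothetical RepoClient from the earlier sketch; the in-memory cache layout is an assumption.
```go
package coldstart

import (
	"errors"
	"sync"
)

// Cache is a minimal in-memory stand-in for the node-local cache of image data.
type Cache struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func NewCache() *Cache { return &Cache{data: make(map[string][]byte)} }

func (c *Cache) Put(id string, b []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[id] = b
}

var errMissing = errors.New("image data not found in repository")

// Read serves a data request from the instance: local lookup first, falling
// back to a remote fetch from the image repository only on a cache miss.
func (c *Cache) Read(repo RepoClient, id string) ([]byte, error) {
	c.mu.RLock()
	b, ok := c.data[id]
	c.mu.RUnlock()
	if ok {
		return b, nil // cache hit: no remote fetch needed
	}
	remote, err := repo.FetchImageData([]string{id}) // cache miss: download on demand
	if err != nil {
		return nil, err
	}
	b, ok = remote[id]
	if !ok {
		return nil, errMissing
	}
	c.Put(id, b)
	return b, nil
}
```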
In a second aspect, the present application provides an image management method. The method may be performed by the image repository in the management system. Specifically, the image repository sends the shell image corresponding to the container application to the container running node, sends the data access characteristic model of the container application to the container running node, and then receives the identifiers of the target image data sent by the container running node. The target image data is image data of the native container image and is the data the container application is expected to access during startup; its identifiers are determined by the container running node according to the data access characteristic model. The image repository then sends the target image data to the container running node according to those identifiers. The target image data, when accessed by the instance of the container application that the container running node created from the shell image, is used to start the container application.
In this method, the image repository splits the image data out of the native container image and provides the data access characteristic model to the container running node, so the node can actively fetch from the repository only the data the container application needs to run. Fetching on demand avoids unnecessary resource waste, removes the need to download all of the image data, and improves the cold-start efficiency of the container application.
In some possible implementations, before sending the data access characteristic model of the container application to the container running node, the image repository obtains, for multiple groups of data access impact factors of the container application, the data actually accessed by the container application during startup under each group of factors. The data access impact factor includes at least one of environment information, running-state information, and external-request information. For each group of factors, the image repository runs the new image in the corresponding scenario and records the actual access data of the container application in that scenario. Each group of factors and its corresponding actual access data form a sample, and the image repository trains the data access characteristic model from these samples.
Building the data access characteristic model from multiple groups of impact factors and the corresponding actual access data allows an in-depth analysis and accurate prediction of the data access path: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched, which raises the cache hit rate, reduces the number of remote data fetches, and improves cold-start efficiency.
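A hedged sketch of how samples pairing impact factors with actual access data might be turned into the toy table-driven model from the earlier sketch; a real deployment could fit a learned model instead.
```go
package coldstart

// Sample pairs one group of data access impact factors with the identifiers of
// the image data actually accessed during a recorded startup under those factors.
type Sample struct {
	Factors  ImpactFactors
	Accessed []string
}

// TrainTableModel builds the toy table-driven model from recorded samples.
// It only illustrates the (impact factors -> actual access data) relationship
// used for training; the model type itself is not prescribed here.
func TrainTableModel(samples []Sample) TableModel {
	rules := make(map[[2]string][]string)
	seen := make(map[[2]string]map[string]bool)
	for _, s := range samples {
		key := [2]string{s.Factors.Env["cpu_arch"], s.Factors.State["progress"]}
		if seen[key] == nil {
			seen[key] = make(map[string]bool)
		}
		for _, id := range s.Accessed {
			if !seen[key][id] { // deduplicate identifiers per key
				seen[key][id] = true
				rules[key] = append(rules[key], id)
			}
		}
	}
	return TableModel{Rules: rules}
}
```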
In a third aspect, the present application provides a container running node. The container running node includes:
a driver module, configured to obtain the shell image corresponding to the container application from the image repository;
a creation module, configured to create an instance of the container application from the shell image;
a data download module, configured to obtain the data access characteristic model of the container application from the image repository and to obtain target image data from the image data of the native container image according to the model, the target image data being the data the container application is expected to access during startup; and
a file system module, configured to provide, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
In some possible implementations, the data download module is specifically configured to:
obtain a data access impact factor of the container application, the impact factor including at least one of environment information, running-state information, and external-request information;
determine, through the data access characteristic model and according to the data access impact factor, the identifiers of the target image data within the image data of the native container image; and
obtain the target image data according to those identifiers.
In some possible implementations, the node further includes:
a data reporting module, configured to obtain the data actually accessed by the container application during startup and to send the data access impact factor of the container application and that actual access data to the image repository.
In some possible implementations, the node further includes:
a cache module, configured to cache the target image data downloaded by the data download module and to return requested data to the file system module according to the target image data.
In a fourth aspect, the present application provides an image repository. The image repository includes:
a shell image storage module, configured to send the shell image corresponding to the container application to the container running node;
a characteristic model storage module, configured to send the data access characteristic model of the container application to the container running node; and
an image data storage module, configured to receive the identifiers of the target image data sent by the container running node and to send the target image data to the container running node according to those identifiers, the target image data being image data of the native container image and the data the container application is expected to access during startup, its identifiers being determined by the container running node according to the data access characteristic model, and the target image data being used, when accessed by the instance of the container application created by the container running node from the shell image, to start the container application. An illustrative sketch of this exchange follows.
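For illustration only, the request/response exchange handled by the image data storage module might look like the following HTTP sketch; the endpoint, JSON body, and blob-store interface are assumptions, not an interface defined by this application.
```go
package coldstart

import (
	"encoding/json"
	"net/http"
)

// BlobStore is a stand-in for wherever the repository keeps the split-out
// image data (block, file, or object storage, as described later).
type BlobStore interface {
	Get(id string) ([]byte, bool)
}

// dataRequest carries the identifiers of the target image data chosen by the
// container running node with the data access characteristic model.
type dataRequest struct {
	IDs []string `json:"ids"`
}

// ImageDataHandler returns the requested target image data keyed by identifier.
func ImageDataHandler(store BlobStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req dataRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		resp := make(map[string][]byte)
		for _, id := range req.IDs {
			if b, ok := store.Get(id); ok {
				resp[id] = b // encoding/json base64-encodes []byte values
			}
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	}
}
```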
In some possible implementations, the image repository further includes:
an optimization module, configured to: before the data access characteristic model of the container application is sent to the container running node, obtain, for multiple groups of data access impact factors of the container application, the data actually accessed by the container application during startup under each group of factors, and train the data access characteristic model from the groups of impact factors and the corresponding actual access data, the data access impact factor including at least one of environment information, running-state information, and external-request information.
In some possible implementations, the image repository further includes:
a reconstruction module, configured to rebuild the native container image of the container application into a new image, the new image including the shell image and the image data corresponding to the container application.
In some possible implementations, the image repository further includes:
a test module, configured to provide a test method so that the optimization module can run the new image in different scenarios according to the test method and obtain the data actually accessed by the container application during startup under the multiple groups of data access impact factors.
In a fifth aspect, the present application provides a management system for container applications. The management system includes an image repository and at least one container running node. The container running node is configured to perform the method for starting a container application according to the first aspect or any implementation thereof, and the image repository is configured to perform the image management method according to the second aspect or any implementation thereof.
In a sixth aspect, the present application provides a container running node. The container running node includes a processor and a memory that communicate with each other. The processor is configured to execute the instructions stored in the memory so that the container running node performs the method for starting a container application according to the first aspect or any implementation thereof.
In a seventh aspect, the present application provides an image repository. The image repository includes a processor and a memory that communicate with each other. The processor is configured to execute the instructions stored in the memory so that the image repository performs the image management method according to the second aspect or any implementation thereof.
In an eighth aspect, the present application provides a computer-readable storage medium storing instructions that instruct a container running node to perform the method for starting a container application according to the first aspect or any implementation thereof.
In a ninth aspect, the present application provides a computer-readable storage medium storing instructions that instruct an image repository to perform the image management method according to the second aspect or any implementation thereof.
In a tenth aspect, the present application provides a computer program product containing instructions that, when run on a container running node, cause the container running node to perform the method for starting a container application according to the first aspect or any implementation thereof.
In an eleventh aspect, the present application provides a computer program product containing instructions that, when run on an image repository, cause the image repository to perform the image management method according to the second aspect or any implementation thereof.
In a twelfth aspect, the present application provides a method for starting a container application. The method may be performed by a cloud platform, that is, a platform through which a cloud service provider offers computing, storage, network, and other capabilities on top of its hardware and software. In this embodiment, the cloud platform can deploy the container application and thereby provide the corresponding cloud service.
Specifically, the cloud platform may create an instance of the container application, obtain the data access characteristic model of the container application, and obtain target image data according to the model, the target image data being the data the container application is expected to access during startup. The cloud platform then provides, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
By analyzing the data access path in depth with the data access characteristic model, the cloud platform predicts the access path accurately: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched, which raises the cache hit rate, reduces the number of remote data fetches, and improves cold-start efficiency. In addition, the method prefetches only the data the instance actually needs rather than the complete container image, which shortens cold-start time, reduces both the bandwidth used by the cold start and the storage consumed on the deployment node, improves resource utilization, and avoids waste.
In some possible implementations, the cloud platform may create the instance of the container application on a cloud host or on a virtual machine of a cloud host. Because containers adapt well to different environments and are easy to migrate, more and more users choose to deploy container applications on cloud platforms.
Creating the instance of the container application on a cloud host or on one of its virtual machines improves the portability of the container application. Moreover, the cloud host or virtual machine isolates the instance from instances of other container applications, ensuring security.
In some possible implementations, the cloud platform may present a configuration interface to the user and receive, through that interface, the startup parameters of the container application configured by the user. The startup parameters may include parameters of the cloud host on which the container application is deployed, such as the processor architecture and the memory size, and may further include environment variables. The cloud platform may create the instance of the container application according to these startup parameters.
In this way, a user can tailor the startup parameters to the needs of the service, for example the specification of the cloud host on which the container application is deployed, and the cloud platform starts the container application according to those user-defined parameters, flexibly meeting different service requirements.
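A sketch of how user-configured startup parameters might be passed to instance creation; the field names, and the choice to model them as a struct at all, are illustrative assumptions that reuse the hypothetical types from the earlier sketches.
```go
package coldstart

// StartupParams captures the parameters a user might set in the
// configuration interface before the cloud platform creates the instance.
type StartupParams struct {
	App      string            // container application to start
	CPUArch  string            // processor architecture of the cloud host, e.g. "arm64"
	MemoryMB int               // memory size of the cloud host
	OnVM     bool              // create on a virtual machine of the cloud host rather than the host itself
	EnvVars  map[string]string // environment variables passed to the instance
}

// CreateFromParams sketches creating the instance according to the startup
// parameters; rt and shell come from the earlier sketches.
func CreateFromParams(rt Runtime, shell ShellImage, p StartupParams) (Instance, error) {
	if shell.Metadata == nil {
		shell.Metadata = map[string]string{}
	}
	for k, v := range p.EnvVars {
		shell.Metadata["env."+k] = v // illustrative: record env vars in instance metadata
	}
	return rt.CreateInstance(p.App, shell)
}
```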
In some possible implementations, the cloud platform may further obtain the sequence of data accessed by the container application during startup and upload that data sequence, the data sequence being used to update the data access characteristic model. The model can thus be kept optimized, which improves prefetch accuracy, raises the cache hit rate, reduces the number of remote data fetches, and shortens startup time.
In some possible implementations, before creating the instance of the container application, the cloud platform may further receive a user-configured test set containing test data. The test data may be used to train the data access characteristic model, which improves the guidance that data prefetching provides for subsequent startups of the container application and thus their efficiency.
Note that when the user does not configure a test set and the cloud platform has no preset test set, the cloud platform may also collect data from actual startups of the container application to obtain a test set and provide it, so that the model management platform trains the data access characteristic model on the test data in that test set.
In a thirteenth aspect, the present application provides a model management method. The method may be performed by a model management platform. Specifically, the model management platform creates the data access characteristic model of the container application; it may then receive a model acquisition request sent by the cloud platform, the request being used to request the data access characteristic model of the container application, and return that model to the cloud platform so that the cloud platform can obtain, according to the model, the data the container application is expected to access during startup.
In this method, the model management platform maintains data access characteristic models for different container applications; when the cloud platform requests the model of one or more container applications, the platform returns the corresponding models, enabling accurate data prefetching on the cloud platform and improving container-application startup efficiency.
In some possible implementations, the model management platform may further receive the sequence of data accessed by the container application during startup, uploaded by the cloud platform, and update the data access characteristic model according to that data sequence.
The data access characteristic model can thus be updated continuously as container applications start, keeping its predictions accurate, improving prefetch accuracy, and improving startup efficiency.
In some possible implementations, the model management platform may be deployed in the image repository. In other words, the image repository may integrate the model-management function and provide the data access characteristic model to the cloud platform, so that the cloud platform prefetches data accurately according to the model and container-application startup efficiency improves.
In a fourteenth aspect, the present application provides a cloud platform. The cloud platform includes:
a creation module, configured to create an instance of the container application;
a data download module, configured to obtain the data access characteristic model of the container application and to obtain target image data according to the model, the target image data being the data the container application is expected to access during startup; and
a file system module, configured to provide, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
In some possible implementations, the creation module is specifically configured to:
create the instance of the container application on a cloud host; or
create the instance of the container application on a virtual machine of the cloud host.
In some possible implementations, the cloud platform further includes:
an interaction module, configured to present a configuration interface to the user and to receive, through the configuration interface, the startup parameters of the container application configured by the user;
the creation module being specifically configured to:
create the instance of the container application according to the startup parameters of the container application.
In some possible implementations, the file system module is further configured to:
obtain the sequence of data accessed by the container application during startup; and
upload the data sequence, the data sequence being used to update the data access characteristic model.
In some possible implementations, the cloud platform further includes:
an interaction module, configured to receive a user-configured test set before the instance of the container application is created,
the data access characteristic model being trained on the test data in the test set.
In a fifteenth aspect, the present application provides a model management platform. The model management platform includes:
an optimization module, configured to create a data access characteristic model of the container application;
a communication module, configured to receive a model acquisition request sent by a cloud platform, the request being used to request the data access characteristic model of the container application;
the communication module being further configured to return the data access characteristic model of the container application to the cloud platform, so that the cloud platform obtains, according to the model, the data the container application is expected to access during startup.
In some possible implementations, the communication module is further configured to:
receive the sequence of data accessed by the container application during startup, uploaded by the cloud platform;
and the optimization module is further configured to:
update the data access characteristic model according to the data sequence.
In some possible implementations, the model management platform is deployed in an image repository.
In a sixteenth aspect, the present application provides a cloud platform implemented by a computer cluster. The cloud platform includes at least one computer, the at least one computer includes at least one processor and at least one memory, and the at least one processor and the at least one memory communicate with each other. The at least one processor is configured to execute the instructions stored in the at least one memory so that the cloud platform performs the method for starting a container application according to the twelfth aspect or any implementation thereof.
In a seventeenth aspect, the present application provides a model management platform implemented by a computer cluster. The model management platform includes at least one computer, the at least one computer includes at least one processor and at least one memory, and the at least one processor and the at least one memory communicate with each other. The at least one processor is configured to execute the instructions stored in the at least one memory so that the model management platform performs the model management method according to the thirteenth aspect or any implementation thereof.
In an eighteenth aspect, the present application provides a computer-readable storage medium storing instructions that instruct a cloud platform to perform the method for starting a container application according to the twelfth aspect or any implementation thereof.
In a nineteenth aspect, the present application provides a computer-readable storage medium storing instructions that instruct a model management platform to perform the model management method according to the thirteenth aspect or any implementation thereof.
In a twentieth aspect, the present application provides a computer program product containing instructions that, when run on a cloud platform, cause the cloud platform to perform the method for starting a container application according to the twelfth aspect or any implementation thereof.
In a twenty-first aspect, the present application provides a computer program product containing instructions that, when run on a model management platform, cause the model management platform to perform the model management method according to the thirteenth aspect or any implementation thereof.
Further implementations of the present application may be obtained by combining the implementations provided in the above aspects.
Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below.
Fig. 1 is a schematic structural diagram of an image and a container according to an embodiment of the present application;
Fig. 2 is a system architecture diagram of a management system according to an embodiment of the present application;
Fig. 3A is a system architecture diagram of a management system according to an embodiment of the present application;
Fig. 3B is a system architecture diagram of a management system according to an embodiment of the present application;
Fig. 4 is an interaction flowchart of a method for starting a container application according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of modeling a data access characteristic model according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a method for starting a container application according to an embodiment of the present application;
Fig. 7 is a flowchart of an image management method according to an embodiment of the present application;
Fig. 8 is a hardware structure diagram of a container running node according to an embodiment of the present application;
Fig. 9 is a hardware structure diagram of an image repository according to an embodiment of the present application;
Fig. 10A is a system architecture diagram of a management system for container applications according to an embodiment of the present application;
Fig. 10B is a system architecture diagram of another management system for container applications according to an embodiment of the present application;
Fig. 11 is a schematic diagram of data prefetching according to an embodiment of the present application;
Fig. 12 is a flowchart of a method for starting a container application according to an embodiment of the present application;
Fig. 13 is a hardware structure diagram of a cloud platform according to an embodiment of the present application;
Fig. 14 is a hardware structure diagram of a model management platform according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. A feature defined with "first" or "second" may therefore explicitly or implicitly include one or more such features.
Some technical terms used in the embodiments of the present application are described first.
An image is a union file system made up of multiple read-only layers. The union file system merges the different read-only layers into a single file system and presents a unified view of them, hiding the existence of the individual layers: from the user's point of view, an image contains exactly one read-only file system. Fig. 1 shows a schematic diagram of an image 100. As shown in Fig. 1, the image 100 includes multiple stacked read-only layers 102, and every read-only layer 102 except the lowest one holds a downward pointer to the layer beneath it.
A container is a union file system made up of at least one read-only layer and one read-write layer. The union file system merges these layers into a single file system with a unified view, hiding the existence of the individual layers: from the user's point of view, a container contains one readable and writable file system. Fig. 1 also shows a schematic structural diagram of a container 10. As shown in Fig. 1, the container 10 includes at least one read-only layer 102 and a read-write layer 104, stacked with the read-write layer 104 on top. Every layer except the lowest read-only layer 102 (that is, the other read-only layers 102 and the read-write layer 104) holds a downward pointer to the layer beneath it.
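A minimal sketch of the layer stacking described above, with each layer holding a downward pointer to the layer beneath it; the types are illustrative only.
```go
package coldstart

// Layer models one layer of the union file system in Fig. 1: read-only layers
// for the image, plus a single read-write layer on top for the container.
type Layer struct {
	ID       string
	ReadOnly bool
	Lower    *Layer // downward pointer to the next layer; nil for the lowest layer
}

// NewContainerLayers stacks a read-write layer on top of an image's read-only
// layers, mirroring the structure of container 10 in Fig. 1.
// imageLayers is expected to be ordered top-most layer first.
func NewContainerLayers(imageLayers []*Layer) *Layer {
	top := &Layer{ID: "rw", ReadOnly: false}
	cur := top
	for _, l := range imageLayers {
		cur.Lower = l
		cur = l
	}
	return top
}
```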
A container application is an application packaged by means of a container. The application may include at least one image, and the container application is obtained by creating an instance of the application in the container from that at least one image. An instance of an application can be understood as the object obtained by instantiating the application's program (which can be regarded as a class). Once created, a container application also supports operations such as start, terminate, and delete.
A container application can be started by a cold start, a hot start, and so on. A cold start means that no image exists locally: the image must first be downloaded from an image repository to the node, and an instance of the container application is then started from that image on the node. A hot start means that the node already holds the image locally and the instance of the container application is started directly from it. Deploying a new container application to a node therefore triggers a cold start. Downloading an image from the image repository takes a significant amount of time and greatly limits the cold-start efficiency of the container application.
To improve the cold-start efficiency of container applications, the industry has proposed a cold-start method based on data prefetching and on-demand loading. Specifically, the image repository rebuilds the native container image into a new, content-addressable image, for example by splitting the native container image into a shell image and image data, and then pre-runs the rebuilt image to record the data the container application may read during a cold start. Because a cold start may read different data in different situations, storing the data that might be read in every situation would incur considerable storage overhead; the image repository therefore typically pre-runs the rebuilt image only once. At cold start, prefetching and on-demand loading are then driven by that single pre-run record. The prefetched data raises the cache hit rate of on-demand loading, reduces the number of remote data fetches, and improves cold-start efficiency.
However, when the execution path at cold start differs from the pre-run execution path, for example because the cold-start environment differs from the pre-run environment, the prefetching policy can fail: the prefetched data is not the data actually needed, the cache hit rate of on-demand loading drops, and the number of remote data fetches rises. This not only hurts the cold-start efficiency of the container application but also wastes network resources on prefetching unnecessary data.
In view of this, the present application provides a method for starting a container application. The method may be performed by a management system for container applications (hereinafter the management system), which includes an image repository and a container running node. Specifically, the container running node obtains the shell image corresponding to the container application from the image repository and creates an instance of the container application from it. The container running node also obtains the data access characteristic model of the container application from the image repository and obtains target image data from the image data of the native container image according to the model, the target image data being the data the container application is expected to access during startup. It then provides, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
By analyzing the data access path in depth with the data access characteristic model, the method predicts the access path accurately: even when execution paths differ, the data actually needed for a cold start of the container application can be prefetched, which raises the cache hit rate, reduces the number of remote data fetches, and improves cold-start efficiency. In addition, the method prefetches only the data the instance actually needs rather than the complete container image, which shortens cold-start time, reduces both the bandwidth used by the cold start and the storage consumed on the deployment node, improves resource utilization, and avoids waste.
To make the technical solution of the present application clearer and easier to understand, the system architecture of the management system that performs the method for starting a container application is described below.
Refer to the system architecture diagram of the management system shown in Fig. 2. As shown in Fig. 2, the management system 200 includes an image repository 202 and a container running node 204, between which a communication connection is established. The connection may be wired, such as a coaxial-cable or optical-fiber connection, or, in some examples, wireless, such as a cellular-network or wireless-LAN connection.
Specifically, the image repository 202 provides a shell image that can index the image data of the native container image. The shell image can be regarded as a standard container image: it includes the metadata of the native container image and an index from that metadata to the image data. In the present application, metadata is data that describes the image data, for example data describing its attributes, and it supports functions such as indicating storage locations, history data, resource lookup, and file records.
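A hedged sketch of what a shell image's metadata-plus-index content might look like; the manifest fields and content-hash identifiers are assumptions loosely modeled on content-addressable image formats, not a format defined by this application.
```go
package coldstart

// ShellManifest sketches the content of a shell image: metadata describing the
// native container image plus an index from file paths to the identifiers of
// the split-out image data, so each piece of data can be addressed and fetched
// individually.
type ShellManifest struct {
	AppName  string            // container application this image belongs to
	Metadata map[string]string // attributes of the image data (sizes, history, ...)
	Index    map[string]string // file path -> identifier (e.g. content hash) of its image data
}

// LookupID resolves a file path accessed by the instance to the identifier
// used when requesting that image data from the repository.
func (m ShellManifest) LookupID(path string) (string, bool) {
	id, ok := m.Index[path]
	return id, ok
}
```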
The container running node 204 obtains the shell image from the image repository 202 and creates an instance of the container application from it; at this point the instance can be regarded as a blank instance. The image repository 202 further provides the data access characteristic model of the container application, and the container running node 204 obtains that model from the repository, obtains target image data from the image data of the native container image according to the model, and provides, according to the target image data, the data required for running to the instance of the container application so as to run the instance and thereby start the container application.
In some possible implementations, the image repository 202 may include a reconstruction module 2022, an optimization module 2024, and storage modules. Different types of data may be stored centrally in a single storage module or separately in different ones. For example, the shell image and the image data may be stored separately in a shell image storage module 2025 and an image data storage module 2026, and the data access characteristic model built by the optimization module 2024 may be stored in a characteristic model storage module 2027.
The reconstruction module 2022 rebuilds the native container image into a new image that includes a shell image and image data. The content of the new image is addressable; for example, since the shell image includes an index from the metadata to the image data, addressing can be done through that index.
The shell image storage module 2025 stores the shell image of the new image and sends the shell image corresponding to the container application to the container running node 204. The image data storage module 2026 stores the image data of the new image, receives the identifiers of the target image data sent by the container running node 204, and sends the target image data to the node according to those identifiers. The target image data is image data of the native container image and is the data the container application is expected to access during startup. The image data may be stored as block storage, file storage, or object storage, which is not limited in this embodiment; Fig. 2 merely illustrates the case in which the shell image, the image data, and so on are stored separately.
The optimization module 2024 obtains, for multiple groups of data access impact factors of the container application, the data actually accessed by the container application during startup under each group of factors, and then trains the data access characteristic model from the groups of factors and the corresponding actual access data. The data access impact factor includes at least one of environment information, running-state information, and external-request information. The characteristic model storage module 2027 stores the data access characteristic model of the container application, so that when the container running node 204 requests data, the image repository 202 can first provide the model to the node, and the node can then download the target image data from the repository according to the model.
The image repository 202 may further include a test module 2023. The test module 2023 provides a test method so that the optimization module 2024 can pre-run, according to the test method, the new image produced by the reconstruction module 2022 from the native container image, and obtain the data actually accessed by the container application during startup under the multiple groups of data access impact factors. The test method may include one or more of test cases and static analysis of binary files. The test module 2023 may receive a test method provided or specified by the image provider and pass it to the optimization module 2024.
Note that the reconstruction module 2022, the test module 2023, and the optimization module 2024 are optional. In some possible implementations, their functions may be performed in advance by other devices, and the image repository 202 is then used only to store the shell image, the image data, and the data access characteristic model.
In some possible implementations, the container operation node 204 may include a driver module 2042, a creation module 2043, a container instance 2044, a data download module 2046, and a file system module 2045. Next, the functions of the respective modules of the container operation node 204 will be described in detail.
The driver module 2042 is used to obtain the empty shell image from the image repository 202. For example, the driver module 2042 may pull the empty shell image of the container application from the empty shell image storage module 2025 in the image repository 202 to be local to the container runtime node 204. The local is a built-in storage module of the container operation node 204 and/or an external storage module directly controlled by the container operation node 204.
The creation module 2043 is used to create an instance of a container application from the empty shell image, such as the container instance 2044 shown in fig. 2. The container instance 2044 may be an entity on which the container application runs. The creation module 2043 may provide metadata for the container instance 2044 for data access based on the empty shell image pulled by the driver module 2042.
The data downloading module 2046 is configured to obtain the data access characteristic model of the container application from the image repository 202, and obtain the target image data from the image data of the native container image according to the data access characteristic model. The file system module 2045 is configured to provide data required for running to the instance of the container application according to the target image data, so as to run the instance of the container application, thereby starting the container application.
Further, the container running node 204 may further include a caching module 2047. The file system module 2045 is configured to intercept an access request of the container instance 2044 for mirrored data, send a prefetch trigger signal to the data downloading module 2046, so as to trigger prefetching of data that may be needed by the container instance 2044, and forward the access request to the caching module 2047, so as to locally obtain the data that is needed by the container instance 2044.
The data downloading module 2046 may execute the prefetch action in response to the prefetch trigger signal. Specifically, the data download module 2046 may obtain the data access feature model of the container application from the image repository 202 (e.g., the feature model storage module 2027 in the image repository 202), and obtain the target image data from the image data of the native container image according to the data access feature model.
The cache module 2047 is configured to receive an access request for data, search for data locally, return data to the container instance 2044 if the search is successful, and send a data download request to the data download module 2046 if the search is failed. The data download module 2046 may also receive a data download request from the caching module 2047 to download corresponding data, such as target image data, from the image repository 202 (such as the image data storage module 2026 in the image repository 202). In this way, the caching module 2047 can obtain the target image data through the data downloading module 2046 and return the target image data to the container instance 2044.
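For illustration, the following is a minimal Python sketch of this hit-or-download behavior: the cache returns locally stored data on a hit and falls back to a download module on a miss. The class and method names are assumptions for illustration, not the actual implementation of the modules described above.

```python
class StubDownloader:
    def download(self, block_id):
        # placeholder for fetching a block of target image data from the image repository
        return b""


class CacheModule:
    """Minimal sketch of a local cache with download fallback (illustrative only)."""

    def __init__(self, downloader):
        self.downloader = downloader      # stands in for the data download module 2046
        self.store = {}                   # block identifier -> cached data

    def put(self, block_id, data):
        # called when prefetched target image data arrives in a download response
        self.store[block_id] = data

    def get(self, block_id):
        data = self.store.get(block_id)
        if data is None:                  # miss: request a download from the image repository
            data = self.downloader.download(block_id)
            self.store[block_id] = data
        return data                       # returned to the file system module / container instance


cache = CacheModule(StubDownloader())
cache.put(1, b"prefetched data")
print(cache.get(1))                       # hit: served locally
print(cache.get(2))                       # miss: fetched via the downloader
```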
Fig. 2 is merely an exemplary partitioning of the management system 200. In some possible implementations, the management system 200 may divide the functional modules according to other dividing manners. The components of the management system 200 (such as the mirror repository 202, the container operation node 204, or the modules in the mirror repository 202, the modules in the container operation node 204) may be deployed in a centralized manner in a cloud environment, an edge environment, or a terminal, or in a distributed manner in different environments.
Wherein the cloud environment indicates a central cluster of computing devices owned by a cloud service provider for providing computing, storage, and communication resources. The central computing device cluster includes one or more central computing devices (e.g., as a central server). An edge environment indicates a cluster of edge computing devices geographically close to an end device (e.g., a terminal) for providing computing, storage, and communication resources. The edge computing device cluster includes one or more edge computing devices (e.g., edge servers). The terminal includes, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a smart phone, and the like.
As shown in fig. 3A, the management system 200 may be deployed in a cloud environment, such as a central server deployed on the cloud environment. The management system 200 may also be deployed in an edge computing environment, such as an edge server deployed on an edge environment. The management system 200 may also be deployed on a terminal.
As shown in FIG. 3B, various portions of the management system 200 may be distributively deployed in different environments. For example, portions of the management system 200 may be deployed separately on three environments, a cloud environment, an edge environment, a terminal, or any two of them. In some embodiments, the container running node 204 may be deployed in a terminal, and the mirror repository 202 may be deployed in a cloud environment or an edge environment.
Because the management system 200 predicts data that may be needed for cold start of a container application based on the data access characteristic model, compared with the conventional method for acquiring data based on a single pre-operation result, the data can be accurately pre-acquired in the embodiment of the present application, and the number of times for remotely acquiring data is reduced. Even if the mirror image warehouse 202 in the management system 200 is deployed in an edge environment, and the bandwidth resource of the edge environment is limited and the network state is unstable (for example, the packet loss rate is high), because the number of times that the container operation node 204 acquires data from the mirror image warehouse 202 is reduced, the time consumption of cold start can be effectively reduced, and the efficiency of cold start of container application is improved.
The embodiment shown in fig. 2 to fig. 3B describes the management system 200 provided in the embodiment of the present application in detail, and the embodiment of the present application also provides a method for starting a container application executed by the container running node 204 in the management system 200. The method of launching the container application will be described in detail from an interactive perspective.
Referring to fig. 4, a flow chart of a method for starting a container application is shown, the method comprising:
S402: The container runtime node 204 obtains the shell image corresponding to the container application from the image repository 202 and obtains the data access characteristic model of the container application.
The empty shell image is capable of indexing the image data of the native container image. In some embodiments, the empty shell image includes metadata of the native container image and an index from the metadata to the image data. As such, the empty shell image can index the image data of the native container image through the metadata-to-image-data index.
In the present application, metadata is specifically data describing mirrored data, for example, data describing attributes of the mirrored data. Metadata is used to support functions such as indicating storage locations, historical data, resource lookups, file records, and the like. In some embodiments, the metadata of the mirrored data may include the mirrored storage location, the mirrored author, the mirrored version, and so forth.
The empty shell image may be obtained by the image repository 202 (e.g., the reconstruction module 2022 in the image repository 202) reconstructing the native container image. After the image repository 202 reconstructs the native container image and obtains the empty shell image, the empty shell image may be stored in the empty shell image storage module 2025 of the image repository 202. Based on this, the container operation node 204 (e.g., the driver module 2042 in the container operation node 204) may pull the empty shell image from the empty shell image storage module 2025.
The mirror repository 202 (e.g., the feature model storage module 2027 in the mirror repository 202) stores a data access feature model, and the container runtime node 204 may obtain the data access feature model of the container application from the mirror repository 202 (e.g., the feature model storage module 2027).
The data access characteristic model is used to describe data access characteristics. The data access characteristic model may be used to predict data that an instance of the container application needs to access. The container runtime node 204 may send a feature model download request to the image repository 202 (e.g., the feature model storage module 2027 in the image repository 202) and then receive a feature model download response sent by the image repository 202. Wherein the feature model download response carries the data access feature model of the container application.
It should be noted that fig. 4 is an example of the container operation node 204 simultaneously acquiring the shell image and acquiring the data access feature model. In other possible implementation manners of the embodiment of the present application, the obtaining of the shell image and the obtaining of the data access characteristic model may be performed in a non-parallel manner, for example, the container operation node 204 may first obtain the shell image corresponding to the container application, and then obtain the data access characteristic model of the container application.
S404: The container runtime node 204 creates an instance of the container application from the empty shell image.
In particular, container runtime node 204 may create one or more instances of the container application from the shell image via a create command. Taking the docker container deployment engine as an example, the container runtime node 204 may execute a docker container run command, or a docker service create command, to create one or more instances of the container application from the shell image.
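For illustration only, the following Python sketch issues the docker container run command mentioned above through subprocess. The image reference and container name are placeholders, and it is assumed that the docker deployment engine is installed on the container runtime node.

```python
import subprocess


def create_instance(shell_image_ref, name):
    # "docker container run" creates and starts a container from the pulled image;
    # "-d" detaches it so the node can continue prefetching mirror data in parallel.
    subprocess.run(
        ["docker", "container", "run", "-d", "--name", name, shell_image_ref],
        check=True,
    )


# Example with a placeholder empty shell image reference:
# create_instance("registry.example.com/app:shell", "app-instance-1")
```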
In some possible implementations, the shell image includes multiple overlapping read-only layers, and container runtime node 204 may add a read-write layer to the multiple read-only layers that overlaps the read-only layers, thereby creating an instance of the container application (e.g., container instance 2044). It should be noted that once an instance of a container application is created from an empty shell image, the instance of the container application and the empty shell image become interdependent, and the empty shell image is difficult to delete until all containers created on the empty shell image are stopped. Attempting to delete an empty shell image without stopping or destroying the container instance using the empty shell image may result in an error.
S406: the container operation node 204 obtains the identifier of the target mirror data according to the data access characteristic model.
S408: the container operation node 204 obtains the target mirror data from the mirror data of the native container mirror stored in the mirror repository 202 according to the identifier of the target mirror data.
The target mirrored data is specifically intended access data of the container application during the start-up process. The container operation node 204 may predict the target mirror data through the data access characteristic model, obtain an identifier of the target mirror data, for example, an address of the target mirror data, and then the container operation node 204 obtains the target mirror data from the mirror data stored in the mirror repository 202 (for example, the mirror data storage module 2026 in the mirror repository 202) according to the identifier of the target mirror data.
Considering that data accessed by the container application can be affected by different factors, the container operation node 204 may determine, according to the factor affecting the data accessed by the container application, that is, the data access influence factor of the container application, an identifier of the target mirror image data through the data access characteristic model, and then obtain the target mirror image data from the mirror image data of the native container mirror image according to the identifier.
Wherein the data access influencing factor comprises at least one of environment information, running state information and external request information. The environment information includes an execution environment of the container application, which may include a hardware environment, a software environment, which executes the container application. The hardware environment is used to describe at least one of the hardware of the computing device, the storage device, the network device, etc., for example, the type, specification, load rate, etc. of the hardware. The software environment is used to describe an operating system or application software, etc. The run state information includes a run state of the container application, which may be characterized by execution progress. The external request information includes request information received by the container operation node 204 from the outside or request information sent by the container operation node 204.
The container operation node 204 inputs the current data access influence factors of the container application into the data access characteristic model, and the model predicts the identifier of the target mirror image data. The container operation node 204 then obtains the target mirror image data according to the identifier of the target mirror image data.
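A minimal Python sketch of this prediction-then-fetch flow (S406 and S408) is shown below. It assumes a hypothetical model object with a predict method and a repository client with a fetch_block method; the field names of the influence factors are illustrative only.

```python
from dataclasses import dataclass, field


@dataclass
class AccessFactors:
    environment: dict = field(default_factory=dict)    # e.g. {"cpu": "arm64", "memory_gb": 8}
    run_state: str = "init"                             # e.g. execution progress of the instance
    external_request: dict = field(default_factory=dict)  # request info received by the node


def prefetch_target_data(model, repository, factors):
    # S406: the model predicts identifiers of the target mirror data from the factors
    block_ids = model.predict(factors)
    # S408: fetch exactly those blocks from the mirror data stored in the repository
    return {block_id: repository.fetch_block(block_id) for block_id in block_ids}
```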
S410: the container run node 204 provides the instance of the container application with the data needed to run according to the target image data to run the instance of the container application to start the container application.
Specifically, the container running node 204 provides the target mirrored data to the instance of the container application on demand, and the instance of the container application can access this data to run, thereby starting the container application. Since not all of the mirror image data needs to be provided to the container application at once, the time consumed by cold start of the container application can be reduced and the cold start efficiency improved. Moreover, transmitting unnecessary data is avoided, which avoids wasting resources and improves resource utilization.
In some possible implementations, the container running node 204 may also obtain actual access data of the container application during the start-up process after implementing the cold start of the container application. For example, if the container application accesses data block 1, data block 3, data block 4, data block 7, data block 9, and data block 10 during the cold boot process, the actual access data of the container application during the cold boot process may be denoted as {1,3,4,7,9,10}. Correspondingly, the mirror repository 202 may also update the data access characteristic model according to the data access influence factor of the container application and the actual access data in the current startup process. The mirror repository 202 may specifically update the data access characteristic model in an online learning manner, or update the data access characteristic model in an offline learning manner after collecting multiple sets of data access influence factors applied to the container and actual access data corresponding to the multiple sets of data access influence factors one by one.
Based on the above description, the embodiments of the present application provide a method for starting a container application. The method uses the data access characteristic model to analyze the data access path in depth and predict it accurately, so that even when execution paths differ, the data actually needed for cold start of the container application can be prefetched and the cache hit rate improved. This reduces the number of remote data acquisitions and improves the cold start efficiency of the container application. In addition, the method prefetches only the data actually needed by the instance of the container application rather than the complete container image, which effectively reduces cold start time, reduces the bandwidth usage of cold start and the storage consumption of the deployment node, improves resource utilization, and avoids resource waste.
In addition, even if the mirror repository 202 is deployed in an edge environment with limited bandwidth resources and an unstable network state, the time consumed by cold start of the container application can be effectively reduced due to the reduction of the number of times of remotely acquiring data. The method can improve the cold start efficiency of the container application in both the edge environment and the cloud environment, and has high usability.
The embodiment shown in fig. 4 describes the method of container application launch from the perspective of the interaction of the mirror repository 202 and the container runtime node 204. The technical solution of the present application will be described in detail below from the perspective of the mirror repository 202 and the container operation node 204, respectively.
The starting method of the container application provided by the embodiment of the application depends on the data access characteristic model, and the process of modeling the data access characteristic model of the mirror warehouse is explained in detail below.
Referring to the schematic flow chart of the data access characteristic model modeling shown in fig. 5, the method specifically includes the following steps:
S502: The reconstruction module 2022 separates the metadata and the mirrored data of the native container mirror.
Specifically, the restructure module 2022 obtains the native container image provided or specified by the image provider, then separates the image data of the native container image from the native container image, and determines metadata describing the image data, thereby enabling separation of the metadata and the image data of the native container image.
S504: the reconstruction module 2022 builds an index from the metadata to the mirror image data, and obtains the empty shell mirror image according to the index.
Specifically, the reconstruction module 2022 may use a database to build the index from the metadata to the mirror data and then obtain the empty shell mirror image from the index. The empty shell mirror image includes an index to the mirror data, e.g., the metadata and the metadata-to-mirror-data index. In this manner, the empty shell mirror image can index the mirror data through the metadata.
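A simplified Python sketch of this reconstruction (S502 and S504) is shown below. It assumes the image data has already been split into blocks and uses a content hash as the block identifier, which is an illustrative choice rather than the embodiment's required scheme.

```python
import hashlib


def reconstruct(native_image_blocks, image_metadata):
    """Split a native container image into image data and an empty shell image (sketch)."""
    image_data = {}                                    # block identifier -> raw block bytes
    index = []                                         # ordered metadata-to-data index
    for block in native_image_blocks:
        block_id = hashlib.sha256(block).hexdigest()   # illustrative identifier scheme
        image_data[block_id] = block
        index.append(block_id)
    empty_shell_image = {"metadata": image_metadata, "index": index}
    return empty_shell_image, image_data


# Example: two blocks and some descriptive metadata.
shell, data = reconstruct([b"layer-0", b"layer-1"], {"author": "provider", "version": "1.0"})
```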
S506: the reconfiguration module 2022 stores the empty shell image to the empty shell image storage module 2025.
S508: the reconstruction module 2022 stores the mirrored data to the mirrored data storage module 2026.
The reconfiguration module 2022 may store the bare shell image and the image data separately, e.g., to the bare shell image storage module 2025 and the image data storage module 2026, respectively. In some possible implementations, the reconfiguration module 2022 may also store the above mentioned bare shell image and image data centrally.
It should be noted that S506 and S508 may be executed in parallel, or may be executed sequentially according to a set time sequence, which is not limited in this embodiment of the application.
S510: optimization module 2024 obtains the ghost image from ghost image storage module 2025.
S512: the optimization module 2024 retrieves the mirrored data from the mirrored data storage module 2026.
S514: optimization module 2024 obtains test methods from test module 2023.
The test method describes how to exercise the container application. The test method may include test cases or binary file static analysis. The binary file may be executable code obtained by compiling the source code of the application. The test module 2023 may receive a test method provided or specified by the image provider, and the optimization module 2024 may obtain the test method from the test module 2023.
It should be noted that S510 to S514 may be executed in parallel, or may be executed sequentially according to a set time sequence, which is not limited in this embodiment of the application.
S516: the optimization module 2024 pre-runs the reconstructed new mirror image in different scenes according to a test method, obtains actual access data of the container in the starting process under the condition of multiple groups of data access influence factors, and trains a data access characteristic model based on the multiple groups of data access influence factors and the actual access data corresponding to the data access influence factors one by one.
Different scenarios may include scenarios with different computing architectures, different computing capabilities, different storage capabilities, or different network conditions. The optimization module 2024 may pre-run a reconstructed new image under the different scenarios according to a test method, such as a test case, where the reconstructed new image includes a shell image and image data, so as to obtain actual access data of the container application. The actual access data in the pre-operation process is referred to as initial actual access data.
The optimization module 2024 may obtain a training sample according to the data access influence factor and actual access data corresponding to the data access influence factor, and then perform model training by using the training sample through a machine learning algorithm, thereby obtaining a data access feature model. The machine learning algorithm may include a conventional machine learning algorithm, such as a logic tree algorithm, a random forest algorithm, and the like. The machine learning algorithm may also be a Deep Learning (DL) algorithm. The deep learning algorithm is exemplified below.
In some possible implementations, the optimization module 2024 may construct a network architecture of the data access feature model according to a Recurrent Neural Network (RNN), and then perform weight initialization on the network architecture. Next, the optimization module 2024 inputs the training samples into the model, and updates the weights of the model according to the output result of the model, thereby implementing model training.
Wherein the RNN is a sequence model. Sequence models include different types, such as sequence-to-sequence (also known as many-to-many), non-sequence-to-sequence (also known as one-to-many), and sequence-to-non-sequence (also known as many-to-one). Sequence-to-sequence means that both the input and the output are sequences, and the lengths of the input and output sequences may be equal or different. Non-sequence-to-sequence means that the input is not a sequence and the output is a sequence. Sequence-to-non-sequence means that the input is a sequence and the output is not a sequence.
The optimization module 2024 may employ non-sequence to sequence type RNNs for model training. The RNN may take as input a data access impact factor including environment information, running state information, and external request information, and take as output an identifier of the target mirrored data. Model training based on this type of RNN can result in a data access characteristic model that can be predicted based on data access impact factors such as environmental information. In some embodiments, the optimization module 2024 may also perform model training using a sequence-to-sequence type RNN, where the RNN may take as input historical actual access data and as output an identification of expected access data. Model training based on this type of RNN may result in a data access feature model that can be predicted based on execution progress.
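The following PyTorch sketch illustrates the non-sequence-to-sequence variant described above, assuming the influence factors are encoded as a fixed-length feature vector and the output is a fixed-length sequence of block identifiers. All dimensions, layer choices, and names are illustrative assumptions rather than the embodiment's actual model.

```python
import torch
import torch.nn as nn


class AccessFeatureModel(nn.Module):
    """One-to-many sketch: influence factors in, sequence of block identifiers out."""

    def __init__(self, factor_dim=32, hidden_dim=128, num_blocks=1024, seq_len=16):
        super().__init__()
        self.seq_len = seq_len
        self.encoder = nn.Linear(factor_dim, hidden_dim)   # factors -> initial hidden state
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_blocks)       # logits over block identifiers

    def forward(self, factors):                             # factors: (B, factor_dim)
        batch = factors.size(0)
        h0 = torch.tanh(self.encoder(factors)).unsqueeze(0) # (1, B, hidden_dim)
        dummy = torch.zeros(batch, self.seq_len, 1)         # "one-to-many": no real input sequence
        out, _ = self.rnn(dummy, h0)                        # (B, seq_len, hidden_dim)
        return self.head(out)                               # (B, seq_len, num_blocks)


model = AccessFeatureModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a toy sample: influence factors -> actually accessed block identifiers.
factors = torch.randn(4, 32)                   # 4 samples of encoded influence factors
targets = torch.randint(0, 1024, (4, 16))      # identifiers of the blocks actually accessed
logits = model(factors)
loss = loss_fn(logits.reshape(-1, 1024), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```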
S518: the optimization module 2024 stores the data access feature model to the feature model storage module 2027.
The optimization module 2024 stores the data access characteristic models separately, so that when a request for the data access characteristic model by the container operation node 204 is received, the data access characteristic model can be found quickly, and response efficiency is improved.
In some possible implementations, the optimization module 2024 may also store the data access feature model centrally with the empty shell image and the mirror data, e.g., in the same storage module. Thus, storage resources can be saved.
Next, from the perspective of the container operation node 204, a cold start process of the container application will be described in detail.
Referring to the flowchart of the starting method of the container application shown in fig. 6, the method specifically includes the following steps:
S602: The driver module 2042 obtains the empty shell image from the image repository 202.
Specifically, the driver module 2042 may pull the empty shell image from the image repository 202 (e.g., the empty shell image storage module 2025 in the image repository 202) when a preset condition is triggered. The preset condition may be set according to a service requirement, for example, the preset condition may be set to detect that a user starts an operation of the container application.
S604: the creation module 2043 receives the empty shell image from which the container instance 2044 is created.
The creation module 2043 receives the empty shell image sent by the driver module 2042 and creates a container instance 2044 on the empty shell image. Since the container instance 2044 is created on an empty shell image, it is currently a blank container instance.
In some possible implementations, the function of the creation module 2043 may also be performed directly by the driver module 2042, and accordingly, the container running node 204 may not include the creation module 2043.
S606: the container instance 2044 sends a first data access request to the file system module 2045.
The file system module 2045 may include a user mode and a kernel mode. The user mode and the kernel mode are two running levels of the operating system and differ mainly in privilege level: the user mode has the lowest privilege level, while the kernel mode has a higher privilege level. Applications running in user mode cannot directly access operating system kernel data structures and programs.
In this embodiment, the container instance 2044 may be in a user mode or a kernel mode. The container instance 2044 sends a first data access request to the file system module 2045 to prefetch data through the file system module 2045.
S608: the file system module 2045 sends a prefetch trigger to the data download module 2046 and forwards the first data access request to the cache module 2047.
The file system module 2045 is responsible for intercepting the first data access request of the container instance 2044 and performs two actions: forwarding the request to the caching module 2047, and sending a prefetch trigger signal to the data download module 2046 to trigger prefetching of the data that the container instance 2044 may require.
S610: the data download module 2046 sends a feature model download request to the image repository 202 in response to the prefetch trigger.
The feature model download request carries an identifier of the data access feature model, and the identifier uniquely represents the data access feature model. The data download module 2046 carries the identifier in the feature model download request so as to request the data access feature model corresponding to the identifier.
S612: the data download module 2046 receives the feature model download response.
The feature model download response includes the data access feature model requested by the data download module 2046. The identifier of the data access feature model is the same as the identifier carried in the feature model download request.
S614: the data downloading module 2046 sends a data downloading request to the mirror repository 202 according to the data access characteristic model.
The data downloading module 2046 may predict data required for cold start of the container application, that is, target image data, according to the data access characteristic model. Specifically, a data access influence factor applied by the container is input into a data access characteristic model for prediction, and a prediction result is obtained, wherein the prediction result comprises an identifier of target mirror image data. The data downloading module 2046 then sends a data downloading request to the image repository 202 according to the prediction result, so as to download the target image data.
S616: the data download module 2046 receives the data download response.
The data downloading response carries target mirror image data, specifically, data requested to be downloaded by the data downloading module 2046, that is, data really needed by the container application predicted by the data access characteristic model.
S618: the caching module 2047 stores the target image data in the data download response.
S620: the caching module 2047 returns the first copy of data to the file system module 2045.
The cache module 2047 determines a first copy of data from the target data and then returns the first copy of data to the file system module 2045. Wherein the first copy of data is the data requested by the first data access request.
S622: the file system module 2045 returns the first copy of data to the container instance 2044.
The file system module 2045 returns the first copy of data to the container instance 2044 to enable a response to the first data access request.
S624: the container instance 2044 sends an nth data access request to the file system module 2045.
S626: the file system module 2045 forwards the nth data access request to the caching module 2047.
S628: the cache module 2047 returns the nth data to the file system module 2045.
S630: the file system module 2045 returns the nth data to the container instance 2044.
The target image data may include multiple copies of data. When the container instance 2044 continues to send data access requests, the cache module 2047 preferentially checks whether the corresponding data is available locally; if so, it returns the data to the file system module 2045, otherwise the cache module 2047 requests the data from the image repository 202 and then returns the requested data to the file system module 2045. The file system module 2045 then returns the data to the container instance 2044. The container operation node 204 may repeatedly perform the above S624 to S630 (e.g., N-1 times, with N starting from 2) until the cold start of the container application is completed.
S632: the file system module 2045 reports the data access influence factor of the container application and the actual access data of the container application in the current starting process to the mirror image repository 202.
After the container application completes the cold start, the file system module 2045 may also report actual access data in the starting process, and the actual access data may be characterized by a sequence. In this manner, the mirror repository 202 may update the data access characteristic model based on the data access impact factors and the corresponding actual access data, and the updated data access characteristic model may be used for subsequent predictions.
In the above embodiment, the container runtime node 204 launching the container application relies on the management of the image by the image repository 202. Based on the method, the embodiment of the application also provides a mirror image management method. The following describes the mirror image management method provided in the embodiment of the present application in detail from the perspective of the mirror image repository 202.
Referring to fig. 7, a flow chart of a method of image management is shown, the method comprising:
S702: The mirror repository 202 sends an empty shell mirror corresponding to the container application to the container run node 204 and sends the data access characteristic model of the container application to the container run node 204.
Specifically, after obtaining the native container image, the image repository 202 may reconstruct the native container image to obtain a new image, where the new image includes the shell image and the image data. Further, the mirror repository 202 may pre-run a new mirror in different scenarios to obtain actual access data of the container application in the starting process under the condition of multiple sets of data access influence factors. The mirror repository 202 may train the data access characteristic model based on the sets of data access impact factors and the actual access data corresponding to each set of data access impact factors one-to-one.
The mirror repository 202 may send an empty shell mirror corresponding to the container application, and a data access characteristic model of the container application, to the container runtime node 204 of the container application to be started. The mirror image warehouse 202 may send the above shell mirror image and the data access characteristic model at the same time, or may send them sequentially according to a set order, for example, send the shell mirror image first and then send the data access characteristic model.
S704: the mirroring warehouse 202 receives an identification of the target mirrored data sent by the container runtime node 204.
The target mirror data is mirror data of a native container mirror, the target mirror data is expected access data of the container application in a starting process, and an identifier of the target mirror data is determined by the container operation node 204 according to the data access characteristic model. The identifier of the target mirror image data may be a data block number corresponding to the target mirror image data, or a data block address.
S706: the mirror repository 202 sends the target mirror data to the container operation node 204 according to the identifier of the target mirror data.
The mirror repository 202 may search for the target mirror data in the mirror data of the native container mirror according to the identifier of the target mirror data, and when the search is successful, the mirror repository 202 sends the target mirror data to the container operation node 204. The target mirror data is used to start the container application when it is accessed by the instance of the container application that the container running node 204 created from the empty shell image.
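A minimal Python sketch of this repository-side lookup (S704 and S706) is given below; the in-memory dictionary standing in for the mirror data storage module 2026 is an assumption for illustration.

```python
def serve_target_data(image_data_store, requested_ids):
    """Return the requested blocks of the native container image (illustrative sketch)."""
    found, missing = {}, []
    for block_id in requested_ids:
        block = image_data_store.get(block_id)
        if block is None:
            missing.append(block_id)      # identifier not present in the stored mirror data
        else:
            found[block_id] = block       # sent back to the container operation node 204
    return found, missing


# Example lookup against an in-memory stand-in for the mirror data storage module 2026.
blocks, not_found = serve_target_data({"b1": b"...", "b3": b"..."}, ["b1", "b2", "b3"])
```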
The starting method and the mirror image management method of the container application provided in the embodiment of the present application are described in detail above with reference to fig. 2 to fig. 7, and the following describes the apparatus provided in the embodiment of the present application with reference to the accompanying drawings.
Referring to the schematic structural diagram of the container operation node 204 shown in fig. 2, the container operation node 204 includes:
the driving module 2042 is configured to obtain an empty shell mirror image corresponding to the container application from a mirror image warehouse;
a creating module 2043, configured to create a container instance 2044 according to the empty shell mirror image;
the data downloading module 2046 is configured to obtain the data access characteristic model of the container application from the mirror repository, and obtain target mirror image data from mirror image data of a native container mirror image according to the data access characteristic model, where the target mirror image data is expected access data of the container application in a starting process;
the file system module 2045 is configured to provide data required for running to the container instance 2044 according to the target image data, so as to run the container instance 2044, thereby starting the container application.
In some possible implementations, the data downloading module 2046 is specifically configured to:
acquiring a data access influence factor of the container application, wherein the data access influence factor comprises at least one of environment information, running state information and external request information;
determining the identifier of the target mirror image data from the mirror image data of the native container mirror image through the data access characteristic model according to the data access influence factor;
and acquiring the target mirror image data according to the identifier of the target mirror image data.
In some possible implementations, the node 204 further includes:
and the data reporting module is used for acquiring actual access data of the container application in the starting process and sending the data access influence factor of the container application and the actual access data of the container application in the starting process to the mirror image warehouse.
In some possible implementations, the node 204 further includes:
the caching module 2047 is configured to cache the target image data downloaded by the data downloading module 2046, and return the requested data to the file system module 2045 according to the target data.
The container operation node 204 according to the embodiment of the present application may correspond to execute the method described in the embodiment of the present application, and the above and other operations and/or functions of each module/unit of the container operation node 204 are respectively for implementing corresponding processes of each method executed by the container operation node 204 in the embodiments shown in fig. 4 or fig. 6, and are not described herein again for brevity.
Next, referring to the schematic structural diagram of the mirror repository 202 shown in fig. 2, the mirror repository 202 includes:
the empty shell mirror image storage module 2025 is configured to send an empty shell mirror image corresponding to the container application to the container running node 204;
a feature model storage module 2027, configured to send the data access feature model of the container application to the container operation node 204;
the mirror image data storage module 2026 is configured to receive an identifier of target mirror image data sent by the container operation node 204, and send the target mirror image data to the container operation node 204 according to the identifier of the target mirror image data, where the target mirror image data is mirror image data of a native container mirror image, the target mirror image data is expected access data of the container application in a starting process, the identifier of the target mirror image data is determined by the container operation node 204 according to the data access feature model, and the target mirror image data is used for starting the container application when being accessed by an instance of the container application created by the container operation node 204 according to the shell mirror image.
In some possible implementations, the mirror repository 202 further includes:
an optimizing module 2024, configured to obtain actual access data of the container application in a starting process under the condition of the multiple groups of data access influence factors according to the multiple groups of data access influence factors of the container application before sending the data access characteristic model of the container application to the container operation node 204, and train the data access characteristic model according to the multiple groups of data access influence factors and corresponding actual access data, where the data access influence factors include at least one of environment information, operation state information, and external request information.
In some possible implementations, the mirror repository 202 further includes:
the reconfiguration module 2022 is configured to reconfigure the native container mirror image of the container application to obtain a new mirror image, where the new mirror image includes a shell mirror image and mirror image data corresponding to the container application.
In some possible implementations, the mirror repository 202 further includes:
the testing module 2023 is configured to provide a testing method, so that the optimizing module 2024 runs the new mirror image in different scenarios according to the testing method and obtains actual access data of the container application during the startup process under the multiple groups of data access influence factors.
The mirror repository 202 according to the embodiment of the present application may correspond to performing the method described in the embodiment of the present application, and the above and other operations and/or functions of each module/unit of the mirror repository 202 are respectively for implementing corresponding flows of each method performed by the mirror repository 202 in the embodiments shown in fig. 4 or fig. 5, and are not described herein again for brevity.
The mirror repository 202 and the container operation node 204 of the embodiment of the present application are introduced from the perspective of functional modularization, and the mirror repository 202 and the container operation node 204 will be described from the perspective of hardware instantiation.
First, the present embodiment provides a container operation node 204. The container execution node 204 is a node for executing a container application, and the node may be a terminal or a server. The terminal includes, but is not limited to, a desktop, a notebook, a tablet, or a smart phone. The server comprises a local server and a cloud server.
Fig. 8 provides a hardware structure diagram of the container operation node 204, and as shown in fig. 8, the container operation node 204 includes a bus 801, a processor 802, a communication interface 803, and a memory 804. The processor 802, memory 804, and communication interface 803 communicate over a bus 801.
The bus 801 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
The processor 802 may be any one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Micro Processor (MP), a Digital Signal Processor (DSP), and the like.
The communication interface 803 is used for communication with the outside. For example, an empty shell image corresponding to the container application is obtained from the image repository 202, or a data access characteristic model of the container application is obtained from the image repository, target image data is obtained from image data of a native container image, and so on.
The memory 804 may include volatile memory (volatile memory), such as Random Access Memory (RAM). The memory 804 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD).
The memory 804 stores executable code that the processor 802 executes to perform the aforementioned method of launching a container application.
Specifically, in the case of implementing the embodiment shown in fig. 2, and in the case that the modules of the container operation node 204 described in the embodiment of fig. 2 are implemented by software, software or program codes required for executing the functions of the driver module 2042, the creation module 2043, the file system module 2045, the data download module 2046, and the cache module 2047 in fig. 2 are stored in the memory 804. The communication interface 803 receives the shell image corresponding to the container application and the data access characteristic model of the container application sent by the image repository 202, and transmits them to the processor 802 through the bus 801, and the processor 802 executes the program codes corresponding to the modules stored in the memory 804 to execute the aforementioned starting method of the container application.
Second, the embodiment of the present application provides a mirror repository 202. The mirror repository 202 is a repository device that manages mirrors. Similar to the container operation node 204, the warehouse device may be a terminal or a server. The terminal includes, but is not limited to, a desktop, a notebook, a tablet, or a smart phone. The server comprises a local server and a cloud server.
Fig. 9 provides a hardware block diagram of the image repository 202. As shown in fig. 9, the image repository 202 includes a bus 901, a processor 902, a communication interface 903, and a memory 904. The processor 902, memory 904, and communication interface 903 communicate over a bus 901. The specific implementation of the bus 901, the processor 902, the communication interface 903 and the memory 904 can be described with reference to the related contents of the embodiment shown in fig. 8.
The memory 904 has stored therein executable code that the processor 902 executes to perform the aforementioned image management method. Specifically, in the case of implementing the embodiment shown in fig. 2, and in the case where the modules of the image repository 202 described in the embodiment of fig. 2 are implemented by software, software or program codes required to perform the functions of the reconstruction module 2022, the test module 2023, the optimization module 2024, the empty-shell image storage module 2025, the image data storage module 2026, and the feature model storage module 2027 in fig. 2 are stored in the memory 904. The processor 902 executes the codes corresponding to the modules to execute the image management method.
Because containers adapt well to different environments and are easy to port, more and more users choose to deploy container applications on a cloud platform. Further, a user can deploy a container application across multiple cloud platforms according to service requirements. In other words, an instance of the container application may run not only on a local node such as a terminal, but also on a node of a cloud environment (the environment provided by a cloud platform), for example, a cloud host. The following is illustrated with an instance of the container application running in a cloud host.
Fig. 10A provides a system architecture diagram of a management system for a container application, the system 1000 including a cloud platform 1020, a model management platform 1040, and a mirror warehouse 1060. The cloud platform 1020 is connected with the model management platform 1040 and the mirror warehouse 1060 respectively.
The cloud platform 1020 may provide a cloud host service and a cloud hard disk service. The cloud host, which may also be referred to as an Elastic Cloud Server (ECS), is a computing service on the cloud that can be obtained on demand and elastically scaled, and provides a safe, reliable, flexible, and efficient application environment. Based on this, the user may trigger an operation of deploying the container application on the cloud host through the interactive interface provided by the cloud platform 1020, so as to run an instance of the container application on the cloud host. The cloud hard disk, also called Elastic Volume Service (EVS), is a service providing persistent block storage for computing services such as ECS, and can provide high availability, persistence, and stable low-latency performance through data redundancy and cache acceleration. A user may trigger an operation of creating a storage volume through the interactive interface provided by the cloud platform 1020, so as to provide data for the container application through the storage volume or to store data generated by the container application. The model management platform 1040 is configured to provide the data access feature model of the container application, so that the cloud platform 1020 obtains data required for running an instance of the container application from the mirror repository 1060 according to the feature model and starts the container application according to that data.
In particular, the cloud platform 1020 may include a creation module 1022, a file system module 1024, and a data download module 1026. The creating module 1022 may be a background module of the cloud platform 1020, the file system module 1024 may be a module in a cloud host, and the data downloading module 1026 may be a module in a cloud hard disk, for example, a background module in a cloud hard disk.
A creation module 1022 for creating an instance 1023 of the container application. The data downloading module 1026 is configured to obtain a data access characteristic model of the container application, and obtain target mirror image data according to the data access characteristic model, where the target mirror image data is expected access data of the container application in the starting process. The file system module 1024 is configured to provide data required for running to the instance of the container application according to the target image data, so as to run the instance 1023 of the container application, thereby starting the container application.
In some possible implementations, the cloud platform 1020 also includes an interaction module 1021. The interaction module 1021 may present a configuration interface, which may be a Graphical User Interface (GUI) or a Command User Interface (CUI), to the user, and then receive the start parameters of the container application configured by the user configuration interface. The starting parameters of the container application may include parameters of a cloud host that deploys the container application, such as the architecture of the processor and the size of the memory. Further, the start-up parameters of the container application may also include environment variables. Accordingly, the creation module 1022 creates an instance 1023 of the container application according to the startup parameters of the container application.
It should be noted that fig. 10A shows an example in which the creation module 1022 creates the instance 1023 of the container application on a cloud host. In other possible implementation manners of the embodiment of the present application, referring to the system architecture diagram of another management system 1000 of a container application shown in fig. 10B, the creation module 1022 may also create the instance 1023 of the container application on a virtual machine (VM) of a cloud host. For example, the startup parameters of the container application may further include whether to use a virtual machine; when the user selects to use a virtual machine, the creation module 1022 may create the virtual machine on the cloud host according to the startup parameters of the container application and create the instance 1023 of the container application on the virtual machine. It should be noted that, when the creation module 1022 creates the instance 1023 of the container application on the virtual machine of the cloud host, the file system module 1024 is also deployed on the virtual machine of the cloud host.
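For illustration, the startup parameters configured through the interaction module 1021 might resemble the following Python dictionary; the field names and values are assumptions, not the cloud platform's actual API.

```python
startup_params = {
    "host": {"cpu_arch": "x86_64", "memory_gb": 16},   # cloud host specification
    "use_virtual_machine": True,                        # create the instance 1023 on a VM of the host
    "env": {"APP_MODE": "production"},                  # environment variables for the container
    "image": "registry.example.com/app:shell",          # empty shell image reference (placeholder)
}
```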
The cloud hard disk may also include a storage volume 1027, and an instance of the container application mounts the storage volume 1027. The storage volume 1027 generally functions as a data volume. It should be noted that the storage volume 1027 may be created in advance, for example, the storage volume 1027 may be created before the cloud platform 100 creates the instance 1023 of the container application, or the storage volume 1027 required to be used by the instance 1023 of the container application may be created together when the instance 1023 of the container application is created.
In order to improve the starting efficiency of the container application, the data downloading module 1026 in the cloud hard disk may obtain a data access characteristic model of the container application, and obtain the target mirror image data according to the data access characteristic model. The target mirrored data is expected access data of the container application during startup. Wherein the target mirrored data may include at least one data block to be accessed. The data download module 1026 can also write the target mirrored data to the storage volume. When the target mirrored data includes a plurality of data blocks to be accessed, the data download module 1026 preferentially prefetches the plurality of data blocks to be accessed.
Referring specifically to the data prefetching diagram shown in fig. 11, the data downloading module 1026 prefetches the following data blocks according to priority: data block 1, data block 3, data block 5, data block 6. Data download module 1026 may write the data blocks to the storage volume according to a priority. In this manner, the file system module 1024 can obtain the data blocks from the storage volume and return the data blocks to the instance 1023 of the container application, thereby providing the instance 1023 of the container application with data needed to run the instance of the container application, thereby starting the container application. By prefetching the target mirror image data, the method reduces the times of acquiring the data required by the operation of the container application instance from the mirror image warehouse 1060, and improves the efficiency of starting the container application.
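A minimal Python sketch of this priority-ordered prefetch is shown below, assuming hypothetical repository and storage volume objects with fetch_block and write_block methods.

```python
def prefetch_to_volume(repository, volume, prioritized_block_ids):
    # e.g. prioritized_block_ids = [1, 3, 5, 6], highest priority first (see fig. 11)
    for block_id in prioritized_block_ids:
        block = repository.fetch_block(block_id)   # download from the image repository 1060
        volume.write_block(block_id, block)        # later read back by the file system module 1024
```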
Further, the cloud platform 1020 may also include a caching module 1025. The caching module 1025 may be a module local to the cloud host for caching data. The file system module 1024 may first access the cache module 1025, and when the cache module 1025 stores a data block to be accessed, the data block to be accessed may be a data block in data required for running the instance 1023 of the container application, and the file system module 1024 may return the data block to the instance 1023 of the container application to run the instance 1023 of the container application, so as to start the container application. It should be noted that, when the creating module 1022 creates the instance 1023 of the container application on the virtual machine of the cloud host, the caching module 1025 is also deployed on the virtual machine of the cloud host.
When the data block to be accessed is not stored in the cache module 1025, the file system module 1024 may access the storage volume mounted by the instance 1023 of the container application in the cloud hard disk. If the storage volume stores the data block to be accessed, the file system module 1024 may write the data block to the cache module 1025, so that the file system module 1024 can return the data block to the instance 1023 of the container application. In addition, the file system module 1024 may pass requests through so that data blocks are written to the storage volume.
In some possible implementations, when the storage volume 1027 does not store the data block to be accessed, the file system module 1024 may further obtain the data block to be accessed from the image in the image repository 1060, write the data block to the storage volume 1027 in the cloud hard disk by passing the request through to the data volume, and write the data block to the cache module 1025 local to the cloud host.
When all the data blocks included in the data required for running the instance 1023 of the container application have been returned to the instance 1023 so that the instance 1023 of the container application runs, the startup of the container application is complete.
In some possible implementations, the file system module 1024 may further obtain actual access data of the container application during the starting process after the container application is started, where the actual access data is specifically a data sequence formed by multiple data blocks that are actually accessed. Each data block may have an identifier, and the data sequence may be characterized by an identifier sequence formed by the identifiers of the data blocks. The file system module 1024 may then feed back the data sequences to the model management platform 1040 to facilitate the model management platform 1040 to update the data access characteristic model based on the data sequences to optimize performance of the data access characteristic model.
For ease of understanding, the description is made with reference to fig. 11. The file system module 1024 may obtain a data sequence accessed by the container application in the starting process, where the data sequence may specifically be represented as a sequence of 1-3-5-6, and then feed back the data sequence to the model management platform 1040, so that the model management platform 1040 further updates parameters of the data access feature model according to the data sequence, thereby optimizing performance such as accuracy of the data access feature model.
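The following Python sketch illustrates such feedback, assuming a hypothetical HTTP endpoint on the model management platform 1040 and an illustrative JSON payload format; both are assumptions, not the platform's actual interface.

```python
import json
import urllib.request


def report_access_sequence(platform_url, app_id, accessed_block_ids, factors):
    payload = json.dumps({
        "app": app_id,
        "factors": factors,                 # data access influence factors for this start
        "accessed": accessed_block_ids,     # e.g. [1, 3, 5, 6]
    }).encode()
    request = urllib.request.Request(
        platform_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)         # the platform uses the sample to update the feature model
```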
The model management platform 1040 includes a feature model storage module 1042 and a communication module (not shown in the figures). The feature model storage module 1042 is configured to store a data access feature model of a container application, and the communication module is configured to receive a model acquisition request sent by a cloud platform, where the model acquisition request is used to request to acquire the data access feature model of the container application, and then return the data access feature model of the container application to the cloud platform, so that the cloud platform acquires expected access data of the container application in a starting process according to the data access feature model.
In some possible implementations, model management platform 1040 may also include an optimization module 1044. The optimization module 1044 is configured to update the data access characteristic model of the container application according to the data sequence (which may be represented by the identification sequence) fed back by the cloud platform 1020. It should be noted that the data access feature model stored by the feature model storage module 1042 may be preset, for example, the feature model storage module 1042 may preset an open-source data access feature model. The data access characteristic model may also be obtained by the optimization module 1044 through training according to test data in the test set.
Specifically, the mirror repository 1060 may include a test module (not shown in fig. 10A or fig. 10B) that provides a test method, so that the optimization module 1044 pre-runs the test data in the test set according to the test method, obtains the data sequence accessed by the container application during the startup process, and updates the data access characteristic model according to that data sequence. The test method may include one or more of test cases and binary files. The test module may also be a module in the model management platform 1040. Further, when the model management platform 1040 presets the data access characteristic model, neither the mirror repository 1060 nor the model management platform 1040 needs to include the test module.
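As a hedged illustration of this pre-run-and-train step, the sketch below derives an initial lookup model from a test set; run_test_case is a hypothetical driver, and the factor and field names are assumptions rather than anything defined by this application.

```python
# Derive an initial (factors -> access sequence) model by pre-running test data.
from typing import Dict, List, Tuple


def run_test_case(case: dict) -> List[int]:
    # Hypothetical pre-run: start the container with this test case and return the
    # block ids actually accessed; canned data stands in for a real run here.
    return case["observed_blocks"]


def train_initial_model(test_set: List[dict]) -> Dict[Tuple[str, ...], List[int]]:
    model: Dict[Tuple[str, ...], List[int]] = {}
    for case in test_set:
        model[tuple(case["factors"])] = run_test_case(case)  # learn sequence per factor set
    return model


test_set = [
    {"factors": ["x86_64", "cold-start"], "observed_blocks": [1, 3, 5, 6]},
    {"factors": ["aarch64", "cold-start"], "observed_blocks": [2, 4]},
]
initial_model = train_initial_model(test_set)
```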
In addition, similar to the embodiment shown in fig. 2, the image warehouse 1060 may further include a reconfiguration module (not shown in fig. 10A or fig. 10B) for reconfiguring the native container image to obtain a new image. The new image includes an empty shell image and image data, and the content in the new image is addressable. For example, where the empty shell image includes an index from metadata to the image data, addressing can be performed through the index. Accordingly, the target image data prefetched by the data download module 1026 comes from the image data in the new image; for example, the target image data may be one or more data blocks of the image data in the new image.
Fig. 10A and 10B illustrate different architectures of the management system 1000 for the container application. Based on the above-mentioned architecture of the management system 1000 for the container application, the embodiment of the present application further provides a method for starting the container application on the cloud host or the virtual machine of the cloud host. For ease of understanding, the following is illustrated with the container application launched on the cloud host.
Referring to fig. 12, a flow chart of a method for starting a container application is shown, the method comprising:
s1202: the cloud platform 1020 presents a configuration interface to the user.
The configuration interface is an interface for configuring the container application to be started. The configuration interface may be a GUI or a CUI. For convenience of description, the embodiment of the present application takes the configuration interface being a GUI as an example. The configuration interface may carry at least one startup parameter configuration control, where the startup parameter configuration control allows the user to configure the startup parameters of the container application.
S1204: the cloud platform 1020 receives launch parameters of the container application configured by the user through the configuration interface.
Specifically, a user may trigger a configuration operation through a start parameter configuration control on the configuration interface, and the cloud platform 1020 receives a start parameter of the container application configured by the user through the configuration interface in response to the operation of the user. The starting parameters of the container application may include parameters of a cloud host that deploys the container application, such as the architecture of the processor and the size of the memory. Further, the start-up parameters of the container application may also include environment variables.
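One possible in-memory representation of these startup parameters is sketched below; the field names (cpu_arch, memory_mb, env) are assumptions chosen for illustration rather than a schema defined by this application.

```python
# Illustrative container of the startup parameters collected from the configuration interface.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class StartupParams:
    cpu_arch: str                                        # processor architecture of the cloud host
    memory_mb: int                                       # memory size of the cloud host
    env: Dict[str, str] = field(default_factory=dict)    # optional environment variables


params = StartupParams(cpu_arch="aarch64", memory_mb=2048, env={"LOG_LEVEL": "info"})
```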
S1206: the cloud platform 1020 creates an instance 1023 of the container application on the cloud host according to the startup parameters of the container application.
Specifically, the cloud platform 1020 may create an instance 1023 of the container application on the cloud host according to the startup parameters of the container application. In some embodiments, the cloud platform 1020 may also create an instance 1023 of the container application on a virtual machine of a cloud host according to the startup parameters of the container application.
It should be noted that the specific implementation of the cloud platform 1020 creating the instance 1023 of the container application is similar to the embodiment shown in fig. 2; reference may be made to the description of the relevant contents, which is not repeated here.
S1207: the cloud platform 1020 creates a storage volume for the container application on the cloud hard disk.
The storage volume is a storage volume of the container application, and the instance of the container application mounts the storage volume. The cloud platform 1020 may create the storage volume 1027 needed by the instance 1023 of the container application together with the instance 1023 of the container application. That is, S1206 and S1207 may be executed in parallel, or may be executed sequentially.
In some possible implementations, S1207 may not be executed in the method for starting the container application according to the embodiment of the present application. For example, the instance of the container application may also mount a pre-created storage volume, such as a storage volume 1027 created before the cloud platform 1020 creates the instance 1023 of the container application.
S1208: the cloud platform 1020 obtains the data access feature model of the container application from the model management platform 1040.
In particular, the model management platform 1040 includes a feature model storage module 1042, the feature model storage module 1042 storing a data access feature model. The cloud platform 1020 may obtain the data access feature model of the container application from the feature model storage module 1042 through the data download module 1026.
S1210: the cloud platform 1020 obtains target image data from the image repository 1060 according to the data access characteristic model.
The target mirrored data is expected access data of the container application during startup. Wherein the target mirrored data may include at least one data block to be accessed. The data download module 1026 in the cloud platform 1020 may perform prediction through the data access characteristic model to determine a data block included in the target image data, and then obtain the target image data from the image repository 1060 according to an identifier of the data block.
In some possible implementation manners, the data accessed by the container application may be affected by different factors, and the cloud platform 1020 may further obtain a factor that affects the data accessed by the container application, that is, a data access influence factor of the container application, and then obtain the target mirror image data through the data access characteristic model according to the data access influence factor. Specifically, the cloud platform 1020 may determine an identifier of the target mirror data through the data access characteristic model according to the data access influence factor, and then obtain the target mirror data from the mirror data stored in the mirror warehouse 1060 according to the identifier.
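A minimal sketch of this prediction step follows: a lookup-table model maps a group of data access influence factors to the identifiers of the blocks to prefetch. The table-based model is a deliberately simple stand-in; the actual data access characteristic model may be any trained predictor, and the factor values shown are assumptions.

```python
# Map data access influence factors to the identifiers of the target mirror data.
from typing import Dict, List, Tuple


class DataAccessModel:
    def __init__(self, table: Dict[Tuple[str, str, str], List[int]]):
        # (environment, running state, external request) -> predicted block ids
        self._table = table

    def predict(self, env: str, state: str, external_request: str) -> List[int]:
        return self._table.get((env, state, external_request), [])


model = DataAccessModel({("prod", "cold-start", "GET /index"): [1, 3, 5, 6]})
target_ids = model.predict("prod", "cold-start", "GET /index")
# target_ids == [1, 3, 5, 6]; these identifiers would then be used to fetch the
# corresponding blocks from the mirror warehouse 1060.
```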
The specific implementation of S1210 may refer to the description of S408, and is not described herein again.
S1212: the cloud platform 1020 writes the target mirrored data to the storage volume.
In this embodiment, when the target mirror data includes a plurality of data blocks to be accessed, the data downloading module 1026 in the cloud platform 1020 may prefetch the plurality of data blocks according to priority and then write them to the storage volume, so as to facilitate subsequent loading and use.
S1214: the cloud platform 1020 accesses the cache module 1025 of the cloud host. When the data block to be accessed is not stored in the cache module 1025, S1216 is executed; when the data block to be accessed is stored in the cache module 1025, S1222 is executed.
Specifically, the cloud platform 1020 includes a file system module 1024, which may access the cache module 1025 of the cloud host to determine whether the cache module 1025 stores the data block to be accessed. For example, the file system module 1024 may intercept a request (e.g., an access request) and, according to the attributes of the data block in the request, look up whether the cache module 1025 of the cloud host stores a data block matching those attributes, as sketched below.
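The following sketch illustrates such an attribute match; keying the cache by a (path, offset) pair is purely an assumption made for illustration.

```python
# Attribute-keyed lookup in the cloud-host-local cache for an intercepted request.
from typing import Dict, NamedTuple, Optional


class BlockAttr(NamedTuple):
    path: str      # file inside the container image
    offset: int    # block offset within that file


class HostCache:
    def __init__(self):
        self._blocks: Dict[BlockAttr, bytes] = {}

    def lookup(self, attr: BlockAttr) -> Optional[bytes]:
        return self._blocks.get(attr)

    def store(self, attr: BlockAttr, data: bytes) -> None:
        self._blocks[attr] = data


cache = HostCache()
cache.store(BlockAttr("/app/bin/server", 0), b"...")
hit = cache.lookup(BlockAttr("/app/bin/server", 0))         # found -> proceed to S1222
miss = cache.lookup(BlockAttr("/app/lib/libfoo.so", 4096))  # None -> proceed to S1216
```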
When the caching module 1025 of the cloud host does not store a data block that matches the attributes of the data block in the request, the cloud platform 1020 (and in particular the file system module 1024) may perform S1216, thereby continuing to access the storage volume of the container application. When the caching module 1025 of the cloud host stores a data block that matches the attributes of the data block in the request, the cloud platform 1020 (and in particular the file system module 1024) may perform S1222 to return the data block to the instance 1023 of the container application.
S1216: the cloud platform 1020 accesses a storage volume of the container application. When the data block to be accessed is not stored in the storage volume, S1218 is executed; when the storage volume stores the data block to be accessed, S1220 is performed.
Similarly, the cloud platform 1020 may access the storage volume of the container application in a manner similar to accessing the cache module 1025. When the storage volume in the cloud hard disk does not store a data block matching the attributes of the data block in the request, the cloud platform 1020 (specifically, the file system module 1024) may execute S1218. When the storage volume in the cloud hard disk stores a data block matching the attributes of the data block in the request, the cloud platform 1020 (specifically, the file system module 1024) may execute S1220, and then S1222 to return the data block to the instance 1023 of the container application.
S1218: the cloud platform 1020 obtains the data block to be accessed from the mirror repository 1060, writes the data block to the storage volume, passes the request through to the storage volume, and writes the data block to the caching module 1025.
S1220: the cloud platform 1020 passes the request through to the storage volume and writes the data block to the caching module 1025.
S1222: the cloud platform 1020 returns the data block to the instance 1023 of the container application.
The foregoing S1212 to S1222 are one specific implementation in which the cloud platform 1020 provides, according to the target image data, the data required for running to the instance 1023 of the container application so as to run the instance and thereby start the container application. In other possible implementations of this embodiment, the cloud platform 1020 may also provide the data required for running to the instance 1023 of the container application in other manners. For example, the cloud platform 1020 may, after obtaining the target image data from the image repository 1060 according to the data access characteristic model and writing the target image data to the storage volume, write the target image data to the cache at one time.
In some possible implementations, the cloud platform 1020 may also obtain the data sequence that the container application accesses during the startup process. The data sequence is the actual access data of the container application during startup, and may specifically be characterized by the identifiers of the data blocks returned by the cloud platform 1020 to the instance 1023 of the container application. The cloud platform 1020 may then upload the data sequence, for example as an identifier sequence formed by the identifiers of the data blocks. Accordingly, the model management platform 1040 may update the data access characteristic model according to the data sequence (i.e., the actual access data of the container application during the startup process). The updated data access characteristic model has higher accuracy and can provide guidance for subsequent starts of the container application.
Similar to the embodiment shown in fig. 4, when the cloud platform 1020 uploads the data sequence, it may also upload the data access influence factors of the current starting process, such as at least one of the environment information, running state information, and external request information of the current start. Accordingly, the model management platform 1040 may update the data access characteristic model according to the data access influence factors and the corresponding data sequence.
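A hedged sketch of this update step on the model management platform side is shown below; keeping only the most recent sequence per factor group is a simplification, and a real data access characteristic model could instead be retrained on the feedback.

```python
# Update a per-factor-group table of observed startup access sequences.
from typing import Dict, List, Tuple


class FeatureModel:
    def __init__(self):
        self._by_factors: Dict[Tuple[str, ...], List[int]] = {}

    def update(self, factors: Tuple[str, ...], sequence: List[int]) -> None:
        # Feedback from the cloud platform: influence factors plus the actual sequence.
        self._by_factors[factors] = list(sequence)

    def predict(self, factors: Tuple[str, ...]) -> List[int]:
        return self._by_factors.get(factors, [])


model = FeatureModel()
model.update(("prod", "cold-start", "GET /index"), [1, 3, 5, 6])
assert model.predict(("prod", "cold-start", "GET /index")) == [1, 3, 5, 6]
```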
Based on the above description, the embodiments of the present application provide a method for starting a container application. The method uses the data access characteristic model to analyze the data access link in depth, so that data access can be predicted accurately; even when execution paths differ, the data actually required by a cold start of the container application can be prefetched, which improves the cache hit rate, reduces the number of remote data fetches, and improves the cold start efficiency of the container application. In addition, the method prefetches only the data actually needed by the instance of the container application rather than the complete container image, thereby effectively reducing the time consumed by a cold start, reducing the bandwidth usage of the cold start and the storage consumption on the deployment node, improving resource utilization, and avoiding resource waste.
Based on the starting method of the container application shown in the embodiment of fig. 12, the embodiment of the present application further provides a cloud platform 1020 and a model management platform 1040, and the cloud platform 1020 and the model management platform 1040 are introduced below respectively.
Referring to fig. 10A or fig. 10B, a structural diagram of a cloud platform 1020 in the management system 1000 of the container application is shown, where the cloud platform 1020 includes:
a creating module 1022 for creating an instance of the container application;
a data downloading module 1026, configured to obtain a data access characteristic model of the container application, and obtain target mirror image data according to the data access characteristic model, where the target mirror image data is expected access data of the container application in a starting process;
the file system module 1024 is configured to provide data required for running to the instance of the container application according to the target image data, so as to run the instance of the container application, thereby starting the container application.
In some possible implementations, the creating module 1022 is specifically configured to:
creating an instance of the container application on a cloud host; or,
creating an instance of the container application on a virtual machine of the cloud host.
In some possible implementations, the cloud platform 1020 further includes:
an interaction module 1021, configured to present a configuration interface to a user, and receive a start parameter of the container application configured by the user through the configuration interface;
the creating module 1022 is specifically configured to:
and creating the instance of the container application according to the starting parameters of the container application.
In some possible implementations, the file system module 1024 is further configured to:
acquiring a data sequence accessed by the container application in a starting process;
uploading the data sequence, wherein the data sequence is used for updating the data access characteristic model.
In some possible implementations, the cloud platform 1020 further includes:
an interaction module 1021 for receiving a user-configured test set prior to said creating an instance of said container application;
and the data access characteristic model is obtained by training according to the test data in the test set.
The cloud platform 1020 according to the embodiment of the present application may correspondingly execute the method described in the embodiment of the present application, and the above and other operations and/or functions of each module/unit of the cloud platform 1020 are respectively for implementing a corresponding flow of each method executed by the cloud platform 1020 in the embodiment shown in fig. 12, and are not described herein again for brevity.
Continuing next, with reference to the schematic structural diagram of the model management platform 1040 in the management system 1000 of the container application shown in fig. 10A or fig. 10B, the platform 1040 includes:
an optimization module 1044 for creating a data access characteristic model of the container application;
a communication module, configured to receive a model acquisition request sent by a cloud platform, where the model acquisition request is used to request the data access characteristic model of the container application;
the communication module is further configured to return the data access characteristic model of the container application to the cloud platform, so that the cloud platform obtains expected access data of the container application in a starting process according to the data access characteristic model.
In some possible implementations, the communication module is further configured to:
receiving a data sequence uploaded by the cloud platform 1020 and accessed by the container application in a starting process;
the optimization module 1044 is further configured to:
and updating the data access characteristic model according to the data sequence.
In some possible implementations, the model management platform 1040 is deployed at the mirror warehouse 1060.
The model management platform 1040 according to the embodiment of the present application may correspond to performing the method described in the embodiment of the present application, and the above and other operations and/or functions of each module/unit of the model management platform 1040 are respectively for implementing corresponding processes of each method performed by the model management platform 1040 in the embodiment shown in fig. 12, and are not described herein again for brevity.
The cloud platform 1020 and the model management platform 1040 according to the embodiment of the present application are introduced from the perspective of functional modularization, and the cloud platform 1020 and the model management platform 1040 will be described below from the perspective of hardware materialization.
First, an embodiment of the present application provides a cloud platform 1020. The cloud platform 1020 may be a cluster of computers for running a container application. The computer cluster includes at least one computer, which may be a cloud server. The cloud server refers to a server capable of elastically stretching in a cloud environment, such as an ECS.
Fig. 13 provides a hardware block diagram of the cloud platform 1020, and as shown in fig. 13, the cloud platform 1020 includes at least one computer, and each computer includes a bus 1301, a processor 1302, a communication interface 1303, and a memory 1304. Communication among processor 1302, memory 1304, and communication interface 1303 is via bus 1301.
Bus 1301 can be a peripheral component interconnect standard PCI bus or an extended industry standard architecture EISA bus or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 13, but this is not intended to represent only one bus or type of bus.
The processor 1302 may be any one or more of a central processing unit CPU, a graphics processing unit GPU, a microprocessor MP, or a digital signal processor DSP. The communication interface 1303 is used for communication with the outside. For example, a data access characteristic model is obtained, or target mirror data is obtained according to the data access characteristic model, and the like.
The memory 1304 may include volatile memory such as random access memory RAM. The memory 1304 may also include non-volatile memory, such as read only memory ROM, flash memory, hard disk drive HDD, or solid state drive SSD.
The memory 1304 stores executable code that the processor 1302 executes to perform the aforementioned container application launching method.
Specifically, in the case where the embodiment shown in fig. 10A or 10B is implemented and each module described in that embodiment is implemented by software, the software or program code necessary for performing the functions of each module in fig. 10A or 10B is stored in the memory 1304. The processor 1302 executes the program code corresponding to each module stored in the memory 1304 to execute the aforementioned method for starting the container application.
Secondly, the embodiment of the present application provides a model management platform 1040. The model management platform 1040 may be a cluster of computers that manage a data access feature model. Wherein the computer cluster comprises at least one computer.
FIG. 14 provides a hardware block diagram of a model management platform, as shown in FIG. 14, model management platform 1040 includes bus 1401, processor 1402, communication interface 1403, and memory 1404. Communication between the processor 1402, the memory 1404, and the communication interface 1403 occurs via a bus 1401. The specific implementation of the bus 1401, the processor 1402, the communication interface 1403 and the memory 1404 can be described with reference to the embodiment shown in fig. 13.
The memory 1404 stores executable code that the processor 1402 executes to perform the model management method (the method performed by the model management platform 1040 in the embodiment shown in fig. 12). Specifically, in the case where the embodiment shown in fig. 10A or 10B is implemented and each module described in that embodiment is implemented by software, the processor 1402 executes the code corresponding to each module to execute the aforementioned model management method.
Based on the cloud platform 1020 and the model management platform 1040 provided in the embodiment of the present application, the embodiment of the present application further provides a management system 1000 for a container application.
Referring to the schematic architecture of the management system 1000 of the container application shown in fig. 10A or fig. 10B, the management system 1000 of the container application includes a cloud platform 1020 and a model management platform 1040. The cloud platform 1020 is configured to create an instance of a container application, and send a model obtaining request to the model management platform 1040, where the model obtaining request is used to request to obtain a data access feature model of the container application. The model management platform 1040 is configured to return the data access characteristic model of the container application to the cloud platform 1020. The cloud platform 1020 is further configured to obtain target mirror data according to the data access characteristic model, where the target mirror data is expected access data of the container application in a starting process, and then provide data required for running to an instance of the container application according to the target mirror data to run the instance of the container application, so as to start the container application.
In some possible implementations, the management system 1000 of the container application further includes an image warehouse 1060, where the image warehouse 1060 is used to provide an image of the container application, and the cloud platform 1020 may prefetch target image data from the image warehouse 1060 according to the data access characteristic model when acquiring the target image data.
In some possible implementations, the function of the model management platform 1040, such as learning the data access characteristic model of the container application, or optimizing the data access characteristic model, may be implemented by the mirror repository 1060, and accordingly, the management system 1000 of the container application may not include the model management platform 1040.
The embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium may be any available medium that a computer or computer cluster can store data on, or a data storage device, such as a data center, that contains one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), among others. The computer-readable storage medium includes instructions that instruct a computer or computer cluster to execute the aforementioned starting method, image management method, or model management method of the container application.
The embodiment of the present application also provides a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computing device, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, or data center to another website, computer, or data center by wire (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, or microwave).
The computer program product may be a software installation package, which may be downloaded and executed on a computer or computer cluster when any of the aforementioned starting, image management, or model management methods of the container application needs to be used.
The description of the flow or structure corresponding to each of the above drawings has emphasis, and a part not described in detail in a certain flow or structure may refer to the related description of other flows or structures.

Claims (38)

1. A method for starting a container application, the method comprising:
acquiring an empty shell mirror image corresponding to the container application from a mirror image warehouse, and creating an instance of the container application according to the empty shell mirror image;
acquiring a data access characteristic model of the container application from the mirror image warehouse, and acquiring target mirror image data from mirror image data of a primary container mirror image according to the data access characteristic model, wherein the target mirror image data is expected access data of the container application in a starting process;
and providing data required for running to the container application instance according to the target mirror image data so as to run the container application instance, thereby starting the container application.
2. The method of claim 1, wherein obtaining target mirrored data from mirrored data of a native container mirror according to the data access characteristic model comprises:
acquiring a data access influence factor of the container application, wherein the data access influence factor comprises at least one of environment information, running state information and external request information;
determining the identifier of the target mirror image data from the mirror image data of the primary container mirror image through the data access characteristic model according to the data access influence factor;
and acquiring the target mirror image data according to the identifier of the target mirror image data.
3. The method according to claim 1 or 2, wherein after starting the container application, the method further comprises:
acquiring actual access data of the container application in the starting process;
and sending the data access influence factor of the container application and the actual access data of the container application in the starting process to the mirror image warehouse.
4. A method for image management, the method comprising:
sending a shell mirror image corresponding to the container application to a container operation node, and sending a data access characteristic model of the container application to the container operation node;
receiving an identifier of target mirror image data sent by the container operation node, wherein the target mirror image data is mirror image data of a native container mirror image, the target mirror image data is expected access data of the container application in a starting process, and the identifier of the target mirror image data is determined by the container operation node according to the data access characteristic model;
and sending the target mirror image data to the container operation node according to the identification of the target mirror image data, wherein the target mirror image data is used for starting the container application when being accessed by the container operation node according to the instance of the container application created by the empty shell mirror image.
5. The method of claim 4, wherein prior to sending the container application's data access characteristic model to the container running node, the method further comprises:
acquiring actual access data of the container application in a starting process under the condition of the multiple groups of data access influence factors according to the multiple groups of data access influence factors of the container application, wherein the data access influence factors comprise at least one of environment information, running state information and external request information;
and training the data access characteristic model according to the multiple groups of data access influence factors and the corresponding actual access data.
6. A container operation node, characterized in that the node comprises:
the driving module is used for acquiring the empty shell mirror image corresponding to the container application from the mirror image warehouse;
a creation module for creating an instance of the container application from the ghost image;
the data downloading module is used for acquiring a data access characteristic model of the container application from the mirror image warehouse and acquiring target mirror image data from mirror image data of a primary container mirror image according to the data access characteristic model, wherein the target mirror image data is expected access data of the container application in the starting process;
and the file system module is used for providing data required by running to the container application instance according to the target mirror image data so as to run the container application instance, thereby starting the container application.
7. The node of claim 6, wherein the data download module is specifically configured to:
acquiring a data access influence factor of the container application, wherein the data access influence factor comprises at least one of environment information, running state information and external request information;
determining the identifier of the target mirror image data from the mirror image data of the primary container mirror image through the data access characteristic model according to the data access influence factor;
and acquiring the target mirror image data according to the identifier of the target mirror image data.
8. The node according to claim 6 or 7, characterized in that the node further comprises:
and the data reporting module is used for acquiring actual access data of the container application in the starting process and sending the data access influence factor of the container application and the actual access data of the container application in the starting process to the mirror image warehouse.
9. A mirror repository, the mirror repository comprising:
the empty shell mirror image storage module is used for sending the empty shell mirror image corresponding to the container application to the container operation node;
the characteristic model storage module is used for sending the data access characteristic model of the container application to the container operation node;
the mirror image data storage module is used for receiving an identifier of target mirror image data sent by the container operation node, and sending the target mirror image data to the container operation node according to the identifier of the target mirror image data, wherein the target mirror image data is mirror image data of a native container mirror image, the target mirror image data is expected access data of the container application in a starting process, the identifier of the target mirror image data is determined by the container operation node according to the data access characteristic model, and the target mirror image data is used for starting the container application when the target mirror image data is accessed by an instance of the container application created by the container operation node according to the empty shell mirror image.
10. The mirror repository of claim 9, further comprising:
a module configured to: before the data access characteristic model of the container application is sent to the container operation node, obtain actual access data of the container application during the starting process under each of multiple groups of data access influence factors of the container application, and train the data access characteristic model according to the multiple groups of data access influence factors and the corresponding actual access data, wherein the data access influence factors include at least one of environment information, running state information, and external request information.
11. A computer cluster comprising at least one computer, the computer comprising a processor and a memory, the memory having stored therein computer-readable instructions, the processor executing the computer-readable instructions to perform the method of any of claims 1 to 3.
12. A cluster of computers comprising at least one computer, the computer comprising a processor and a memory, the memory having stored therein computer-readable instructions, the processor executing the computer-readable instructions to perform the method of claim 4 or 5.
13. A computer readable storage medium comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any of claims 1 to 3.
14. A computer readable storage medium comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of claim 4 or 5.
15. A computer program product comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 3.
16. A computer program product comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of claim 4 or 5.
17. A method for starting a container application, the method comprising:
creating an instance of the container application;
acquiring a data access characteristic model of the container application, and acquiring target mirror image data according to the data access characteristic model, wherein the target mirror image data is expected access data of the container application in a starting process;
and providing data required for running to the container application instance according to the target mirror image data so as to run the container application instance, thereby starting the container application.
18. The method of claim 17, wherein the creating the instance of the container application comprises:
creating an instance of the container application on a cloud host; or,
creating an instance of the container application on a virtual machine of the cloud host.
19. The method of claim 17 or 18, wherein the creating the instance of the container application comprises:
presenting a configuration interface to a user, and receiving starting parameters of the container application configured by the user through the configuration interface;
and creating the instance of the container application according to the starting parameters of the container application.
20. The method of any one of claims 17 to 19, further comprising:
acquiring a data sequence accessed by the container application in a starting process;
uploading the data sequence, wherein the data sequence is used for updating the data access characteristic model.
21. The method of any of claims 17 to 20, wherein prior to said creating the instance of the container application, the method further comprises:
receiving a test set configured by a user;
and the data access characteristic model is obtained by training according to the test data in the test set.
22. A method of model management, the method comprising:
creating a data access characteristic model of the container application;
receiving a model acquisition request sent by a cloud platform, wherein the model acquisition request is used for requesting to acquire a data access characteristic model of the container application;
and returning the data access characteristic model of the container application to the cloud platform, so that the cloud platform obtains expected access data of the container application in the starting process according to the data access characteristic model.
23. The method of claim 22, further comprising:
receiving a data sequence which is uploaded by the cloud platform and accessed by the container application in the starting process;
and updating the data access characteristic model according to the data sequence.
24. The method of claim 22 or 23, wherein the method is performed by a model management platform deployed at a mirror repository.
25. A cloud platform, the cloud platform comprising:
a creation module to create an instance of the container application;
the data downloading module is used for acquiring a data access characteristic model of the container application and acquiring target mirror image data according to the data access characteristic model, wherein the target mirror image data is expected access data of the container application in the starting process;
and the file system module is used for providing data required by running to the container application instance according to the target mirror image data so as to run the container application instance, thereby starting the container application.
26. The cloud platform of claim 25, wherein the creation module is specifically configured to:
creating an instance of the container application on a cloud host; or,
creating an instance of the container application on a virtual machine of the cloud host.
27. The cloud platform of claim 25 or 26, wherein the cloud platform further comprises:
the interaction module is used for presenting a configuration interface to a user and receiving the starting parameters of the container application configured by the user through the configuration interface;
the creation module is specifically configured to:
and creating an instance of the container application according to the starting parameters of the container application.
28. The cloud platform of any of claims 25 to 27, wherein the file system module is further configured to:
acquiring a data sequence accessed by the container application in a starting process;
uploading the data sequence, wherein the data sequence is used for updating the data access characteristic model.
29. The cloud platform of any one of claims 25 to 28, wherein the cloud platform further comprises:
an interaction module for receiving a user-configured test set prior to said creating an instance of said container application;
and the data access characteristic model is obtained by training according to the test data in the test set.
30. A model management platform, comprising:
the optimization module is used for creating a data access characteristic model of the container application;
the communication module is used for receiving a model acquisition request sent by a cloud platform, wherein the model acquisition request is used for requesting to acquire a data access characteristic model of the container application;
the communication module is further configured to return the data access characteristic model of the container application to the cloud platform, so that the cloud platform obtains expected access data of the container application in a starting process according to the data access characteristic model.
31. The model management platform of claim 30, wherein the communication module is further configured to:
receiving a data sequence which is uploaded by the cloud platform and accessed by the container application in the starting process;
the optimization module is further configured to:
and updating the data access characteristic model according to the data sequence.
32. The model management platform according to claim 30 or 31, wherein the model management platform is deployed in a mirror warehouse.
33. A cluster of computers comprising at least one computer, the computer comprising a processor and a memory, the memory having stored therein computer-readable instructions, the processor executing the computer-readable instructions to perform the method of any of claims 17 to 21.
34. A cluster of computers comprising at least one computer, the computer comprising a processor and a memory, the memory having stored therein computer-readable instructions, the processor executing the computer-readable instructions to perform the method of any one of claims 22 to 24.
35. A computer readable storage medium comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any one of claims 17 to 21.
36. A computer readable storage medium comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any one of claims 22 to 24.
37. A computer program product comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any of claims 17 to 21.
38. A computer program product comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any of claims 22 to 24.
CN202210302434.1A 2021-04-01 2022-03-25 Starting method of container application, mirror image management method and related equipment Pending CN115248721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/083545 WO2022206722A1 (en) 2021-04-01 2022-03-29 Container application starting method, image management method, and related devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021103556279 2021-04-01
CN202110355627 2021-04-01

Publications (1)

Publication Number Publication Date
CN115248721A true CN115248721A (en) 2022-10-28

Family

ID=83698735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210302434.1A Pending CN115248721A (en) 2021-04-01 2022-03-25 Starting method of container application, mirror image management method and related equipment

Country Status (1)

Country Link
CN (1) CN115248721A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665172A (en) * 2022-10-31 2023-01-31 北京凯思昊鹏软件工程技术有限公司 Management system and management method of embedded terminal equipment
CN115665172B (en) * 2022-10-31 2023-04-28 北京凯思昊鹏软件工程技术有限公司 Management system of embedded terminal equipment
CN115756733A (en) * 2023-01-10 2023-03-07 北京数原数字化城市研究中心 Container mirror image calling system and container mirror image calling method
CN116339920A (en) * 2023-03-27 2023-06-27 北京天融信网络安全技术有限公司 Information processing method, device, equipment and medium based on cloud platform
CN116339920B (en) * 2023-03-27 2024-03-15 北京天融信网络安全技术有限公司 Information processing method, device, equipment and medium based on cloud platform

Similar Documents

Publication Publication Date Title
US11403028B2 (en) Virtualized block device backing for virtualization containers
US10922118B2 (en) Distributed container image repository service
EP3414661B1 (en) Efficient live-migration of remotely accessed data
US9898354B2 (en) Operating system layering
JP6621543B2 (en) Automatic update of hybrid applications
CN106227579B (en) Docker container construction method and Docker management console
CN115248721A (en) Starting method of container application, mirror image management method and related equipment
CN106506587B (en) Docker mirror image downloading method based on distributed storage
US20200084274A1 (en) Systems and methods for efficient distribution of stored data objects
KR101793306B1 (en) Virtual application extension points
CN102387197B (en) System and method for streaming virtual machines from a server to a host
US11016785B2 (en) Method and system for mirror image package preparation and application operation
CN111901294A (en) Method for constructing online machine learning project and machine learning system
JP5886447B2 (en) Location independent files
US9881351B2 (en) Remote translation, aggregation and distribution of computer program resources in graphics processing unit emulation
US11625253B2 (en) Application-level runtime environment for executing applications native to mobile devices without full installation
US11023180B2 (en) Method, equipment and system for managing the file system
KR101991537B1 (en) Autonomous network streaming
US10909086B2 (en) File lookup in a distributed file system
US11610155B2 (en) Data processing system and data processing method
CN114465877A (en) Edge cloud migration method and system suitable for wireless self-organizing network environment
KR20220132639A (en) Provides prediction of remote-stored files
WO2021094885A1 (en) Intelligent data pool
WO2022206722A1 (en) Container application starting method, image management method, and related devices
CN107667343B (en) System and method for loading on-demand resources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination