CN117827377A - Container access method, system, electronic equipment and storage medium

Publication number: CN117827377A
Application number: CN202410007884.7A
Authority: CN
Original language: Chinese (zh)
Inventor: 曹旭皓
Assignee: BOE Technology Group Co Ltd
Legal status: Pending
Prior art keywords: container, ttyd, node, algorithm, modified

Abstract

The invention provides a container access method, system, electronic equipment and storage medium, and relates to the technical field of software development. The container access method provided by the invention comprises the following steps: the development platform maps the ttyd executable in the local disk of a node into an AI algorithm container deployed on that node through a configuration file in the AI algorithm container; the AI algorithm container is started and the ttyd executable is launched; and, in response to an AI algorithm container access operation sent by the browser client, the AI algorithm container is accessed through the port provided by the ttyd executable. With this container access method, a user can access the AI algorithm container directly from the browser client through the port provided by the ttyd executable, so the user can still operate on the contents of the container through the browser client without adding ttyd software when the image is built.

Description

Container access method, system, electronic equipment and storage medium
Technical Field
The embodiments of the present invention relate to the technical field of software development, and in particular to a container access method, system, electronic equipment and storage medium.
Background
With the development of artificial intelligence technology, machine learning is becoming more and more popular. Teams in all fields are training and running inference with their own models, and manually building a training environment from scratch offers little reusability, so vendors have started to develop their own AI (Artificial Intelligence) development platforms. A traditional AI development platform can start a Docker image uploaded to a Harbor registry by a user through a Kubernetes system, and mount a data set selected by the user into the container in order to carry out training or inference.
AI development engineers build the Docker image locally and upload it to Harbor. After the image is deployed to a server, they often encounter situations in which the Docker image needs to be modified, such as updating the code version or fixing problems in the code inside the image. For security and usability reasons, AI development engineers who use an AI development platform usually cannot log in to the server directly, and therefore cannot connect to an already-started container directly through command-line tools. The current preferred method is: the AI development engineer adds the ttyd tool to the Docker image when building it locally and starts the ttyd program in the background when the container starts, so that after the container is started the engineer can use a command line in a web browser to access the container.
However, adding the ttyd tool when building the Docker image also presents problems. First, not every AI development engineer knows how to integrate the ttyd tool into their own image, and learning how to do so has a cost. Second, AI development engineers usually focus only on the AI algorithm itself and often forget to package the ttyd tool into the image when it is built. Third, different AI development engineers may package different ttyd versions into their Docker images, so when problems are encountered in use they are difficult to troubleshoot. Finally, from the standpoint of architectural design and system decoupling, the AI algorithm image itself should contain only the AI algorithm and should not contain a web terminal tool such as ttyd; the two should be loosely coupled.
Disclosure of Invention
Embodiments of the present invention provide a container access method, system, electronic device, and storage medium, so as to at least partially solve the problems in the related art.
A first aspect of an embodiment of the present invention provides a container access method, where the method includes:
the development platform maps a ttyd executable in a local disk of a node into an AI algorithm container through a configuration file in the AI algorithm container deployed on the node;
the development platform starts the AI algorithm container and launches the ttyd executable;
and the development platform, in response to an AI algorithm container access operation sent by a browser client, accesses the AI algorithm container through a port provided by the ttyd executable.
Optionally, the node is a k8s node, a k8s master is deployed in the development platform, and the k8s master is connected with a plurality of k8s nodes; the method further comprises the steps of:
the k8s master parses and runs the ttyd DaemonSet configuration file to distribute the ttyd DaemonSet configuration file to each k8s node;
each k8s node deploys a ttyd container on that k8s node based on the ttyd DaemonSet configuration file, wherein the ttyd container stores the ttyd executable;
each k8s node maps the ttyd executable in the ttyd container to the local disk of the k8s node.
Optionally, the development platform starting the AI algorithm container and launching the ttyd executable includes:
the development platform, in response to a container start operation sent by the browser client, obtains a preset ttyd start parameter;
the development platform appends the ttyd start parameter to a container start command to obtain a container start command carrying the ttyd start parameter;
and the development platform executes the container start command carrying the ttyd start parameter, so as to start the AI algorithm container and launch the ttyd executable.
Optionally, the method further comprises:
the development platform, in response to an AI algorithm container modification operation sent by the browser client, returns the port number corresponding to the AI algorithm container to be modified to the browser client;
the browser client displays, based on the port number, a modification page corresponding to the AI algorithm container to be modified, wherein the modification page is used by a user to modify the AI algorithm container;
and the node where the modified AI algorithm container is located, in response to an image upload request sent by the browser client, runs the image upload service executable, packages the modified AI algorithm container into an image, and uploads the image to the Harbor server.
Optionally, the nodes are k8s nodes in a first Kubernetes cluster, the k8s master of the first Kubernetes cluster is deployed in the development platform, and the k8s master is connected with each k8s node; the method further comprises the steps of:
the k8s master parses and runs the DaemonSet configuration file of the image upload service to distribute the DaemonSet configuration file of the image upload service to each k8s node;
each k8s node deploys an image upload service container on that k8s node based on the DaemonSet configuration file of the image upload service, wherein the image upload service container stores the image upload service executable.
Optionally, the node where the modified AI algorithm container is located, in response to an image upload request sent by the browser client, running the image upload service executable, packaging the modified AI algorithm container into an image, and uploading the image to the Harbor server, includes:
the node where the modified AI algorithm container is located receives, through an http interface, the image upload request sent by the browser client;
the node where the modified AI algorithm container is located determines the modified AI algorithm container based on the container ID in the image upload request;
the node where the modified AI algorithm container is located starts a sub-thread and logs in to the Harbor server through the sub-thread;
the node where the modified AI algorithm container is located packages the modified AI algorithm container into an image and pushes the image to the Harbor server;
and the node where the modified AI algorithm container is located deletes the local modified AI algorithm container and returns the packaging status to the browser client.
Optionally, the DaemonSet configuration file of the image upload service specifies a Docker image in the Harbor server; the method further comprises the steps of:
the development platform builds the image upload service executable into a Docker image;
the development platform logs in to the Harbor server;
and the development platform pushes the Docker image to the Harbor server.
Optionally, the method further comprises:
the Harbor server, in response to a download request from a k8s node in a second Kubernetes cluster, sends the image corresponding to the modified AI algorithm container to the k8s node in the second Kubernetes cluster;
and the k8s node in the second Kubernetes cluster locally deploys the modified AI algorithm container based on the image corresponding to the modified AI algorithm container.
Optionally, after the node where the modified AI algorithm container is located packages the modified AI algorithm container into an image and uploads the image to the Harbor server, the method further includes:
the node where the modified AI algorithm container is located downloads the image of the modified AI algorithm container from the Harbor server;
and the development platform executes a container start command carrying the ttyd start parameter, so as to start the modified AI algorithm container and launch the ttyd executable.
A second aspect of an embodiment of the present invention provides a container access system, the system comprising:
a first mapping module, deployed on the development platform and configured to map a ttyd executable in a local disk of a node into an AI algorithm container through a configuration file in the AI algorithm container deployed on the node;
a container start module, deployed on the development platform and configured to start the AI algorithm container and launch the ttyd executable;
and a container access module, deployed on the development platform and configured to respond to an AI algorithm container access operation sent by the browser client and access the AI algorithm container through a port provided by the ttyd executable.
Optionally, the node is a k8s node, a k8s master is deployed in the development platform, and the k8s master is connected with a plurality of k8s nodes; the system further comprises:
a first parsing module, deployed in the k8s master and configured to parse and run the ttyd DaemonSet configuration file so as to distribute the ttyd DaemonSet configuration file to each k8s node;
a first deployment module, deployed on each k8s node and configured to deploy a ttyd container on the k8s node based on the ttyd DaemonSet configuration file, wherein the ttyd container stores the ttyd executable;
and a second mapping module, deployed on each k8s node and configured to map the ttyd executable in the ttyd container to the local disk of the k8s node.
Optionally, the container start module includes:
a parameter obtaining module, deployed on the development platform and configured to respond to the container start operation sent by the browser client and obtain the preset ttyd start parameter;
a command generation module, deployed on the development platform and configured to append the ttyd start parameter to a container start command to obtain the container start command carrying the ttyd start parameter;
and a first start module, deployed on the development platform and configured to execute the container start command carrying the ttyd start parameter, so as to start the AI algorithm container and launch the ttyd executable.
Optionally, the system further comprises:
a port number obtaining module, deployed on the development platform and configured to respond to the AI algorithm container modification operation sent by the browser client and return the port number corresponding to the AI algorithm container to be modified to the browser client;
a page display module, deployed on the browser client and configured to display, based on the port number, a modification page corresponding to the AI algorithm container to be modified, the modification page being used by a user to modify the AI algorithm container;
and a container modification module, deployed on the node where the modified AI algorithm container is located and configured to respond to the image upload request sent by the browser client, run the image upload service executable, and package the modified AI algorithm container into an image to be uploaded to the Harbor server.
Optionally, the nodes are k8s nodes in a first Kubernetes cluster, the k8s master of the first Kubernetes cluster is deployed in the development platform, and the k8s master is connected with each k8s node; the system further comprises:
a second parsing module, deployed in the k8s master and configured to parse and run the DaemonSet configuration file of the image upload service so as to distribute the DaemonSet configuration file of the image upload service to each k8s node;
and a second deployment module, deployed on each k8s node and configured to deploy an image upload service container on the k8s node based on the DaemonSet configuration file of the image upload service, the image upload service container storing the image upload service executable.
Optionally, the container modification module includes:
a request receiving module, deployed on the node where the modified AI algorithm container is located and configured to receive, through an http interface, the image upload request sent by the browser client;
a container determining module, deployed on the node where the modified AI algorithm container is located and configured to determine the modified AI algorithm container based on the container ID in the image upload request;
a first login module, deployed on the node where the modified AI algorithm container is located and configured to start a sub-thread and log in to the Harbor server through the sub-thread;
an image packaging module, deployed on the node where the modified AI algorithm container is located and configured to package the modified AI algorithm container into an image and push the image to the Harbor server;
and a status return module, deployed on the node where the modified AI algorithm container is located and configured to delete the local modified AI algorithm container and return the packaging status to the browser client.
Optionally, the DaemonSet configuration file of the image upload service specifies a Docker image in the Harbor server; the system further comprises:
an image building module, deployed on the development platform and configured to build the image upload service executable into a Docker image;
a second login module, deployed on the development platform and configured to log in to the Harbor server;
and an image upload module, deployed on the development platform and configured to push the Docker image to the Harbor server.
Optionally, the system further comprises:
an image sending module, deployed on the Harbor server and configured to respond to a download request from a k8s node in a second Kubernetes cluster and send the image corresponding to the modified AI algorithm container to the k8s node in the second Kubernetes cluster;
and an image deployment module, deployed on the k8s node in the second Kubernetes cluster and configured to locally deploy the modified AI algorithm container based on the image corresponding to the modified AI algorithm container.
Optionally, the system further comprises:
an image download module, deployed on the node where the modified AI algorithm container is located and configured to download the image of the modified AI algorithm container from the Harbor server after the node where the modified AI algorithm container is located has packaged the modified AI algorithm container into an image and uploaded it to the Harbor server;
and a second start module, deployed on the development platform and configured to execute a container start command carrying the ttyd start parameter, so as to start the modified AI algorithm container and launch the ttyd executable.
A third aspect of an embodiment of the present invention provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the container access method according to the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the container access method of the first aspect of the embodiments of the present invention.
In this embodiment, the development platform may map the ttyd executable in the node's local disk into the AI algorithm container of that node. When a user starts the AI algorithm container through the browser client, the ttyd executable is launched together with the AI algorithm container, so the user can access the AI algorithm container directly from the browser client through the port provided by the ttyd executable. As a result, the user does not need to add ttyd or other non-algorithm software when building the image and can still operate on the contents of the container through the browser client, which reduces the complexity of building images for AI development engineers, reduces the coupling between the Docker image and the AI development platform, and improves the working efficiency of AI development engineers.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a browser interacting with a container as provided in the related art;
FIG. 2 is a schematic diagram of another way a browser interacts with a container as provided in the related art;
FIG. 3 is a flow chart of a container access method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an image building and uploading method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a container access web terminal according to an embodiment of the present invention;
FIG. 6 is a block diagram of a container access system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
At present, when the code in a Docker image has problems or the image otherwise needs to be modified, for example when the code version is updated, two schemes are typically adopted:
Scheme 1: the AI development engineer packages the ttyd tool into the Docker image when building it, and the user directly accesses, through a browser, the port that ttyd opens in the container. As shown in fig. 1, fig. 1 is a schematic diagram of interaction between a browser and a container provided in the related art. In fig. 1, the browser and the container interact directly through a websocket, and the user operates the browser to access the port opened by ttyd in the container, thereby accessing and modifying the container.
Scheme 2: the Docker-native remote API is used to interact with the container, and a websocket connection is kept between the browser and the back-end server. As shown in fig. 2, fig. 2 is a schematic diagram of another way a browser interacts with a container provided in the related art. In fig. 2, the browser transmits the command line entered by the user to the back-end service through the websocket, and the back-end service forwards it to the Docker container through the Docker remote API. After the Docker container has processed the command, the result is returned to the back-end service, and finally the back-end server returns the result to the browser through the websocket.
Therefore, in order to at least partially solve one or more of the above problems and other potential problems, an embodiment of the present invention proposes a container access method. A ttyd executable is stored locally on each node, and the development platform maps the ttyd executable on the node into the AI algorithm container of that node. When a user starts the AI algorithm container through the browser client of the development platform, the ttyd executable is launched together with the AI algorithm container, so the user can access the AI algorithm container directly from the browser client through the port provided by the ttyd executable. As a result, the user does not need to add ttyd or other non-algorithm software when building the image and can still operate on the contents of the container through the browser client, which reduces the complexity of building images for AI development engineers, reduces the coupling between the Docker image and the AI development platform, and improves the working efficiency of AI development engineers.
TABLE 1
Please refer to Table 1, which compares the container access method proposed in this embodiment with schemes 1 and 2 above. As can be seen from Table 1, scheme 1 requires the user to put the ttyd tool into the Docker image, but AI algorithm development engineers usually focus only on the algorithm itself and often forget to package the ttyd tool into the image when building it; secondly, different AI algorithm development engineers may package different ttyd versions into their own images, so problems encountered during use are difficult to troubleshoot; finally, from the standpoint of architectural design and system decoupling, the AI algorithm image itself should contain only the AI algorithm and should not contain a web terminal tool such as ttyd.
In scheme 2, data interaction requires the data to be transmitted to the back-end service first and then forwarded to the Docker container by the back-end service, so the transmission link is long and the transmission efficiency is low.
Therefore, the container access method provided by the embodiment of the invention has obvious advantages in usability, transmission efficiency and manageability. Specific examples of the present scheme are described in more detail below with reference to the accompanying drawings.
Referring to fig. 3, a flowchart of a container access method according to an embodiment of the present invention is shown, where the container access method may include the following steps:
S101, the development platform maps the ttyd executable in the local disk of a node into the AI algorithm container through a configuration file in the AI algorithm container deployed on the node.
The development platform in this embodiment serves as the back end of the AI development platform and implements multiple functions of the AI development platform; the nodes may be physical machines or computers. In this embodiment, a ttyd executable exists in the local disk of each node, where ttyd is a web terminal tool for sharing a terminal through a web page. An AI algorithm container is deployed on each node, and a configuration file exists in the AI algorithm container; the configuration file is used to map the ttyd executable from the node's local disk into the AI algorithm container of the node. The local disk in this embodiment is the disk of the host where the node is located, i.e. the disk of the physical machine.
For each node, the development platform can map the ttyd executable in the local disk of the node into the AI algorithm container deployed on the node through the configuration file in that AI algorithm container, so that users do not need to add non-algorithm software such as ttyd when building the AI algorithm image, and the ttyd executable can still be made available inside the AI algorithm container.
In an optional implementation, the development platform may configure the start parameters of the AI algorithm container and append a ttyd mapping parameter to the container start command, so that when the development platform starts the algorithm container based on the container start command, it maps the ttyd executable in the local disk of the node into the AI algorithm container based on the configuration file in the AI algorithm container.
S102, the development platform starts the AI algorithm container and launches the ttyd executable.
In this embodiment, the development platform may launch the ttyd executable in the AI algorithm container while starting the AI algorithm container on the node. After the ttyd executable is launched, it provides a port, and this port is used to access the AI algorithm container.
S103, the development platform, in response to an AI algorithm container access operation sent by the browser client, accesses the AI algorithm container through the port provided by the ttyd executable.
In this embodiment, the user may select and access the AI algorithm container through the browser client. The browser client in this embodiment is the front-end browser of the AI development platform and is mainly configured to provide a visual interface, respond to various operations of the user, generate a corresponding instruction or request, and send the instruction or request to the development platform of this embodiment (i.e. the back-end service of the AI development platform).
The development platform can respond to the AI algorithm container access operation sent by the browser client and access the AI algorithm container through the port provided by the ttyd executable, so as to allow subsequent modification of the AI algorithm container.
In this embodiment, the development platform may map the ttyd executable on the local node into the AI algorithm container of the node. When a user (for example, a developer or a B-end client) starts the AI algorithm container through the browser client, the development platform launches the ttyd executable in the AI algorithm container while starting the AI algorithm container, so the user can access the AI algorithm container directly from the browser client through the port provided by the ttyd executable. The user therefore does not need to add ttyd or other non-algorithm software when building the image and can still operate on the contents of the container through the browser client, which reduces the complexity of building images for AI development engineers, reduces the coupling between the Docker image and the AI development platform, and improves the working efficiency of AI development engineers.
In combination with the above embodiments, an embodiment of the present invention provides a container access method, where the method may further include the following steps in addition to the above steps:
S201, the k8s master parses and runs the ttyd DaemonSet configuration file to distribute the ttyd DaemonSet configuration file to each k8s node.
The nodes in this embodiment are k8s nodes, that is, Kubernetes nodes; a k8s master, that is, a Kubernetes master, is deployed in the development platform, and the k8s master is connected to a plurality of k8s nodes.
The ttyd DaemonSet configuration file, used for deploying a ttyd container on each k8s node, may be set in advance. In this embodiment, the preconfigured ttyd DaemonSet configuration file may be parsed and run on the k8s master so as to automatically distribute the ttyd DaemonSet configuration file to each k8s node in communication with the k8s master.
S202, each k8s node deploys a ttyd container on itself based on the ttyd DaemonSet configuration file, wherein the ttyd container stores the ttyd executable.
In this embodiment, after each k8s node receives the ttyd DaemonSet configuration file distributed by the k8s master, a ttyd container may be deployed on that k8s node based on the ttyd DaemonSet configuration file, where the ttyd container stores the ttyd executable.
S203, each k8s node maps the ttyd executable in the ttyd container to the local disk of that k8s node.
In this embodiment, each k8s node may map the ttyd executable in the ttyd container to the local disk of the node through the k8s mapping mechanism, so that the ttyd executable exists in the local disk of each node.
In this embodiment, the k8s master in the k8s cluster parses and runs the ttyd DaemonSet configuration file, so that each k8s node in the whole k8s cluster automatically adds the ttyd container; even if a k8s node is shut down and restarted, or a new k8s node is added, each k8s node in the k8s cluster automatically deploys the ttyd container.
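As an illustration of steps S201 to S203, the following sketch creates a ttyd DaemonSet with the Kubernetes Python client so that every node runs a ttyd container and exposes the ttyd executable on its local disk through a hostPath volume. The image name, namespace and paths (e.g. /opt/ttyd) are assumptions for illustration only; the patent does not specify them, and in practice the DaemonSet would normally be applied directly as a YAML configuration file.

```python
# Minimal sketch of deploying a ttyd DaemonSet with the Kubernetes Python client.
# Image name, namespace and paths are illustrative assumptions, not values from the patent.
from kubernetes import client, config

TTYD_DAEMONSET = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "ttyd", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "ttyd"}},
        "template": {
            "metadata": {"labels": {"app": "ttyd"}},
            "spec": {
                "containers": [{
                    "name": "ttyd",
                    "image": "harbor.example.com/tools/ttyd:latest",  # assumed image
                    # Copy the ttyd binary from the container onto the node's local disk,
                    # so AI algorithm containers on this node can later mount it.
                    "command": ["/bin/sh", "-c",
                                "cp /usr/bin/ttyd /host-bin/ttyd && sleep infinity"],
                    "volumeMounts": [{"name": "host-bin", "mountPath": "/host-bin"}],
                }],
                "volumes": [{
                    "name": "host-bin",
                    "hostPath": {"path": "/opt/ttyd", "type": "DirectoryOrCreate"},
                }],
            },
        },
    },
}

def deploy_ttyd_daemonset() -> None:
    """Parse and run the ttyd DaemonSet so every k8s node gets a ttyd container."""
    config.load_kube_config()  # executed against the k8s master
    apps = client.AppsV1Api()
    apps.create_namespaced_daemon_set(namespace="kube-system", body=TTYD_DAEMONSET)

if __name__ == "__main__":
    deploy_ttyd_daemonset()
```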
In combination with the above embodiments, an embodiment of the present invention provides a container access method, in which step S102 may specifically include sub-steps S301 to S303:
S301: the development platform, in response to a container start operation sent by the browser client, obtains the preset ttyd start parameter.
In this embodiment, a user may perform a container start operation at the browser client to select an AI algorithm container to be started; the browser client sends the container start operation to the development platform in response to the user's operation, and the development platform obtains the preset ttyd start parameter in response to the container start operation sent by the browser client. In this embodiment, a ttyd start parameter may be preconfigured, where the ttyd start parameter includes a port number so as to designate which port the ttyd executable uses. For example, the ttyd start parameter may be: && (ttyd -p 8080 bash &).
S302: the development platform appends the ttyd start parameter to a container start command to obtain the container start command carrying the ttyd start parameter.
S303: the development platform executes the container start command carrying the ttyd start parameter, so as to start the AI algorithm container and launch the ttyd executable.
In this embodiment, the development platform may append the obtained ttyd start parameter to the container start command to obtain a container start command carrying the ttyd start parameter. The development platform then executes the container start command carrying the ttyd start parameter, so as to start the AI algorithm container selected by the user and launch the ttyd executable in that AI algorithm container.
Further, when the user starts the AI algorithm container through the browser client, the ttyd mapping parameter and the ttyd start parameter are both appended to the container start command, so that when the container start command carrying the ttyd mapping parameter and the ttyd start parameter is executed, the ttyd executable on the local node is mapped into the AI algorithm container as the container starts, and the ttyd executable in the AI algorithm container is launched at the same time.
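A minimal sketch of how such a start command could look when the container is started through the Docker SDK for Python is shown below. The image name, host path and algorithm command are assumptions for illustration; in the patent the container is actually started through Kubernetes, where the same idea would be expressed as a hostPath volume plus an amended container command in the pod specification.

```python
# Minimal sketch: start an AI algorithm container with the ttyd mapping and start
# parameters appended, using the Docker SDK for Python (docker-py).
# Image name, host path and algorithm command are illustrative assumptions.
import docker

def start_algorithm_container_with_ttyd(image: str = "harbor.example.com/ai/algo:latest"):
    cli = docker.from_env()
    return cli.containers.run(
        image,
        # Original algorithm command plus the appended ttyd start parameter in the
        # "&& (ttyd -p 8080 bash &)" style: run ttyd in the background, then the algorithm.
        command=["/bin/sh", "-c", "/opt/ttyd/ttyd -p 8080 bash & python /workspace/train.py"],
        volumes={
            # ttyd mapping parameter: map the ttyd executable from the node's local disk
            # into the AI algorithm container, so the image itself never has to ship ttyd.
            "/opt/ttyd": {"bind": "/opt/ttyd", "mode": "ro"},
        },
        ports={"8080/tcp": 8080},  # expose the ttyd web terminal port
        detach=True,
    )

if __name__ == "__main__":
    container = start_algorithm_container_with_ttyd()
    print("started:", container.short_id)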
In combination with the above embodiments, an embodiment of the present invention provides a container access method, where the method may further include the following steps in addition to the above steps:
S401: the development platform, in response to an AI algorithm container modification operation sent by the browser client, returns the port number corresponding to the AI algorithm container to be modified to the browser client.
In this embodiment, the development platform may be used by a user (such as a developer, an AI algorithm development engineer, or a B-end user) to modify an AI algorithm container. The user may select, at the browser client, an AI algorithm container to be modified, where the AI algorithm container to be modified is an AI algorithm container that has already been deployed. The browser client responds to the user's AI algorithm container modification operation and sends the operation to the development platform to request the port number of the AI algorithm container to be modified, and the development platform, in response to the AI algorithm container modification operation sent by the browser client, returns the port number corresponding to the AI algorithm container to be modified to the browser client.
S402: the browser client displays, based on the port number, a modification page corresponding to the AI algorithm container to be modified, where the modification page is used by the user to modify the AI algorithm container.
In this embodiment, after the browser client obtains the port number corresponding to the AI algorithm container to be modified, it may display the modification page corresponding to that container based on the port number. For example, the browser client may access the port corresponding to the AI algorithm container to be modified in a newly opened web page, that is, enter the web-based control terminal, and display the modification page corresponding to the AI algorithm container to be modified. The modification page of this embodiment is used by the user to modify the AI algorithm container; the user can modify the AI algorithm container to be modified in the modification page.
S403: the node where the modified AI algorithm container is located, in response to the image upload request sent by the browser client, runs the image upload service executable, packages the modified AI algorithm container into an image, and uploads the image to the Harbor server.
In this embodiment, after the user modifies the container through the modification page, the development platform obtains a modified AI algorithm container. The user may click an upload-image button corresponding to the modified AI algorithm container at the browser client, and the browser client, in response to the user's click operation, sends an image upload request to the node where the modified AI algorithm container is located. The node where the modified AI algorithm container is located may run the image upload service executable on that node, package the modified AI algorithm container into an image, and upload the image to the Harbor server.
In this embodiment, after the user (such as an AI algorithm development engineer) finishes debugging and modifying the algorithm container, the image upload service provided by the AI development platform packages the Kubernetes container into an image and uploads the packaged image to the Harbor repository, so that the user can later deploy and reuse the packaged image directly.
In combination with the above embodiments, another embodiment provides a container access method, in which step S403 may specifically include sub-steps S501 to S505:
S501: the node where the modified AI algorithm container is located receives, through an http interface, the image upload request sent by the browser client.
In this embodiment, the image upload service executable on the node where the modified AI algorithm container is located may communicate with the development platform through the http protocol. After the development platform receives the image upload request sent by the browser client, it may forward the image upload request to the image upload service executable through an http interface, so that the image upload service executable receives the image upload request, where the image upload request at least includes: the container ID (i.e. the name of the modified AI algorithm container), the packaged image name, and the image tag.
S502: the node where the modified AI algorithm container is located determines the modified AI algorithm container based on the container ID in the image upload request.
In this embodiment, the image upload service executable on the node where the modified AI algorithm container is located may determine the modified AI algorithm container based on the container ID in the image upload request.
S503: the node where the modified AI algorithm container is located starts a sub-thread and logs in to the Harbor server through the sub-thread.
In this embodiment, since the AI algorithm container is generally large, the image upload service executable on the node where the modified AI algorithm container is located may start a sub-thread and log in to the Harbor server through that sub-thread.
S504: the node where the modified AI algorithm container is located packages the modified AI algorithm container into an image and pushes the image to the Harbor server.
In this embodiment, after the image upload service executable has logged in to the Harbor server, the image upload service executable on the node where the modified AI algorithm container is located may package the modified AI algorithm container into an image and push the packaged image to the Harbor server.
S505: the node where the modified AI algorithm container is located deletes the local modified AI algorithm container and returns the packaging status to the browser client.
In this embodiment, after pushing the packaged image to the Harbor server, the image upload service executable may delete the modified AI algorithm container local to the node, then communicate with the development platform through the http interface based on the http protocol, and return the packaging status to the browser client through the development platform. The development platform can then modify the status of the corresponding image in the database according to the packaging status parameter sent by the image upload service executable, for example changing "in progress" to "success" or "failure".
In this embodiment, the modified AI algorithm container can be packaged into an image by the image upload service on the node where it is located and uploaded to the Harbor server, so that the modified AI algorithm container can be deployed and reused directly.
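The packaging flow of S502 to S505 could be sketched with the Docker SDK for Python roughly as follows. The registry address, project path and credentials are assumptions for illustration; the patent describes the same operations (login, commit/package, push, delete) being performed by a Java-based image upload service.

```python
# Minimal sketch of S502-S505: find the container, log in to Harbor, commit it to an
# image, push it, and clean up locally. Registry address and credentials are assumptions.
import docker

def package_and_push(container_id: str, image_name: str, image_tag: str,
                     registry: str = "harbor.example.com",
                     user: str = "admin", password: str = "secret") -> str:
    cli = docker.from_env()
    repository = f"{registry}/ai-images/{image_name}"

    container = cli.containers.get(container_id)                     # S502: locate the modified container
    cli.login(username=user, password=password, registry=registry)   # S503: log in to Harbor

    container.commit(repository=repository, tag=image_tag)           # S504: package container into an image
    cli.images.push(repository, tag=image_tag)                       #        and push it to the Harbor server

    cli.images.remove(f"{repository}:{image_tag}")                   # S505: free local disk space
    return "success"
```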
In one embodiment, please refer to fig. 4, which is a flowchart of an image building and uploading method according to an embodiment of the present invention. In fig. 4, the back-end service of the AI development platform may communicate with the image upload service deployed on each node through an http interface based on the http protocol. The user can click a "commit image" button at the browser client of the AI development platform to issue an image upload instruction; the back-end service of the AI development platform creates an "image being built" record in the database based on the image upload instruction sent by the browser client, displays "image being built" at the browser client based on that record, and generates an image upload request. The back-end service of the AI development platform then communicates with the image upload service on the node where the modified container is located through the http protocol and sends the image upload request to the image upload service of the corresponding node, where the image upload request (i.e. the Http request in fig. 4) includes: the AI training task name (i.e. the name of the modified AI algorithm container), the image name, and the image tag.
After receiving the image upload request, the image upload service can locate the corresponding container on the local node based on the AI training task name, then start a new sub-thread, return a notification that the request was received successfully to the AI development platform, and log in to the Harbor server. After a successful login, the located container is committed and packaged into a Docker image, and the packaged image is uploaded to the Harbor server. After the upload succeeds, the local image can be deleted first, and the image-building status interface of the AI development platform is called back to inform the AI development platform whether the image build succeeded or failed, so that the back-end service of the AI development platform can set the image-building status in the database to success or failure based on the received status.
In combination with the above embodiments, an embodiment of the present invention provides a container access method, which may include steps S601 and S602 in addition to the above steps:
S601, the k8s master parses and runs the DaemonSet configuration file of the image upload service to distribute the DaemonSet configuration file of the image upload service to each k8s node.
The nodes in this embodiment are k8s nodes in the first Kubernetes cluster, the k8s master of the first Kubernetes cluster is deployed in the development platform, and the k8s master is connected to each k8s node in the first Kubernetes cluster.
In this embodiment, the preconfigured DaemonSet configuration file of the image upload service may be parsed and run on the k8s master so as to automatically distribute it to each k8s node in communication with the k8s master. The DaemonSet configuration file of the image upload service can be set in advance and is used for deploying an image upload service container on each k8s node.
S602, each k8s node deploys an image upload service container on itself based on the DaemonSet configuration file of the image upload service, wherein the image upload service executable is stored in the image upload service container.
In this embodiment, after each k8s node receives the DaemonSet configuration file of the image upload service distributed by the k8s master, an image upload service container may be deployed on that k8s node based on the configuration file, where the image upload service executable is stored.
In this embodiment, the k8s master in the first Kubernetes cluster parses and runs the DaemonSet configuration file of the image upload service, so that each k8s node in the whole first Kubernetes cluster automatically adds the image upload service container; even if a k8s node is shut down and restarted, or a new k8s node is added, each k8s node in the cluster automatically deploys the image upload service container, so as to implement image building and uploading for modified AI algorithm containers.
In an alternative embodiment, before setting the DaemonSet configuration file of the image upload service, the image upload service may first be written, for example in Java, providing an http interface for receiving image upload requests. The image upload service can be written according to the following steps: 1. since k8s appends a random string suffix to the container name when generating the container, the ID of the container to be re-packaged (i.e. the name of the modified AI algorithm container) needs to be found by executing a container-name search command; 2. since an AI algorithm image is typically large, for example at least 4 GB, transferring the image to the Harbor repository can be time-consuming, so a new sub-thread needs to be started to do the following: in the sub-thread, first execute the docker login command to log in to the server corresponding to Harbor; then package the container locally into an image; then push the locally generated image to the Harbor server; and finally delete the locally built image so as not to occupy hard disk space. After the local deletion is finished, the packaging-status interface of the AI development platform is called back to inform the development platform of the packaging status: if any step from logging in to the Harbor server to deleting the local image fails, a packaging-failure status is returned; otherwise, a packaging-success status is returned after execution finishes.
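A compact sketch of such an image upload service, with an http interface and a worker sub-thread, might look like the following. It uses Python's standard library rather than the Java implementation mentioned above, and the route, request field names and the package_and_push placeholder are assumptions for illustration (the placeholder stands in for the login/commit/push/cleanup flow sketched earlier).

```python
# Minimal sketch of the image upload service: an http interface that accepts an upload
# request and hands the slow packaging work to a sub-thread. Field names are assumptions.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def package_and_push(container_id: str, image_name: str, image_tag: str) -> None:
    # Placeholder for the login / commit / push / local-cleanup flow sketched earlier.
    print("packaging", container_id, "as", f"{image_name}:{image_tag}")

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # The request carries: container ID (modified container name), image name, image tag.
        args = (body["containerId"], body["imageName"], body["imageTag"])
        # The image can be several GB, so do the login/commit/push in a sub-thread
        # and acknowledge the request immediately.
        threading.Thread(target=package_and_push, args=args, daemon=True).start()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"status": "accepted"}')

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), UploadHandler).serve_forever()
```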
In combination with the above embodiments, an embodiment of the present invention provides a container access method, where the method may further include the following steps in addition to the above steps:
S701, the development platform builds the image upload service executable into a Docker image.
In this embodiment, the DaemonSet configuration file of the image upload service specifies a Docker image stored in the Harbor server and is configured to obtain the Docker image of the image upload service from the Harbor server, so as to deploy the image upload service container on each k8s node.
Specifically, the development platform compiles the written image upload service into a jar package, then writes a Dockerfile and places the jar package and the Dockerfile in the same directory, so that the image upload service executable is built into a Docker image.
S702, the development platform logs in to the Harbor server.
In this embodiment, after the development platform has built the Docker image of the image upload service, the development platform may log in to the Harbor server.
S703, the development platform pushes the Docker image to the Harbor server.
In this embodiment, after logging in to the Harbor server, the development platform may push the built Docker image of the image upload service to the Harbor server.
In this embodiment, after the image upload service is written, it is built into a Docker image and uploaded to the Harbor server, so that the image upload service container can be deployed on each k8s node, based on the Docker image in the Harbor server, according to the configured DaemonSet configuration file of the image upload service.
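The build-and-push flow of S701 to S703 could be expressed with the Docker SDK for Python roughly as below; the directory layout (jar plus Dockerfile in one directory), repository path and credentials are assumptions for illustration.

```python
# Minimal sketch of S701-S703: build the image upload service into a Docker image from a
# directory containing the jar and a Dockerfile, log in to Harbor, and push the image.
import docker

def build_and_push_upload_service(build_dir: str = "./image-upload-service",
                                  registry: str = "harbor.example.com",
                                  user: str = "admin", password: str = "secret") -> None:
    cli = docker.from_env()
    repository = f"{registry}/platform/image-upload-service"

    # S701: build the Docker image from the directory holding the jar and the Dockerfile.
    cli.images.build(path=build_dir, tag=f"{repository}:latest")
    # S702: log in to the Harbor server.
    cli.login(username=user, password=password, registry=registry)
    # S703: push the image to the Harbor server.
    cli.images.push(repository, tag="latest")

if __name__ == "__main__":
    build_and_push_upload_service()
```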
In one embodiment, as shown in fig. 5, fig. 5 is a schematic diagram of a container access web terminal according to an embodiment of the present invention. In fig. 5, the k8s system includes a Kubernetes master (k8s master) and a plurality of Kubernetes nodes (k8s nodes), and the k8s master is communicatively coupled to the plurality of k8s nodes. The k8s master can deploy a ttyd container on each k8s node based on the configured ttyd DaemonSet configuration file, and then map the ttyd executable in the ttyd container to the local disk of each node through the k8s mapping mechanism, so that when a user starts the AI algorithm container through the browser client, the ttyd executable on the node where the AI algorithm container is located is mapped into the AI algorithm container. The ttyd executable therefore exists inside the AI algorithm container without having to be packaged into the Docker image, and the container can be operated through a web-based terminal. In addition, the k8s master can also deploy an image upload service container (i.e. the HarborUpload container in fig. 5) on each k8s node based on the configured DaemonSet configuration file of the image upload service, so that the image upload service on the node packages the modified Kubernetes algorithm container into an image and uploads it to the Harbor repository, which facilitates direct deployment and reuse.
With reference to any one of the foregoing embodiments, a container access method according to an embodiment of the present invention may further include the following steps in addition to the foregoing steps:
S801, the Harbor server, in response to a download request from a k8s node in a second Kubernetes cluster, sends the image corresponding to the modified AI algorithm container to the k8s node in the second Kubernetes cluster.
In this embodiment, the Harbor server communicates with a plurality of Kubernetes clusters. After a k8s node in one Kubernetes cluster uploads the modified AI algorithm container to the Harbor server, the nodes of other Kubernetes clusters may download the modified AI algorithm container from the Harbor server for deployment, thereby implementing AI algorithm container deployment across Kubernetes clusters. The second Kubernetes cluster and the first Kubernetes cluster in this embodiment are two different Kubernetes clusters.
That is, a k8s node in the second Kubernetes cluster may send a download request to the Harbor server to request the modified AI algorithm container uploaded by a k8s node in the first Kubernetes cluster. The Harbor server, in response to the download request from the k8s node in the second Kubernetes cluster, sends the image corresponding to the modified AI algorithm container to that k8s node.
S802, the k8s node in the second Kubernetes cluster locally deploys the modified AI algorithm container based on the image corresponding to the modified AI algorithm container.
In this embodiment, after the k8s node in the second Kubernetes cluster obtains the image corresponding to the modified AI algorithm container, the modified AI algorithm container may be deployed locally on that k8s node based on the image.
In this embodiment, after the modified container image is uploaded to the Harbor server, it can be deployed directly in other Kubernetes clusters, thereby implementing cross-cluster sharing of the modified AI algorithm container and improving the reusability of the AI algorithm container.
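As an illustration of S801 and S802, the sketch below pulls the packaged image from the Harbor server on a node of the second cluster and starts a container from it with the Docker SDK for Python. The repository path and tag are assumptions, and in a real Kubernetes cluster this step would normally be performed by the kubelet when a pod referencing the image is scheduled.

```python
# Minimal sketch of S801-S802: download the modified algorithm image from Harbor and
# deploy (run) it locally on a node of the second cluster. Repository/tag are assumptions.
import docker

def deploy_modified_algorithm(repository: str = "harbor.example.com/ai-images/modified-algo",
                              tag: str = "v2") -> None:
    cli = docker.from_env()
    cli.images.pull(repository, tag=tag)                     # S801: fetch the image from Harbor
    cli.containers.run(f"{repository}:{tag}", detach=True)   # S802: deploy it locally

if __name__ == "__main__":
    deploy_modified_algorithm()
```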
With reference to any one of the foregoing embodiments, a container access method according to an embodiment of the present invention may further include the following steps in addition to the foregoing steps:
S901, the node where the modified AI algorithm container is located downloads the image of the modified AI algorithm container from the Harbor server.
In this embodiment, after the node where the modified AI algorithm container is located has packaged the modified AI algorithm container into an image and uploaded it to the Harbor server, that node may download the image of the modified AI algorithm container from the Harbor server.
S902, the development platform executes a container start command carrying the ttyd start parameter so as to start the modified AI algorithm container and launch the ttyd executable.
In this embodiment, after the modified AI algorithm container has been downloaded, the development platform may execute the container start command carrying the ttyd start parameter based on the user's start instruction from the browser client, so as to start the modified AI algorithm container and launch the ttyd executable.
In this embodiment, after the modified container image has been uploaded to the Harbor server, it can be started directly with Docker without having to be created again through the platform, which improves the reusability of the modified AI algorithm container.
In one embodiment, the user may deploy a trained model as an inference service based on the development platform provided by this embodiment, so as to complete server-side deployment and serve business scenarios such as character recognition, face recognition and defect detection. In addition, after the inference service corresponding to the model has been started successfully, the user can click the access-terminal button of the service list at the browser client of the development platform, and the development platform operates the container through the ttyd program in the algorithm container based on the container access method described above, for example to modify the inference script, inference parameters and other file contents. Finally, the user clicks the upload-image button of the service list at the browser client of the development platform, and the development platform packages the modified container and builds a new image. In this way, the user can directly deploy the uploaded image in other Kubernetes clusters or start it directly with Docker, without having to create it again through the platform.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Based on the same inventive concept, an embodiment of the present invention provides a container access system 600. Referring to fig. 6, fig. 6 is a block diagram illustrating a structure of a container access system according to an embodiment of the present invention. As shown in fig. 6, the system 600 includes:
the first mapping module 601 is deployed on a development platform, and is configured to map a ttyd execution program in a local disk of a node to an AI algorithm container through a configuration file in the AI algorithm container deployed on the node;
the container starting module 602 is deployed on the development platform, and is configured to start the AI algorithm container and start the ttyd execution program;
and the container access module 603 is deployed on the development platform and is used for responding to the AI algorithm container access operation sent by the browser client and accessing the AI algorithm container through a port provided by the ttyd execution program.
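To make the role of the first mapping module more concrete, the following is a minimal sketch of the kind of configuration it might generate: a pod specification fragment that maps the ttyd execution program from the node's local disk into the AI algorithm container through a hostPath volume. All paths, names and the image reference are assumptions for illustration only.

```python
# Sketch: pod configuration fragment mapping the node-local ttyd binary into the AI algorithm
# container via a hostPath volume. All paths, names and the image reference are hypothetical.
import yaml

ai_pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ai-algo-pod"},
    "spec": {
        "containers": [{
            "name": "ai-algo",
            "image": "harbor.example.com/ai-platform/algo:latest",  # assumed AI algorithm image
            "volumeMounts": [{
                "name": "ttyd-bin",
                "mountPath": "/usr/local/bin/ttyd",  # path where the container sees ttyd
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "ttyd-bin",
            "hostPath": {"path": "/opt/ttyd/ttyd", "type": "File"},  # ttyd on the node's local disk
        }],
    },
}

print(yaml.safe_dump(ai_pod_spec, sort_keys=False))
```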
Optionally, the node is a k8s node, a k8s master is deployed in the development platform, and the k8s master is connected with a plurality of k8s nodes; the system 600 further comprises:
the first analysis module is deployed in the k8s master and is used for analyzing and running the daemonset configuration file of ttyd so as to distribute the daemonset configuration file of ttyd to each k8s node;
the first deployment module is deployed on each k8s node and is used for deploying a ttyd container on the k8s node based on the daemonset configuration file of the ttyd, wherein the ttyd container stores the ttyd execution program;
and the second mapping module is deployed at each k8s node and is used for mapping the ttyd execution program in the ttyd container to a local disk of the k8s node.
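The three modules above can be illustrated with a hedged sketch of a daemonset that places a ttyd container on every k8s node and copies the ttyd execution program onto the node's local disk through a hostPath mount; the image, namespace and paths are assumptions rather than values defined by this embodiment.

```python
# Sketch: daemonset that runs a ttyd container on each k8s node and exposes the ttyd binary
# on the node's local disk through a hostPath volume. Image, namespace and paths are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes kubeconfig access to the cluster

ttyd_daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "ttyd-agent", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "ttyd-agent"}},
        "template": {
            "metadata": {"labels": {"app": "ttyd-agent"}},
            "spec": {
                "containers": [{
                    "name": "ttyd",
                    "image": "harbor.example.com/tools/ttyd:latest",  # assumed ttyd container image
                    # Copy the ttyd execution program to the node's local disk, then idle.
                    "command": ["sh", "-c", "cp /usr/bin/ttyd /host-ttyd/ttyd && sleep infinity"],
                    "volumeMounts": [{"name": "host-ttyd", "mountPath": "/host-ttyd"}],
                }],
                "volumes": [{
                    "name": "host-ttyd",
                    "hostPath": {"path": "/opt/ttyd", "type": "DirectoryOrCreate"},
                }],
            },
        },
    },
}

client.AppsV1Api().create_namespaced_daemon_set(namespace="kube-system", body=ttyd_daemonset)
```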
Optionally, the container start module 602 includes:
the parameter acquisition module is deployed on the development platform and is used for acquiring the preset parameter for starting ttyd in response to the container starting operation sent by the browser client;
the command generation module is deployed on the development platform and is used for adding the parameter for starting ttyd to a container starting command to obtain the container starting command carrying the parameter for starting ttyd;
the first starting module is deployed on the development platform and is used for executing the container starting command carrying the parameter for starting ttyd, so as to start the AI algorithm container and start the ttyd execution program.
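As a minimal sketch of how the parameter acquisition, command generation and first starting modules might cooperate, the code below takes an assumed preset ttyd start parameter, splices it into a docker start command and executes it. The parameter value, port, image name and workload path are all hypothetical.

```python
# Sketch: build and execute a container starting command carrying the parameter for starting ttyd.
# The preset parameter, port, image name and workload path are hypothetical.
import shlex
import subprocess

# Preset parameter for starting ttyd, as acquired for the browser client's start operation.
START_TTYD_PARAM = "/usr/local/bin/ttyd -p 7681 bash"

base_cmd = [
    "docker", "run", "-d",
    "-p", "7681:7681",
    "-v", "/opt/ttyd/ttyd:/usr/local/bin/ttyd:ro",   # ttyd mapped from the node's local disk
    "harbor.example.com/ai-platform/algo:latest",     # assumed AI algorithm image
]

# Add the start-ttyd parameter to the container starting command: run ttyd in the background,
# then hand over to the container's assumed original workload.
full_cmd = base_cmd + ["sh", "-c", f"{START_TTYD_PARAM} & exec python /workspace/train.py"]

print("container starting command:", " ".join(shlex.quote(p) for p in full_cmd))
subprocess.run(full_cmd, check=True)
```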
Optionally, the system 600 further includes:
the port number acquisition module is deployed on the development platform and is used for responding to the AI algorithm container modification operation sent by the browser client and returning the port number corresponding to the AI algorithm container to be modified to the browser client;
the page display module is deployed at the browser client and is used for displaying a modification page corresponding to the AI algorithm container to be modified based on the port number, and the modification page is used for a user to modify the AI algorithm container;
and the container modification module is deployed at a node where the modified AI algorithm container is located and is used for responding to the mirror image uploading request sent by the browser client, running an execution program of the mirror image uploading service and packaging the modified AI algorithm container into a mirror image to be uploaded to the harbor server.
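As a hedged illustration of the port number acquisition module, the sketch below shows a small HTTP handler on the development platform that looks up the ttyd port for the AI algorithm container to be modified and returns it to the browser client; the web framework (Flask) and the in-memory port table are assumptions made only for this sketch.

```python
# Sketch: return the ttyd port number of the AI algorithm container to be modified.
# Flask and the in-memory port table are hypothetical choices for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed mapping from container ID to the host port published by its ttyd execution program.
TTYD_PORTS = {"c0ffee123456": 30081, "deadbeef7890": 30082}

@app.route("/containers/<container_id>/ttyd-port")
def get_ttyd_port(container_id: str):
    port = TTYD_PORTS.get(container_id)
    if port is None:
        return jsonify({"error": "unknown container"}), 404
    # The browser client displays the modification page based on this port number.
    return jsonify({"container_id": container_id, "ttyd_port": port})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```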
Optionally, the nodes are k8s nodes in a first kubernetes cluster, a k8s master in the first kubernetes cluster is deployed in the development platform, and the k8s master is connected with each k8s node; the system 600 further comprises:
The second analysis module is deployed in the k8s master and is used for analyzing and running the daemonset configuration file of the mirror image uploading service so as to distribute the daemonset configuration file of the mirror image uploading service to each k8s node;
the second deployment module is deployed on each k8s node and is used for deploying a mirror image uploading service container on the k8s nodes based on the daemonset configuration file of the mirror image uploading service, and the mirror image uploading service container stores an execution program of the mirror image uploading service.
Optionally, the container modification module includes:
the request receiving module is deployed at the node where the modified AI algorithm container is located and is used for receiving the mirror image uploading request sent by the browser client through an http interface;
the container determining module is deployed at the node where the modified AI algorithm container is located and is used for determining the modified AI algorithm container based on the container ID in the mirror image uploading request;
the first login module is deployed at a node where the modified AI algorithm container is located and is used for starting a sub-thread and logging in the harbor server through the sub-thread;
the mirror image packaging module is deployed at the node where the modified AI algorithm container is located and is used for packaging the modified AI algorithm container into a mirror image and pushing the mirror image to the harbor server;
The state return module is deployed at the node where the modified AI algorithm container is located and is used for deleting the local modified AI algorithm container and returning the packaging state to the browser client.
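The sub-modules above can be pictured with the following hedged sketch: an HTTP interface receives the mirror image uploading request, a sub-thread logs in to the harbor server, packages the modified container into a mirror image, pushes it and deletes the local container, while a packaging state is tracked for the browser client. The web framework, registry address, credentials and tag naming are assumptions, and returning the state immediately while the sub-thread works is one possible design choice, not necessarily the one of this embodiment.

```python
# Sketch of the mirror image uploading service: receive the upload request over http, determine
# the modified container by its ID, then commit, push and clean up in a sub-thread.
# Flask, registry address, credentials and tag naming are hypothetical.
import threading
import docker
from flask import Flask, jsonify, request

app = Flask(__name__)
docker_client = docker.from_env()

HARBOR = "harbor.example.com"            # assumed harbor server
REPO = f"{HARBOR}/ai-platform/algo"      # assumed repository

upload_status = {}                        # container_id -> packaging state

def package_and_push(container_id: str, tag: str):
    try:
        container = docker_client.containers.get(container_id)   # determine the modified container
        docker_client.login(registry=HARBOR, username="ai_dev", password="******")
        container.commit(repository=REPO, tag=tag)                # package the container into an image
        docker_client.images.push(REPO, tag=tag)                  # push the image to harbor
        container.remove(force=True)                              # delete the local modified container
        upload_status[container_id] = "success"
    except Exception as exc:
        upload_status[container_id] = f"failed: {exc}"

@app.route("/images/upload", methods=["POST"])
def upload_image():
    body = request.get_json(force=True)
    container_id = body["container_id"]
    tag = body.get("tag", "modified")
    upload_status[container_id] = "packaging"
    # Start a sub-thread so the http interface can report the packaging state without blocking.
    threading.Thread(target=package_and_push, args=(container_id, tag), daemon=True).start()
    return jsonify({"container_id": container_id, "state": upload_status[container_id]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)
```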
Optionally, the daemonset configuration file of the mirror image uploading service includes a docker mirror image in the harbor server; the system 600 further comprises:
the mirror image making module is deployed on the development platform and is used for making the execution program of the mirror image uploading service into a docker mirror image;
the second login module is deployed on the development platform and is used for logging in the harbor server;
and the image uploading module is deployed on the development platform and is used for pushing the docker mirror image to the harbor server.
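A brief sketch of how the development platform might make the execution program of the mirror image uploading service into a docker mirror image and push it to the harbor server is given below; the build context path, repository name and credentials are assumptions.

```python
# Sketch: build the mirror image uploading service into a docker image and push it to harbor.
# Build context path, repository name and credentials are hypothetical.
import docker

HARBOR = "harbor.example.com"
REPO = f"{HARBOR}/tools/image-upload-service"

client = docker.from_env()

# Make the execution program of the mirror image uploading service into a docker image
# (a Dockerfile is assumed to exist in ./upload-service).
image, _build_logs = client.images.build(path="./upload-service", tag=f"{REPO}:latest")

# Log in to the harbor server and push the image so the daemonset can reference it.
client.login(registry=HARBOR, username="platform", password="******")
client.images.push(REPO, tag="latest")
```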
Optionally, the system 600 further includes:
the mirror image sending module is deployed on the harbor server and is used for sending, in response to a download request from a k8s node in a second kubernetes cluster, the mirror image corresponding to the modified AI algorithm container to the k8s node in the second kubernetes cluster;
the mirror image deployment module is deployed at a k8s node in the second kubernetes cluster and is used for locally deploying the modified AI algorithm container based on the mirror image corresponding to the modified AI algorithm container.
Optionally, the system 600 further includes:
the image downloading module is deployed at the node where the modified AI algorithm container is located and is used for downloading the modified AI algorithm container from the harbor server after the modified AI algorithm container is packaged into an image by the node where the modified AI algorithm container is located and uploaded to the harbor server;
and the second starting module is deployed on the development platform and is used for executing a container starting command carrying a parameter for starting ttyd so as to start the modified AI algorithm container and start the ttyd execution program.
Based on the same inventive concept, another embodiment of the present invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the container access method according to any of the above embodiments of the present invention.
Based on the same inventive concept, another embodiment of the present invention provides an electronic device 700, as shown in fig. 7. Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present invention. The electronic device comprises a memory 702, a processor 701 and a computer program stored on the memory and executable on the processor, which, when executed by the processor, implements the steps of the container access method according to any of the embodiments of the invention.
For system embodiments, the description is relatively simple as it is substantially similar to method embodiments, and reference is made to the description of method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device to cause a series of operational steps to be performed on the computer or other programmable terminal device to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
In the above, the container access method, system, electronic device and storage medium provided by the present invention have been described with specific examples to illustrate the principles and implementations of the present invention; the above examples are only intended to help understand the method and core idea of the present invention. Meanwhile, since those skilled in the art may make changes to the specific implementations and the application scope in accordance with the ideas of the present invention, the content of this description should not be construed as limiting the present invention.

Claims (12)

1. A method of accessing a container, the method comprising:
the development platform maps ttyd execution programs in a local disk of a node into an AI algorithm container through a configuration file in the AI algorithm container deployed on the node;
the development platform starts the AI algorithm container and starts the ttyd execution program;
and the development platform responds to an AI algorithm container access operation sent by the browser client, and accesses the AI algorithm container through a port provided by the ttyd execution program.
2. The method of claim 1, wherein the node is a k8s node, a k8s master is deployed in the development platform, and the k8s master is connected to a plurality of k8s nodes; the method further comprises the steps of:
the k8s master analyzes and runs the daemonset configuration file of ttyd to distribute the daemonset configuration file of ttyd to each k8s node;
each k8s node deploys a ttyd container on the k8s node based on the daemonset configuration file of the ttyd, wherein the ttyd container stores the ttyd execution program;
each k8s node maps the ttyd execution program in the ttyd container to a local disk of the k8s node.
3. The method of claim 1, wherein the development platform initiates the AI algorithm container and initiates the ttyd execution program, comprising:
the development platform responds to the container starting operation sent by the browser client to acquire a preset parameter for starting ttyd;
the development platform adds the parameter for starting ttyd to a container starting command to obtain a container starting command carrying the parameter for starting ttyd;
and the development platform executes the container starting command carrying the parameter for starting ttyd so as to start the AI algorithm container and start the ttyd execution program.
4. The method according to claim 1, wherein the method further comprises:
the development platform responds to the AI algorithm container modification operation sent by the browser client, and returns a port number corresponding to the AI algorithm container to be modified to the browser client;
the browser client displays a modification page corresponding to the AI algorithm container to be modified based on the port number, wherein the modification page is used for a user to modify the AI algorithm container;
and the node where the modified AI algorithm container is located responds to the mirror image uploading request sent by the browser client, runs an execution program of the mirror image uploading service, packages the modified AI algorithm container into a mirror image, and uploads the mirror image to the harbor server.
5. The method of claim 4, wherein the nodes are k8s nodes in a first kubernetes cluster, wherein a k8s master in the first kubernetes cluster is deployed within the development platform, and wherein the k8s master is connected to each k8s node; the method further comprises the steps of:
the k8s master analyzes and runs the daemonset configuration file of the mirror image uploading service to distribute the daemonset configuration file of the mirror image uploading service to each k8s node;
each k8s node deploys a mirror image uploading service container on the basis of the daemonset configuration file of the mirror image uploading service, wherein the mirror image uploading service container stores an execution program of the mirror image uploading service.
6. The method of claim 4, wherein the node where the modified AI algorithm container is located runs an execution program of a mirror upload service in response to a mirror upload request sent by the browser client, packages the modified AI algorithm container into a mirror, and uploads the mirror to a harbor server, comprising:
the node where the modified AI algorithm container is located receives a mirror image uploading request sent by the browser client through an http interface;
The node where the modified AI algorithm container is located determines the modified AI algorithm container based on the container ID in the mirror image uploading request;
starting a sub-thread by a node where the modified AI algorithm container is located, and logging in the harbor server through the sub-thread;
the node where the modified AI algorithm container is located packages the modified AI algorithm container into a mirror image and pushes the mirror image to the harbor server;
and deleting the local modified AI algorithm container by the node where the modified AI algorithm container is located, and returning to a packaging state to the browser client.
7. The method of claim 5, wherein the daemonset configuration file of the mirror image uploading service comprises a docker mirror image in the harbor server; the method further comprises the steps of:
the development platform makes an execution program of the mirror image uploading service into a docker mirror image;
the development platform logs in the harbor server;
and the development platform pushes the docker mirror image to the harbor server.
8. The method of claim 5, wherein the method further comprises:
the harbor server responds to a downloading request of a k8s node in a second kubernetes cluster, and sends a mirror image corresponding to the modified AI algorithm container to the k8s node in the second kubernetes cluster;
and the k8s node in the second kubernetes cluster locally deploys the modified AI algorithm container based on the mirror image corresponding to the modified AI algorithm container.
9. The method of claim 4, wherein after the node in which the modified AI algorithm container resides packages the modified AI algorithm container as a mirror image for uploading to a harbor server, the method further comprises:
the node where the modified AI algorithm container is located downloads the modified AI algorithm container from the harbor server;
and the development platform executes a container starting command carrying a parameter for starting ttyd so as to start the modified AI algorithm container and start the ttyd execution program.
10. A container access system, the system comprising:
the first mapping module is deployed on the development platform and is used for mapping ttyd execution programs in a local disk of the node into the AI algorithm container through configuration files in the AI algorithm container deployed on the node;
the container starting module is deployed on the development platform and is used for starting the AI algorithm container and starting the ttyd execution program;
and the container access module is deployed on the development platform and is used for responding to the AI algorithm container access operation sent by the browser client and accessing the AI algorithm container through a port provided by the ttyd execution program.
11. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the container access method according to any of claims 1 to 9.
12. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implements the container access method of any of claims 1 to 9.

