CN117742813A - Cloud edge terminal AI model management method, storage medium and electronic equipment - Google Patents


Info

Publication number
CN117742813A
CN117742813A (application CN202311457294.6A)
Authority
CN
China
Prior art keywords
request
model
service
edge
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311457294.6A
Other languages
Chinese (zh)
Inventor
张保虎
王嵘
宋华婷
綦晓杰
马勇
尹海华
贾煜逸
刘至阳
吴承霖
徐宗泽
龙辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang Xinhua Hydropower Investment Ltd By Share Ltd
Original Assignee
Xinjiang Xinhua Hydropower Investment Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang Xinhua Hydropower Investment Ltd By Share Ltd
Priority: CN202311457294.6A
Publication: CN117742813A
Legal status: Pending

Landscapes

  • Stored Programmes (AREA)

Abstract

The invention discloses a cloud-edge-terminal AI model management method, a storage medium, and electronic equipment. The method comprises: in response to a configuration request for an edge device, configuring the edge device based on the content indicated by the request; upon receiving a model service deployment request, assembling the configuration information required for service deployment and creating a service deployment set in Kubernetes; and monitoring state changes of the service deployment set and calling back those state changes. With only simple configuration and operation, an AI model can be rapidly deployed and used across the cloud, edge, and terminal, and the deployed AI model offers greater control capability, overcoming the high deployment difficulty and high cost of model deployment in the related art.

Description

Cloud edge terminal AI model management method, storage medium and electronic equipment
Technical Field
The application relates to the technical field of information processing, in particular to a cloud edge terminal AI model management method, a storage medium and electronic equipment.
Background
The hydropower industry is a critical basic industry: its water resource management and power supply are essential to socioeconomic development. With the continuous advancement of technology, artificial intelligence (AI) is gradually coming into its own in the hydropower industry; AI models are widely used for data analysis, prediction, equipment monitoring, and decision support to improve the efficiency, reliability, and operational management of hydropower systems.
In the hydropower industry, training and deployment of AI models is a key step to ensure that they exert their greatest potential in practical operation. For example, an AI model for hydropower station equipment fault detection needs to be trained and then deployed to field devices to monitor equipment status in real-time and predict potential faults. Effective management and deployment of these models is critical to the sustainable development of the hydropower industry.
However, in today's hydropower industry, deploying AI models on edge devices still presents challenges. Traditional model deployment methods may involve complex programming and configuration procedures that demand specialized skills and significant time, and deployment must be carried out on field devices. In addition, updating and version control of AI models can become confused, and management tasks after deployment, such as starting, stopping, and deleting a model, are also troublesome.
Disclosure of Invention
The application provides a cloud-edge-terminal AI model management method, a storage medium, and electronic equipment to solve the above problems in the related art.
In a first aspect, the present invention provides a cloud-edge-terminal AI model management method, comprising: in response to a configuration request for an edge device, configuring the edge device based on the content indicated by the request; upon receiving a model service deployment request, assembling the configuration information required for service deployment and creating a service deployment set in Kubernetes; and monitoring state changes of the service deployment set and calling back those state changes.
Optionally, configuring the edge device based on the content indicated by the configuration request comprises: in response to a creation request for the edge device, completing creation of the edge device from the creation fields; in response to a request to add the edge device to the cluster, generating an add script which, once invoked and executed, joins the device to the cluster and activates the created edge device; and deploying the daemon of the edge device.
Optionally, configuring the edge device based on the content indicated by the configuration request further comprises: in response to a delete request for an edge device, exiting the daemon and deleting the edge device as a node.
Optionally, creating the AI model service deployment set in Kubernetes comprises: creating a Deployment in Kubernetes, in which a download container downloads the model and an algorithm container provides the algorithm service for the model.
Optionally, monitoring the state changes of the service deployment set and calling them back comprises: monitoring state changes of the Deployment and its containers, and calling back the state changes via a request message.
Optionally, the method further comprises: in response to receiving a model processing request, invoking the algorithmic service based on information indicated by the request.
Optionally, the method further comprises: adjusting the maximum and minimum numbers of copies of a service in response to a request to modify the state of a service deployed in the service deployment set.
Optionally, the method further comprises: deleting the service deployment set upon receiving a deletion request for it.
In a second aspect, the present invention provides a computer readable storage medium, wherein the storage medium stores a computer program, which when executed by a processor implements the method according to any of the above-mentioned implementations of the first aspect.
In a third aspect, the invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method provided in the first aspect when executing the program.
The invention discloses a cloud-edge-terminal AI model management method, a storage medium, and electronic equipment. The method comprises: in response to a configuration request for an edge device, configuring the edge device based on the content indicated by the request; upon receiving a model service deployment request, assembling the configuration information required for service deployment and creating a service deployment set in Kubernetes; and monitoring state changes of the service deployment set and calling back those state changes. With only simple configuration and operation, an AI model can be rapidly deployed and used across the cloud, edge, and terminal, and the deployed AI model offers greater control capability, thereby overcoming the drawbacks of the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flowchart of a cloud-edge AI model management method in the present application;
FIG. 2 is a schematic diagram of an application of a cloud-edge AI model management method in the present application;
FIG. 3 is a schematic diagram of another application of a cloud-edge AI model management method in the present application;
fig. 4 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present application.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The system architecture to which the method of this embodiment applies may involve an AI platform, an inference service management system, Kubernetes, Edgemode, and the like: a user performs configuration operations and views data through the AI platform, the inference service management system responds to requests, and Kubernetes creates, updates, or deletes deployment sets. The execution subject of the method in this embodiment may be the inference service management system.
The following is an exemplary description of a cloud-edge AI model management method shown in fig. 1. The method comprises the following steps:
step 101: and responding to the configuration request of the side equipment, and configuring the side equipment based on the content indicated by the configuration request.
In this embodiment, the AI model side deployment needs to create side host devices in advance, and management of the side devices is achieved through configuration, where the side device management includes creation, editing, querying, deleting, and the like of the side devices.
As an optional implementation of this embodiment, configuring the edge device based on the content indicated by the configuration request comprises: in response to a creation request for the edge device, completing creation of the edge device from the creation fields; in response to a request to add the edge device to the cluster, generating an add script which, once invoked and executed, joins the device to the cluster and activates the created edge device; and deploying the daemon of the edge device.
In this optional implementation, when the edge device is created, the edge device information is completed from creation fields including, but not limited to, the device name, device IP, the region in which the device is located, and device tags. After the edge device is created, the user side can trigger an add-script request; a script is generated from the request, and the edge device executes the script to join the cluster, after which the edge daemon is deployed. The edge device host executes the activation script to collect the device information, including the system architecture and hard disk ID, downloads and decompresses the activation package, and executes the join-cluster command.
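The add-script generation step described above can be sketched as follows. This is a minimal illustration only: the field names, the `edgectl` CLI, and the script contents are assumptions, not the patented implementation.

```python
def build_join_script(device: dict, cluster_token: str) -> str:
    """Render an add/activation script for an edge host (illustrative).

    The edge host later executes this script: it reports its system
    architecture and disk ID, then runs the join-cluster command.
    """
    required = ("name", "ip", "region")
    missing = [f for f in required if not device.get(f)]
    if missing:
        raise ValueError(f"missing device fields: {missing}")
    return "\n".join([
        "#!/bin/sh",
        "ARCH=$(uname -m)",  # system architecture
        "DISK_ID=$(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo unknown)",
        f"edgectl join --name {device['name']} --ip {device['ip']}"
        f" --region {device['region']} --token {cluster_token}",
    ])
```

The script is returned to the AI platform as text, and the edge host runs it once to join the cluster and activate itself.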
As an optional implementation of this embodiment, configuring the edge device based on the content indicated by the configuration request further comprises: in response to a delete request for an edge device, exiting the daemon and deleting the edge device as a node.
In this optional implementation, the edge device can be created or deleted based on the configuration request, and its state changes can be monitored; node state information, such as readiness, is sent to Kafka so that the node's heartbeat can be consumed and presented on the AI platform.
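A node-state message of the kind sent to Kafka might look like the following sketch; the topic layout and field names are assumptions for illustration, not the actual wire format.

```python
import json
import time


def node_heartbeat(node_name: str, ready: bool, now=None) -> bytes:
    """Encode one node-state message for the Kafka topic that the
    AI platform consumes to display heartbeat and readiness."""
    msg = {
        "node": node_name,
        "ready": ready,
        # allow a fixed timestamp to be injected for testing
        "timestamp": now if now is not None else int(time.time()),
    }
    return json.dumps(msg).encode("utf-8")
```

A Kafka producer would publish these bytes each monitoring interval; the AI platform consumer decodes them to render the node's heartbeat.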
Referring to the edge node management framework illustrated in fig. 1: the AI platform sends an add-node request to the cloud; the cloud generates a script from the request, which the AI platform then invokes; the edge device executes the script to join the cluster, after which a daemon is deployed to complete the edge device deployment. Node monitoring begins once the device has joined the cluster, and the monitored state information is presented on the AI platform; if the node goes offline, its state continues to be monitored. If an edge node deletion request is received from the AI platform, the node is deleted through Kubernetes and the daemon is requested to exit its process so that the edge node exits; finally, the AI platform displays the state after successful deletion.
Step 102: based on receiving the model service deployment request, configuring configuration information required by service deployment, and creating a service deployment set in Kubernetes.
In this embodiment, the inference service management system accepts a service creation request and configures resource parameters for the selected AI model, receiving user-configured parameters including, but not limited to, memory, CPU core count, GPU video memory, and the maximum number of instances.
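The resource-parameter step can be sketched as a small validation pass before anything is sent to Kubernetes. The field names and defaults below are illustrative assumptions, not the system's actual schema.

```python
from dataclasses import dataclass


@dataclass
class ServiceResources:
    """User-configured resource parameters for one model service."""
    memory_mi: int         # memory, in MiB
    cpu_cores: int         # number of CPU cores
    gpu_mem_mi: int = 0    # GPU video memory, in MiB (0 = no GPU)
    max_instances: int = 1


def validate_resources(r: ServiceResources) -> ServiceResources:
    """Reject obviously invalid configurations before creating the service."""
    if r.memory_mi <= 0 or r.cpu_cores <= 0:
        raise ValueError("memory and CPU core count must be positive")
    if r.gpu_mem_mi < 0:
        raise ValueError("GPU memory cannot be negative")
    if r.max_instances < 1:
        raise ValueError("at least one instance is required")
    return r
```

Validating here lets the inference service management system reject a bad request immediately rather than surfacing a scheduling failure later.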
Step 103: and monitoring the state change of the service deployment set, and calling back the state change.
In this embodiment, Kubernetes container state change data is called back, and this information can be presented on the AI platform.
As an optional implementation of this embodiment, creating the AI model service deployment set in Kubernetes comprises: creating a Deployment in Kubernetes, in which a download container downloads the model and an algorithm container provides the algorithm service for the model.
In this alternative implementation, the Deployment is created in Kubernetes through Serverless, and edge deployment delivers model services to the edge machines based on native Kubernetes. The download container is responsible for downloading the model, and the algorithm container provides the algorithm service based on the model.
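A Deployment of this shape can be sketched as a plain manifest, with the download container modeled as a Kubernetes init container that fetches the model into a shared volume before the algorithm container starts. The image names and volume layout are illustrative assumptions.

```python
def model_service_deployment(name: str, model_name: str,
                             algo_image: str, downloader_image: str) -> dict:
    """Build an apps/v1 Deployment manifest: a download (init) container
    fetches the model into a shared emptyDir volume, and an algorithm
    container serves it from the same mount."""
    mount = {"name": "model-store", "mountPath": "/models"}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "initContainers": [{
                        "name": "download",
                        "image": downloader_image,
                        "env": [{"name": "MODEL_NAME", "value": model_name}],
                        "volumeMounts": [mount],
                    }],
                    "containers": [{
                        "name": "algorithm",
                        "image": algo_image,
                        "volumeMounts": [mount],
                    }],
                    "volumes": [{"name": "model-store", "emptyDir": {}}],
                },
            },
        },
    }
```

The manifest would then be submitted to the cluster (for example via the Kubernetes API); using an init container guarantees the model files are in place before the algorithm service accepts requests.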
As an optional implementation manner of this embodiment, the method further includes: in response to receiving a model processing request, invoking the algorithmic service based on information indicated by the request.
In this optional implementation, for a successfully deployed AI model algorithm service, the AI platform provides the algorithm service URL, token, sample data, and return structure, and the deployed algorithm service can be called directly with this information. Illustratively, when Kubernetes creates the Pod, the download container downloads the AI model files, by model name, from MinIO to the specified location; there may be one model or several, and they are used by the algorithm container.
When the deployed algorithm service is called, the Kong gateway reports the call count and call latency to the AI platform via an HTTP request for recording.
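Calling a deployed service with the URL and token the AI platform provides might be assembled as follows; the header and payload shapes are assumptions for illustration, and the URL and token shown are placeholders.

```python
import json


def build_inference_request(service_url: str, token: str, sample: dict) -> dict:
    """Assemble the pieces of an HTTP call to a deployed algorithm
    service: the method, the URL, a bearer-token header, and the
    JSON-encoded sample payload."""
    return {
        "method": "POST",
        "url": service_url,
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(sample),
    }
```

An HTTP client would send this request through the Kong gateway, which then records the call count and latency for the AI platform.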
As an optional implementation of this embodiment, monitoring the state changes of the service deployment set and calling them back comprises: monitoring state changes of the Deployment and its containers, and calling back the state changes via a request message.
In this optional implementation, state changes of the Deployment and Pod are monitored; when the Kubernetes container state changes, the service state is returned to the AI platform via an HTTP request so the state can be updated. Service states include, but are not limited to: not created, scheduling succeeded, running, failed, deleting, and deletion succeeded.
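The watch-and-callback loop can be sketched as a small translation from observed Pod phases to the service states above. The phase-to-state mapping and event shape are assumptions; `post` stands in for the HTTP request back to the AI platform.

```python
# Hypothetical mapping from Kubernetes Pod phases to service states.
PHASE_TO_STATE = {
    "Pending": "scheduling succeeded",
    "Running": "running",
    "Failed": "failed",
}


def callback_state_change(event: dict, post) -> str:
    """Translate one watch event into a service state and report it.

    `post` is any callable that delivers the state to the AI platform
    (in practice, an HTTP POST)."""
    state = PHASE_TO_STATE.get(event.get("phase"), "not created")
    post({"service": event.get("service"), "state": state})
    return state
```

In a real watcher, this callback would run on every event emitted by the Kubernetes watch API for the Deployment's Pods.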
As an optional implementation of this embodiment, the method further comprises: adjusting the maximum and minimum numbers of copies of a service in response to a request to modify the state of a service deployed in the service deployment set.
In this alternative implementation, the resident/non-resident state of the service is modified, and the service is closed, by adjusting the maximum and minimum numbers of copies of the service node; after a state change of the Kubernetes service is observed, it is called back to the AI platform.
Cloud deployments support modifying the resident/non-resident state: a resident service keeps running after deployment and continuously occupies resources, whereas a non-resident service is allowed to sleep, and its algorithm service is started when a call request arrives.
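The resident/non-resident distinction maps naturally onto replica bounds, as the following sketch shows; the function name and rule are illustrative assumptions.

```python
def replica_bounds(resident: bool, max_replicas: int) -> tuple:
    """Translate resident/non-resident state into (min, max) replica counts.

    A resident service keeps at least one replica running, always
    occupying resources; a non-resident service may scale to zero
    and is woken when a call arrives."""
    if max_replicas < 1:
        raise ValueError("max_replicas must be at least 1")
    min_replicas = 1 if resident else 0
    return min_replicas, max_replicas
```

Setting the minimum to zero is what allows a non-resident service to sleep; the gateway's first incoming request then triggers scale-up.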
As an optional implementation of this embodiment, the method further comprises: deleting the service deployment set upon receiving a deletion request for it.
In this alternative implementation, the Kubernetes deployment set is deleted after the deletion request is received, and the service data is updated after successful deletion.
Referring to fig. 2, the AI platform can send a create-service request to the inference service management system, which checks the request, creates the service after the check passes, creates a deployment set through Kubernetes, updates the database after creation completes, and displays the created information on the AI platform. The user can send a modify-service-state request through the AI platform; the inference service management system checks the request, updates the service state after it passes, updates the deployment set and then the database, and finally the AI platform shows that the modification succeeded. The user can also send a delete-service request through the AI platform; after the request passes, the service is deleted, the database is updated, and the result is displayed on the AI platform. Creation, update, and deletion of the deployment set can all be performed through the Kubernetes controllers to create, modify, or delete the edge-node Pods.
Based on cloud-native technology, the cloud-edge-terminal AI model can be rapidly deployed and used with only simple configuration and operation; the deployed AI model service has greater control capability, including modification of the resident/non-resident state, closing, and deletion; the call count and call latency of the service can be observed; and a stable, efficient, and controllable environment is provided for the AI model.
The application also provides a cloud-edge-terminal AI model management device, comprising: a configuration unit configured to, in response to a configuration request for an edge device, configure the edge device based on the content indicated by the request; a service deployment set creation unit configured to, upon receiving a model service deployment request, assemble the configuration information required for service deployment and create a service deployment set in Kubernetes; and a monitoring unit configured to monitor state changes of the service deployment set and call back those state changes.
Optionally, configuring the edge device based on the content indicated by the configuration request comprises: in response to a creation request for the edge device, completing creation of the edge device from the creation fields; in response to a request to add the edge device to the cluster, generating an add script which, once invoked and executed, joins the device to the cluster and activates the created edge device; and deploying the daemon of the edge device.
Optionally, configuring the edge device based on the content indicated by the configuration request further comprises: in response to a delete request for an edge device, exiting the daemon and deleting the edge device as a node.
Optionally, creating the AI model service deployment set in Kubernetes comprises: creating a Deployment in Kubernetes, in which a download container downloads the model and an algorithm container provides the algorithm service for the model.
Optionally, monitoring the state changes of the service deployment set and calling them back comprises: monitoring state changes of the Deployment and its containers, and calling back the state changes via a request message.
Optionally, the apparatus further comprises a processing unit configured to invoke the algorithm service based on information indicated by the request in response to receiving a model processing request.
Optionally, the apparatus further comprises a service state adjustment unit configured to adjust the maximum and minimum numbers of copies of a service in response to a request to modify the state of a service deployed in the service deployment set.
Optionally, the apparatus further deletes the service deployment set when it receives a deletion request for the service deployment set.
The present application also provides a computer readable medium storing a computer program operable to perform the above method provided in fig. 1.
The present application also provides a schematic block diagram of the electronic device shown in fig. 4, corresponding to fig. 1. At the hardware level, the electronic device includes a processor, an internal bus, a network interface, memory, and non-volatile storage, as shown in fig. 4, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and runs it to implement the cloud-edge-terminal AI model management method described above with respect to fig. 1. The present application certainly does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the following processing flows is not limited to logic units, but may also be hardware or logic devices.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (e.g., a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or, the means for performing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ……" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in this application are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit it. Those skilled in the art may make various modifications and changes to the present application. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (10)

1. A cloud-edge AI model management method, comprising:
in response to a configuration request of an edge device, configuring the edge device based on the content indicated by the configuration request;
upon receiving a model service deployment request, configuring the configuration information required for service deployment and creating a service deployment set in Kubernetes; and
monitoring state changes of the service deployment set and calling back the state changes.
2. The cloud-edge AI model management method of claim 1, wherein configuring the edge device based on the content indicated by the configuration request in response to the configuration request of the edge device comprises:
in response to a creation request for an edge device, completing creation of the edge device through creation fields;
in response to a request to add the edge device to a cluster, generating a join script, wherein the edge device is added to the cluster after the script is invoked and executed, completing activation of the created edge device; and
deploying a daemon on the edge device.
3. The cloud-edge AI model management method of claim 2, wherein configuring the edge device based on the content indicated by the configuration request in response to the configuration request of the edge device further comprises:
in response to a deletion request for an edge device, exiting the daemon and deleting the edge device as a cluster node.
4. The cloud-edge AI model management method of claim 1, wherein creating an AI model service deployment set in Kubernetes comprises:
creating a Deployment in Kubernetes, wherein a download container is used to download the model and an algorithm container provides algorithm services for the model.
5. The cloud-edge AI model management method of claim 4, wherein listening for a state change of the service deployment set and callback the state change comprises:
monitoring state changes of the Deployment and its containers, and calling back the state changes via a request message.
6. The cloud-edge AI model management method of claim 5, further comprising:
in response to receiving a model processing request, invoking the algorithm service based on the information indicated by the request.
7. The cloud-edge AI model management method of claim 1, further comprising:
adjusting the maximum and minimum replica counts of a service in response to a request to modify the state of the service deployed in the service deployment set.
8. The cloud-edge AI model management method of claim 1, further comprising:
deleting the service deployment set upon receiving a deletion request for the service deployment set.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-7 when executing the program.
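The Deployment described in claims 4-5 — a download container that fetches the model before an algorithm container serves it — can be sketched as a plain Kubernetes manifest. The following is a minimal illustration only: the image names, file paths, and labels are assumptions for the sketch, not details taken from this publication, and a real system would submit the manifest to the cluster (e.g. via the Kubernetes API) rather than just build it.

```python
# Hypothetical sketch of the service deployment set from claims 4-5.
# All images, paths, and labels below are illustrative assumptions.

def build_model_deployment(name, model_url, replicas=1):
    """Build a Kubernetes Deployment manifest (as a plain dict) whose Pod
    template pairs a model-download init container with an algorithm
    container, sharing the downloaded model via an emptyDir volume."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Download container: fetches the model before the
                    # algorithm container starts (claim 4).
                    "initContainers": [{
                        "name": "model-downloader",
                        "image": "curlimages/curl:8.5.0",
                        "args": ["-L", "-o", "/models/model.bin", model_url],
                        "volumeMounts": [{"name": "model-store",
                                          "mountPath": "/models"}],
                    }],
                    # Algorithm container: serves inference on the model.
                    "containers": [{
                        "name": "algorithm",
                        "image": "example/algorithm-server:latest",
                        "volumeMounts": [{"name": "model-store",
                                          "mountPath": "/models"}],
                    }],
                    "volumes": [{"name": "model-store", "emptyDir": {}}],
                },
            },
        },
    }

d = build_model_deployment("demo-model", "https://example.com/model.bin")
print(d["spec"]["template"]["spec"]["initContainers"][0]["name"])  # model-downloader
```

Monitoring the state changes of such a Deployment (claim 5) would then amount to watching the created object for status updates and forwarding each change to a callback, and adjusting replica counts (claim 7) to patching `spec.replicas`.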
CN202311457294.6A 2023-11-02 2023-11-02 Cloud edge terminal AI model management method, storage medium and electronic equipment Pending CN117742813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311457294.6A CN117742813A (en) 2023-11-02 2023-11-02 Cloud edge terminal AI model management method, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311457294.6A CN117742813A (en) 2023-11-02 2023-11-02 Cloud edge terminal AI model management method, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117742813A true CN117742813A (en) 2024-03-22

Family

ID=90278298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311457294.6A Pending CN117742813A (en) 2023-11-02 2023-11-02 Cloud edge terminal AI model management method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117742813A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200322225A1 (en) * 2019-04-05 2020-10-08 Mimik Technology Inc. Method and system for distributed edge cloud computing
US10841152B1 (en) * 2017-12-18 2020-11-17 Pivotal Software, Inc. On-demand cluster creation and management
US20210042160A1 (en) * 2019-04-05 2021-02-11 Mimik Technology Inc. Method and system for distributed edge cloud computing
CN112799789A (en) * 2021-03-22 2021-05-14 腾讯科技(深圳)有限公司 Node cluster management method, device, equipment and storage medium
CN113742031A (en) * 2021-08-27 2021-12-03 北京百度网讯科技有限公司 Node state information acquisition method and device, electronic equipment and readable storage medium
US11343315B1 (en) * 2020-11-23 2022-05-24 International Business Machines Corporation Spatio-temporal social network based mobile kube-edge auto-configuration
EP4071728A1 (en) * 2021-04-08 2022-10-12 Accenture Global Solutions Limited Artificial intelligence model integration and deployment for providing a service
CN116627654A (en) * 2023-05-31 2023-08-22 联想(北京)有限公司 Control method and electronic equipment


Similar Documents

Publication Publication Date Title
CN106970873B (en) On-line mock testing method, device and system
CN108418851B (en) Policy issuing system, method, device and equipment
CN110401700B (en) Model loading method and system, control node and execution node
CN109684036B (en) Container cluster management method, storage medium, electronic device and system
CN108628688B (en) Message processing method, device and equipment
CN110032409B (en) Client screen adapting method and device and electronic equipment
CN108549562A (en) A kind of method and device of image load
CN110851285B (en) Resource multiplexing method, device and equipment based on GPU virtualization
CN113704117B (en) Algorithm testing system, method and device
CN117075930B (en) Computing framework management system
CN111273965B (en) Container application starting method, system and device and electronic equipment
CN116452920A (en) Image processing method and device, storage medium and electronic equipment
CN111796864A (en) Data verification method and device
CN111338655A (en) Installation package distribution method and system
CN108804088B (en) Protocol processing method and device
CN116382713A (en) Method, system, device and storage medium for constructing application mirror image
CN111984247B (en) Service processing method and device and electronic equipment
CN110874322A (en) Test method and test server for application program
CN114625410A (en) Request message processing method, device and equipment
CN111880922A (en) Processing method, device and equipment for concurrent tasks
CN117519912B (en) Mirror image warehouse deployment method, device, storage medium and equipment
CN117873535B (en) Service route updating method and device, storage medium and electronic equipment
CN117348999B (en) Service execution system and service execution method
CN117407124B (en) Service execution method based on constructed data arrangement strategy generation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination