CN111488254A - Deployment and monitoring device and method of machine learning model - Google Patents


Info

Publication number
CN111488254A
CN111488254A
Authority
CN
China
Prior art keywords
machine learning
learning model
web
application
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910072556.4A
Other languages
Chinese (zh)
Inventor
陈东沂
姚小龙
钟萍
郭林东
周江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
SF Tech Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN201910072556.4A
Publication of CN111488254A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F 11/3017 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is implementing multitasking
    • G06F 11/3065 Monitoring arrangements determined by the means or processing involved in reporting the monitored data

Abstract

The invention relates to a deployment and monitoring device for a machine learning model, comprising: one or more container application modules, each deploying a WEB application service interface, machine learning model code, and a Python runtime environment. Each container application module is configured to deploy the runtime environment of a Python application into a container application, to provide a web interface service to the user terminal through a Flask-based web application framework, and to provide web access services to the user terminal in the form of Restful APIs. A database module is configured to receive and store the log output produced while the machine learning model runs. Because the container application module is preconfigured with the runtime environment of the Python application, system efficiency and resource utilization are improved.

Description

Deployment and monitoring device and method of machine learning model
Technical Field
The invention relates to the technical field of service deployment of machine learning models, in particular to a device and a method for deploying and monitoring a machine learning model.
Background
With the development of leading-edge fields such as big data and artificial intelligence, and with the accumulation of business data, how to rapidly use analysis and mining technologies such as machine learning to uncover the value of data assets, so that data-driven decisions can guide business operations, has become an urgent problem for the industry to solve. Large enterprises are actively exploring practical applications of machine learning and data mining to improve business operations.
At present the industry does not lack technical tools for data analysis, mining, and machine learning: mature third-party libraries (such as scikit-learn, TensorFlow, and PyTorch) are available, and the methodology is relatively well established. These tools focus on feature engineering and algorithm modules, so steps such as data cleaning and feature representation can be implemented quickly with them. However, these are only steps of experimental pre-research or idea verification; they do not form an end-to-end application deployment scheme from the user side to the server side.
Meanwhile, during model operation the whole model behaves like a black box: the running state and performance of the model service are difficult to monitor, which hinders reliability evaluation and exposes the limited monitoring capability of the model service. Traditional deployment of a machine learning model application goes through three stages: installing the toolkits and supporting environment on a physical or virtual machine to form a development environment; developing and testing the machine learning model code in that environment; and deploying the model code to the online system once it passes testing.
Therefore, how to deploy a machine learning model in an engineered way, and to provide operational monitoring of the model service, so that enterprise information systems can use the model in business decision processes, is the core of an end-to-end model deployment scheme.
Disclosure of Invention
In view of the current state and shortcomings of deploying machine learning model applications, a device and a method for deploying and monitoring a machine learning model are provided.
According to an aspect of the present invention, there is provided a deployment and monitoring apparatus for a machine learning model, comprising:
one or more container application modules, each deploying a WEB application service interface, machine learning model code, and a Python runtime environment; each container application module is configured to deploy the runtime environment of a Python application into a container application, to provide a web interface service to the user terminal through a Flask-based web application framework, and to provide web access services to the user terminal in the form of Restful APIs;
a database module configured to receive and store the log output produced while the machine learning model runs;
wherein the container application module is preconfigured with a running environment of a Python application.
The device further comprises a reverse proxy server configured to distribute users' web requests to the web interface services provided by the different container application modules according to a preset policy, so as to load-balance the user terminals' access requests to the machine learning model service.
Further, the database module is also configured to send the log output to a third-party monitoring terminal, which analyzes and monitors the running state of the machine learning model.
Further, the runtime environment of the Python application is built on Anaconda and is used to introduce and manage the software packages required by the machine learning model and their dependencies.
Further, the container application module is also configured to encapsulate the machine learning model code into Restful-style interface functions over the HTTP protocol through the Flask-based web application framework, and to expose the web interface service for external access via URLs.
According to another aspect of the present invention, a method for deploying and monitoring a machine learning model is provided, which includes:
deploying the runtime environment of a Python application into a container application, providing a web interface service to the user terminal through a Flask-based web application framework, providing web access services to the user terminal in the form of Restful APIs, and sending the log output produced while the machine learning model runs to a database module for storage;
wherein the container application module is preconfigured with a running environment of a Python application.
Further, the method also comprises: a user access module distributes users' web requests to the web interface services provided by different container application modules according to a preset policy, so as to load-balance the user terminals' access requests to the machine learning model service.
Further, the method also comprises: sending the log output to a third-party monitoring terminal through the database module, which analyzes and monitors the running state of the machine learning model.
Further, the runtime environment of the Python application is built on Anaconda and is used to introduce and manage the software packages required by the machine learning model and their dependencies.
Further, the method also comprises: the Flask-based web service encapsulates the machine learning model code into Restful-style interface functions over the HTTP protocol, which are exposed for external access to the web interface service via URLs.
According to another aspect of the invention, there is provided an electronic device comprising: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform any of the methods above.
According to another aspect of the invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as defined in any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
1. In the deployment and monitoring device of the machine learning model, a container application is an independent virtualized operating-system environment. The Python application is deployed into a container application, and a Flask-based web application framework provides a web access service interface for the machine learning model to the user terminal. One or more container applications can be started from the container image and directly run the Python project, which improves system efficiency and resource utilization and exposes the machine learning model service for external callers. Meanwhile, the log output of the model service is collected and statistically analyzed, enabling real-time monitoring of the model service.
2. In the deployment and monitoring method of the machine learning model, the Python runtime environment is managed by container technology, and application deployment of the machine learning model is achieved rapidly in combination with a WEB application framework that provides a WEB access service interface to the model. This reduces the development and deployment cost of the machine learning model service and supplies the software packages and dependencies the model service requires. In addition, the logs of the model service are output to a database and collected and analyzed in real time, so that the running state of the model service is monitored and abnormal conditions can trigger early warnings.
Drawings
FIG. 1 is a block diagram of the present invention.
FIG. 2 is a block diagram of the present invention.
FIG. 3 is a schematic diagram of a computer system according to the present invention.
Detailed Description
In order to better understand the technical scheme of the invention, the invention is further explained below with reference to specific embodiments and the accompanying drawings.
Example 1:
The deployment and monitoring device of the machine learning model comprises one or more container application modules. A WEB application service interface, machine learning model code, and a Python runtime environment are deployed in each container application module. The machine learning model code implements the logic behind the machine learning model service; the Python runtime environment supports that code so the model service can run normally; and the WEB application service interface is the entry point for accessing the model service. The container application module is configured to deploy the runtime environment of the Python application into a container application, to encapsulate the machine learning model code into Restful-style interface functions over the HTTP protocol through a Flask-based WEB application framework, and to expose the web interface service for external access via URLs. As an alternative, the container application module can use Docker container technology. Docker images are not tied to a particular operating system (Windows, Linux, macOS, and so on) and behave essentially the same on each; without containers, separate environment-management software would be needed to build and release a corresponding runtime environment for each system.
In this embodiment, the web application framework encapsulates the machine learning model code as a Restful-style interface service and provides a web access service interface to the user terminal in the form of Restful APIs. An HTTP service is thereby exposed externally, so external service systems can consume the model service more quickly and simply.
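As an illustrative sketch (not taken from the patent text), such a Flask-based Restful wrapper around a model could look as follows; the `/predict` route and the toy `predict_model` function are assumptions made for the example:

```python
# Minimal sketch of a Flask-based Restful wrapper around a machine learning
# model. The /predict route and the toy predict_model stand-in are
# illustrative assumptions, not part of the patent.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_model(features):
    # Placeholder for the real machine learning model code: a simple mean
    # of the inputs stands in for the trained model's prediction.
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    prediction = predict_model(payload["features"])
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # In the patented scheme this process runs inside a container
    # application, behind the reverse proxy server.
    app.run(host="0.0.0.0", port=5000)
```

A user terminal then only needs the service URL and parameters, e.g. a `POST /predict` request with a JSON body such as `{"features": [...]}`.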
The database module is configured to receive and store the log output produced while the machine learning model runs. As an alternative, according to service requirements, the database module can send the log output to a third-party monitoring terminal that analyzes and monitors the running state of the machine learning model; real-time monitoring of the model service is achieved by collecting and statistically analyzing the logs of the model service. The log output stored by the database module comprises the logs and results of the machine learning model and the logs of the Flask web application, specifically:
(1) Logs and results of the machine learning model: log information produced while the model runs is recorded in the database so as to reflect the model's running state in real time; after the model has finished running, the model results (such as the model's parameters and evaluation metrics) are recorded in the database to guide subsequent production use.
(2) Logs of the Flask web application: the running state of the whole web application is recorded while users access the model service.
(3) The third-party monitoring terminal queries the model's running state in real time to report its progress; it also monitors the metrics and results the model outputs, so that model developers can adjust the model in time.
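A minimal sketch of this database-backed log output, using the standard library's `sqlite3` as a stand-in for the patent's unspecified database module (the `model_log` table, its columns, and the `source` labels are assumptions for illustration):

```python
# Sketch of the log output storage described above. sqlite3 and the
# model_log schema are illustrative stand-ins for the database module.
import sqlite3
import time

def init_log_db(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS model_log (ts REAL, source TEXT, message TEXT)"
    )

def write_log(conn, source, message):
    # source distinguishes model logs/results from Flask web application logs.
    conn.execute(
        "INSERT INTO model_log VALUES (?, ?, ?)", (time.time(), source, message)
    )
    conn.commit()

def read_logs(conn, source):
    # A third-party monitoring terminal would poll queries like this one
    # to follow the model's running state in real time.
    cur = conn.execute(
        "SELECT message FROM model_log WHERE source = ? ORDER BY ts", (source,)
    )
    return [row[0] for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
init_log_db(conn)
write_log(conn, "model", "epoch 3 finished, auc=0.91")
write_log(conn, "flask", "POST /predict 200")
```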
The reverse proxy server may be Nginx, configured to distribute users' web requests to the web interface services provided by the different container application modules according to a preset policy, so as to load-balance the user terminals' access requests to the machine learning model service. The container application module is preconfigured with the runtime environment of the Python application, which introduces and manages the software packages required by the machine learning model (such as packages not included in the standard Python installation) and their dependencies.
Nginx is a mainstream open-source reverse proxy server. It can forward users' URL requests, for example in round-robin fashion, distributing web accesses across different Docker containers to load-balance the machine learning model service.
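A hypothetical `nginx.conf` fragment for this distribution might look as follows; the upstream name and container ports are assumptions, and Nginx's default upstream policy is round robin:

```nginx
# Illustrative sketch only: load-balance two Docker containers that expose
# the Flask model service. Ports and names are assumed for the example.
http {
    upstream model_service {
        server 127.0.0.1:5001;  # container application 1
        server 127.0.0.1:5002;  # container application 2
    }
    server {
        listen 80;
        location / {
            proxy_pass http://model_service;  # round robin by default
        }
    }
}
```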
The Flask web application layer encapsulates the machine learning model code into Restful-style interface functions over the HTTP protocol through the Flask-based web application framework and exposes them for external access via URLs; that is, the model is packaged as a lightweight web Restful-style interface service, with different URLs accessing different machine learning models. The machine learning model comprises the core algorithm functions required for machine learning, together with input/output and prediction-processing functions. Through the web application framework, environment creation and management of the machine learning model are realized, and a user can access the services the model provides simply through a URL and the corresponding parameters; for example, the user submits some parameters and obtains the prediction for those inputs by invoking the URL of the prediction model.
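For illustration (not from the patent), a client-side helper that builds and sends such a prediction request using only the standard library could look like this; the endpoint and JSON field names are assumptions:

```python
# Sketch of a user terminal invoking the model's prediction URL with
# parameters. The endpoint and payload shape are illustrative assumptions.
import json
from urllib import request as urlrequest

def build_predict_request(url, features):
    return urlrequest.Request(
        url,
        data=json.dumps({"features": features}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call_predict(url, features):
    # Sends the request and extracts the prediction from the JSON reply.
    with urlrequest.urlopen(build_predict_request(url, features)) as resp:
        return json.loads(resp.read())["prediction"]
```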
The method corresponding to the deployment and monitoring device of the machine learning model comprises the following steps:
S1: deploy the runtime environment of the Python application into a container application, provide a web interface service to the user terminal through a Flask-based web application framework, and provide web access services to the user terminal in the form of Restful APIs. Specifically, the Flask-based web service encapsulates the machine learning model code into Restful-style interface functions over the HTTP protocol, exposing the web interface service for external access via URLs. The runtime environment of the Python application is built on Anaconda and is used to introduce and manage the software packages and dependencies required by the machine learning model; the container application module is preconfigured with this runtime environment.
S2: a user access module distributes users' web requests to the web interface services provided by the different container application modules according to a preset policy, so as to load-balance the user terminals' access requests to the machine learning model service.
S3: send the log output produced while the machine learning model runs to a database module for storage; the database module forwards the logs to a third-party monitoring terminal that analyzes and monitors the running state of the machine learning model. The running state includes the model's log output rate: when the monitoring module detects that the log output rate has fallen below a preset minimum threshold, a monitoring early warning is triggered, and one or more additional container application modules can be deployed and registered with the reverse proxy server.
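The threshold rule in S3 can be sketched as follows; the function and parameter names are assumptions, since the patent does not fix the window or threshold values:

```python
# Sketch of the S3 monitoring rule: warn when the model's log output rate
# over a recent window falls below a preset minimum. Names are illustrative.
def log_rate_warning(log_timestamps, window_s, min_rate):
    """Return True when logs per second within the last window_s seconds,
    measured back from the newest log, fall below min_rate."""
    if not log_timestamps:
        return True  # no logs at all: below any positive threshold
    newest = max(log_timestamps)
    recent = [t for t in log_timestamps if newest - t <= window_s]
    return len(recent) / window_s < min_rate
```

On a warning, an operator (or an automation hook) could deploy an extra container application module and register it with the reverse proxy server, as the text describes.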
Specifically, Docker container technology may be chosen, with a user access module (e.g., the reverse proxy server Nginx) distributing web accesses across the different Docker container applications according to a policy preconfigured in the proxy server (e.g., round robin), thereby load-balancing web request access. Docker builds the machine environment and software tools into a self-contained container image, so that a container application started from that image is an independent virtual operating system. In addition, the runtime environment of the Python application is built on, but not limited to, Anaconda; Anaconda can be installed into the runtime environment when the Docker image is built, and it helps install the dependency packages of the tool libraries so that they need not be installed one by one. The cooperation of Docker and Anaconda thus yields a unified environment image; one or more container applications can be started from this image and directly run the Python project, reducing the influence of the machine environment and third-party package dependencies.
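An illustrative Dockerfile sketch of such a Docker-plus-Anaconda environment image; the base image tag, file names, port, and entry point are assumptions, not taken from the patent:

```dockerfile
# Sketch: bundle the Anaconda-managed Python environment, dependencies, and
# model code into one image. Names and paths are illustrative assumptions.
FROM continuumio/miniconda3
WORKDIR /app
# environment.yml lists the packages the machine learning model depends on.
COPY environment.yml .
RUN conda env update -n base -f environment.yml
# Copy the Flask service and machine learning model code.
COPY model_service/ ./model_service/
EXPOSE 5000
CMD ["python", "model_service/app.py"]
```

`docker run` can then start one or more container applications from the same image, each registered with the reverse proxy server.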
In this embodiment, container applications are started directly from an image by Docker; the development, test, and production machine environments need only support Docker applications, and the image of the self-contained system environment holding the Python project can be used directly across different machine environments, which greatly reduces operation and maintenance deployment costs.
The equipment disclosed by the invention can improve the efficiency of the system and improve the resource utilization rate by executing the deployment and monitoring method of the machine learning model through the processor.
The readable storage medium disclosed by the invention stores a program which, when executed by a processor, implements the deployment and monitoring method of the machine learning model, which facilitates the use and popularization of the deployment and monitoring device. A further introduction follows:
the computer system includes a Central Processing Unit (CPU)101, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)102 or a program loaded from a storage section into a Random Access Memory (RAM) 103. In the RAM103, various programs and data necessary for system operation are also stored. The CPU 101, ROM 102, and RAM103 are connected to each other via a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
Connected to the I/O interface 105 are: an input section 106 including a keyboard, a mouse, and the like; an output section including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 108 including a hard disk and the like; and a communication section 109 including a network interface card such as a LAN card or a modem. The communication section 109 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 105 as necessary; a removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read from it can be installed into the storage section 108 as needed.
In particular, the process described above with reference to the flowchart of Fig. 3 may be implemented as a computer software program according to an embodiment of the present invention. For example, an embodiment of the invention comprises a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from the removable medium. When the computer program is executed by the central processing unit (CPU) 101, the functions defined above in the system of the present application are executed.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be disposed in a processor, where the names of the units do not in some cases constitute a limitation on the units themselves. For example, a processor may be described as comprising a container application module and a database module, and the container application module may also be described as "a container application module for deploying the runtime environment of a Python application into a container application, providing web interface services to a user terminal through a Flask-based web application framework, and providing web access services to the user terminal in the form of Restful APIs".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the deployment and monitoring method of the machine learning model as described in the above embodiments.
For example, the electronic device may implement the following as shown in Fig. 1. Step S1: deploy the runtime environment of the Python application into a container application, provide a web interface service to the user terminal through a Flask-based web application framework, and provide web access services to the user terminal in the form of Restful APIs. Step S3: output the logs produced while the machine learning model runs to a database module for storage.
It should be noted that although several modules or units of the device for action execution are mentioned in the detailed description above, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. A person skilled in the art will appreciate that the scope of the invention referred to in the present application is not limited to embodiments with the specific combination of features above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, embodiments in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in this application.

Claims (10)

1. A device for deploying and monitoring a machine learning model, comprising:
one or more container application modules, in which a web application service interface, machine learning model code and a Python running environment are deployed, the container application modules being configured to deploy the running environment of a Python application into a container application, provide a web interface service to a user terminal through a Flask-based web application framework, and provide a web access service to the user terminal in a Restful API manner;
a database module configured to receive and store logs output during the running of the machine learning model;
wherein the container application module is preconfigured with a running environment of a Python application.
2. The device for deploying and monitoring the machine learning model according to claim 1, further comprising a reverse proxy server configured to distribute users' web requests to the web interface services provided by the different container application modules according to a preset policy, so as to load-balance user request access to the machine learning model service.
3. The deployment and monitoring device of the machine learning model according to claim 2, wherein the database module is further configured to send the log output to a third-party monitoring terminal for analyzing and monitoring the operating state of the machine learning model.
4. The deployment and monitoring device of the machine learning model according to claim 1, wherein the running environment of the Python application is built on Anaconda and is used for importing and managing the software packages required by the machine learning model and their dependencies.
5. The deployment and monitoring device of the machine learning model according to claim 1, wherein the container application module is further configured to encapsulate the machine learning model code into Restful-style interface functions over the HTTP protocol through a Flask-based web application framework, and to expose the web interface service externally through the Restful interface functions by way of a URL.
6. A deployment and monitoring method of a machine learning model is characterized by comprising the following steps:
deploying the running environment of a Python application into a container application, providing a web interface service to a user terminal through a Flask-based web application framework, providing a web access service to the user terminal in a Restful API manner, and sending logs output during the running of the machine learning model to a database module for storage;
wherein the container application module is preconfigured with a running environment of a Python application.
7. The method for deploying and monitoring a machine learning model of claim 6, further comprising: distributing, by a user access module, users' web requests to the web interface services provided by different container application modules according to a preset policy, so as to load-balance user request access to the machine learning model service.
8. The method for deploying and monitoring a machine learning model of claim 7, further comprising: sending the log output to a third-party monitoring terminal through the database module, for analyzing and monitoring the running state of the machine learning model.
9. The deployment and monitoring method of the machine learning model according to claim 6, wherein the running environment of the Python application is built on Anaconda and is used for importing and managing the software packages required by the machine learning model and their dependencies.
10. The deployment and monitoring method of the machine learning model according to claim 6, further comprising encapsulating the machine learning model code into Restful-style interface functions over the HTTP protocol through a web service built on Flask, and exposing the encapsulated Restful-style interface functions externally to access the web interface service by way of a URL.
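The load-balancing behavior recited in claims 2 and 7 (a reverse proxy dispatching web requests across the web interface services of several container application modules) can be sketched, under a simple round-robin policy, as follows. This is purely illustrative: the backend addresses are hypothetical, and a production deployment would use a real reverse proxy such as Nginx rather than application code.

```python
# Illustrative round-robin dispatch, sketching how a reverse proxy might
# spread user web requests across the web interface services of several
# container application modules. Backend addresses are hypothetical.
from itertools import cycle


class RoundRobinBalancer:
    """Hands each incoming request to the next backend in turn."""

    def __init__(self, backends):
        self._backends = cycle(backends)

    def route(self, request_path):
        backend = next(self._backends)
        # A real reverse proxy would forward the HTTP request here;
        # this sketch just returns the chosen target URL.
        return f"http://{backend}{request_path}"


balancer = RoundRobinBalancer(["container-a:5000", "container-b:5000"])
```

Round-robin is only one example of the "preset policy" in the claims; weighted or least-connections policies fit the same dispatch interface.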
CN201910072556.4A 2019-01-25 2019-01-25 Deployment and monitoring device and method of machine learning model Pending CN111488254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910072556.4A CN111488254A (en) 2019-01-25 2019-01-25 Deployment and monitoring device and method of machine learning model

Publications (1)

Publication Number Publication Date
CN111488254A true CN111488254A (en) 2020-08-04

Family

ID=71812089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910072556.4A Pending CN111488254A (en) 2019-01-25 2019-01-25 Deployment and monitoring device and method of machine learning model

Country Status (1)

Country Link
CN (1) CN111488254A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104793946A (en) * 2015-04-27 2015-07-22 广州杰赛科技股份有限公司 Application deployment method and system based on cloud computing platform
CN106790463A (en) * 2016-12-08 2017-05-31 广州杰赛科技股份有限公司 The access method and system of Web configuration file heavy loads
CN106874357A (en) * 2016-12-28 2017-06-20 新华三技术有限公司 A kind of Resources Customization method and apparatus of Web applications

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112230911A (en) * 2020-09-27 2021-01-15 北京通付盾人工智能技术有限公司 Model deployment method, device, computer equipment and storage medium
CN112230911B (en) * 2020-09-27 2023-12-29 北京通付盾人工智能技术有限公司 Model deployment method, device, computer equipment and storage medium
CN112633501A (en) * 2020-12-25 2021-04-09 深圳晶泰科技有限公司 Development method and system of machine learning model framework based on containerization technology
WO2022134001A1 (en) * 2020-12-25 2022-06-30 深圳晶泰科技有限公司 Machine learning model framework development method and system based on containerization technology
US20220237503A1 (en) * 2021-01-26 2022-07-28 International Business Machines Corporation Machine learning model deployment within a database management system
WO2022166715A1 (en) * 2021-02-07 2022-08-11 中兴通讯股份有限公司 Intelligent pipeline processing method and apparatus, and storage medium and electronic apparatus
CN112817581A (en) * 2021-02-20 2021-05-18 中国电子科技集团公司第二十八研究所 Lightweight intelligent service construction and operation support method
CN113032355A (en) * 2021-04-06 2021-06-25 上海英方软件股份有限公司 Method and device for collecting logs in batches by Web application
CN112882481A (en) * 2021-04-28 2021-06-01 北京邮电大学 Mobile multi-mode interactive navigation robot system based on SLAM
CN114172908A (en) * 2022-02-10 2022-03-11 浙江大学 End cloud cooperative processing method and equipment
CN115617421A (en) * 2022-12-05 2023-01-17 深圳市欧瑞博科技股份有限公司 Intelligent process scheduling method and device, readable storage medium and embedded equipment
CN115617421B (en) * 2022-12-05 2023-04-14 深圳市欧瑞博科技股份有限公司 Intelligent process scheduling method and device, readable storage medium and embedded equipment
CN116880928A (en) * 2023-09-06 2023-10-13 菲特(天津)检测技术有限公司 Model deployment method, device, equipment and storage medium
CN116880928B (en) * 2023-09-06 2023-11-21 菲特(天津)检测技术有限公司 Model deployment method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111488254A (en) Deployment and monitoring device and method of machine learning model
CN111427701A (en) Workflow engine system and business processing method
CN112631590B (en) Component library generation method, device, electronic equipment and computer readable medium
US20190188010A1 (en) Remote Component Loader
CN110059064B (en) Log file processing method and device and computer readable storage medium
CN114237853A (en) Task execution method, device, equipment, medium and program product applied to heterogeneous system
CN111666079A (en) Method, device, system, equipment and computer readable medium for software upgrading
CN113127050A (en) Application resource packaging process monitoring method, device, equipment and medium
CN111382058B (en) Service testing method and device, server and storage medium
CN114449523B (en) Flow filtering method, device, equipment and medium for satellite measurement and control system
CN111488268A (en) Dispatching method and dispatching device for automatic test
CN115509744A (en) Container distribution method, system, device, equipment and storage medium
CN115291928A (en) Task automatic integration method and device of multiple technology stacks and electronic equipment
CN114816430A (en) Business code development method, system and computer readable storage medium
CN113569256A (en) Vulnerability scanning method and device, vulnerability scanning system, electronic equipment and computer readable medium
CN113138935A (en) Program testing method and device, electronic equipment and storage medium
CN113010174A (en) Service monitoring method and device
CN111679885A (en) Method, device, medium and electronic equipment for determining virtual machine drift
CN111949472A (en) Method and device for recording application logs
CN111831530A (en) Test method and device
US9436523B1 (en) Holistic non-invasive evaluation of an asynchronous distributed software process
CN113360368B (en) Method and device for testing software performance
CN113342633B (en) Performance test method and device
CN113535500A (en) Method and device for monitoring service
US20230385045A1 (en) Method, device, and computer program product for upgrading virtual system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination