CN110007946B - Method, device, equipment and medium for updating algorithm model - Google Patents

Method, device, equipment and medium for updating algorithm model

Info

Publication number
CN110007946B
Authority
CN
China
Prior art keywords
model file
algorithm
plug
updating
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910301402.8A
Other languages
Chinese (zh)
Other versions
CN110007946A (en)
Inventor
唐铃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Tianpeng Network Co ltd
Original Assignee
Chongqing Tianpeng Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Tianpeng Network Co ltd
Priority to CN201910301402.8A
Publication of CN110007946A
Application granted
Publication of CN110007946B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/65 Updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an updating method, device, equipment and medium of an algorithm model. The method comprises the following steps: receiving a plug-in corresponding to at least one algorithm; according to a preset period, allocating, by a scheduler and based on a task scheduling standard, the plug-ins to working nodes to run; in each period, obtaining and storing a model file corresponding to each algorithm according to the operation result; and updating the old model file in the corresponding client by using the model file. In this method the algorithm plug-ins are scheduled and run on working nodes to obtain the model files, and the old model file in the client is updated according to the new model file, so operation and maintenance personnel no longer need to manually upload the model file to a specified directory of the server. This removes the complexity of bringing the algorithm application online and at the same time enables offline training of the algorithm model.

Description

Method, device, equipment and medium for updating algorithm model
Technical Field
The invention relates to the technical field of computers, in particular to an updating method, device, equipment and medium of an algorithm model.
Background
With the development of company business, more and more algorithms are applied within a company, and different algorithms rely on more and more offline model files. This causes the following problems in the traditional release mode: first, the offline model files are large, so manually bringing them online is troublesome for operation and maintenance and they are difficult to maintain; second, because the model is uploaded manually, every change to the model requires a release process; third, the data of the algorithm model is not updated in time, which affects the correctness of the algorithm results.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an updating method, device, equipment and medium of an algorithm model, which reduce the complexity of bringing an algorithm application online and enable offline training of the algorithm model.
In a first aspect, the present invention provides an updating method of an algorithm model, including:
receiving a plug-in corresponding to at least one algorithm;
according to a preset period, allocating, by a scheduler and based on a task scheduling standard, the plug-ins to working nodes to run;
in each period, obtaining and storing a model file corresponding to each algorithm according to the operation result;
and updating the old model file in the corresponding client by utilizing the model file.
Optionally, the plug-in is written by the client according to a plug-in standard format.
Optionally, the updating the old model file in the corresponding client by using the model file includes:
detecting whether the state of the model file corresponding to the algorithm is an updating state;
if so, replacing the current old model file with the model file;
if not, continuing to detect the state of the model file corresponding to the algorithm.
Optionally, the detecting whether the state of the model file corresponding to the algorithm is an updated state includes:
and the client detects whether the state of the model file corresponding to the algorithm is an updated state or not by using the interface.
Optionally, before the step of allocating, by using a scheduler according to the preset period and based on a task scheduling standard, the plug-in to a working node to run, the method further includes:
and selecting a suitable working node for each plug-in according to the available memory of the current node and the number of currently remaining executable tasks.
Optionally, before the step of updating the old model file in the corresponding client by using the model file, the method further includes:
checking the correctness of each model file;
if the model file is correct, the step of updating the old model file in the corresponding client by using the model file is executed;
and if not, switching the model file to the model file which is verified to be correct in the last period.
In a second aspect, the present invention provides an updating apparatus for an algorithm model, including:
the receiving module is used for receiving plug-ins corresponding to at least one algorithm;
the scheduling module is used for allocating the plug-in to the working node to run by utilizing a scheduler according to a preset period and based on a task scheduling standard;
the file obtaining module is used for obtaining and storing a model file corresponding to each algorithm according to the operation result in each period;
and the updating module is used for updating the old model file in the corresponding client by utilizing the model file.
In a third aspect, the present invention provides an updating apparatus for an algorithm model, including: a processor, an input device, an output device and a memory, which are interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform a method of updating an algorithmic model as provided in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform a method of updating an algorithmic model as provided in the first aspect.
The invention provides an updating method of an algorithm model, which comprises the following steps: receiving a plug-in corresponding to at least one algorithm; according to a preset period, allocating, by a scheduler and based on a task scheduling standard, the plug-ins to working nodes to run; in each period, obtaining and storing a model file corresponding to each algorithm according to the operation result; and updating the old model file in the corresponding client by using the model file. In this method the algorithm plug-ins are scheduled and run on working nodes to obtain the model files, and the old model file in the client is updated according to the new model file, so operation and maintenance personnel no longer need to manually upload the model file to a specified directory of the server. This removes the complexity of bringing the algorithm application online and at the same time enables offline training of the algorithm model.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flowchart of an updating method of an algorithm model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an apparatus for updating an algorithm model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an updating apparatus for an algorithm model according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
The invention provides an updating method, device, equipment and medium of an algorithm model. Embodiments of the present invention will be described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a flowchart of an updating method of an algorithm model according to an embodiment of the present invention, where the updating method of an algorithm model according to the embodiment includes:
step S101: and receiving plug-in corresponding to at least one algorithm.
The execution subject of the invention is a server; the server side may be a management platform, and the platform may comprise a main node, working nodes, and the like. The main node is mainly responsible for task scheduling, data storage, automatic distribution of client requests, and similar tasks, while the working nodes are mainly responsible for executing tasks.
The plug-in is written by the client according to the plug-in standard format; that is, the operation logic corresponding to each algorithm is written in the plug-in standard format. The algorithm model file corresponding to a plug-in is obtained by executing the plug-in. The plug-in is the smallest unit of work on the platform; since each plug-in involves its own training logic and algorithm logic, a plug-in only needs to comply with the standard specification.
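For illustration only, a plug-in that complies with such a standard format might look like the following minimal Python sketch. The interface and all names in it (AlgorithmPlugin, load_data, train, the schedule field) are assumptions made for this example; the patent only requires that a plug-in follow the platform's standard specification.

```python
# Minimal sketch of a plug-in conforming to an assumed standard format.
# All names here are illustrative; the patent does not define a concrete API.
from abc import ABC, abstractmethod


class AlgorithmPlugin(ABC):
    """Smallest unit of work on the platform: one algorithm's training logic."""

    # Cron-style scheduling rule carried by the plug-in (assumed field),
    # e.g. run once a day at 02:00.
    schedule = "0 2 * * *"

    @abstractmethod
    def load_data(self) -> str:
        """Load training data and return the path of the generated data file."""

    @abstractmethod
    def train(self, data_file: str) -> str:
        """Train the model on the data file and return the model file path."""


class DemoRecommenderPlugin(AlgorithmPlugin):
    """Illustrative plug-in for a single algorithm."""

    def load_data(self) -> str:
        data_file = "/tmp/demo_training_data.csv"
        # ... gather the offline training data here ...
        return data_file

    def train(self, data_file: str) -> str:
        model_file = "/tmp/demo_model.bin"
        # ... run this algorithm's training logic and serialize the model ...
        return model_file
```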
The working nodes are used for executing the plug-ins. Since a plurality of working nodes can be arranged in the server, plug-ins corresponding to a plurality of algorithms can be executed simultaneously, and each algorithm can correspond to one plug-in or to a plurality of plug-ins.
Step S102: and according to a preset period, allocating the plug-in to the working node to run by using a scheduler based on a task scheduling standard.
The scheduler schedules the plug-ins and distributes them to the working nodes for execution; the execution result is the model file corresponding to the algorithm. The period may be one day, that is, scheduling is performed daily: the plug-in runs once a day, and each run outputs a model file. Each plug-in carries a scheduling time rule that prescribes the preset period.
Each output model file carries a timestamp, which can be used to order the model files.
The task scheduling standard comprises three aspects (a sketch of how the platform drives them follows this list):
1. Task scheduling time standard format: a general cron timing expression is adopted, and each plug-in customizes its own scheduling period.
2. Data loader: when the platform schedules a task, it calls a data loader that implements the standard interface to load data.
3. Model training: after the algorithm plug-in has implemented the standard training unit interface and the platform has finished loading data, the training unit is called with the data file generated by the data loading unit as input, and the corresponding algorithm model file is finally output.
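Building on the plug-in sketch above, the platform side of one scheduling period might look roughly as follows. The storage layout, the timestamp-based file naming, and the function name are assumptions used only to illustrate the flow of loading data, training, and storing the resulting model file; a cron scheduler is assumed to fire this function according to each plug-in's scheduling rule.

```python
# Hypothetical platform-side flow for one scheduling period of one plug-in.
# Builds on the AlgorithmPlugin sketch above; paths and naming are assumed.
import shutil
import time
from pathlib import Path

MODEL_STORE = Path("/data/model_store")  # illustrative storage location


def run_plugin_once(plugin, algorithm_name):
    """Call the data loader, then the training unit, then store the result."""
    data_file = plugin.load_data()         # data loader (standard interface)
    model_file = plugin.train(data_file)   # model training (standard interface)

    # Store the model file with a timestamp so its versions can be ordered
    # (step S103 below).
    timestamp = int(time.time())
    target = MODEL_STORE / algorithm_name / ("model_%d.bin" % timestamp)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(model_file, target)
    return target
```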
Because the server has a large number of working nodes, when there are multiple plug-ins the working nodes can either be assigned randomly or a suitable working node can be selected for each plug-in. When a task starts, the main node is responsible for selecting the currently most suitable node among all the working nodes to execute the task; the main selection indexes include the available memory of the node, the number of remaining executable tasks, and the like. If a node can execute the task, the task is executed by that node, which is responsible for information monitoring, data collection, and so on throughout the life cycle of the task. After the task finishes, the node submits the corresponding execution result data to the main node, and the main node stores the data.
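A minimal sketch of this selection step is given below, assuming only the two indexes named above (available memory and remaining executable task count). The concrete scoring rule and the required-memory parameter are illustrative choices, not something the patent specifies.

```python
# Hypothetical worker-node selection by the main node.
# Only the selection indexes (free memory, remaining task slots) come from
# the description; the scoring below is an assumption.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class WorkerNode:
    name: str
    free_memory_mb: int
    remaining_task_slots: int


def pick_worker(nodes: List[WorkerNode], required_memory_mb: int) -> Optional[WorkerNode]:
    """Pick the most suitable node that can still accept the task."""
    candidates = [
        n for n in nodes
        if n.remaining_task_slots > 0 and n.free_memory_mb >= required_memory_mb
    ]
    if not candidates:
        return None
    # Prefer nodes with more free memory, then more remaining task slots.
    return max(candidates, key=lambda n: (n.free_memory_mb, n.remaining_task_slots))
```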
Step S103: and in each period, obtaining and storing the model file corresponding to each algorithm according to the operation result.
The operation result is the model file of the algorithm.
Step S104: and updating the old model file in the corresponding client by utilizing the model file.
Updating the old model file in the corresponding client by using the model file includes: detecting whether the state of the model file corresponding to the algorithm is an updating state; if so, replacing the current old model file with the model file; if not, continuing to detect the state of the model file corresponding to the algorithm.
The state of the model file corresponding to the algorithm may be detected by the client against the management platform through a corresponding interface, or the management platform may perform the detection automatically.
The state can be detected through the timestamp carried by the model file: whether the model file has been updated is judged according to the timestamp, and if so, the current old model file is replaced with the latest model file. The specific process is as follows: the client sends an update request to the management platform; when the request reaches any node of the cluster, that node automatically routes the request to the nearest node according to the machine room to which the client's request IP belongs, and the nearest node downloads the model file according to the request and uploads it to the client. This reduces network transmission time and improves the download speed of the latest model file.
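A rough client-side sketch of this check-and-download step is shown below. The HTTP endpoint, the JSON fields, and the helper name are assumptions; the patent only describes the behaviour (compare timestamps, and if a newer model file exists, download it via the node the platform routes the request to).

```python
# Hypothetical client-side update check; endpoint and field names are assumed.
import json
import urllib.request

PLATFORM = "http://management-platform.example.com"  # illustrative address


def check_and_update(algorithm, local_timestamp, local_path):
    """Poll the platform; if a newer model file exists, download and replace it."""
    with urllib.request.urlopen("%s/models/%s/latest" % (PLATFORM, algorithm)) as resp:
        meta = json.load(resp)   # e.g. {"timestamp": 1713168000, "url": "..."}

    if meta["timestamp"] <= local_timestamp:
        return local_timestamp   # not in the updating state; poll again later

    # The platform is expected to route this download to the nearest node.
    urllib.request.urlretrieve(meta["url"], local_path)
    return meta["timestamp"]
```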
The detection of the state of the model file corresponding to the algorithm can be performed at a preset time interval, for example every half hour. In this way the model file is updated in near real time, which improves the timeliness of algorithm data updates.
Before the step of updating the old model file in the corresponding client by using the model file, the method may further include: checking the correctness of each model file; if the model file is correct, executing the step of updating the old model file in the corresponding client by using the model file; and if not, switching to the model file that was verified as correct in the previous period.
In this way, when the latest model has a problem, the system can quickly switch back to a historical version of the model, which also solves the problem of managing model files.
When the system switches to the model file that was verified as correct in the previous period, the client obtains that verified model file at its next update, which guarantees the correctness of the model file being executed.
If a problem occurs while the client is using a model file, the version of the model file can also be switched manually, so that the client downloads the correct model file.
When switching to the model file that was verified as correct in the previous period, the timestamp of that model file needs to be changed to the current timestamp, so that the client judges the model file to be in the updating state and updates to the correct model file.
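A short sketch of this publish-or-rollback step is given below, assuming a simple file-based store. The correctness check is only a placeholder for whatever verification an algorithm actually needs, and re-stamping the published copy follows the timestamp behaviour just described.

```python
# Hypothetical verification-and-rollback step before a model file is published.
# verify_model() is a stand-in for an algorithm-specific correctness check.
import shutil
import time
from pathlib import Path


def verify_model(model_file):
    """Placeholder check, e.g. load the model and score a small hold-out set."""
    return model_file.exists() and model_file.stat().st_size > 0


def publish_or_rollback(new_model, last_good_model, publish_dir):
    """Publish the new model if it verifies, otherwise re-publish the last good one."""
    source = new_model if verify_model(new_model) else last_good_model
    # Re-stamping the published copy with the current timestamp makes clients
    # see it as updated, so a rollback also propagates to them.
    timestamp = int(time.time())
    publish_dir.mkdir(parents=True, exist_ok=True)
    target = publish_dir / ("model_%d.bin" % timestamp)
    shutil.copy(source, target)
    return target
```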
The method removes the complexity of bringing an algorithm application online: operation and maintenance personnel no longer need to manually upload the model file to a specified directory of the server. It also solves the problem of managing model files, since the system can quickly switch back to a historical model version when the latest model has a problem. In addition, algorithm training projects are developed as standard plug-ins and handed to the platform for unified scheduled execution, which enables offline training of the algorithm model and improves the timeliness of algorithm data updates.
The above is an updating method of the algorithm model provided by the invention.
Based on the same inventive concept as the above-mentioned updating method of an algorithm model, an embodiment of the present invention correspondingly further provides an updating apparatus of an algorithm model, as shown in fig. 2. Because the apparatus embodiment is basically similar to the method embodiment, it is described more briefly; for the relevant points, refer to the corresponding parts of the description of the method embodiment.
The invention provides an updating device of an algorithm model, which comprises:
a receiving module 101, configured to receive a plug-in corresponding to at least one algorithm;
the scheduling module 102 is configured to allocate the plug-in to a working node to run by using a scheduler according to a preset period and based on a task scheduling standard;
the file obtaining module 103 is used for obtaining and storing a model file corresponding to each algorithm according to the operation result in each period;
and the updating module 104 is used for updating the old model file in the corresponding client by using the model file.
In a specific embodiment provided by the invention, the plug-in is written by the client according to a plug-in standard format.
In an embodiment of the present invention, the updating module 104 is specifically configured to:
detect whether the state of the model file corresponding to the algorithm is an updating state;
if so, replace the current old model file with the model file;
if not, continue to detect the state of the model file corresponding to the algorithm.
In a specific embodiment provided by the present invention, the detecting whether the state of the model file corresponding to the algorithm is the update state includes:
and the client detects whether the state of the model file corresponding to the algorithm is an updated state or not by using the interface.
In a specific embodiment provided by the present invention, the apparatus further includes, before the scheduling module 102:
a node selection module, configured to select a suitable working node for each plug-in according to the available memory of the current node and the number of currently remaining executable tasks.
In a specific embodiment provided by the present invention, the apparatus further includes, before the updating module 104:
the checking module is used for checking the correctness of each model file;
if the result of the checking module is correct, the content of the updating module 104 is executed;
and if the verification result is incorrect, the content of a switching module is executed, wherein the switching module is used for switching the model file to the model file that was verified as correct in the last period.
The above is an updating apparatus of an algorithm model provided by the present invention.
Further, on the basis of the method and the device for updating the algorithm model provided by the embodiment, the embodiment of the invention also provides equipment for updating the algorithm model. As shown in fig. 3, the apparatus may include: one or more processors 201, one or more input devices 202, one or more output devices 203, and a memory 204, the processors 201, input devices 202, output devices 203, and memory 204 being interconnected by a bus 205. The memory 204 is used for storing a computer program comprising program instructions, the processor 201 being configured for invoking the program instructions for performing the methods of the above-described method embodiment parts.
It should be understood that, in the embodiment of the present invention, the processor 201 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input device 202 may include a keyboard or the like, and the output device 203 may include a display (LCD or the like), a speaker, or the like.
The memory 204 may include both read-only memory and random access memory and provides instructions and data to the processor 201. A portion of memory 204 may also include non-volatile random access memory. For example, memory 204 may also store device type information.
In a specific implementation, the processor 201, the input device 202, and the output device 203 described in the embodiment of the present invention may execute an implementation manner described in the embodiment of the method for updating an algorithm model provided in the embodiment of the present invention, which is not described herein again.
Accordingly, an embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored, the computer program comprising program instructions that, when executed by a processor, implement the updating method of an algorithm model described above.
The computer readable storage medium may be an internal storage unit of the system according to any of the foregoing embodiments, for example, a hard disk or a memory of the system. The computer readable storage medium may also be an external storage device of the system, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the system. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used for storing the computer program and other programs and data required by the system. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method for updating an algorithm model, comprising:
receiving a plug-in corresponding to at least one algorithm;
according to a preset period, allocating, by a scheduler and based on a task scheduling standard, the plug-ins to working nodes to run, wherein the operation result is a model file corresponding to the algorithm;
in each period, obtaining and storing a model file corresponding to each algorithm according to the operation result, wherein the plug-in is run once in each period, and each run outputs a model file that carries a timestamp;
and updating the old model file in the corresponding client by utilizing the model file.
2. The method of claim 1, wherein the plug-in is written by the client according to a plug-in standard format.
3. The method of claim 1, wherein updating the old model file in the corresponding client with the model file comprises:
detecting whether the state of the model file corresponding to the algorithm is an updating state;
if so, replacing the current old model file with the model file;
if not, continuing to detect the state of the model file corresponding to the algorithm.
4. The method according to claim 3, wherein detecting whether the state of the model file corresponding to the algorithm is an updated state comprises:
and the client detects whether the state of the model file corresponding to the algorithm is an updated state or not by using the interface.
5. The method of claim 1, wherein prior to the step of allocating the plug-in to run on the working node by using the scheduler based on task scheduling criteria at the preset period, the method further comprises:
and selecting a suitable working node for each plug-in according to the available memory of the current node and the number of currently remaining executable tasks.
6. The method of claim 1, further comprising, prior to the step of updating the old model file in the corresponding client with the model file:
checking the correctness of each model file;
if the model file is correct, the step of updating the old model file in the corresponding client by using the model file is executed;
and if not, switching the model file to the model file which is verified to be correct in the last period.
7. An apparatus for updating an algorithm model, comprising:
the receiving module is used for receiving plug-ins corresponding to at least one algorithm;
the scheduling module is used for allocating the plug-in to the working node to run by utilizing a scheduler according to a preset period and based on a task scheduling standard; the operation result is a model file corresponding to the algorithm;
the file obtaining module is used for obtaining and storing a model file corresponding to each algorithm according to the operation result in each period; the plug-in is operated once in each period, and a model file is output as a result of each operation and is provided with a timestamp;
and the updating module is used for updating the old model file in the corresponding client by utilizing the model file.
8. An updating device of an algorithmic model, comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method according to any of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-6.
CN201910301402.8A 2019-04-15 2019-04-15 Method, device, equipment and medium for updating algorithm model Active CN110007946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910301402.8A CN110007946B (en) 2019-04-15 2019-04-15 Method, device, equipment and medium for updating algorithm model

Publications (2)

Publication Number Publication Date
CN110007946A CN110007946A (en) 2019-07-12
CN110007946B (en) 2020-06-09

Family

ID=67172004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910301402.8A Active CN110007946B (en) 2019-04-15 2019-04-15 Method, device, equipment and medium for updating algorithm model

Country Status (1)

Country Link
CN (1) CN110007946B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112817737A (en) * 2019-11-15 2021-05-18 北京沃东天骏信息技术有限公司 Method and device for calling model in real time
CN111078659B (en) * 2019-12-20 2023-04-21 腾讯科技(深圳)有限公司 Model updating method, device, computer readable storage medium and computer equipment
CN111522570B (en) * 2020-06-19 2023-09-05 杭州海康威视数字技术股份有限公司 Target library updating method and device, electronic equipment and machine-readable storage medium
CN115016815A (en) * 2022-05-26 2022-09-06 平安银行股份有限公司 Public file processing method, device, equipment and storage medium
CN118245094B (en) * 2024-05-29 2024-07-26 成都赢瑞科技有限公司 Platform optimization method and system based on model simulation design

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7852208B2 (en) * 2004-08-02 2010-12-14 Hill-Rom Services, Inc. Wireless bed connectivity
CN109542542A (en) * 2017-09-21 2019-03-29 北京金山安全软件有限公司 Method, device, server and terminal for updating user interaction interface

Also Published As

Publication number Publication date
CN110007946A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110007946B (en) Method, device, equipment and medium for updating algorithm model
CN107193607B (en) Method and apparatus for updating code file, storage medium, processor, and terminal
CN107844343B (en) Upgrading system and method for complex server application system
US8301935B2 (en) Distributed batch runner
CN108055343A (en) For the method for data synchronization and device of computer room
CN110535776B (en) Gateway current limiting method, device, gateway, system and storage medium
CN112256989A (en) Page loading method and device based on offline package, terminal equipment and storage medium
CN116257438A (en) Updating method of interface test case and related equipment
CN114546588A (en) Task deployment method and device, storage medium and electronic device
CN103365684A (en) Updating method and multi-domain embedded system
CN107203471B (en) Joint debugging method, service platform and computer storage medium
EP2711836A1 (en) Data distribution system
CN115309457A (en) Application instance restarting method and device, electronic equipment and readable storage medium
CN105530140A (en) Cloud scheduling system, method and device for removing tight coupling of use case and environment
CN114879977A (en) Application deployment method, device and storage medium
US9477447B1 (en) Semantic representations of software extensions
CN114253906A (en) Method and device for managing configuration file, configuration distribution system and storage medium
JP6984120B2 (en) Load compare device, load compare program and load compare method
CN113553097B (en) Model version management method and device
CN110275699A (en) Code construction method, Serverless platform and object storage platform
CN117453257B (en) Upgrading method based on hierarchical management, terminal equipment and readable storage medium
CN116954869B (en) Task scheduling system, method and equipment
CN113762821B (en) Cargo information processing method, device, equipment and storage medium
CN116560722B (en) Operation and maintenance flow processing method and device, electronic equipment and storage medium
CN115022317B (en) Cloud platform-based application management method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant