CN112257733B - Model iteration method, second electronic equipment and storage medium - Google Patents

Model iteration method, second electronic equipment and storage medium

Info

Publication number
CN112257733B
CN112257733B (application CN201911025291.9A)
Authority
CN
China
Prior art keywords
model
training
data set
electronic device
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911025291.9A
Other languages
Chinese (zh)
Other versions
CN112257733A (en)
Inventor
罗壮
高少帅
何云龙
赵何
蒋煜襄
刘奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201911025291.9A priority Critical patent/CN112257733B/en
Publication of CN112257733A publication Critical patent/CN112257733A/en
Application granted granted Critical
Publication of CN112257733B publication Critical patent/CN112257733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiments of the present application provide a model iteration method, a second electronic device, and a storage medium. The model iteration method includes: acquiring algorithm registration information for model training and a data set from a first electronic device, the algorithm registration information including preset information required for training a model; training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in the second electronic device; acquiring a storage path of the first model in the second electronic device; and sending the storage path to the first electronic device.

Description

Model iteration method, second electronic equipment and storage medium
Technical Field
The present disclosure relates to, but is not limited to, the field of computer technology, and in particular to a model iteration method, a second electronic device, and a storage medium.
Background
With the development of artificial intelligence technology, intelligent systems are appearing in more and more application fields. The algorithm model is the core of an intelligent system, and during actual operation an intelligent system encounters more and more new data. To maintain accuracy, model iteration must often be performed on the basis of the new data. At the present stage, model iteration is completed mainly by hand. Completing model iteration manually not only requires coordination among personnel from multiple parties, so that model iteration requests cannot be answered in time, but also requires a large amount of repetitive work to be performed by hand.
Summary of the application
The embodiments of the present application provide a model iteration method, a second electronic device, and a storage medium, to solve the problems in the related art that model iteration is completed manually, cannot respond to iteration requests in time because it requires coordination among personnel from multiple parties, and demands a large amount of repetitive manual work. The embodiments automate the model iteration process, greatly reduce labor input, and improve the efficiency and accuracy of model iteration.
The technical solutions of the embodiments of the present application are implemented as follows:
a model iteration method, the method comprising:
acquiring algorithm registration information for model training and a data set from a first electronic device; the algorithm registration information comprises preset information required for training a model;
training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in a second electronic device;
acquiring a storage path of the first model in the second electronic device;
and sending the storage path to the first electronic device.
Optionally, the method further comprises:
performing format conversion processing on the data set through a proxy service interface in the second electronic device to obtain a target data set in a preset format; the preset format is a format of data processed by a Kubernetes cluster in the second electronic device;
correspondingly, the training to obtain the first model meeting the preset standard based on the data set and the algorithm registration information comprises the following steps:
and training to obtain the first model through the proxy service interface based on the target data set and the algorithm registration information.
Optionally, the method further comprises:
storing the target data set to a second electronic device, and acquiring a data set path of the target data set;
correspondingly, the training, through the proxy service interface, based on the target data set and the algorithm registration information, to obtain the first model includes:
generating a configuration file based on the data set path and the algorithm registration information through the proxy service interface;
invoking a Kubernetes application program interface through the proxy service interface, and sending the configuration file to the Kubernetes cluster;
and acquiring the first model obtained by the Kubernetes cluster through training based on the configuration file.
Optionally, the acquiring the first model obtained by the Kubernetes cluster through training based on the configuration file includes:
acquiring a second model obtained by the Kubernetes cluster through training based on the configuration file;
evaluating the second model through the proxy service interface to obtain an evaluation result;
and determining, through the proxy service interface, that the evaluation result indicates that the second model meets a preset standard, and taking the second model as the first model.
Optionally, the evaluating, through the proxy service interface, the second model to obtain an evaluation result includes:
determining a model category based on the algorithm registration information through the proxy service interface;
determining a target algorithm matched with the model category through the proxy service interface;
and evaluating the second model through the proxy service interface based on the target algorithm to obtain the evaluation result.
Optionally, the configuration file includes a Docker image address, the data set path, a pre-training model path, a training code path, a parameter configuration file path, and a model output path.
Optionally, the method further comprises:
obtaining a pre-training model and training codes;
and storing the pre-training model and the training code into an object store, and obtaining a pre-training model path for acquiring the pre-training model and a training code path for acquiring the training code.
Optionally, the storing the first model to the second electronic device includes:
storing the first model in an object store of a second electronic device.
A second electronic device, the second electronic device comprising:
a memory for storing executable instructions;
and the processor is used for executing the executable instructions stored in the memory to realize the model iteration method.
A storage medium having stored thereon executable instructions which, when executed, are adapted to cause a processor to perform the model iteration method described above.
The embodiments of the present application achieve the following beneficial effects: the model iteration process is automated, so that, on the one hand, model iteration requests can be answered immediately, and, on the other hand, labor input can be greatly reduced, improving the efficiency and accuracy of model iteration.
Because the algorithm registration information for model training and the data set are acquired from the first electronic device, where the algorithm registration information includes preset information required for training a model; a first model meeting a preset standard is trained based on the data set and the algorithm registration information and stored in the second electronic device; the storage path of the first model in the second electronic device is acquired; and the storage path is sent to the first electronic device, an automated model iteration flow is realized. After the second electronic device trains a first model matching the data set from the first electronic device, it sends the storage path of the first model to the first electronic device, which can then quickly acquire the first model based on the storage path. This solves the problems in the related art that manually completed model iteration requires coordination among personnel from multiple parties, cannot respond to iteration requests in time, and demands a large amount of repetitive manual work; it automates the model iteration process, greatly reduces labor input, and improves the efficiency and accuracy of model iteration.
Drawings
FIG. 1 is a schematic flow chart of a model iteration method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a model iteration architecture provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of another model iteration method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments are explained; the explanations below apply throughout.
1) Docker, the application container engine, is an open-source engine that lets developers package their applications and dependencies into a portable image, publish it to any popular Linux or Windows machine, and also achieve virtualization.
2) Kubernetes, abbreviated K8s (the "8" stands for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform; it can be understood that K8s may be used to manage Docker containers.
3) Object Storage Service (OSS): OSS has both the high-speed direct disk access of a Storage Area Network (SAN) and the distributed sharing of Network Attached Storage (NAS), well combining the advantages of block storage and file storage.
In the related art, model iteration at the present stage is completed mainly by hand, through the following steps: (1) acquiring data and constructing a data set in a certain format; (2) configuring a model training environment; (3) setting training parameters and training a model; (4) repeating step (3) until the trained model meets the requirements; (5) loading the qualified model into the electronic device to bring it online. With model iteration implemented in this way, personnel from multiple parties must coordinate, model iteration requests cannot be answered in time, and a large amount of repetitive work must be performed by hand.
Based on the foregoing, an embodiment of the present application provides a model iteration method applied to a second electronic device; the method is implemented through the steps in fig. 1. In this embodiment of the present application, the second electronic device may be understood as a model iteration platform. As shown in fig. 2, the second electronic device may include a proxy service (Service Agent), an OSS, a K8s cluster, and a Docker Hub. The Service Agent is responsible for providing a general model training interface, namely the Service Agent interface, which is used for automatic model training; it connects the first electronic device (also called the client's intelligent system) with the underlying training process in series and shields algorithm training details from the caller. It comprises the following Application Program Interfaces (APIs). Training start: a Restful API that starts a complete training process; its interface parameters include the data set storage path and the classification to which the model belongs (such as image classification, text classification, or object detection), and a successful start returns a unique model identifier (ID) used to retrieve the training result. Training result retrieval: after the Service Agent detects that the K8s training is complete, it supports pushing the training result through callbacks and message queues, and also provides a Restful API for real-time query. It can be understood that the second electronic device serves the first electronic device: after the user sends a model iteration request to the second electronic device through the first electronic device, the second electronic device starts the model iteration process provided by the present application and finally feeds back the storage path of the trained first model to the first electronic device, so that the user can acquire the first model through that path. A minimal sketch of calling these two interfaces follows.
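To make the Service Agent's two interfaces concrete, the following is a minimal Python sketch of a caller using them. The host, endpoint paths, and JSON field names are assumptions for illustration only; the patent specifies just that the training-start Restful API takes a data set storage path and the model's classification and returns a unique model ID, and that a Restful API supports real-time result query.

```python
# Sketch of a client of the Service Agent's training interfaces.
# Host, endpoints, and field names are hypothetical.
import requests

SERVICE_AGENT = "http://service-agent.example:8080"  # hypothetical host

def start_training(dataset_path: str, model_category: str) -> str:
    """Start a complete training process; returns the unique model ID."""
    resp = requests.post(
        f"{SERVICE_AGENT}/v1/train",  # hypothetical endpoint
        json={"dataset_path": dataset_path, "model_category": model_category},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["model_id"]

def query_training_result(model_id: str) -> dict:
    """Real-time query of the training result (model path plus evaluation)."""
    resp = requests.get(f"{SERVICE_AGENT}/v1/train/{model_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mid = start_training("oss://model-iteration/datasets/train.jsonl",
                         "image_classification")
    print(query_training_result(mid))
```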
In an embodiment of the present application, referring to fig. 1, the model iteration method provided in the present application includes:
step 101, acquiring algorithm registration information of model training and a data set from a first electronic device.
The algorithm registration information includes preset information required for training a model.
In this embodiment, the algorithm registration information may be understood as the information required for training a model, prepared by an algorithm engineer according to the user's requirements, or according to both the user's requirements and the engineer's own requirements; the required information is the information needed by a complete training algorithm. The algorithm registration information may define a pipeline.
Here, the algorithm engineer may prepare the above required information through the following four steps:
First step: configure a system and software environment capable of model training, and build the environment into a Docker image.
Second step: formulate a format specification for the training data set; the training code processes the data set according to this specification.
Third step: modify the code so that the training parameters are written into a parameter configuration file in a certain format, set initial values in the parameter configuration file, and read the training parameters from the parameter configuration file when training starts.
Fourth step: define the model qualification criteria.
Through these four steps, the finally obtained algorithm registration information includes the model training code, the pre-training model storage path, the Docker image address, the classification to which the model belongs (such as image classification, text classification, or object detection), and the parameter configuration file path. A minimal sketch of the third step appears below.
In the embodiment of the present application, before the model iteration method is implemented, a K8s cluster environment and an object storage environment are built on the second electronic device, and the pre-training model and the training code are placed into the object store.
In this embodiment, before the model iteration method is implemented, a Service Agent may also be deployed on the second electronic device. The Service Agent is responsible for providing the general model training interface for automatic model training; it connects the first electronic device with the underlying training process in series and shields algorithm training details from the caller. The Service Agent includes the following APIs: a Restful API used to start a complete training process, whose interface parameters include the data set storage path and the classification to which the model belongs (such as image classification, text classification, or object detection), and which returns a unique model ID on a successful start for retrieving the training result.
After the Service Agent detects that the K8s training is complete, it supports pushing the training result through callbacks and message queues, and also provides a Restful API for real-time query.
In practical application, an operator of the first electronic device starts the model iteration process with one click. The first electronic device initiates training by calling the Service Agent in the second electronic device and passing in the data set path and the classification to which the model belongs. The second electronic device can then obtain the data set of the first electronic device based on the data set path; the second electronic device can further obtain the algorithm registration information for model training, which may be preset by an algorithm engineer and stored in the second electronic device.
Step 102, training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in the second electronic device.
In the embodiment of the present application, the preset standard is defined by the user according to the user's own requirements. When the second electronic device has acquired the algorithm registration information for model training and the data set from the first electronic device, it trains a first model meeting the preset standard based on the data set and the algorithm registration information, and stores the first model in the second electronic device.
Step 103, obtaining a storage path of the first model in the second electronic device.
In this embodiment of the present application, when the second electronic device has trained a first model that meets the preset standard, it may store the first model and determine the storage path of the first model in the second electronic device; the storage path is used to instruct the first electronic device to acquire the first model from the location corresponding to the storage path.
Step 104, the storage path is sent to the first electronic device.
In this embodiment of the present application, after determining the storage path of the first model in the second electronic device, the second electronic device sends the storage path to the first electronic device, so that the user can quickly acquire the first model from the second electronic device through the storage path. A minimal sketch of this retrieval step follows.
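As an illustration of what the first electronic device can do with the returned path, here is a minimal sketch, assuming the storage path is an HTTP(S) URL served by the object store; a deployment may instead retrieve the object through an OSS SDK keyed by bucket and object name.

```python
# Sketch: stream the trained first model from its storage path to a local file.
import requests

def fetch_model(storage_path: str, local_file: str) -> str:
    """Download the first model through the returned storage path."""
    with requests.get(storage_path, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(local_file, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return local_file
```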
In the model iteration method provided by this embodiment of the present application, the algorithm registration information for model training and the data set are acquired from the first electronic device, where the algorithm registration information includes preset information required for training a model; a first model meeting a preset standard is trained based on the data set and the algorithm registration information and stored in the second electronic device; the storage path of the first model in the second electronic device is acquired; and the storage path is sent to the first electronic device. An automated model iteration flow is thus realized: after the second electronic device trains a first model matching the data set from the first electronic device, it sends the storage path of the first model to the first electronic device, which can quickly acquire the first model based on the storage path. This solves the problems in the related art that manually completed model iteration requires coordination among personnel from multiple parties, cannot respond to iteration requests in time, and demands a large amount of repetitive manual work; it automates the model iteration process, greatly reduces labor input, and improves the efficiency and accuracy of model iteration.
Based on the foregoing embodiment, an embodiment of the present application provides a model iteration method applied to a second electronic device; as shown in fig. 3, the method includes:
step 201, acquiring algorithm registration information of model training and a data set from a first electronic device.
The algorithm registration information includes preset information required for training a model.
Step 202, performing format conversion processing on the data set through a proxy service interface in the second electronic device to obtain a target data set with a preset format.
The preset format is the format of data that the K8s cluster in the second electronic device processes. That is, when the second electronic device acquires the data set from the first electronic device, it performs format conversion processing on the data set through the Service Agent interface in the second electronic device, thereby obtaining a target data set in a preset format that K8s can process.
In this embodiment of the present application, upon obtaining the target data set, the second electronic device may further store the target data set in the second electronic device and obtain the data set path of the target data set. Here, the second electronic device may store the target data set in the OSS; a minimal sketch of this step appears below.
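A minimal sketch of this conversion-and-store step follows, under stated assumptions: the JSON-lines target format, the bucket name, and the upload helper are assumptions; the patent fixes only that the target format is whatever the K8s cluster can process and that the target data set may be stored in the OSS.

```python
# Sketch of step 202: convert the data set to a hypothetical preset format,
# then store it in object storage and record its data set path.
import json

def convert_to_target_format(raw_records: list, out_file: str) -> str:
    """Write one JSON object per line -- a hypothetical preset format."""
    with open(out_file, "w", encoding="utf-8") as f:
        for record in raw_records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return out_file

def store_to_oss(local_file: str) -> str:
    """Upload the target data set and return its data set path.

    Placeholder only: a real deployment would call its OSS SDK here."""
    dataset_path = f"oss://model-iteration/datasets/{local_file}"  # hypothetical layout
    # oss_client.put_object_from_file(...)  # hypothetical SDK call
    return dataset_path

# Example: dataset_path = store_to_oss(convert_to_target_format(records, "train.jsonl"))
```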
In other embodiments of the present application, the second electronic device may further perform the following steps:
first, a pre-training model and training codes are obtained.
And a second step of storing the pre-training model and training codes into an object storage, and obtaining a pre-training model path for obtaining the pre-training model and a training code path for training codes. That is, the algorithm engineer prepares information required to train the model through the second electronic device.
Step 203, training to obtain a first model through a proxy service interface based on the target data set and the algorithm registration information.
In this embodiment, step 203, training the first model through the proxy service interface based on the target data set and the algorithm registration information, may be implemented through the following steps:
step 203a, generating a configuration file based on the data set path and the algorithm registration information through the proxy service interface.
The configuration file includes the application container engine Docker image address, the data set path, the pre-training model path, the training code path, the parameter configuration file path, and the model output path; a sketch of assembling such a file appears below.
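The following sketch shows how the proxy service might assemble such a configuration file. The field names, JSON serialization, and output-path layout are assumptions; the patent specifies only the six items the file contains.

```python
# Sketch of step 203a: build the training configuration file from the data
# set path and the registered algorithm information. Field names are hypothetical.
import json

def build_config(dataset_path: str, registration: dict, model_id: str) -> dict:
    return {
        "docker_image": registration["docker_image"],            # Docker image address
        "dataset_path": dataset_path,                            # target data set path
        "pretrained_model_path": registration["pretrained_model_path"],
        "training_code_path": registration["training_code_path"],
        "param_config_path": registration["param_config_path"],  # parameter configuration file
        "model_output_path": f"oss://model-iteration/models/{model_id}/",  # hypothetical layout
    }

def write_config(config: dict, path: str) -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```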
Step 203b, calling a K8s application program interface through the proxy service interface, and sending the configuration file to the K8s cluster.
Here, the second electronic device calls the K8s API through the Service Agent interface and sends the configuration file to the K8s cluster, so that the K8s cluster performs model training. Further, during model training by the K8s cluster, the second electronic device listens to the corresponding K8s events to obtain the Pod running state. It can be understood that the Pod running states include: running, completed, and failed. A minimal sketch of this submit-and-watch step appears below.
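Using the official Kubernetes Python client, the submit-and-monitor step might look like the sketch below: create the training Job from the configuration, then watch that Job's Pods until the run succeeds or fails. The namespace, label selector, and job manifest are assumptions for illustration.

```python
# Sketch of steps 203b-203c with the official Kubernetes Python client.
from kubernetes import client, config, watch

def submit_and_watch(job_manifest: dict, namespace: str = "training") -> str:
    config.load_kube_config()                 # or load_incluster_config() inside the cluster
    batch = client.BatchV1Api()
    core = client.CoreV1Api()

    # Send the configuration (as a Job manifest) to the K8s cluster.
    job = batch.create_namespaced_job(namespace=namespace, body=job_manifest)
    job_name = job.metadata.name

    # Listen to Pod events to obtain the running state.
    w = watch.Watch()
    for event in w.stream(core.list_namespaced_pod,
                          namespace=namespace,
                          label_selector=f"job-name={job_name}"):
        phase = event["object"].status.phase  # Pending / Running / Succeeded / Failed
        if phase in ("Succeeded", "Failed"):
            w.stop()
            return phase
    return "Unknown"
```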
Step 203c, acquiring the first model obtained by the K8s cluster through training based on the configuration file.
In this embodiment, step 203c, acquiring the first model obtained by the K8s cluster through training based on the configuration file, may include the following steps:
a1, acquiring a second model obtained by training the K8s cluster based on the configuration file.
And A2, evaluating the second model through the proxy service interface to obtain an evaluation result.
In this embodiment of the present application, step A2, evaluating the second model through the proxy service interface to obtain an evaluation result, includes the following steps:
a21, determining model types based on algorithm registration information through the proxy service interface.
A22, determining a target algorithm matched with the model category through the proxy service interface.
Here, the second electronic device queries the pipeline registration information in the database through the Service Agent, and matches a specific target algorithm according to the classification to which the model belongs.
Step A23, evaluating the second model based on the target algorithm through the proxy service interface to obtain the evaluation result.
Step A3, determining, through the proxy service interface, that the evaluation result indicates that the second model meets the preset standard, and taking the second model as the first model. A minimal sketch of steps A21 to A23 appears below.
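A minimal sketch of this category-based evaluation dispatch follows. The category names, metric functions, and the 0.9 threshold standing in for the preset standard are all assumptions.

```python
# Sketch of steps A21-A23: match the model category to an evaluation
# algorithm and decide whether the second model meets the preset standard.
def accuracy(predictions, labels) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def mean_average_precision(predictions, labels) -> float:
    raise NotImplementedError("detection mAP is omitted from this sketch")

EVALUATORS = {                       # hypothetical category-to-algorithm table
    "image_classification": accuracy,
    "text_classification": accuracy,
    "object_detection": mean_average_precision,
}

def evaluate(model_category: str, predictions, labels) -> dict:
    score = EVALUATORS[model_category](predictions, labels)
    return {"score": score, "meets_standard": score >= 0.9}  # stand-in preset standard
```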
Step 204, storing the first model in an object store of the second electronic device.
Step 205, a storage path of the first model in the second electronic device is obtained, and the storage path is sent to the first electronic device.
Here, the Service Agent acquires the information that the K8s training is complete, and the first electronic device is asynchronously notified of, or queries, the training result; the training result includes the storage path of the first model and the evaluation result of the first model.
In the embodiment of the present application, if the trained model cannot meet the user's requirements, the user sends a request to the second electronic device through the first electronic device, the request indicating that the model iteration is to be completed with manual intervention.
From the above, it can be seen that the second electronic device provided by the present application uses the Service Agent to automatically connect links such as data set format conversion, Pod starting, model training, model evaluation, and model deployment, thereby automating the model iteration process. Further, fully automatic model iteration under some conditions is achieved through predefined standards and automatic model evaluation.
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
Based on the foregoing embodiments, an embodiment of the present application provides a second electronic device, which may apply the model iteration method provided in the embodiments corresponding to fig. 1 and fig. 3. Referring to fig. 4, the second electronic device 3 includes: a memory 31 for storing executable instructions;
a processor 32, configured to execute the executable instructions stored in the memory 31 to implement the following steps:
acquiring algorithm registration information for model training and a data set from a first electronic device; the algorithm registration information comprises preset information required for training a model;
training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in the second electronic device;
acquiring a storage path of the first model in the second electronic device;
and sending the storage path to the first electronic device.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
performing format conversion processing on the data set through a proxy service interface in the second electronic device to obtain a target data set in a preset format; the preset format is the format of data processed by the K8s cluster in the second electronic device;
correspondingly, based on the data set and the algorithm registration information, training to obtain a first model meeting preset standards comprises the following steps:
the first model is trained and obtained through the proxy service interface based on the target data set and the algorithm registration information.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
storing the target data set in the second electronic device, and acquiring a data set path of the target data set;
correspondingly, training to obtain a first model based on the target data set and the algorithm registration information through the proxy service interface comprises the following steps:
generating a configuration file based on the data set path and the algorithm registration information through the proxy service interface;
calling a K8s application program interface through a proxy service interface, and sending a configuration file to the K8s cluster;
and acquiring the first model obtained by the K8s cluster through training based on the configuration file.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
acquiring a second model obtained by training the K8s cluster based on the configuration file;
evaluating the second model through the proxy service interface to obtain an evaluation result;
and determining, through the proxy service interface, that the evaluation result indicates that the second model meets a preset standard, and taking the second model as the first model.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
determining model categories based on algorithm registration information through a proxy service interface;
determining a target algorithm matched with the model category through a proxy service interface;
and evaluating the second model based on a target algorithm through the proxy service interface to obtain an evaluation result.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
the configuration file includes an application container engine Docker image address, a data set path, a pre-training model path, a training code path, a parameter configuration file path, and a model output path.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
obtaining a pre-training model and training codes;
the pre-training model and the training code are stored in an object store, and a pre-training model path for acquiring the pre-training model and a training code path for acquiring the training code are obtained.
In the embodiment of the present application, the processor 32 is configured to execute executable instructions stored in the memory 31 to implement the following steps:
the first model is stored in an object store of the second electronic device.
The second electronic device provided by this embodiment of the present application realizes an automated model iteration flow: after the second electronic device trains a first model matching the data set from the first electronic device, it sends the storage path of the first model to the first electronic device, so that the first electronic device can quickly acquire the first model based on the storage path. This solves the problems in the related art that manually completed model iteration requires coordination among personnel from multiple parties, cannot respond to iteration requests in time, and demands a large amount of repetitive manual work; it automates the model iteration process, greatly reduces labor input, and improves the efficiency and accuracy of model iteration.
It should be noted that, in the specific implementation process of the steps executed by the processor in this embodiment, reference may be made to the implementation process in the model iteration method provided in the embodiment corresponding to fig. 1 and 3, which is not described herein again.
Based on the foregoing embodiments, embodiments of the present application provide a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of:
acquiring algorithm registration information for model training and a data set from a first electronic device; the algorithm registration information comprises preset information required for training a model;
training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in the second electronic device;
acquiring a storage path of the first model in the second electronic device;
and sending the storage path to the first electronic device.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
performing format conversion processing on the data set through a proxy service interface in the second electronic device to obtain a target data set in a preset format; the preset format is the format of data processed by the K8s cluster in the second electronic device;
correspondingly, based on the data set and the algorithm registration information, training to obtain a first model meeting preset standards comprises the following steps:
the first model is trained and obtained through the proxy service interface based on the target data set and the algorithm registration information.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
storing the target data set in the second electronic device, and acquiring a data set path of the target data set;
correspondingly, training to obtain a first model based on the target data set and the algorithm registration information through the proxy service interface comprises the following steps:
generating a configuration file based on the data set path and the algorithm registration information through the proxy service interface;
calling a K8s application program interface through a proxy service interface, and sending a configuration file to the K8s cluster;
and acquiring the first model obtained by the K8s cluster through training based on the configuration file.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
acquiring a second model obtained by training the K8s cluster based on the configuration file;
evaluating the second model through the proxy service interface to obtain an evaluation result;
and determining, through the proxy service interface, that the evaluation result indicates that the second model meets a preset standard, and taking the second model as the first model.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
determining model categories based on algorithm registration information through a proxy service interface;
determining a target algorithm matched with the model category through a proxy service interface;
and evaluating the second model based on a target algorithm through the proxy service interface to obtain an evaluation result.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
the configuration file includes an application container engine Docker image address, a data set path, a pre-training model path, a training code path, a parameter configuration file path, and a model output path.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
obtaining a pre-training model and training codes;
the pre-training model and the training code are stored in an object store, and a pre-training model path for acquiring the pre-training model and a training code path for acquiring the training code are obtained.
In other embodiments of the present application, the one or more programs may be executed by one or more processors, and the following steps may also be implemented:
the first model is stored in an object store of the second electronic device.
It should be noted that, in the specific implementation process of the steps executed by the processor in this embodiment, reference may be made to the implementation process in the model iteration method provided in the embodiment corresponding to fig. 1 and 3, which is not described herein again.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" or "the foregoing embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in an embodiment of the present application" or "in the foregoing embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
Unless specifically stated, any step performed by the electronic device in the embodiments of the present application may be performed by a processor of the electronic device. The embodiments of the present application do not limit the order in which the electronic device performs the following steps. In addition, any step in the embodiments of the present application may be performed by the electronic device independently; that is, when performing any step in the embodiments below, the electronic device does not depend on the performance of other steps.
It should be noted that the processor may be at least one of an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above-mentioned processor function may be other, and embodiments of the present application are not specifically limited.
The computer storage medium/memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any of various terminals that include one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing module, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units. Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps including the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (9)

1. A method of model iteration, the method comprising:
acquiring algorithm registration information for model training and a data set from a first electronic device; the algorithm registration information comprises preset information required for training a model;
training, based on the data set and the algorithm registration information, a first model that meets a preset standard, and storing the first model in a second electronic device;
acquiring a storage path of the first model in the second electronic device;
transmitting the storage path to the first electronic device;
the method further comprises the steps of:
performing format conversion processing on the data set through a proxy service interface in the second electronic device to obtain a target data set in a preset format; the preset format is a format of data processed by a Kubernetes cluster in the second electronic device;
correspondingly, the training to obtain the first model meeting the preset standard based on the data set and the algorithm registration information comprises the following steps:
and training to obtain the first model through the proxy service interface based on the target data set and the algorithm registration information.
2. The method according to claim 1, wherein the method further comprises:
storing the target data set to a second electronic device, and acquiring a data set path of the target data set;
correspondingly, the training, through the proxy service interface, based on the target data set and the algorithm registration information, to obtain the first model includes:
generating a configuration file based on the data set path and the algorithm registration information through the proxy service interface;
invoking a Kubernetes application program interface through the proxy service interface, and sending the configuration file to the Kubernetes cluster;
and acquiring the first model obtained by the Kubernetes cluster through training based on the configuration file.
3. The method of claim 2, wherein the acquiring the first model obtained by the Kubernetes cluster through training based on the configuration file comprises:
acquiring a second model obtained by the Kubernetes cluster through training based on the configuration file;
evaluating the second model through the proxy service interface to obtain an evaluation result;
and determining, through the proxy service interface, that the evaluation result indicates that the second model meets a preset standard, and taking the second model as the first model.
4. A method according to claim 3, wherein said evaluating the second model via the proxy service interface results in an evaluation result comprising:
determining a model category based on the algorithm registration information through the proxy service interface;
determining a target algorithm matched with the model category through the proxy service interface;
and evaluating the second model through the proxy service interface based on the target algorithm to obtain the evaluation result.
5. The method of claim 2, wherein the configuration file comprises an application container engine Docker image address, the data set path, a pre-training model path, a training code path, a parameter configuration file path, and a model output path.
6. The method according to any one of claims 1 to 5, further comprising:
obtaining a pre-training model and training codes;
and storing the pre-training model and the training code into an object store, and obtaining a pre-training model path for acquiring the pre-training model and a training code path for acquiring the training code.
7. The method of any one of claims 1 to 5, wherein the storing the first model to a second electronic device comprises:
storing the first model in an object store of a second electronic device.
8. A second electronic device, the second electronic device comprising:
a memory for storing executable instructions;
a processor for executing executable instructions stored in the memory to implement the model iteration method of any one of claims 1 to 7.
9. A storage medium having stored thereon executable instructions which, when executed, are adapted to cause a processor to perform the model iteration method of any one of claims 1 to 7.
CN201911025291.9A 2019-10-25 2019-10-25 Model iteration method, second electronic equipment and storage medium Active CN112257733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911025291.9A CN112257733B (en) 2019-10-25 2019-10-25 Model iteration method, second electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911025291.9A CN112257733B (en) 2019-10-25 2019-10-25 Model iteration method, second electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112257733A CN112257733A (en) 2021-01-22
CN112257733B true CN112257733B (en) 2024-04-09

Family

ID=74224201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911025291.9A Active CN112257733B (en) 2019-10-25 2019-10-25 Model iteration method, second electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112257733B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11657415B2 (en) 2021-05-10 2023-05-23 Microsoft Technology Licensing, Llc Net promoter score uplift for specific verbatim topic derived from user feedback
US12079572B2 (en) 2021-05-17 2024-09-03 Microsoft Technology Licensing, Llc Rule-based machine learning classifier creation and tracking platform for feedback text analysis
CN113822322B (en) * 2021-07-15 2024-08-02 腾讯科技(深圳)有限公司 Image processing model training method and text processing model training method
CN113642622B (en) * 2021-08-03 2024-08-09 浙江数链科技有限公司 Effect evaluation method, system, electronic device and storage medium for data model
CN113642805A (en) * 2021-08-27 2021-11-12 Oppo广东移动通信有限公司 Algorithm optimization method of Internet of things equipment, electronic equipment and readable storage medium
CN113836012B (en) * 2021-09-17 2024-05-03 上海瑾盛通信科技有限公司 Algorithm testing method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558952A (en) * 2018-11-27 2019-04-02 北京旷视科技有限公司 Data processing method, system, equipment and storage medium
CN110334809A (en) * 2019-07-03 2019-10-15 成都淞幸科技有限责任公司 A kind of Component encapsulating method and system of intelligent algorithm

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885762B (en) * 2017-09-19 2021-06-11 北京百度网讯科技有限公司 Intelligent big data system, method and equipment for providing intelligent big data service
US10878296B2 (en) * 2018-04-12 2020-12-29 Discovery Communications, Llc Feature extraction and machine learning for automated metadata analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558952A (en) * 2018-11-27 2019-04-02 北京旷视科技有限公司 Data processing method, system, equipment and storage medium
CN110334809A (en) * 2019-07-03 2019-10-15 成都淞幸科技有限责任公司 A kind of Component encapsulating method and system of intelligent algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MRI: a MapReduce model for parallel iteration; Ma Zhiqiang; Zhang Li; Yang Shuangtao; Computer Engineering and Science (Issue 12); full text *

Also Published As

Publication number Publication date
CN112257733A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112257733B (en) Model iteration method, second electronic equipment and storage medium
CN107766126B (en) Container mirror image construction method, system and device and storage medium
CN108566290B (en) Service configuration management method, system, storage medium and server
CN112559525B (en) Data checking system, method, device and server
CN107544828A (en) Configuring load application method and device
CN113448862A (en) Software version testing method and device and computer equipment
CN109254791A (en) Develop management method, computer readable storage medium and the terminal device of data
CN115002099B (en) Human-computer interaction type file processing method and device for realizing IA (IA) based on RPA (remote procedure A) and AI (advanced technology attachment)
CN106603289A (en) LMT configuration file smooth upgrade method
CN113051102B (en) File backup method, device, system, storage medium and computer equipment
CN111049913B (en) Data file transmission method and device, storage medium and electronic equipment
CN114510322A (en) A pressure measurement control method, device, computer equipment and medium for a business cluster
CN113254332A (en) Multi-scenario testing method, system, terminal and storage medium for storage system
CN112328325A (en) Execution method and device of model file, terminal equipment and storage medium
CN117827879A (en) Method, device, equipment and medium for converting storage process
CN114610446B (en) Method, device and system for automatically injecting probe
CN112015436A (en) Short message platform deployment method and device, computing device, and computer storage medium
CN113900931B (en) A Docker-based testing method, device, equipment and storage medium
CN111125149B (en) Hive-based data acquisition method, hive-based data acquisition device and storage medium
CN116126819A (en) System log processing method, device and medium
CN111045787B (en) Rapid continuous experiment method and system
CN115309491A (en) Logic algorithm of platform system
CN117009205A (en) Interface simulation method, system and computer equipment
CN113918595A (en) Data query method and device
CN110990475B (en) Batch task inserting method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant