CN115563063A - Model construction method and device and electronic equipment
- Publication number: CN115563063A (application CN202110743908.1A)
- Authority: CN (China)
- Prior art keywords: image file, training, parameter, target, training data
- Legal status: Pending
Classifications
- G06F16/172—Caching, prefetching or hoarding of files (under G06F—Electric digital data processing; G06F16/00—Information retrieval; database structures therefor; file system structures therefor; G06F16/10—File systems; file servers; G06F16/17—Details of further file system functions)
- G06F16/283—Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP (under G06F16/20—Information retrieval of structured data; G06F16/28—Databases characterised by their database models, e.g. relational or object models)
Abstract
The application provides a model construction method, a model construction apparatus and an electronic device, wherein the method comprises the following steps: starting a target instance upon receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter indicates the storage location of a first image file, and the second parameter indicates the storage location of the training data of the target training task; downloading the first image file to the target instance based on the first parameter, and downloading the training data to a preset position in the target instance based on the second parameter; and starting the first image file in the target instance with the address information of the preset position as input, to obtain a target model. The technical solution provided by the application can at least solve the problem that the training process is cumbersome in existing model training methods.
Description
Technical Field
The application relates to the field of cloud computing, in particular to a model construction method and device and electronic equipment.
Background
At present, before training a model, the relevant personnel usually write a model training algorithm, and while writing the algorithm they need to know the storage location of the training data. Corresponding data-calling and data-processing routines are then written based on that storage location, so that the training algorithm can obtain and use the training data during training. However, whenever the storage location of the training data changes, the relevant personnel must also update the model training algorithm accordingly to keep it running correctly, which makes the model training process cumbersome.
Disclosure of Invention
The model construction method, model construction apparatus and electronic device provided by the application can solve the problem that the training process is cumbersome in existing model training methods.
In order to solve the technical problem, the specific implementation scheme of the application is as follows:
in a first aspect, an embodiment of the present application provides a model building method, which is applied to a server cluster, and the method includes:
starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
downloading the first image file to the target instance based on the first parameter, and downloading the training data to a preset position in the target instance based on the second parameter;
and starting the first image file in the target instance by taking the address information of the preset position as input to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
In a second aspect, an embodiment of the present application further provides a model building apparatus, including:
a starting module, used for starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
a downloading module, used for downloading the first image file to the target instance based on the first parameter and downloading the training data to a preset position in the target instance based on the second parameter;
and a training module, used for taking the address information of the preset position as input and starting the first image file in the target instance to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
In a third aspect, an embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiment of the application, the training data and the first image file are downloaded to the target instance during training, and the first image file is started with the storage address of the training data as input, so that the first image file calls the training data at the preset position for training and a trained target model is obtained. Because the storage address of the training data is passed in as the input for starting the first image file, the storage location of the training data does not need to be embedded in the training algorithm when the algorithm is written, that is, it does not need to be embedded in the first image file when the first image file is generated. Therefore, even if the storage location of the training data changes, the code of the training algorithm does not need to be modified and the first image file does not need to be updated. This solves the problem in the prior art that the model training algorithm must be updated every time the location of the training data changes, and thereby simplifies the model training process.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without inventive effort.
FIG. 1 is a flowchart of a model construction method provided in an embodiment of the present application;
FIG. 2 is a block diagram of a model building system provided by an embodiment of the present application;
fig. 3 is a block diagram of a model construction apparatus according to an embodiment of the present application;
fig. 4 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a model construction method applied to a server cluster according to an embodiment of the present application, where the method includes:
Step 101, starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
Step 102, downloading the first image file to the target instance based on the first parameter, and downloading the training data to a preset position in the target instance based on the second parameter;
Step 103, starting the first image file in the target instance by taking the address information of the preset position as input to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
The server cluster may be a Kubernetes cluster (K8s cluster), and accordingly the first client may be a K8s client. The training request may be a training instruction generated at the K8s client, and the training instruction may include a plurality of ordered steps. Since the K8s cluster is an open-source system capable of automatically deploying, scaling, and managing containerized applications, it can execute each step in the training instruction in order based on its scheduling capabilities.
The target instance may refer to any Pod started by the K8s cluster; that is, when the K8s cluster receives a training request sent by the K8s client, it starts a Pod. In a K8s cluster, the Pod is the basic unit for all workload types and the smallest unit managed by K8s; it is a combination of one or more containers that share storage, network, and namespaces, as well as the specification of how to run them.
The training data may be data pre-stored in a memory of the server cluster, and the second parameter may be determined after the training data has been stored in the memory. The training data may be stored in the memory in various file forms, for example as any one of the following: fastdfs files, ceph files, xsky files, ftp files, and the like.
It can be understood that the specific content of the training data may differ from one training scenario to another. For example, when the target model to be trained is a session summary model, the training data may be derived from chat records between an intelligent customer service system and customers: the historical chat records are obtained, and a session summary is produced for each historical chat record, that is, a label is set for each historical chat record, thereby obtaining the training data. In this way, the session summary model trained on this training data can be used in the intelligent customer service system to automatically summarize chat records between the system and customers.
In addition, with the historical chat records as basic data, they can be processed according to different requirements to obtain different training data. For example, when the target model to be trained is the session summary model, a session summary label can be set for each record in the historical chat records; when the target model to be trained is an emotion recognition model, an emotion recognition label can be set for each record instead, yielding training data for the emotion recognition model. The emotion recognition model subsequently trained on this data can then be used in the intelligent customer service system to automatically recognize the emotion in chat records between the system and customers.
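For illustration only, the snippet below shows one possible layout for such labelled training data; the embodiment does not prescribe a file format, so the field names and the file name train.json are assumptions.

```python
# Hypothetical example of labelled chat-record training data (format assumed, not
# specified by the embodiment): each historical chat record carries either a session
# summary label or an emotion recognition label, depending on the target model.
import json

samples = [
    {"dialogue": ["User: my order has not arrived yet", "Agent: let me check the logistics for you"],
     "summary": "customer asks about a delayed order"},          # label for the session summary model
    {"dialogue": ["User: this is the third time I have had to complain!"],
     "emotion": "angry"},                                        # label for the emotion recognition model
]

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```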
The first image file may be an image file of a training algorithm written in advance by the relevant personnel, where an image file is a copy of a specific file and may be an executable file. Specifically, the first image file is a copy of the training algorithm file: after the relevant personnel finish writing the training algorithm, it can be packaged into an image to obtain the first image file. The training algorithm can be understood as a pre-built initial model, so when the first image file is started in the Pod, the initial model can call the training data at the preset position for training, and a trained target model is obtained.
Specifically, the first image file may be stored in the Docker warehouse of the server cluster, and the download address of the first image file may be entered into the first client, so that the first parameter can be selected directly when the training request is generated at the first client. The Docker warehouse is a storage unit in the server cluster used for centralized management of runnable program packages.
In a specific embodiment of the present application, the model construction method proceeds as follows: (1) a training request is generated at the first client. In this process, the first client may receive the second parameter and the order indication information input by a user, and may also receive the user's selection of the first parameter; after receiving the first parameter, the second parameter and the order indication information, the first client can generate the training request. The order indication information is used to indicate the execution order of the steps; for example, the execution order may be: start the Pod → download the training data and the first image file → after the download is completed, start the first image file to train based on the training data.
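As a minimal sketch, and assuming a JSON-style request body (the embodiment does not fix a concrete format for the training request), the first client could bundle the parameters and the order indication information as follows; the registry address is a placeholder and the key names are assumptions.

```python
# Hypothetical training request assembled by the first client; the addresses below are
# placeholders, and the key names are assumptions rather than the embodiment's wire format.
training_request = {
    "first_param":  "registry.example.com/algorithms/session-summary:latest",  # storage position of the first image file
    "second_param": "fst://ima.com/chat/train.json",                           # storage position of the training data
    "order": ["start_pod",
              "download_training_data_and_training_image",
              "start_training"],                                               # order indication information
}
```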
In the prior art, because the storage location of the training data is embedded in the training algorithm, the algorithm's code must be updated every time that location changes so that the algorithm can still call the relocated training data. By contrast, in the embodiment of the application the training data and the first image file are downloaded to the target instance during training, and the first image file is started with the storage address of the training data as input, so that the first image file calls the training data at the preset position for training and a trained target model is obtained. Because the storage address of the training data is passed in as the input for starting the first image file, the storage location of the training data does not need to be embedded in the training algorithm when the algorithm is written, that is, it does not need to be embedded in the first image file when the first image file is generated. Therefore, even if the storage location of the training data changes, the code of the training algorithm does not need to be modified and the first image file does not need to be updated, which solves the prior-art problem that the training algorithm must be updated every time the storage location of the training data changes, and simplifies the model training process.
Optionally, the training request further includes a third parameter, where the third parameter is used to indicate a storage location of a second image file, the second image file is an image file of a pre-manufactured downloader, and downloading the training data to a preset location in the target instance based on the second parameter includes:
downloading the second image file to the target instance based on the third parameter;
and taking the second parameter as an input, starting the second image file in the target instance, and downloading the training data to the preset position through the second image file based on the second parameter.
Specifically, the pre-manufactured downloader is a downloader for downloading the training data to the preset position in the target instance, and it can be written in advance for this purpose. A second image file of the pre-manufactured downloader may be stored in the Docker warehouse. The Docker warehouse can store a variety of downloaders of different types in advance, where training data of different file types can correspond to different downloaders, and the download addresses of all pre-manufactured downloaders can be entered into the first client.
It should be noted that the first client may have the location information of a plurality of different first image files entered in advance, where the different first image files correspond to different training algorithms, and different training algorithms can be used to train different types of models. Specifically, the location information of these first image files may be entered into a command editing interface of the first client and stored in menu form. Correspondingly, the download addresses of the different types of downloaders can also be entered into the command editing interface in menu form. In this way, the first client can determine the first parameter and the third parameter based on click operations on the command editing interface; after receiving the user's click on the third parameter, the first client may display an address filling bar used to receive the second parameter. The command editing interface may further include a window for receiving the order indication information, so that when the first client has received all the parameters and receives an instruction to generate a training request, it can generate the training request and send it to the master node of the server cluster. The master node, which refers to the master server of the server cluster, can then execute each step in the training request in sequence based on the order indication information.
Optionally, the training request further includes order indication information, and the server cluster is configured to execute the model building method based on an order indicated by the order indication information.
In an embodiment of the present application, the execution order indicated by the order indication information may be: start the Pod → download the pre-manufactured downloader and the first image file → after the download is finished, download the training data with the pre-manufactured downloader → start the first image file to train based on the training data. In this way, the master node in the server cluster executes the model construction method in the order indicated by the order indication information; a sketch of one possible realization of this flow is given below.
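For concreteness, the sketch below shows one way, not stated in the embodiment, that the server cluster could realize this order inside a single Pod using the Kubernetes Python client: the pre-manufactured downloader runs as an init container and the training task as the main container, sharing a volume mounted at the preset position. All names, image addresses, argument conventions and the init-container layout are assumptions.

```python
# Sketch only: one assumed realization of "start Pod -> run downloader -> run training",
# using the kubernetes Python client. Image addresses, names and paths are placeholders.
from kubernetes import client, config

PRESET_DIR = "/home/data/auto-download"

def start_target_instance(first_param: str, second_param: str, third_param: str) -> None:
    config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
    shared = client.V1VolumeMount(name="train-data", mount_path=PRESET_DIR)
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="target-instance"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            volumes=[client.V1Volume(name="train-data",
                                     empty_dir=client.V1EmptyDirVolumeSource())],
            # Init container: the pre-manufactured downloader, started with the second
            # parameter as its src input; it fills the shared volume before training starts.
            init_containers=[client.V1Container(
                name="pre-manufactured-downloader",
                image=third_param,
                args=["src=" + second_param],
                volume_mounts=[shared])],
            # Main container: the training task (first image file), started with the
            # address information of the preset position as input.
            containers=[client.V1Container(
                name="training-task",
                image=first_param,
                args=["trainData=" + PRESET_DIR + "/train.json"],
                volume_mounts=[shared])]))
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```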
Furthermore, when the download addresses of the different types of downloaders are entered into the command editing interface of the first client, each downloader may be given a specific tag. For example, the downloader for downloading fastdfs files may be given the unique tag "fst" when it is entered into the first client, so that a user can later determine the type of a pre-manufactured downloader from its tag when selecting one.
The pre-manufactured downloader is similar to a browser: before it is started, the download address of the training data needs to be supplied, that is, the second parameter needs to be filled into the address bar of the pre-manufactured downloader, after which it downloads the training data to the preset position. The preset position may be a default address, for example /home/data/auto-download.
In this embodiment, the image file of the pre-manufactured downloader is downloaded to the target instance based on the third parameter, and the second image file in the target instance is started with the second parameter as input, so that the training data is downloaded from the memory to the preset position of the target instance.
Optionally, the downloading, by the second image file, the training data to the preset location based on the second parameter includes:
checking the second parameter;
under the condition that the verification is successful, downloading the training data to the preset position through the second image file based on the second parameter;
and closing the second image file under the condition of failed verification.
Specifically, checking the second parameter may mean that the pre-manufactured downloader attempts to download data from the address indicated by the second parameter; if the training data can be downloaded successfully, the check is considered passed and the downloaded data is stored at the preset position. Otherwise, if the download fails, the second image file is closed and the training process may also be stopped.
In addition, since downloaders correspond one-to-one to file storage types, the storage addresses of the training data of the file type corresponding to a pre-manufactured downloader can be passed to that downloader in advance. When the second parameter is input to the pre-manufactured downloader, the downloader can then check the second parameter against all of the training-data storage addresses it maintains: if those addresses include the second parameter, the check is considered successful; otherwise, if they do not include the second parameter, the check is considered failed.
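A minimal sketch of this check, under the assumption that the maintained addresses are held in a simple in-memory set and that sub-parameters are comma-separated (neither detail is fixed by the embodiment):

```python
# Assumed checking logic for the second parameter: the pre-manufactured downloader keeps
# the storage addresses of the training data it is responsible for and verifies that every
# address carried by the second parameter is among them.
KNOWN_ADDRESSES = {
    "http://ima.com/chat/train.json",
    "http://ima.com/chat/test.json",
}

def check_second_parameter(second_param: str) -> bool:
    addresses = [a.strip() for a in second_param.split(",") if a.strip()]
    return bool(addresses) and all(a in KNOWN_ADDRESSES for a in addresses)
```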
Optionally, the second parameter includes at least two sub-parameters, the sub-parameters are in one-to-one correspondence with storage locations of sub-training data, and the downloading the training data to a preset location in the target instance based on the second parameter includes:
and downloading the sub-training data indicated by each sub-parameter to the preset position based on the at least two sub-parameters.
In particular, a model may need several pieces of training data during training, and these may be stored in different locations. In this case, the second parameter may indicate the storage locations of two or more pieces of training data. For example, the second parameter may include at least two sub-parameters, each corresponding to the storage location of a different piece of sub-training data, and the sub-parameters may be separated by a separator. Here, the different pieces of sub-training data are the different pieces of training data mentioned above.
In this way, when downloading the training data based on the second parameter, the pre-manufactured downloader may traverse all the sub-parameters in the second parameter, and further download the sub-training data indicated by each sub-parameter to the preset location.
The internal implementation of the pre-manufactured downloader is as follows. The pre-manufactured downloader first defines a program start parameter named src, and receives through src the address of the training data to be downloaded, that is, the second parameter is passed in through src. When the second parameter includes at least two sub-parameters, that is, when it records the storage locations of at least two pieces of training data, the different addresses may be separated by the separator ",", for example:
src = http://ima.com/chat/train.json,http://ima.com/chat/test.json. When the pre-manufactured downloader is started, the entry first judges whether a src address is present (that is, whether there is any content after the "="); if not, the pre-manufactured downloader exits directly; otherwise, it enters its download method. Before downloading, the pre-manufactured downloader first constructs, in code, a client for accessing the fastdfs file server (the client can simply be understood as a browser). Building the client usually requires a number of attributes, such as the file server address and authentication information, which are supplied at construction time; here, the authentication information refers to the verification information used when the training data is stored in encrypted form.
After the client of the pre-manufactured downloader has been constructed, its download method can be called. The addresses of all training data received through src are traversed; specifically, the address http://ima.com/chat/train.json is obtained first and passed to the client's download method. The process mainly includes the following steps (a Python sketch of this flow is given after the list):
(1): Construct the complete output path. The file name train.json is resolved from the download address and appended to the default download directory /home/data/auto-download;
final complete path: /home/data/auto-download/train.json;
(2): Store the downloaded data stream at the complete path of step (1);
(3): Obtain the address http://ima.com/chat/test.json, pass it to the client's download method and construct the complete output path: the file name test.json appended to the default download directory /home/data/auto-download;
final complete path: /home/data/auto-download/test.json;
(4): Store the downloaded data stream at the complete path of step (3);
(5): After the data download is finished, the pre-manufactured downloader exits automatically.
Optionally, the server cluster includes a Docker warehouse, and the method further includes:
under the condition that a third image file sent by a second client is received, storing the third image file in the Docker warehouse, and sending the storage position of the third image file to the first client, wherein the third image file is an image file of a pre-manufactured downloader or an image file of a training task, that is, the third image file can be either a first image file or a second image file.
Specifically, when the third image file is a first image file, the relevant personnel need to finish writing the training algorithm before model training, and the image file of the written training algorithm (i.e., the first image file) can be stored in the Docker warehouse. For example, after the session summary training task is stored in the Docker warehouse, its storage address in the Docker warehouse may be entered into the K8s client system.
In addition, when the third image file is a second image file, the relevant personnel need to finish writing the pre-manufactured downloader before model training, and the image file of the written downloader (i.e., the second image file) can be stored in the Docker warehouse. Meanwhile, to make it easier for the first client to compose training requests, the storage position of the third image file in the Docker warehouse can be sent to the first client, so that after receiving it the first client can record it in the menu bar that stores pre-manufactured downloader addresses. For example, a K8s platform administrator may write the fastdfs pre-manufactured downloader in the lightweight language Python, package it, and upload it to the Docker warehouse; at the same time, the pre-manufactured downloader is entered into the K8s client and marked with the unique tag "fst" and the corresponding Docker warehouse access address, as illustrated by the sketch below.
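Purely as an illustration (the registry addresses are placeholders and the data structure is assumed, not defined by the embodiment), the menu maintained by the first client could map each downloader tag to its Docker warehouse access address:

```python
# Assumed menu of pre-manufactured downloaders kept by the K8s client: the tag identifies
# the file-storage type and maps to the downloader's access address in the Docker warehouse.
DOWNLOADER_MENU = {
    "fst": "registry.example.com/downloaders/fastdfs-downloader:1.0",  # fastdfs files
    "ftp": "registry.example.com/downloaders/ftp-downloader:1.0",      # ftp files
}
```

The "fst" tag is what later allows an address of the form fst://... to be mapped back to the fastdfs downloader image when the training request is composed.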
Optionally, the second parameter includes location information and verification information, the training data is encrypted storage data, and downloading the training data to a preset location in the target instance based on the second parameter includes:
determining a storage location of the encrypted storage data based on the location information;
decrypting the encrypted storage data based on the verification information to obtain the training data;
and downloading the training data to a preset position in the target example.
Specifically, to improve the security of data storage, the training data may be encrypted and stored in the memory; when the training data is later downloaded from the memory, it can only be obtained after decryption with the verification information. On this basis, when configuring the training request, the user may input the verification information of the training data together with its storage location, so that when downloading the training data based on the second parameter, the pre-manufactured downloader first determines the storage location of the encrypted training data from the location information, then decrypts the encrypted data with the verification information to obtain the training data, and finally downloads the training data to the preset position.
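As a sketch only, assuming the encrypted storage data is fetched over HTTP and protected with a symmetric Fernet key carried as the verification information (the embodiment names no concrete encryption scheme, and the cryptography package is an assumed dependency):

```python
# Assumed decryption-and-download flow for encrypted storage data; the key handling and
# the HTTP fetch are illustrative choices, not the embodiment's specification.
import urllib.request
from cryptography.fernet import Fernet

def download_encrypted(location_info: str, verification_info: bytes, preset_path: str) -> None:
    encrypted = urllib.request.urlopen(location_info).read()      # fetch the encrypted storage data
    training_data = Fernet(verification_info).decrypt(encrypted)  # decrypt with the verification information
    with open(preset_path, "wb") as f:                            # place the plaintext at the preset position
        f.write(training_data)
```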
In another embodiment of the present application, the model construction method provided by the application is further explained by taking the training of a session summary model as an example, which specifically includes the following steps (a Python sketch of the parameter rewriting in step (2) is given after this list):
(1) The session summary algorithm personnel configure a training request through the K8s client, where the storage location of the training data is: trainData = fst://ima.com/chat/train.json. The "fst" in the address is a prefix automatically added based on the third parameter selected by the algorithm personnel, so that the type of pre-manufactured downloader can be identified.
(2) Before the K8s client sends the request to the K8s cluster, a storage decision device determines from the "fst" prefix of the address that the fastdfs pre-manufactured downloader needs to be loaded first. The storage decision device automatically constructs the pre-manufactured downloader parameters and takes the address fst://ima.com/chat/train.json as the downloader's start parameter; at the same time, the value of the trainData parameter passed to the session summary training task is rewritten to /home/data/auto-download/train.json (that is, the input parameter used when starting the first image file is rewritten to the preset address). The training request thus obtained is sent to the K8s cluster.
(3) When the K8s cluster receives the training request, it first starts a Pod in the cluster and starts the pre-manufactured downloader according to the third parameter. The pre-manufactured downloader automatically downloads the training data stored at fst://ima.com/chat/train.json to the preset address, that is, to /home/data/auto-download/train.json.
(4) After the pre-manufactured downloader finishes downloading, the session summary training task is started in the same Pod. Because the input of the session summary training task was rewritten to the preset address in step (2), the session summary training task can directly load the address of the downloaded training data, /home/data/auto-download/train.json, thereby completing the training process.
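A minimal sketch of the rewriting performed by the storage decision device in step (2); the dictionary keys and the DOWNLOADER_MENU lookup reuse the assumed structures from the earlier sketches and are not part of the embodiment's definition.

```python
# Assumed logic of the storage decision device: recognise the "fst" prefix, select the
# matching pre-manufactured downloader, hand the original address to it as its start
# parameter, and rewrite the training task's trainData input to the preset address.
PRESET_TRAIN_PATH = "/home/data/auto-download/train.json"

def rewrite_training_request(train_data: str, downloader_menu: dict) -> dict:
    prefix = train_data.split("://", 1)[0]                  # e.g. "fst"
    return {
        "third_param": downloader_menu[prefix],             # downloader image to load first
        "downloader_src": train_data,                       # pre-manufactured downloader start parameter
        "trainData": PRESET_TRAIN_PATH,                     # rewritten input of the first image file
    }

# Example (using the assumed menu defined earlier):
# rewrite_training_request("fst://ima.com/chat/train.json", DOWNLOADER_MENU)
```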
Referring to fig. 2, fig. 2 is a block diagram of a model building system according to an embodiment of the present application, which can be used to implement the model construction method. The model building system includes a memory, a K8s cloud environment, a Docker warehouse and a K8s client. The memory is used for storing the training data, which may be stored as fastdfs files, ceph files, xsky files, ftp files and the like. The K8s cloud environment is the cloud environment provided by the server cluster and is used for creating Pods and scheduling the data required for training into the created Pod. The Docker warehouse is used for storing the images of the various downloaders and of the various training algorithms; for example, it may store a corpus downloader image, a session summary training algorithm image, a text classification training algorithm image, a target detection training algorithm image, and the like. The K8s client is used for interacting with the user to generate a training request and for sending the generated training request to the K8s cloud environment. Specifically, the K8s client may include a downloader module and a decision maker module: the downloader module contains selection controls for a plurality of different downloaders so that the desired downloader can be selected through it, and the decision maker can automatically construct the start parameters of the pre-manufactured downloader after the K8s client has received the user's downloader selection and the download address input by the user.
In an embodiment of the present application, the training of a session summary task is taken as an example to further explain how the model construction method is implemented on the model building system: the K8s client receives the parameters required for training input by the user, generates a training request based on the received parameters, and sends the training request to the K8s cloud environment. After the K8s cloud environment receives the training request, it starts a Pod, downloads the image file of the corpus downloader and the image file of the session summary training algorithm from the Docker warehouse into the started Pod, and then runs the corpus downloader process in the Pod; the corpus downloader downloads the training data from the memory into the Pod, after which the session summary training algorithm process is started with the storage position of the training data in the Pod as input. The session summary training algorithm process calls the training data in the Pod for training, and the target model is obtained after training.
Referring to fig. 3, fig. 3 is a block diagram of a model building apparatus 300 according to an embodiment of the present disclosure, including:
a starting module 301, configured to start a target instance when a training request of a target training task sent by a first client is received, where the training request includes a first parameter and a second parameter, the first parameter is used to indicate a storage location of a first image file, the first image file is an image file of the target training task, and the second parameter is used to indicate a storage location of training data of the target training task;
a downloading module 302, configured to download the first image file to the target instance based on the first parameter, and download the training data to a preset position in the target instance based on the second parameter;
the training module 303 is configured to start the first image file in the target instance to obtain a target model by using the address information of the preset location as an input, where the target model is obtained after the first image file calls the training data of the preset location to perform training.
Optionally, the training request further includes a third parameter, where the third parameter is used to indicate a storage location of a second image file, where the second image file is an image file of a pre-manufactured downloader, and the downloading module 302 includes:
a first download submodule, configured to download the second image file to the target instance based on the third parameter;
and the second downloading submodule is used for starting the second image file in the target instance by taking the second parameter as input so as to download the training data to the preset position through the second image file based on the second parameter.
Optionally, the second downloading sub-module includes:
the checking unit is used for checking the second parameter;
the downloading unit is used for downloading the training data to the preset position through the second image file based on the second parameter under the condition that the verification is successful;
and the closing unit is used for closing the second image file under the condition of failed verification.
Optionally, the second parameter includes at least two sub-parameters, the sub-parameters correspond to storage locations of sub-training data in a one-to-one manner, and the downloading module 302 is specifically configured to download the sub-training data indicated by each sub-parameter to the preset location based on the at least two sub-parameters.
Optionally, the server cluster includes a Docker warehouse, and the apparatus further includes:
the storage module is used for storing a third image file in a Docker warehouse and sending the storage position of the third image file to the first client under the condition that the third image file sent by a second client is received, wherein the third image file is an image file of a prefabricated downloader or an image file of a training task.
Optionally, the second parameter includes location information and verification information, the training data is encrypted storage data, and the downloading module 302 includes:
a determination submodule for determining a storage location of the encrypted storage data based on the location information;
the decryption submodule is used for decrypting the encrypted storage data based on the verification information to obtain the training data;
and the second downloading submodule is used for downloading the training data to a preset position in the target instance.
Optionally, the training request further includes order indication information, and the server cluster is configured to execute the model building method based on an order indicated by the order indication information.
The model building apparatus 300 provided in this embodiment of the application can implement each process in the above method embodiments, and is not described here again to avoid repetition.
Referring to fig. 4, fig. 4 is a structural diagram of an electronic device according to another embodiment of the present disclosure. As shown in fig. 4, the electronic device 400 includes a processor 401, a memory 402 and a computer program stored on the memory 402 and operable on the processor, with the various components of the electronic device 400 coupled together by a bus interface 403. When executed by the processor 401, the computer program performs the following steps:
starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
downloading the first image file to the target instance based on the first parameter, and downloading the training data to a preset position in the target instance based on the second parameter;
and starting the first image file in the target instance by taking the address information of the preset position as input to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
Optionally, the training request further includes a third parameter, where the third parameter is used to indicate a storage location of a second image file, the second image file is an image file of a pre-manufactured downloader, and downloading the training data to a preset location in the target instance based on the second parameter includes:
downloading the second image file to the target instance based on the third parameter;
and taking the second parameter as an input, starting the second image file in the target instance, and downloading the training data to the preset position through the second image file based on the second parameter.
Optionally, the downloading, by the second image file, the training data to the preset location based on the second parameter includes:
checking the second parameter;
under the condition that the verification is successful, downloading the training data to the preset position through the second image file based on the second parameter;
and closing the second image file under the condition of failed verification.
Optionally, the second parameter includes at least two sub-parameters, the sub-parameters correspond one-to-one to storage locations of sub-training data, and the downloading the training data to the preset location in the target instance based on the second parameter includes:
and downloading the sub-training data indicated by each sub-parameter to the preset position based on the at least two sub-parameters.
Optionally, the server cluster includes a Docker warehouse, and the method further includes:
under the condition of receiving a third image file sent by a second client, storing the third image file in a Docker warehouse, and sending the storage position of the third image file to the first client, wherein the third image file is an image file of a prefabricated downloader or an image file of a training task.
Optionally, the second parameter includes location information and verification information, the training data is encrypted storage data, and downloading the training data to a preset location in the target instance based on the second parameter includes:
determining a storage location of the encrypted storage data based on the location information;
decrypting the encrypted storage data based on the verification information to obtain the training data;
and downloading the training data to a preset position in the target instance.
Optionally, the training request further includes order indication information, and the server cluster is configured to execute the model building method based on an order indicated by the order indication information.
The embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when the computer program is executed by the processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the foregoing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A model building method, applied to a server cluster, characterized by comprising the following steps:
starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
downloading the first image file to the target instance based on the first parameter, and downloading the training data to a preset position in the target instance based on the second parameter;
and starting the first image file in the target instance by taking the address information of the preset position as input to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
2. The method of claim 1, wherein the training request further comprises a third parameter indicating a storage location of a second image file, wherein the second image file is an image file of a pre-manufactured downloader, and wherein downloading the training data to the pre-set location in the target instance based on the second parameter comprises:
downloading the second image file to the target instance based on the third parameter;
and taking the second parameter as an input, starting the second image file in the target instance, and downloading the training data to the preset position through the second image file based on the second parameter.
3. The method of claim 2, wherein the downloading the training data to the predetermined location based on the second parameter via the second image file comprises:
checking the second parameter;
under the condition that the verification is successful, downloading the training data to the preset position through the second image file based on the second parameter;
and closing the second image file under the condition of failed verification.
4. The method of claim 1, wherein the second parameter comprises at least two sub-parameters, the sub-parameters correspond to storage locations of sub-training data in a one-to-one correspondence, and the downloading the training data to the preset location in the target instance based on the second parameter comprises:
and downloading the sub-training data indicated by each sub-parameter to the preset position based on the at least two sub-parameters.
5. The method of claim 1, wherein the server cluster comprises a Docker warehouse, the method further comprising:
and under the condition that a third image file sent by a second client is received, storing the third image file in a Docker warehouse, and sending the storage position of the third image file to the first client, wherein the third image file is an image file of a prefabricated downloader or an image file of a training task.
6. The method of claim 1, wherein the second parameter comprises location information and verification information, the training data is encrypted storage data, and the downloading the training data to the preset location in the target instance based on the second parameter comprises:
determining a storage location of the encrypted storage data based on the location information;
decrypting the encrypted storage data based on the verification information to obtain the training data;
and downloading the training data to a preset position in the target instance.
7. The method of claim 1, wherein the training request further comprises order indication information, and wherein the server cluster is configured to perform the model building method based on an order indicated by the order indication information.
8. A model building apparatus, comprising:
a starting module, used for starting a target instance under the condition of receiving a training request of a target training task sent by a first client, wherein the training request comprises a first parameter and a second parameter, the first parameter is used for indicating the storage position of a first image file, the first image file is the image file of the target training task, and the second parameter is used for indicating the storage position of training data of the target training task;
a downloading module, used for downloading the first image file to the target instance based on the first parameter and downloading the training data to a preset position in the target instance based on the second parameter;
and a training module, used for taking the address information of the preset position as input and starting the first image file in the target instance to obtain a target model, wherein the target model is obtained after the first image file calls the training data of the preset position for training.
9. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the model building method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the model building method according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110743908.1A | 2021-07-01 | 2021-07-01 | Model construction method and device and electronic equipment |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN115563063A | 2023-01-03 |
Family ID: 84736623
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination