CN117808119A - Model updating system, method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117808119A
CN117808119A
Authority
CN
China
Prior art keywords
model
server
new
training
intelligent
Prior art date
Legal status
Pending
Application number
CN202311854410.8A
Other languages
Chinese (zh)
Inventor
林孟晨
吕亦书
昝妍
吕亦宸
吴振廷
Current Assignee
Fulian Yuzhan Technology Shenzhen Co Ltd
Original Assignee
Fulian Yuzhan Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Fulian Yuzhan Technology Shenzhen Co Ltd filed Critical Fulian Yuzhan Technology Shenzhen Co Ltd
Priority to CN202311854410.8A
Publication of CN117808119A
Legal status: Pending

Landscapes

  • Stored Programmes (AREA)

Abstract

An embodiment of the application provides a model updating system, method, apparatus, device, and storage medium. The system comprises: a plurality of first servers, each provided with an initial training model and configured to receive, in one-to-one correspondence, processing information sent by a plurality of processing devices, and to input the processing information into the initial training model for training so that the initial training model outputs model parameters and a number of samples; and a second server configured to receive the model parameters and number of samples sent by each first server, calculate new model parameters based on the model parameters and numbers of samples sent by all the first servers, and send the new model parameters. The plurality of first servers are further configured to update the initial training model based on the new model parameters sent by the second server. The initial training model is thus updated automatically and efficiently, its real-time accuracy is guaranteed, the possibility of leakage caused by transmitting processing information through multiple servers is reduced, the reliability of model training is greatly improved, and the accuracy of product processing data and the product quality are improved.

Description

Model updating system, method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a system, a method, an apparatus, a device, and a storage medium for model update.
Background
In recent years, artificial intelligence has developed in successive waves, with machine learning playing a central role. Training a well-performing machine learning model requires collecting a large amount of high-quality data. To ensure data diversity, it is generally necessary to collect data from a plurality of processing devices. In some technologies, the data collected by the processing devices are uploaded to an edge server, which periodically inputs the uploaded data into a machine learning model for training; after training is complete, the edge server fixes the trained machine learning model to form an intelligent model that processes data from different devices in real time. However, when the parameter precision of the machine learning model becomes inaccurate, a user must manually import a new machine learning model; this manual import is inefficient and affects production efficiency.
Disclosure of Invention
In view of the foregoing, the present application provides a system, a method, an apparatus, a device, and a storage medium for model update, so as to solve the problem of low efficiency of machine learning model update in the prior art.
In a first aspect, an embodiment of the present application provides a model update system, including:
a plurality of first servers, each provided with an initial training model and configured to receive, in one-to-one correspondence, processing information sent by a plurality of processing devices, and to input the processing information into the initial training model for training so that the initial training model outputs model parameters and a number of samples;
the second server is used for receiving the model parameters and the sample number sent by each first server, calculating new model parameters based on the model parameters and the sample number sent by all the first servers, and sending the new model parameters;
the plurality of first servers are further configured to: updating the initial training model based on the new model parameters sent by the second server.
In a possible implementation manner of the first aspect, the second server has a preset training model, and the second server is further configured to:
inputting the new model parameters into the preset training model, and updating the model parameters in the preset training model to form a new training model; carrying out version marking on the new training model to form a new version training model; transmitting the new version of training model to the first server; the first server is further configured to: and updating the initial training model into a new version training model.
In a possible implementation manner of the first aspect, the first server has an initial intelligent model, wherein the initial intelligent model is generated by training the preset training model and is configured to receive the processing information and infer the quality of the workpiece or of the processing device. The second server further has a preset intelligent model and is further configured to: input the new model parameters into the preset intelligent model and update the model parameters in the preset intelligent model to form a new intelligent model; version-mark the new intelligent model to form a new version of the intelligent model; and send the new version of the intelligent model to the first server, so that the first server updates the initial intelligent model to the new version of the intelligent model.
In a possible implementation manner of the first aspect, the first server is further configured to: determine that the inference result of the new version of the intelligent model regarding the quality of the workpiece or the processing device is unqualified; and, based on the unqualified inference result, generate a new-version rollback instruction and send it to the second server. The second server is further configured to: receive the new-version rollback instruction sent by at least one first server, and close the new version of the intelligent model and/or the new version of the training model; invoke the preset intelligent model and/or the preset training model after closing the new versions; and send the preset intelligent model and/or the preset training model to the at least one first server.
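The rollback flow above can be sketched as follows. This is a minimal illustration only: the class, method, and instruction names are assumptions, as the patent does not specify an implementation.

```python
# Hypothetical sketch of the second server's rollback handling: on a
# new-version rollback instruction, close the new version and recall
# the preset model. All names here are illustrative.

class SecondServer:
    def __init__(self, preset_model, new_model):
        self.preset_model = preset_model   # kept as a fallback
        self.active_model = new_model      # new version currently active

    def handle_rollback(self, instruction):
        if instruction == "ROLLBACK_NEW_VERSION":
            # Close the new version and call the preset model instead.
            self.active_model = self.preset_model
            return self.preset_model  # sent back to the first server(s)
        return None  # any other instruction is ignored here

server = SecondServer(preset_model="preset-v0", new_model="intelligent-v2")
restored = server.handle_rollback("ROLLBACK_NEW_VERSION")
```

After the rollback, `restored` and `server.active_model` both refer to the preset model, which the second server would then send to the first server(s).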
In a possible implementation manner of the first aspect, the processing information is encrypted information, and the first server is further configured to: receiving an information uploading instruction; generating a decryption instruction based on the information uploading instruction; decrypting the encrypted processing information based on the decryption instruction; transmitting the decrypted processing information to the initial smart model and/or the initial training model.
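The decrypt-on-instruction flow can be sketched as below. A trivial XOR cipher stands in for whatever encryption scheme the system actually uses, which the patent does not specify; the function names and key are illustrative assumptions.

```python
# Sketch of the first server's handling of encrypted processing
# information: receiving an information uploading instruction triggers
# decryption before the data is passed to the model. The XOR cipher is
# a placeholder, not the real scheme.

KEY = 0x5A  # illustrative single-byte key

def xor_cipher(data: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice restores the input."""
    return bytes(b ^ KEY for b in data)

def handle_upload_instruction(encrypted_info: bytes) -> bytes:
    # The upload instruction leads to a decryption instruction, modeled
    # here as a direct call to the cipher.
    decrypted = xor_cipher(encrypted_info)
    return decrypted  # would be transmitted to the training/intelligent model

secret = xor_cipher(b"spindle_speed=1200")  # device-side encryption
plain = handle_upload_instruction(secret)
```

Because the toy cipher is symmetric, `plain` equals the original `b"spindle_speed=1200"`.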
In a possible implementation manner of the first aspect, the first server is further configured to: monitor whether the storage amount of the decrypted processing information reaches a first preset amount; and if the storage amount of the decrypted processing information does not reach the first preset amount, generate the information uploading instruction.
In a possible implementation manner of the first aspect, the first server is further configured to: and if the storage amount of the decrypted processing information reaches a first preset amount, deleting the stored decrypted processing information which is transmitted to the initial intelligent model and/or the initial training model.
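The two branches above (request more data below the threshold; delete already-used data at the threshold) can be sketched together. All names and the threshold value are assumptions for illustration.

```python
# Illustrative sketch of monitoring the decrypted-information store
# against the first preset amount. Below the threshold a new upload
# instruction is generated; at the threshold, entries already
# transmitted to the model are deleted.

FIRST_PRESET_AMOUNT = 3   # assumed threshold for illustration

store = []                # decrypted processing information
transmitted = set()       # indices of entries already sent to the model

def monitor():
    if len(store) < FIRST_PRESET_AMOUNT:
        return "UPLOAD_INSTRUCTION"  # generate the information uploading instruction
    # Threshold reached: delete stored entries already used by the model.
    remaining = [x for i, x in enumerate(store) if i not in transmitted]
    store[:] = remaining
    transmitted.clear()
    return "DELETED_USED"
```

For example, with an empty store `monitor()` requests more data; once three entries are stored and the first has been transmitted, the next call deletes that first entry.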
In a second aspect, an embodiment of the present application provides a method for updating a model, where the method includes:
receiving a plurality of model parameters and a corresponding plurality of sample numbers; the model parameters and the sample number are output by an initial training model;
Calculating new model parameters based on a plurality of model parameters and a plurality of corresponding sample numbers;
inputting the new model parameters into the preset training model, and updating the model parameters in the preset training model to form a new training model.
In a possible implementation manner of the second aspect, the method further includes:
and carrying out version marking on the new training model to form the new version training model.
In a possible implementation manner of the second aspect, the method further includes:
inputting the new model parameters into the preset intelligent model, and updating the model parameters in the preset intelligent model to form a new intelligent model;
and carrying out version marking on the new intelligent model to form the new version of the intelligent model.
In a possible implementation manner of the second aspect, the method further includes:
receiving a new version rollback instruction;
closing the new version of the intelligent model and/or the new version of the training model based on the new version of the rollback instruction;
and recovering the preset intelligent model and/or the preset training model based on closing the new version of the intelligent model and/or the new version of the training model.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method according to any one of the first aspects.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where when the program runs, the program controls a device in which the computer readable storage medium is located to execute the method of any one of the first aspects.
By adopting the scheme provided by the embodiment of the application, the plurality of first servers are provided with initial training models, receive in one-to-one correspondence the processing information sent by the plurality of processing devices, and input the processing information into the initial training models for training, so that the initial training models output model parameters and numbers of samples. The second server receives the model parameters and number of samples sent by each first server, calculates new model parameters based on them, and sends the new model parameters; the plurality of first servers update their initial training models based on the new model parameters sent by the second server. That is, in the embodiment of the present application, each first server may receive processing information sent by its corresponding processing device and train an initial training model on that information to obtain model parameters and a number of samples. To further improve the accuracy of the model parameters, the plurality of first servers may send the output model parameters and numbers of samples to the second server, which may calculate new model parameters from them and send them back. Each first server can then update its initial training model according to the model parameters sent by the second server, making the initial training model more accurate. Moreover, in the embodiment of the application, the second server acquires only the model parameters and the number of samples and does not need to acquire the processing information itself, which reduces the possibility of leakage of the processing information and improves its security.
And the model parameters can be updated through the second server, so that the accuracy of the initial training model is improved. That is, in the embodiment of the application, the initial training model is automatically updated, the updating efficiency is high, the accuracy of the initial training model can be ensured, the possibility of leakage caused by transmission of processing information through multiple servers can be reduced, the reliability of model training is greatly improved, and the accuracy of product processing data and the product quality are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a model updating system according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario of a model update system according to an embodiment of the present application;
FIG. 3a is a schematic diagram of another model update system according to an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of another model update system according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another model update system according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of another model update system according to an embodiment of the present disclosure;
fig. 6 is a flow chart of a model updating method according to an embodiment of the present application;
FIG. 7 is a flowchart of another model update method according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "a and/or b" may represent: a exists alone, a and b exist together, or b exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In the related art, data collected by a plurality of processing devices can be uploaded to an edge server, which periodically inputs the uploaded data into a machine learning model for training; after training is complete, the edge server fixes the trained machine learning model to form an intelligent model that processes data from different devices in real time. However, when the parameter precision of the machine learning model becomes inaccurate, a user must manually import a new machine learning model; this manual import is inefficient and affects production efficiency.
In view of the above problems, embodiments of the present application provide a system, method, apparatus, device, and storage medium for model updating, in which a plurality of first servers have initial training models, receive in one-to-one correspondence processing information sent by a plurality of processing devices, and input the processing information into the initial training models for training, so that the initial training models output model parameters and numbers of samples. The second server receives the model parameters and number of samples sent by each first server, calculates new model parameters based on them, and sends the new model parameters; the plurality of first servers update their initial training models based on the new model parameters sent by the second server. That is, each first server may receive processing information sent by its corresponding processing device and train an initial training model on that information to obtain model parameters and a number of samples. To further improve the accuracy of the model parameters, the plurality of first servers may send the output model parameters and numbers of samples to the second server, which may calculate new model parameters from them and send them back. Each first server can then update its initial training model according to the model parameters sent by the second server, making the initial training model more accurate.
In the embodiment of the application, the second server acquires only the model parameters and the number of samples and does not need to acquire the processing information, which reduces the possibility of leakage of the processing information and improves its security. Moreover, the model parameters can be updated through the second server, improving the accuracy of the initial training model. That is, in the embodiment of the application, the initial training model is updated automatically and efficiently, its accuracy is ensured, the possibility of leakage caused by transmitting processing information through multiple servers is reduced, the reliability of model training is greatly improved, and the accuracy of product processing data and the product quality are improved. A detailed description follows.
Referring to fig. 1, a schematic structural diagram of a model update system according to an embodiment of the present application is provided. As shown in fig. 1, the system includes:
The plurality of first servers 10 are each provided with an initial training model and are configured to receive, in one-to-one correspondence, processing information sent by a plurality of processing devices, and to input the processing information into the initial training models for training so that the initial training models output model parameters and numbers of samples.
The second server 20 is configured to receive the model parameters and the sample number sent by each first server 10, calculate new model parameters based on the model parameters and the sample number sent by all the first servers 10, and send the new model parameters.
The plurality of first servers 10 are also for: the initial training model is updated based on the new model parameters sent by the second server 20.
In the embodiment of the present application, the plurality of first servers 10 each have an initial training model that needs to be trained. The plurality of first servers 10 are in one-to-one correspondence with the plurality of processing devices, so that each first server 10 can receive the processing information sent by its corresponding processing device and train its initial training model on that information; the trained initial training model then outputs model parameters and a number of samples. Because the processing information each first server 10 receives comes only from its own processing device, the model parameters output by some initial training models may have low accuracy and poor generalization. Therefore, to improve the accuracy and generalization of the model parameters, the plurality of first servers 10 each send the model parameters and number of samples output by their initial training models to the second server 20. The second server 20 receives the model parameters and number of samples sent by each first server 10, calculates the weight of each first server's model parameters from the received numbers of samples, calculates new model parameters from each server's model parameters and weight, and sends the new model parameters to the plurality of first servers 10, as shown in fig. 2. After the plurality of first servers 10 receive the new model parameters sent by the second server 20, they may update their initial training models based on the received parameters, so that the updated initial training models are more accurate.
As a possible implementation manner, the calculation of the new model parameters by the second server 20 based on the model parameters and the number of samples sent by all the first servers includes:
the second server 20 may be based on the formulaNew model parameters are obtained. Wherein N represents the sum of all the sample numbers received by the second server 20, i.e. +.>n i Representing the number of samples sent by the i-th first server 10. i is an integer greater than 0 and not greater than k, k being an integer greater than 0, indicating the number of first servers 10 that send the model parameters and the number of samples to the second server 20. w (w) i Representing the model parameters sent by the i-th first server 10. w represents new model parameters. Thus, by the above formula, the second server 20 can calculate the weight of the model parameter of each first server 10 based on the number of samples of each first server 10 after receiving the model parameters and the number of samples transmitted from the plurality of first servers 10>Further, the weight of the model parameter of each first server 10 and the model parameter can be calculated by the formulaNew model parameters are calculated.
As a possible implementation, after calculating the new model parameters, the second server 20 may create mirror document information according to the new model parameters, and send the mirror document information to the plurality of first servers 10. The plurality of first servers 10 may receive the mirrored document information, acquire new model parameters based on the mirrored document information, and further update the model parameters of the initial training model to the new model parameters.
As a possible implementation manner, to facilitate management of the model parameters by the second server 20, after new model parameters are calculated they may be version-marked to form new-version model parameters, which are then transmitted to the first server 10. In some embodiments, the second server 20 may store the new-version model parameters together with their version information, so that the second server 20 can subsequently obtain the required model parameters by querying the version information.
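The version marking and later lookup by version information can be sketched as below. The class and method names, and the "v1", "v2" labeling scheme, are illustrative assumptions.

```python
# Hypothetical sketch of the second server's versioned parameter store:
# new parameters get the next version label, and earlier versions stay
# retrievable by querying their version information.

class ParameterStore:
    def __init__(self):
        self._versions = {}  # version label -> model parameters
        self._latest = 0

    def save_new_version(self, params):
        """Version-mark the parameters and store them; return the label."""
        self._latest += 1
        version = f"v{self._latest}"
        self._versions[version] = params
        return version

    def get(self, version):
        """Obtain previously stored parameters by version information."""
        return self._versions[version]

store = ParameterStore()
tag = store.save_new_version([0.1, 0.2])  # first version-marked parameters
```

Here `tag` is the label of the newly stored parameters, and `store.get(tag)` would return them again later.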
As a possible implementation, the plurality of first servers 10 receiving the new model parameters may be all of the first servers 10 that sent model parameters and numbers of samples to the second server 20, or only some of them. That is, when the initial training model is updated, only the initial training models in part of the first servers 10 may be updated. In this case, the second server 20 may send the new model parameters only to the first servers 10 that need to update their initial models, and each first server 10 that receives the new model parameters may update its initial training model accordingly.
As one possible implementation, the model parameters may be weight parameters of different features within the model. For example, the initial training model comprises a plurality of convolution layers, each convolution layer can extract different characteristics, the functions of the model can be realized through the characteristic information, and weight parameters are arranged among the characteristic information, so that the model can realize model targets more accurately by adjusting the weight parameters of different characteristics.
In some embodiments, the features corresponding to different models may differ, and the weight parameters may also differ; the present application does not limit the initial training model. For example, the initial training model may be a training model for detecting whether the processing device is qualified, for detecting the quality of a workpiece, for target detection, or any other training model.
As a possible implementation, the model updating system further comprises a plurality of processing devices 30.
And a plurality of processing devices 30 for collecting and transmitting processing information of the workpiece.
In the embodiment of the present application, the above model updating system further includes a plurality of processing devices 30. Each processing device 30 is configured to collect processing information of a workpiece, and send the processing information to its corresponding first server 10. In some embodiments, a corresponding relationship between each processing device 30 and the workpiece may be preset, and thus each processing device 30 may collect processing information of the corresponding workpiece.
In some embodiments, the processing device 30 may be in one-to-one correspondence with the first server 10, which may facilitate the first server 10 to obtain the processing information. In some embodiments, the processing device 30 and the first server 10 are located in the same local area network, so that the processing device 30 and the first server 10 can transmit processing information through an internal network, and no external network is required for transmitting the processing information, thereby ensuring that the processing information is not revealed. Alternatively, one first server 10 may correspondingly receive the processing information of one processing device 30, as shown in fig. 3a, or one first server 10 may receive the processing information of a plurality of processing devices 30, as shown in fig. 3b, where the first server 10 corresponds to a plurality of processing devices 30.
It should be understood that the processing information is processing information of the workpiece, for example, machining information, stamping information, or welding information of the workpiece, but it may also be other information, which is not limited in this application.
As one possible implementation, the plurality of first servers 10 may not send the model parameters and numbers of samples of their initial training models to the second server 20 synchronously. Therefore, to obtain more accurate model parameters, the second server 20 may wait until the number of received sets of model parameters and numbers of samples reaches a first preset number threshold, which indicates that enough model parameters and sample counts have been collected, and then perform the calculation of the new model parameters. To this end, the second server 20 may store the model parameters and numbers of samples received from the first servers 10, for example in a memory or a database, and calculate the new model parameters when the stored count reaches the first preset number threshold.
As a possible implementation manner, when the storage amount reaches a second preset number threshold, many model parameters and sample counts are already stored. In this case, to prevent subsequently received model parameters and sample counts from being unstorable, the stored and already-used model parameters and sample counts may be deleted according to a preset rule. For example, a first-in-first-out principle may be adopted: the model parameters and sample counts stored first may be considered already used, so when the storage amount reaches the second preset number threshold, the earliest-stored entries may be deleted. Alternatively, the already-used entries among the stored model parameters and sample counts may be marked, and the marked entries deleted when the storage reaches the second preset number threshold. Of course, the stored model parameters and sample counts may also be deleted in other manners, which is not limited in this application. Alternatively, the used model parameters and sample counts may be transmitted to another module for use.
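The first-in-first-out variant maps naturally onto a bounded queue: once the capacity (the second preset number threshold) is reached, each new entry evicts the oldest one. A minimal sketch, with the threshold value and data layout assumed for illustration:

```python
# Sketch of the second server's FIFO buffer of (model parameters,
# sample count) entries. A deque with maxlen silently discards the
# oldest entry when the threshold is reached.
from collections import deque

SECOND_PRESET_THRESHOLD = 4  # assumed threshold for illustration

buffer = deque(maxlen=SECOND_PRESET_THRESHOLD)

for update in [("w1", 10), ("w2", 20), ("w3", 30), ("w4", 40), ("w5", 50)]:
    buffer.append(update)  # ("w1", 10) is evicted on the fifth append
```

After the loop the buffer holds the four most recent entries; the earliest-stored entry, assumed already used, has been dropped automatically.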
As a possible implementation manner, in order to reduce the complexity of updating the initial training model by the first server 10, the second server 20 may send the updated training model to the first server 10 directly after updating the training model according to the new model parameters. Based on this, the second server 20 has a preset training model.
The second server 20 is also configured to: inputting the new model parameters into a preset training model, and updating the model parameters in the preset training model to form a new training model. Carrying out version marking on the new training model to form a new version training model; the new version of the training model is sent to the first server 10.
The first server 10 is further configured to: update the initial training model to the new version training model.
In this embodiment of the present application, a preset training model may be set in the second server 20, and the model structure of the preset training model is the same as that of the initial training model in the first server 10. At this time, after calculating new model parameters according to the model parameters and the sample number of all the first servers 10, the second server 20 may input the new model parameters into the preset training model, and update the model parameters in the preset training model to the new model parameters to form a new training model. For easier management of the training model, the second server 20 may perform version tagging on the new training model to form a new version of the training model. That is, the new training model may be marked with the latest version information to form a new version of the training model, and the new version of the training model may be transmitted to the first server 10. At this time, the first server 10 updates the initial training model therein to the new version training model after receiving the new version training model.
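The version marking and later retrieval by version information might look like the following sketch; the tag format and storage layout are assumptions, not specified by the application:

```python
import itertools

class VersionRegistry:
    """Second-server-side sketch: each new training model (represented here
    by its parameters) gets a monotonically increasing version tag and is
    stored so an earlier version can be retrieved by version information."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._models = {}
        self.latest = None

    def tag(self, model_params):
        """Mark a new model with the next version tag and store it."""
        version = f"v{next(self._counter)}"
        self._models[version] = model_params
        self.latest = version
        return version

    def get(self, version):
        """Retrieve the model of a given version by its version information."""
        return self._models[version]
```

This also covers the storage mentioned in the next paragraph: keeping each tagged version allows the corresponding training model to be obtained later by querying the version information.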
In some embodiments, the second server 20 may store the new version of the training model and the version information, so that the corresponding version of the training model may be obtained later by querying the version information.
As a possible implementation, after forming the new version training model, the second server 20 may package the new version training model into container information and send the container information to the first server 10. That is, after forming the new version training model, the second server 20 may use the model parameters in the new version training model as mirror image document information, package the mirror image document information, the running information of the training model, and the like into container information, and send the container information to the first server 10. After receiving the container information, the first server 10 may obtain the new model parameters, the running information of the training model, and the like by parsing the container information; it may then update the relevant information of its initial training model to the parsed running information of the training model, and set the model parameters of the initial training model to the model parameters parsed from the container information, thereby completing the update of the initial training model.
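The packaging and parsing round trip can be illustrated as below. JSON stands in for the real container format, and the field names are invented; the application only says that the mirror image document information and running information are packaged together:

```python
import json

def package_container(model_params, run_info):
    """Second-server side: pack the model parameters (the 'mirror image
    document information') and the running information into one container
    payload. JSON is a stand-in for the real container format."""
    return json.dumps({"image_document": model_params, "run_info": run_info})

def unpack_container(container_info):
    """First-server side: parse the container information to recover the
    model parameters and the running information, which are then applied
    to the local initial model."""
    payload = json.loads(container_info)
    return payload["image_document"], payload["run_info"]
```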
As a possible implementation manner, when the initial training models in the first servers 10 are updated, only some of them may need updating. In this case, the second server 20 may first determine which first servers 10 need to update their initial training models, and then send the new version training model to those first servers 10. Each first server 10 that receives the new version training model may then update its initial training model accordingly.
As a possible implementation manner, the second server 20 may determine, according to a user instruction, the first servers 10 that need to update the initial training model. Of course, the first servers 10 that need the update may also be determined by a preset rule. For example, according to the output result accuracy rates of all the first servers 10 (that is, of their initial smart models, described below), sending of the new version training model or model parameters may be triggered automatically in order of accuracy from low to high. As another example, the new version training model or model parameters may be sent to those first servers 10 whose smart-model output accuracy rate is lower than a standard accuracy rate. This is not limited in the embodiment of the present application.
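The accuracy-based selection policy might be sketched as follows, with the standard accuracy rate and server identifiers assumed for illustration:

```python
STANDARD_ACCURACY = 0.9  # hypothetical standard accuracy rate

def servers_to_update(accuracy_by_server, standard=STANDARD_ACCURACY):
    """Pick the first servers whose smart-model output accuracy rate is
    below the standard rate, ordered low-to-high so the least accurate
    model is updated first (one of the trigger policies described)."""
    below = [(sid, acc) for sid, acc in accuracy_by_server.items()
             if acc < standard]
    return [sid for sid, _ in sorted(below, key=lambda item: item[1])]
```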
As one possible implementation, the first server 10 has an initial smart model that is trained by a preset training model for receiving processing information to infer the quality of the workpiece or the processing device 30.
The second server 20 further has a preset smart model, and the second server 20 is further configured to: input the new model parameters into the preset smart model, and update the model parameters in the preset smart model to form a new smart model; carry out version marking on the new smart model to form a new version smart model; and send the new version smart model to the first server, so that the first server updates the initial smart model to the new version smart model.
In the embodiment of the present application, the first server 10 needs to infer the quality of the workpiece or the processing device 30 from the received processing information; therefore, an initial smart model for estimating the quality of the workpiece or the processing device 30 from the processing information is provided in the first server 10. The initial smart model may be a model generated after training of the preset training model is completed. That is, the second server 20 may form a new version training model, and after it is sent to the first server 10, the first server 10 may update its initial training model to the new version training model and may solidify the new version training model into the initial smart model. In this way, the first server 10 can infer the quality of the workpiece or the processing device 30 from the received processing information through the initial smart model.
In order to make the initial smart model more accurate and reduce the operation complexity of the first server 10, a preset smart model is further provided in the second server 20. The second server 20 may input the new model parameters into the preset smart model to update its model parameters, that is, update the model parameters in the preset smart model to the new model parameters, and use the preset smart model with updated parameters as a new smart model. For convenience of managing the smart model, the second server 20 may then perform version tagging on the new smart model to form a new version smart model; that is, the new smart model may be marked as the latest version. After forming the new version smart model, the second server 20 may send it to the first server 10. The first server 10 may receive the new version smart model and update its initial smart model to it. In this way, the first server 10 only needs to perform the update operation according to the new version smart model, which greatly reduces its operation complexity.
In some embodiments, the second server 20 may store the new version of the smart model and the version information, so that the corresponding version of the smart model may be obtained later by querying the version information.
As a possible implementation, after forming the new version smart model, the second server 20 may package the new version smart model into container information and send the container information to the first server 10. That is, after forming the new version smart model, the second server 20 may use the model parameters in the new version smart model as mirror image document information, package the mirror image document information, the operation information of the smart model, and the like into container information, and send the container information to the first server 10. After receiving the container information, the first server 10 may obtain the new model parameters, the operation information of the smart model, and the like by parsing the container information; it may then update the relevant information of its initial smart model to the parsed operation information of the new version smart model, and set the model parameters of the initial smart model to the model parameters parsed from the container information, thereby completing the update of the initial smart model.
As a possible implementation manner, when the initial smart models in the first servers 10 are updated, only some of them may need updating. In this case, the second server 20 may first determine which first servers 10 need to update their initial smart models, and then send the new version smart model to those first servers 10. Each first server 10 that receives the new version smart model may then update its initial smart model accordingly.
As one possible implementation, the second server 20 may send the new version smart model and/or the new version training model through a module configuration delivery system. Since the process of sending the new version smart model to the first server 10 through the module configuration delivery system is the same as that of sending the new version training model, the following takes the process of the second server 20 sending a new version smart model to the first server 10 through the module configuration delivery system as an example.
As shown in fig. 4, the module configuration delivery system includes a first service component and a second service component. When the first service component determines that a new version of the smart model needs to be sent to the first server 10, a trigger instruction may be sent to the second service component. At this time, the second service component may acquire the relevant information of the new version of the intelligent model, package the relevant information of the new version of the intelligent model into container information, and send the container information to the first server 10. In some embodiments, if the second server 20 stores the related information of the new version of the smart model, stores the model parameters in the new version of the smart model as the image document information, and stores the related information of the operation of the new version of the smart model, the second service component may obtain the image document of the new version of the smart model and the related information of the operation of the new version of the smart model in the storage device, package the image document of the new version of the smart model and the related information of the operation of the new version of the smart model into the container information, and send the container information to the first server 10. In this way, the first server 10, upon receiving the container information, can update the initial smart model based on the container information.
As a possible implementation, the first service component may determine that a new version smart model needs to be sent to the first server 10 based on a trigger instruction from the user. That is, the user may send to the first service component a trigger instruction for a version update of the smart model of the first server 10; after receiving the user's trigger instruction, the first service component may determine the first server 10 whose initial smart model needs updating and send a trigger instruction to the second service component.
As one possible implementation, the first server 10 needs to perform inference on the processing information through its initial smart model to achieve a target function. After the first server 10 updates its initial smart model, if the updated model performs poorly, the first server 10 sends a version rollback instruction to the second server 20. Based on this, the first server 10 is further configured to: determine that the new version smart model's inference result on the quality of the workpiece or the processing device is unqualified; and, based on that determination, generate a new version rollback instruction and send it to the second server 20.
The second server 20 is also configured to: receive the new version rollback instruction sent by at least one first server 10, and close the new version smart model and/or the new version training model; based on closing the new version smart model and/or the new version training model, call the preset smart model and/or the preset training model; and send the preset smart model and/or the preset training model to the at least one first server 10.
In this embodiment of the present application, after the first server 10 updates the initial smart model, the updated model's actual inference performance may turn out to be poor, in which case the initial smart model in the first server 10 may be rolled back to the version before the update. Thus, after updating the initial smart model to the new version smart model, the first server 10 may analyze and determine whether the new version smart model's inference results are qualified. In some embodiments, whether the new version smart model's inference results on the quality of the workpiece or the processing device are qualified may be determined by the user. That is, the user may judge, for each inference made in the first server 10, whether the new version smart model's inference result on the quality of the workpiece or the processing device is accurate. For example, if, among a preset number of inference results, the number of inaccurate results exceeds a preset threshold, it may be determined that the new version smart model's inference results on the quality of the workpiece or the processing device are unqualified. After making this determination, the user may send to the first server 10 an instruction indicating that the new version smart model's inference results on the quality of the workpiece or the processing device are unqualified.
At this time, when the first server 10 receives such an instruction, it may determine that the new version smart model's inference results on the quality of the workpiece or the processing device are unqualified.
Alternatively, in some embodiments, to reduce user involvement and improve the efficiency of model updating, after the initial smart model is updated to the new version smart model, the first server 10 may judge, by itself or through other devices, whether the new version smart model's inference results on the quality of the workpiece or the processing device are qualified. When the first server 10 makes this judgment itself, it may detect the processed workpiece to determine whether the workpiece is qualified, use that result to determine whether the new version smart model's inference on the quality of the workpiece or the processing device was accurate, and thereby determine whether the inference results are qualified.
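One hedged way to automate the qualification judgment is to count inaccurate results over a recent window of inferences, as in this sketch (the window size and threshold are hypothetical preset numbers):

```python
def needs_rollback(inference_correct_flags, window=10, max_inaccurate=3):
    """Judge whether the new version smart model's inference results are
    unqualified: among the most recent `window` results, if more than
    `max_inaccurate` are inaccurate, a version rollback is warranted.
    `inference_correct_flags` is a list of booleans, one per inference,
    produced by comparing inferences against detected workpiece quality."""
    recent = inference_correct_flags[-window:]
    inaccurate = sum(1 for ok in recent if not ok)
    return inaccurate > max_inaccurate
```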
If the first server 10 determines that the new version smart model's inference results on the quality of the workpiece or the processing device are unqualified, this indicates that the new version smart model is not suitable for the first server 10. In that case, the first server 10 may generate a new version rollback instruction and send it to the second server 20. After receiving the new version rollback instruction sent by at least one first server 10, the second server 20 may determine that the new version smart model's inference performance is poor, and may stop using the new model parameters. Since the new model parameters are also used by the new version training model, the second server 20 may close the new version smart model and/or the new version training model; that is, the second server 20 no longer uses them. The second server 20 may then call the preset smart model and/or the preset training model according to the version information of the closed smart model and/or training model, i.e. invoke the smart model and/or training model of the previous version, and send the retrieved preset smart model and/or preset training model to the at least one first server 10, where the at least one first server 10 is each first server 10 that sent a new version rollback instruction to the second server 20.
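The rollback step of closing the new version and retrieving the previous one by version information can be sketched as follows; the ordered version list is an assumed bookkeeping structure, not something the application defines:

```python
def roll_back(version_history, current_version):
    """On receiving a new version rollback instruction, close the current
    (new) version and return the previous version to send back to the
    first servers that requested the rollback. `version_history` is the
    ordered list of version tags kept by the second server."""
    idx = version_history.index(current_version)
    if idx == 0:
        raise ValueError("no earlier version to roll back to")
    closed_version = version_history[idx]       # this version is closed
    restored_version = version_history[idx - 1]  # previous version is called
    return restored_version, closed_version
```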
Since the new version smart model and/or new version training model in each first server 10 that sent the new version rollback instruction suffers from poor inference performance, the second server 20 needs to send the retrieved preset smart model and/or preset training model to that at least one first server 10.
As a possible implementation, when the second server 20 sends the preset smart model and/or the preset training model to the at least one first server 10, this may be implemented by the module configuration delivery system therein. As shown in fig. 5, after the second server 20 receives the new version rollback instruction sent by at least one first server 10, the first service component determines that the smart model and/or training model of the previous version needs to be sent to the at least one first server 10, and may send a sending instruction for the previous-version smart model and/or training model to the second service component. The second service component may then determine, from the version information of the new version smart model and/or training model that was sent, the version information of the smart model and/or training model to be retrieved, obtain the relevant information of the previous-version smart model and/or training model, package it into container information, and send the container information to the at least one first server 10.
In some embodiments, if the second server 20 stores the relevant information of the smart models and/or training models, storing the model parameters of each as mirror image document information together with the relevant operation information, the second service component may obtain from the storage device the mirror image document of the previous-version smart model and its operation information, package them into container information, and send the container information to the at least one first server 10; and/or obtain the mirror image document of the previous-version training model and its operation information, package them into container information, and send the container information to the at least one first server 10. In this way, upon receiving the container information, the at least one first server 10 may update its initial smart model and/or initial training model based on the container information. That is, the initial smart model of the at least one first server 10 is updated with the previous-version preset smart model in the second server 20, and/or its initial training model is updated with the previous-version preset training model in the second server 20.
As a possible implementation manner, in order to improve the security of the processing information, the processing device 30 may encrypt the processing information and then send the encrypted processing information to the first server 10. At this time, the processing information is encrypted information, and the first server 10 is further configured to: receiving an information uploading instruction; generating a decryption instruction based on the information uploading instruction; decrypting the encrypted processing information based on the decryption instruction; the decrypted process information is transmitted to the initial smart model and/or the initial training model.
In the embodiment of the present application, the processing device 30 encrypts the processing information and transmits the encrypted processing information to the first server 10. When the first server 10 receives an information uploading instruction, this indicates that the received processing information should be transmitted to the initial smart model and/or the initial training model. Since the initial smart model and/or the initial training model cannot use the encrypted processing information directly, the first server 10 needs to decrypt it first. The first server 10 may generate a decryption instruction after receiving the information uploading instruction, and decrypt the encrypted processing information according to the decryption instruction to obtain the decrypted processing information. The first server 10 then transmits the decrypted processing information to the initial smart model and/or the initial training model, so that the initial smart model can infer the quality of the workpiece or the processing device based on the processing information, and/or the initial training model can be trained on the processing information to output model parameters and a sample number.
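The decrypt-then-forward flow might look like the sketch below. The XOR cipher is a deliberately toy stand-in, since the application does not name the encryption scheme used by the processing device; all function names are invented for illustration.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for whatever real encryption the
    processing device applies. XOR with a repeating key is its own inverse,
    so the same function encrypts and decrypts. For illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def handle_upload_instruction(encrypted_info: bytes, key: bytes) -> bytes:
    """First-server side: on an information uploading instruction, generate
    the decryption step and return the plaintext processing information,
    which would then be forwarded to the initial smart/training model."""
    decrypted = xor_cipher(encrypted_info, key)  # the 'decryption instruction'
    return decrypted
```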
As a possible implementation manner, after receiving the processing information, the first server 10 may store the decrypted processing information before transmitting the decrypted processing information to the initial intelligent model and/or the initial training model, so that decoupling between acquiring the processing information and using the processing information may be achieved, without using the processing information immediately after acquiring the processing information, and flexibility of acquiring the processing information and using the processing information by the first server 10 is improved. In some embodiments, the first server 10 may store the decrypted process information in a database or storage device such that the decrypted process information may be retrieved from the database or storage device when the process information is needed for use by the initial smart model and/or the initial training model.
As a possible implementation manner, the information uploading instruction carries an information topic, and the first server 10 is further configured to: obtain the processing information matched with the information topic carried by the information uploading instruction; and transmit the processing information matched with the information topic to the initial smart model and/or the initial training model.
That is, the processing information acquired by the first server 10 may be of various kinds, and only part of it may be required for model training and/or target inference, so the stored decrypted processing information may be filtered before being transmitted to the initial smart model and/or the initial training model. To this end, the information uploading instruction carries an information topic, and only the processing information matched with that topic is transmitted; a mapping relationship between processing information and information topics is preset. In this way, the first server 10 may use the topic carried in the information uploading instruction to obtain, from the stored decrypted processing information, the processing information matched with the topic, and transmit it to the initial smart model and/or the initial training model. The processing information is thus filtered by topic to train the initial training model for the target function and/or to achieve target inference through the initial smart model.
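The topic-based filtering can be illustrated with a simple sketch, assuming each stored record is paired with its preset information topic:

```python
def filter_by_topic(stored_info, topic):
    """Select only the decrypted processing information whose preset
    information topic matches the topic carried in the information
    uploading instruction. Records are (topic, payload) pairs here;
    the real mapping between information and topics is preset by the
    application and not specified further."""
    return [payload for t, payload in stored_info if t == topic]
```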
As a possible implementation, the first server 10 is further configured to: monitor whether the first storage amount of the decrypted processing information reaches a preset amount; and if the first storage amount of the decrypted processing information reaches the preset amount, generate an information uploading instruction.
In the embodiment of the application, to improve the inference accuracy of the initial smart model and/or the model accuracy of the initial training model, more processing information is needed when performing inference with the initial smart model or training the initial training model. Therefore, before being transmitted to the initial smart model and/or the initial training model, the decrypted processing information is stored, and is transmitted only after its first storage amount reaches a preset amount. Based on this, the first server 10 also needs to monitor, while storing the decrypted processing information, whether the first storage amount has reached the preset amount. When it has, enough decrypted processing information is available and can be transmitted to the initial smart model and/or the initial training model; thus, the first server 10 may generate an information uploading instruction to transmit the decrypted processing information.
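The storage-amount monitor that generates the information uploading instruction might be sketched as follows (the preset amount and the string representation of the instruction are assumptions):

```python
PRESET_AMOUNT = 3  # hypothetical preset amount of stored records

class InfoStore:
    """First-server side sketch: store decrypted processing information and
    emit an information uploading instruction once the first storage amount
    reaches the preset amount."""

    def __init__(self, preset=PRESET_AMOUNT):
        self.preset = preset
        self.records = []

    def store(self, record):
        """Store one decrypted record; return an uploading instruction
        when enough records have accumulated, otherwise None."""
        self.records.append(record)
        if len(self.records) >= self.preset:
            return "information_upload_instruction"
        return None
```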
As a possible implementation, the first server 10 is further configured to: if the storage amount of the decrypted processing information reaches a first preset amount, delete the stored decrypted processing information that has already been transmitted to the initial smart model and/or the initial training model.
In the embodiment of the present application, the processing device 30 continually transmits the processing information it acquires to the first server 10, so the processing information held by the first server 10 gradually increases, and so does the stored decrypted processing information. However, the storage space of the storage device or database is limited; if it were used up completely, processing information newly transmitted by the processing device 30 could not be stored. Therefore, when the storage amount of the decrypted processing information reaches the first preset amount, the remaining available storage space is small. At this point, in order to store newly transmitted processing information, the first server 10 may delete the decrypted processing information that has already been used, that is, the stored decrypted processing information that has already been transmitted to the initial smart model and/or the initial training model, thereby freeing storage space in the storage device or database.
In this embodiment of the present application, each first server may receive the processing information sent by its corresponding processing device and train an initial training model on that information to obtain model parameters and a sample number. To further improve the accuracy of the model parameters, the plurality of first servers may send the output model parameters and sample numbers to the second server. The second server may calculate new model parameters based on the received model parameters and sample numbers and send the new model parameters back, so that each first server can update its initial training model accordingly, making it more accurate. In the embodiment of the application, the second server only needs to acquire the model parameters and sample numbers and does not need to acquire the processing information itself, which reduces the possibility of leakage of the processing information and improves its security, while the model parameters updated through the second server improve the accuracy of the initial training model. That is, in the embodiment of the application, the accuracy of the initial training model can be ensured, the possibility of leakage of processing information can be reduced, and the reliability of model training is greatly improved.
Referring to fig. 6, a flow chart of a model updating method according to an embodiment of the present application is provided. As shown in fig. 6, the method is applied to the second server, and the method includes:
step S601, a plurality of model parameters and a corresponding plurality of sample numbers are received.
Wherein the model parameters and the number of samples are output by an initial training model. The initial training model is a model trained using the machining information.
Step S602, calculating new model parameters based on the model parameters and the corresponding sample numbers.
Step S603, inputting new model parameters into the preset training model, and updating model parameters in the preset training model to form a new training model.
As a possible implementation manner, as shown in fig. 7, the method further includes:
and step S604, performing version marking on the new training model to form a new version training model.
As a possible implementation manner, referring to fig. 7, the method further includes:
Step S605, inputting the new model parameters into a preset intelligent model, and updating the model parameters in the preset intelligent model to form a new intelligent model.
Step S606, performing version marking on the new intelligent model to form a new version of the intelligent model.
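Steps S604 and S606 both amount to tagging a newly formed model with a version mark so it can later be addressed, and if necessary rolled back, by version. A minimal registry sketch follows; the class name, tag scheme, and dictionary model representation are assumptions:

```python
from typing import Optional

class ModelRegistry:
    """Keeps version-marked models and tracks which version is active."""

    def __init__(self, preset_model: dict):
        self._versions = {"preset": preset_model}
        self._active = "preset"

    def mark_version(self, model: dict, tag: Optional[str] = None) -> str:
        """Version-mark a new model and make it the active version."""
        if tag is None:
            tag = f"v{len(self._versions)}"  # auto-number: preset, v1, v2, ...
        self._versions[tag] = model
        self._active = tag
        return tag

    def active_model(self) -> dict:
        return self._versions[self._active]

registry = ModelRegistry(preset_model={"weights": [0.1, 0.2]})
tag = registry.mark_version({"weights": [2.5, 3.5]})
print(tag)  # v1
```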
As a possible implementation manner, referring to fig. 7, the method further includes:
Step S607, receiving a new version rollback instruction.
Step S608, closing the new version of the intelligent model and/or the new version of the training model based on the new version rollback instruction.
Step S609, restoring the preset intelligent model and/or the preset training model after the new version of the intelligent model and/or the new version of the training model has been closed.
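The rollback of steps S607 to S609 can be sketched on the same dictionary representation: while a new version exists it is served, and closing it restores the preset model. All names here are illustrative assumptions, not the application's API:

```python
from typing import Optional

class VersionedModelServer:
    """Serves the new version of a model while one is deployed,
    and falls back to the preset model after a rollback."""

    def __init__(self, preset_model: dict):
        self.preset_model = preset_model
        self.new_model: Optional[dict] = None

    def deploy_new_version(self, new_model: dict) -> None:
        self.new_model = new_model

    def current_model(self) -> dict:
        return self.new_model if self.new_model is not None else self.preset_model

    def rollback(self) -> dict:
        """S608: close the new version; S609: the preset model is current again."""
        self.new_model = None
        return self.current_model()

server = VersionedModelServer(preset_model={"weights": [0.1]})
server.deploy_new_version({"weights": [2.5]})
print(server.rollback())  # {'weights': [0.1]}
```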
It should be noted that the above model updating method may be applied to the second server 20; for the specific implementation process, reference may be made to the working process of the second server 20 in the above updating system, which is not repeated here.
Corresponding to the above embodiments, the present application also provides an electronic device. Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 800 may include a processor 801, a memory 802, and a communication unit 803. These components may communicate via one or more buses. Those skilled in the art will appreciate that the configuration of the electronic device shown in the figure does not limit the embodiments of the invention: it may be a bus structure or a star structure, include more or fewer components than shown, combine certain components, or arrange components differently.
The communication unit 803 is configured to establish a communication channel so that the electronic device can communicate with other devices, receiving user data sent by other devices or sending user data to other devices.
The processor 801 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the functions of the electronic device and/or processes data by running or executing software programs, instructions, and/or modules stored in the memory 802 and invoking data stored in the memory. The processor may consist of integrated circuits (ICs), for example a single packaged IC, or of multiple packaged ICs with the same or different functions connected together. For example, the processor 801 may include only a central processing unit (CPU). In the embodiment of the invention, the CPU may have a single computing core or multiple computing cores.
The memory 802 is configured to store instructions for execution by the processor 801. The memory 802 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
When the instructions in the memory 802 are executed by the processor 801, the electronic device 800 is enabled to perform some or all of the steps of the model updating method in any of the embodiments described above.
In a specific implementation, the present invention further provides a computer storage medium that may store a program; when executed, the program may perform some or all of the steps in each embodiment of the model updating method provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In a specific implementation, the present invention also provides a computer program product containing executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps in the embodiments of the model updating method provided by the present invention.
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments, or some parts of the embodiments, of the present invention.
The various embodiments in this specification may be referred to one another for their same or similar parts. In particular, since the device and terminal embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant points, reference may be made to the description in the method embodiments.

Claims (13)

1. A model updating system, comprising:
the first servers are provided with initial training models and are used for respectively receiving machining information sent by the plurality of machining devices in one-to-one correspondence, and inputting the machining information into the initial training models for training so that the initial training models output model parameters and sample numbers;
the second server is used for receiving the model parameters and the sample number sent by each first server, calculating new model parameters based on the model parameters and the sample number sent by all the first servers, and sending the new model parameters;
the plurality of first servers are further configured to: updating the initial training model based on the new model parameters sent by the second server.
2. The model updating system of claim 1, wherein:
the second server is provided with a preset training model, and is further used for:
inputting the new model parameters into the preset training model, and updating the model parameters in the preset training model to form a new training model;
carrying out version marking on the new training model to form a new version training model;
transmitting the new version of training model to the first server;
the first server is further configured to: and updating the initial training model into a new version training model.
3. The model updating system of claim 2, wherein:
the first server is provided with an initial intelligent model, wherein the initial intelligent model is generated by training with the preset training model and is used for receiving the processing information to make inferences about the quality of a workpiece or about the processing device;
the second server is further provided with a preset intelligent model, and is further used for:
inputting the new model parameters into the preset intelligent model, and updating the model parameters in the preset intelligent model to form a new intelligent model;
carrying out version marking on the new intelligent model to form a new version of the intelligent model;
sending the new version of the intelligent model to the first server, so that the first server updates the initial intelligent model to the new version of the intelligent model.
4. The model updating system of claim 3, wherein:
the first server is further configured to:
determining that an inference result of the new version of the intelligent model on the quality of the workpiece or on the processing device is unqualified;
based on the unqualified inference result, generating a new version rollback instruction and sending the new version rollback instruction to the second server;
the second server is further configured to:
receiving the new version rollback instruction sent by at least one first server, and closing the new version of the intelligent model and/or the new version of the training model;
invoking the preset intelligent model and/or the preset training model based on the closing of the new version of the intelligent model and/or the new version of the training model;
and sending the preset intelligent model and/or the preset training model to at least one first server.
5. The model updating system of claim 3, wherein the processing information is encrypted information, and the first server is further configured to:
Receiving an information uploading instruction;
generating a decryption instruction based on the information uploading instruction;
decrypting the encrypted processing information based on the decryption instruction;
transmitting the decrypted processing information to the initial intelligent model and/or the initial training model.
6. The model updating system of claim 5, wherein the first server is further configured to:
monitoring whether the storage amount of the decrypted processing information reaches a first preset amount;
and if the storage amount of the decrypted processing information does not reach the first preset amount, generating the information uploading instruction.
7. The model updating system of claim 6, wherein the first server is further configured to:
and if the storage amount of the decrypted processing information reaches the first preset amount, deleting the stored decrypted processing information that has been transmitted to the initial intelligent model and/or the initial training model.
8. A method of model updating, the method comprising:
receiving a plurality of model parameters and a corresponding plurality of sample numbers; the model parameters and the sample number are output by an initial training model;
calculating new model parameters based on the plurality of model parameters and the corresponding plurality of sample numbers;
inputting the new model parameters into a preset training model, and updating the model parameters in the preset training model to form a new training model.
9. The method of claim 8, wherein the method further comprises:
and carrying out version marking on the new training model to form a new version of the training model.
10. The method of claim 8, wherein the method further comprises:
inputting the new model parameters into a preset intelligent model, and updating the model parameters in the preset intelligent model to form a new intelligent model;
and carrying out version marking on the new intelligent model to form the new version of the intelligent model.
11. The method according to claim 10, wherein the method further comprises:
receiving a new version rollback instruction;
closing the new version of the intelligent model and/or the new version of the training model based on the new version of the rollback instruction;
and recovering the preset intelligent model and/or the preset training model based on closing the new version of the intelligent model and/or the new version of the training model.
12. An electronic device comprising a memory for storing a computer program and a processor for executing the program, wherein the computer program instructions, when executed by the processor, cause the electronic device to perform the method of any of claims 8-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method of any one of claims 8-10.
CN202311854410.8A 2023-12-28 2023-12-28 Model updating system, method, device, equipment and storage medium Pending CN117808119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311854410.8A CN117808119A (en) 2023-12-28 2023-12-28 Model updating system, method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117808119A true CN117808119A (en) 2024-04-02

Family

ID=90421279




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination