CN113988212A - Management method and device of algorithm model - Google Patents

Management method and device of algorithm model

Info

Publication number
CN113988212A
CN113988212A (application number CN202111331431.2A)
Authority
CN
China
Prior art keywords
training
algorithm model
target
information
training result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111331431.2A
Other languages
Chinese (zh)
Inventor
刘镇熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ideamake Software Technology Co Ltd
Original Assignee
Shenzhen Ideamake Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ideamake Software Technology Co Ltd filed Critical Shenzhen Ideamake Software Technology Co Ltd
Priority to CN202111331431.2A priority Critical patent/CN113988212A/en
Publication of CN113988212A publication Critical patent/CN113988212A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/217: Validation; Performance evaluation; Active pattern learning techniques

Abstract

The embodiments of the present application disclose a management method and device for an algorithm model. The method includes: acquiring first information of a target algorithm model trained by each of a plurality of user equipments to obtain a plurality of pieces of first information, where each piece of first information includes a first training parameter group and a first training result index group, the first training parameter group includes a plurality of training parameters of the target algorithm model, and the first training result index group includes a plurality of training result indexes of the target algorithm model; selecting a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups; and sending the target training parameter group to the plurality of user equipments to instruct the plurality of user equipments to update the target algorithm model respectively. By updating the target algorithm model of each user equipment according to the training parameters and training result indexes produced when different user equipments train the target algorithm model, the method and device realize unified management of the algorithm models of a plurality of user equipments, reducing the management cost while improving the accuracy of the algorithm models.

Description

Management method and device of algorithm model
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for managing an algorithm model.
Background
With the rapid development of computer technology, technologies such as cloud computing, big data, and artificial intelligence are being applied ever more widely in daily life. These technologies are usually implemented based on various types of algorithm models, that is, procedures for solving a class of problems, such as prediction models, error-correction models, word segmentation models, and image recognition models.
Projects involving algorithm models mainly use one of two modes: a local mode and a cloud server mode. In the local mode, the training and optimization results of the model, together with the model file, are stored locally, and the deployed model is likewise kept in a designated file; this state is easily confused and makes updating the model and automating the workflow inconvenient. In the cloud server mode, all processes run on the server side, which facilitates unified management, but long-term use is costly. How to manage algorithm models effectively is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the present application provide a management method and device for an algorithm model, which can manage the algorithm models run by a plurality of user equipments, reduce the management cost, and improve the accuracy of the algorithm models.
In a first aspect, an embodiment of the present application provides a method for managing an algorithm model, where the method includes:
acquiring first information of a target algorithm model trained by each of a plurality of user equipments to obtain a plurality of pieces of first information, where each piece of first information includes a first training parameter group and a first training result index group, the first training parameter group includes a plurality of training parameters of the target algorithm model, and the first training result index group includes a plurality of training result indexes of the target algorithm model;
selecting a target training parameter group from the first training parameter groups according to the first training result index groups;
and sending the target training parameter group to the user equipments to instruct the user equipments to update the target algorithm model respectively.
In a second aspect, an apparatus for managing an algorithm model provided in an embodiment of the present application includes a processing unit and a transceiver unit, wherein,
the processing unit is configured to acquire first information of a target algorithm model trained by each of a plurality of user equipments to obtain a plurality of pieces of first information, where each piece of first information includes a first training parameter group and a first training result index group, the first training parameter group includes a plurality of training parameters of the target algorithm model, and the first training result index group includes a plurality of training result indexes of the target algorithm model;
the processing unit is further configured to select a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups;
the transceiver unit is configured to send the target training parameter group to the plurality of user equipments to instruct the plurality of user equipments to update the target algorithm model respectively.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, a communication interface, and one or more programs, which are stored in the memory and configured to be executed by the processor, and which include instructions for performing some or all of the steps described in the method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform some or all of the steps described in the method of the first aspect.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
According to the technical solution above, a plurality of pieces of first information are obtained by acquiring the first information of a target algorithm model trained by each of a plurality of user equipments, where each piece of first information includes a first training parameter group and a first training result index group, the first training parameter group includes a plurality of training parameters of the target algorithm model, and the first training result index group includes a plurality of training result indexes of the target algorithm model; a target training parameter group is selected from the plurality of first training parameter groups according to the plurality of first training result index groups; and the target training parameter group is sent to the plurality of user equipments to instruct the plurality of user equipments to update the target algorithm model respectively. By updating the target algorithm model of each user equipment according to the training parameters and training result indexes produced when different user equipments train the target algorithm model, the method and device realize unified management of the algorithm models of a plurality of user equipments, reducing the management cost while improving the accuracy of the algorithm models.
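Although the aspects above are implementation-agnostic, the select-and-broadcast flow they describe can be sketched in a few lines of Python. Every name below (FirstInfo, select_target_params, the "accuracy" index) is a hypothetical illustration for readability, not part of the application:

```python
from dataclasses import dataclass

@dataclass
class FirstInfo:
    device_id: str
    params: dict    # first training parameter group
    metrics: dict   # first training result index group

def select_target_params(infos, key="accuracy"):
    """Pick the parameter group whose training result index group is best
    (here simplified to the highest value of one chosen index)."""
    best = max(infos, key=lambda info: info.metrics[key])
    return best.params

def broadcast(params, devices):
    """Stand-in for sending the target parameter group to each user equipment."""
    return {d: params for d in devices}

infos = [
    FirstInfo("ue-1", {"lr": 0.01}, {"accuracy": 0.91}),
    FirstInfo("ue-2", {"lr": 0.05}, {"accuracy": 0.87}),
]
target = select_target_params(infos)
updates = broadcast(target, ["ue-1", "ue-2"])
print(target)  # {'lr': 0.01}
```

In the actual scheme the comparison uses a whole index group rather than one scalar; the single-index maximum here merely shows the shape of the flow.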
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a structure of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a management system of an algorithm model provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of data flow of an algorithm model training provided by an embodiment of the present application;
FIG. 4 is a flow chart of a management method of an algorithm model according to an embodiment of the present disclosure;
FIG. 5a is a schematic diagram of a user interface for managing a plurality of algorithm models according to an embodiment of the present application;
FIG. 5b is a schematic diagram of a user interface of an algorithm model file according to an embodiment of the present application;
FIG. 6 is a flow chart of another algorithm model management method provided by the embodiment of the application;
FIG. 7 is a block diagram of functional units of a management device of an algorithm model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art on the basis of the described embodiments without creative effort fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above drawings are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, product, or apparatus that comprises a list of steps or elements is not limited to those listed, but may include other steps or elements not listed or inherent to such process, method, product, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 140, a wireless communication module 150, a display screen 160, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. The USB interface 130 may also be used to connect to a headset to play audio through the headset.
Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 140 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 140 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 140 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 140 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 150 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), UWB, and the like. The wireless communication module 150 may be one or more devices integrating at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves radiated through the antenna 2.
The electronic device 100 implements display functions via the GPU, the display screen 160, and the application processor, among others. The GPU is a microprocessor for image processing, and is connected to the display screen 160 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information. In some embodiments, the electronic device 100 may include 1 or more display screens 160.
Exemplarily, as shown in fig. 2, fig. 2 is a schematic diagram of a management system of an algorithm model provided in an embodiment of the present application. As illustrated, the system may provide task scheduling services and tracking services for training and inference execution of algorithmic models for multiple users.
The task scheduling service handles task flow orchestration and timed tasks. It is an intermediate service between the algorithm model training environment and the tracking service: it associates, schedules, and coordinates different services or tasks, can connect to algorithm model data sources, and carries the data pipeline processing flow. Illustratively, the task scheduling service may be provided by the DolphinScheduler task scheduling tool.
The tracking service records the parameters, code versions, evaluation indexes, and other output files of the algorithm model training process, and exposes them through application programming interfaces (APIs) and user interfaces (UIs). For example, the MLflow tool may be used for algorithm model monitoring and tracking; when team collaboration is required, team members can continuously sync the results of local runs to the server. Further, a separate experiment or project may be established for each team for easier management.
The file repository provides a file storage service for storing algorithm model files, pictures that need to be saved, final results, and other files. The service support database stores the data that the tracking service needs to record, which may include table structures such as an experiment record table, an experiment label table, an algorithm model index table, an algorithm model version table, and an algorithm model registration table; a MySQL database may be used.
Further, the local training environment refers to any machine that runs algorithm model training and inference, including environments such as self-built servers and cloud environments. It is not limited to one user and may serve any number of users in a team who need unified management.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the data flow of algorithm model training according to an embodiment of the present disclosure. As shown in FIG. 3, a task is first launched in the task scheduling service, data is pulled from the model data source (and file repository), and the data processing flow is executed. After the processed data is pushed to the local training environment, algorithm model training starts locally: the tracking service is connected, an experiment is started, and the logs and indexes produced during training are associated with that experiment. Parameters, code versions, evaluation indexes, and other data tracked during the process are recorded in the service support database; files generated during training that need to be kept are associated with the experiment and saved to the file repository through task scheduling, and the saved file addresses are then recorded in the service support database. After training finishes, the model file is saved to the file repository in the same way and registered as an algorithm model, and its version is automatically set and recorded in the service support database. Meanwhile, whether to start the next model iteration is decided by comparing the indexes of the current algorithm model with the historical indexes and the target indexes. The tracking service may also launch a web UI to display all of the key processes described above and to provide further operations such as index comparison, model deployment, and experiment editing. Further, the task scheduling service may start timed tasks that periodically delete, from the service support database and the file repository, content that was manually deleted in the Web UI.
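The passage above says the next iteration is decided by comparing the current index, the historical indexes, and the target index, but leaves the exact rule unspecified. One plausible reading, sketched here purely as an assumption, is that iteration continues until the target index is reached, and a new version is registered only when it beats the historical best:

```python
def should_start_next_iteration(current_index, target_index):
    # Assumed rule: keep iterating until the target index is reached
    # (higher is taken to be better).
    return current_index < target_index

def should_register_new_version(current_index, historical_best):
    # Assumed rule: register a new model version only if it improves
    # on the historical best index.
    return current_index > historical_best

print(should_start_next_iteration(0.90, 0.95))   # True: target not yet met
print(should_register_new_version(0.90, 0.88))   # True: improves on history
```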
In any inference environment, one only needs to connect to the tracking service and specify the registered model name and version; the saved model can then be pulled and used directly for inference.
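As a toy illustration of that pull-by-name-and-version workflow, here is a minimal in-memory registry; the class, model name, and stored model are all invented for the example (a real deployment would use the tracking service's own registry):

```python
class ModelRegistry:
    """Toy stand-in for the tracking service's model registry."""

    def __init__(self):
        self._store = {}

    def register(self, name, version, model):
        # Keyed by (registered model name, version), as in the text.
        self._store[(name, version)] = model

    def load(self, name, version):
        # Pull the saved model directly for inference.
        return self._store[(name, version)]

registry = ModelRegistry()
registry.register("word_seg", "v2", lambda text: text.split())
model = registry.load("word_seg", "v2")
print(model("hello world"))  # ['hello', 'world']
```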
In conjunction with the above description, the present application is described below from the perspective of method examples.
Referring to fig. 4, fig. 4 is a schematic flowchart of a management method of an algorithm model according to an embodiment of the present application, and the management method is applied to the electronic device shown in fig. 1. As shown in fig. 4, the method includes the following steps.
S410, obtaining first information of a plurality of user equipment training target algorithm models to obtain a plurality of first information, wherein each first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm models, and the first training result index group comprises a plurality of training result indexes of the target algorithm models.
In the present application, the training of the algorithm model by each user equipment can be tracked, and data such as the parameters and indexes of the algorithm models trained on a plurality of user equipments are recorded. After the user equipments have trained the algorithm models, the electronic device can compare the trained algorithm models on the plurality of user equipments according to the data recorded by the tracking service and select the optimal algorithm model among them, so that the algorithm model on each user equipment can be updated to the optimal version, realizing synchronous updating and unified management of the algorithm models.
Optionally, the method further includes: receiving the target algorithm models uploaded by the plurality of user equipments, together with the first information and second information of each target algorithm model; generating an algorithm model list according to the upload time of each target algorithm model; and storing the algorithm model list and sending it to a display for presentation.
Specifically, when a user equipment starts algorithm model training and connects to the tracking service, it uploads data such as the training parameters and training result indexes of that run. The tracking service then starts model monitoring, tracking, and interface display, and generates an algorithm model list according to the training parameters, training result indexes, and upload time of the algorithm model uploaded by each user equipment. Each row in the algorithm model list may include the name, upload time, uploading user equipment, model training parameters, model training result indexes, data type, and so on of a target algorithm model.
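Building that list amounts to collecting one record per upload and ordering the records by upload time. The record fields and values below are invented for illustration:

```python
from datetime import datetime

# One record per uploaded target algorithm model (hypothetical values).
records = [
    {"name": "prediction_model", "uploaded": datetime(2021, 11, 10, 9, 0),
     "device": "ue-2", "params": {"lr": 0.05}, "metrics": {"auc": 0.88}},
    {"name": "prediction_model", "uploaded": datetime(2021, 11, 10, 8, 0),
     "device": "ue-1", "params": {"lr": 0.01}, "metrics": {"auc": 0.91}},
]

# Each row carries name, upload time, uploading device, training
# parameters and training result indexes, ordered by upload time.
model_list = sorted(records, key=lambda r: r["uploaded"])
print([r["device"] for r in model_list])  # ['ue-1', 'ue-2']
```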
For example, as shown in fig. 5a, the algorithm model list may simultaneously display the algorithm models under multiple projects, and the display interface may be divided into different areas showing different data of the algorithm models. Area A is a management directory of the plurality of projects, area B shows the storage paths of the algorithm models and model files, area D shows the record date, version, and code source of each algorithm model, area E shows the file type of the algorithm model, area F shows the training parameters of the algorithm model, and area G shows the training result indexes of the algorithm model. Further, area C of the display interface provides a search function so that users can quickly find an algorithm model as needed, and area H displays additional labels that users have recorded for the algorithm models.
Further, when all algorithm models are monitored through the tracking service, the updated version and update time of each algorithm model can be displayed, or the version information and update information of the optimal algorithm model among the algorithm models uploaded by the plurality of user equipments can be displayed, making it convenient for users to obtain and browse the algorithm models. As shown in fig. 5b, the display may specifically include the algorithm model name, the last updated version, the last update time, and so on.
S420, selecting a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups.
When a plurality of user equipments in a project train the same target algorithm model, the tracking service can compare the training result indexes of the target algorithm model uploaded by each user equipment and select the training parameter group corresponding to the best training result index group as the target training parameter group of the target algorithm model, so that the target algorithm model on each user equipment can be updated to the optimal version.
Optionally, the selecting a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups includes: mapping the training result indexes in each first training result index group into a coordinate system, where the horizontal axis of the coordinate system is the value of the training result index and the vertical axis is the first training result index group; fitting the coordinate points corresponding to the same training result index to obtain a plurality of fitted curves, where each training result index corresponds to one fitted curve; calculating the degree of deviation of each fitted curve from a preset curve to obtain a plurality of deviation degrees; determining the fitted curve with the minimum deviation degree as a target fitted curve; and determining the first training parameter group corresponding to the target fitted curve as the target training parameter group.
The electronic device may pre-store a preset curve meeting the user's needs. After each training result index in each of the plurality of training result index groups is mapped into the coordinate system, the coordinate points of each training result index group are connected to obtain the fitted curve corresponding to that group. The degree of deviation between each fitted curve and the preset curve is then calculated, and the first training parameter group corresponding to the fitted curve with the minimum deviation degree (i.e., the fitted curve closest to the preset curve) is determined to hold the training parameters of the optimal target algorithm model.
For example, the degree of deviation between a fitted curve and the preset curve may be calculated according to a deviation calculation formula, which may be expressed as:

B = \sum_{i=1}^{n} a_i S_i

where B is the degree of deviation between the fitted curve and the preset curve, a_i is the weight of the i-th training result index, and S_i is the difference between the fitted curve corresponding to the i-th training result index and the preset fitted curve.
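In code, the weighted deviation B and the minimum-deviation selection reduce to a weighted sum and a `min`; the weights and difference values below are made up for the example:

```python
def deviation(weights, diffs):
    """B = sum_i a_i * S_i: weighted deviation of one fitted curve
    from the preset curve."""
    return sum(a * s for a, s in zip(weights, diffs))

# Hypothetical per-index differences S_i for two candidate curves.
curves = {
    "ue-1": [0.02, 0.05],
    "ue-2": [0.10, 0.01],
}
weights = [0.7, 0.3]  # a_i: importance of each training result index

# The curve with minimum B is closest to the preset curve, so its
# parameter group is taken as the target training parameter group.
best = min(curves, key=lambda k: deviation(weights, curves[k]))
print(best)  # ue-1 (B = 0.029 versus 0.073)
```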
S430, sending the target training parameter group to the user equipments to instruct the user equipments to update the target algorithm model respectively.
In the embodiment of the application, the algorithm models trained by the user equipment are tracked, so that the performance indexes of the trained algorithm models can be compared and updated, the algorithm models operated by the user equipment are managed, the management cost is reduced, and the accuracy of the algorithm models can be improved.
Referring to fig. 6, fig. 6 is a flowchart illustrating a management method of another algorithm model according to an embodiment of the present application, and the management method is applied to the electronic device shown in fig. 1. As shown in fig. 6, the method includes the following steps.
S610, obtaining first information of a plurality of user equipment training target algorithm models to obtain a plurality of pieces of first information, wherein each piece of first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm models, and the first training result index group comprises a plurality of training result indexes of the target algorithm models.
S620, selecting a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups.
S630, sending the target training parameter group to the user equipments to instruct the user equipments to update the target algorithm model respectively.
The specific implementation manner of S610-S630 may refer to the method described in fig. 4, which is not described in detail in this embodiment of the present application.
S640, receiving a first request message from a first user equipment, where the first request message is used to request the target algorithm model, and the plurality of user equipments include the first user equipment.
In practical applications, a user equipment may request the electronic device to search for the target algorithm model with the best performance indexes, so that the optimal target algorithm model can be obtained directly for inference.
S650, acquiring an algorithm model list according to the first request message, wherein the algorithm model list comprises the first information and the second information of each target algorithm model, and the second information comprises the training time of each target algorithm model.
After receiving the first request message, the electronic device may obtain an algorithm model list composed of all target algorithm models from the service support database. By way of example, the list of algorithmic models may be as shown in FIG. 5 a.
S660, determining an optimal algorithm model from the target algorithm models according to the first information and/or the second information.
In the embodiment of the application, an optimal algorithm model meeting the requirements of the first user can be selected according to the training parameters and the training result indexes in the first information or according to the training time of the target algorithm model.
In one possible example, the determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information includes: if the first request message comprises a first index, selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model; otherwise, target training time is selected from the second information, the target training time is the training time closest to the current time, and the target algorithm model corresponding to the target training time is determined as the optimal algorithm model.
Specifically, if the request message carries a first index specified by the user, a first training result index group with the optimal first index may be selected from the first information, and the target algorithm model corresponding to that first training result index group is determined as the optimal algorithm model. For example, if the user requests the target algorithm model with the best accuracy, the target algorithm model with the highest accuracy among the plurality of target algorithm models is determined as the optimal algorithm model. If the user does not specify a training result index, that is, if the first request message does not carry the first index, the most recently updated target algorithm model may be determined as the optimal algorithm model according to the training time of each target algorithm model.
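A minimal sketch of this S660 branch follows; the dict keys (`metrics`, `train_time`) and the higher-is-better convention are assumptions, not part of the patent:

```python
# Minimal sketch of the S660 branch; data layout is an assumption.
from typing import Dict, List, Optional

def pick_optimal(models: List[Dict], first_index: Optional[str]) -> Dict:
    if first_index is not None:
        # Request carries a first index: the best value of that index wins.
        return max(models, key=lambda m: m["metrics"][first_index])
    # No first index: fall back to the most recently trained model.
    return max(models, key=lambda m: m["train_time"])
```
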
In another possible example, the determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information includes: obtaining at least one first algorithm model, wherein the first algorithm model is the target algorithm model uploaded by the first user equipment; according to the second information of the at least one first algorithm model, comparing the first information of the at least one first algorithm model respectively to obtain a first index; and selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model.
In this embodiment of the application, when the user does not specify the training result index, the electronic device may also determine the optimal algorithm model required by the user according to the training parameter or the training result index of the first user equipment updated or optimized target algorithm model.
According to the requested target algorithm model, the first information and the second information of each target algorithm model uploaded by the first user equipment can be acquired from the service support database. Then, the differences between the target algorithm models uploaded by the first user equipment at successive times are compared in chronological order according to the upload time, the training parameters or training result indexes that the first user updated or optimized are determined from these differences, and the target algorithm model with the optimal training parameters or optimal training result indexes is selected as the optimal algorithm model.
Optionally, the comparing, according to the second information of the at least one first algorithm model, the first information of the at least one first algorithm model to obtain a first index includes:
sorting the at least one first algorithm model in ascending order of training time; matching the first training result index group of the i-th first algorithm model with the first training result index group of the (i+1)-th first algorithm model, respectively, to obtain a plurality of candidate indexes, wherein a candidate index is a training result index whose values differ between the i-th first algorithm model and the (i+1)-th first algorithm model; and determining the candidate index occurring most frequently among the plurality of candidate indexes as the first index, wherein i is a positive integer.
Specifically, the first algorithm models are sorted by training time, and then the training result index groups and/or training parameter groups of each pair of adjacent first algorithm models are compared respectively: training result indexes with differing values in the training result index groups are taken as candidate indexes, and/or training parameters with differing values in the training parameter groups are taken as candidate parameters. Then, the candidate index occurring most frequently among the plurality of candidate indexes, and/or the candidate parameter occurring most frequently among the plurality of candidate parameters, is taken as the first index. Finally, the target algorithm model with the optimal first index is determined as the optimal algorithm model.
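The adjacent-pair comparison can be sketched as follows; the data layout and the helper name `derive_first_index` are assumptions. Metrics whose values change between adjacent uploads become candidates, and the most frequent candidate wins:

```python
# Sketch of deriving the first index from a device's successive uploads;
# the data layout is an assumption, not taken from the patent.
from collections import Counter
from typing import Dict, List

def derive_first_index(models: List[Dict]) -> str:
    ordered = sorted(models, key=lambda m: m["train_time"])  # ascending by time
    candidates: Counter = Counter()
    for prev, nxt in zip(ordered, ordered[1:]):              # adjacent pairs
        for name, value in prev["metrics"].items():
            if nxt["metrics"].get(name) != value:            # value differs -> candidate
                candidates[name] += 1
    return candidates.most_common(1)[0][0]                   # most frequent candidate
```
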
And S670, sending the storage address of the optimal algorithm model to the first user equipment.
Specifically, after the optimal algorithm model corresponding to the algorithm model requested by the first user equipment is found, the storage address of the optimal algorithm model may be sent to the first user equipment, so that the first user equipment can obtain data such as the model file, the training parameters, and the training result indexes of the optimal target algorithm model from the storage address.
In the method, first information of a target algorithm model trained by each of a plurality of user equipments is acquired to obtain a plurality of pieces of first information, wherein each piece of first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm model, and the first training result index group comprises a plurality of training result indexes of the target algorithm model; a target training parameter group is selected from the plurality of first training parameter groups according to the plurality of first training result index groups; and the target training parameter group is sent to the plurality of user equipments to instruct the plurality of user equipments to update the target algorithm model respectively. The method and the apparatus update the target algorithm model of each user equipment according to the training parameters and training result indexes obtained when different user equipments train the target algorithm model, thereby realizing unified management of the algorithm models of a plurality of user equipments, reducing the management cost, and improving the accuracy of the algorithm models.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It can be understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Referring to fig. 7, fig. 7 is a block diagram illustrating functional units of an apparatus 700 for managing an algorithm model according to an embodiment of the present application, where the apparatus 700 includes: a processing unit 710 and a transceiver unit 720, wherein,
the processing unit 710 is configured to acquire first information of a target algorithm model trained by each of a plurality of user equipments, to obtain a plurality of pieces of first information, where each piece of first information includes a first training parameter group and a first training result index group, the first training parameter group includes a plurality of training parameters of the target algorithm model, and the first training result index group includes a plurality of training result indexes of the target algorithm model;
the processing unit 710 is further configured to select a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups;
the transceiver unit 720 is configured to send the target training parameter group to the plurality of user equipments to instruct the plurality of user equipments to update the target algorithm model respectively.
Optionally, the transceiver unit 720 is further configured to receive a first request message from a first user equipment, where the first request message is used to request the target algorithm model, and the plurality of user equipments include the first user equipment;
the processing unit 710 is further configured to obtain an algorithm model list according to the first request message, where the algorithm model list includes the first information and the second information of each target algorithm model, and the second information includes a training time of each target algorithm model; determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information;
the transceiver unit 720 is further configured to send the storage address of the optimal algorithm model to the first user equipment.
Optionally, in terms of determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information, the processing unit 710 is specifically configured to: if the first request message comprises a first index, selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model; otherwise, target training time is selected from the second information, the target training time is the training time closest to the current time, and the target algorithm model corresponding to the target training time is determined as the optimal algorithm model.
Optionally, in terms of determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information, the processing unit 710 is specifically configured to: obtaining at least one first algorithm model, wherein the first algorithm model is the target algorithm model uploaded by the first user equipment; according to the second information of the at least one first algorithm model, comparing the first information of the at least one first algorithm model respectively to obtain a first index; and selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model.
Optionally, in terms of obtaining the first index by comparing the first information of the at least one first algorithm model respectively according to the second information of the at least one first algorithm model, the processing unit 710 is specifically configured to: sort the at least one first algorithm model in ascending order of training time; match the first training result index group of the i-th first algorithm model with the first training result index group of the (i+1)-th first algorithm model, respectively, to obtain a plurality of candidate indexes, where a candidate index is a training result index whose values differ between the i-th first algorithm model and the (i+1)-th first algorithm model; and determine the candidate index occurring most frequently among the plurality of candidate indexes as the first index, where i is a positive integer.
Optionally, in terms of selecting a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups, the processing unit 710 is specifically configured to: map the training result indexes in each first training result index group into a coordinate system, where the horizontal axis of the coordinate system is the value of the training result index, and the vertical axis of the coordinate system is the first training result index group; fit the coordinate points corresponding to the same training result index to obtain a plurality of fitting curves, where each training result index corresponds to one fitting curve; calculate the degree of deviation of each fitting curve from a preset curve to obtain a plurality of deviation degrees; determine the fitting curve with the minimum deviation degree as a target fitting curve; and determine the first training parameter group corresponding to the target fitting curve as the target training parameter group.
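A rough sketch of this curve-based selection follows. Each row of `index_groups` is one first training result index group (vertical axis: group number) and each column one training result index (horizontal axis: its value). A straight-line least-squares fit and a mean absolute deviation stand in for the fitting and deviation measures, which the patent leaves unspecified; the function returns the column of the least-deviating curve, whose corresponding first training parameter group would then be taken as the target training parameter group:

```python
# Sketch of the curve-based selection; the linear fit, the deviation
# measure, and all names are assumptions standing in for unspecified steps.
from typing import Callable, List, Tuple

def fit_line(xs: List[float], ys: List[float]) -> Tuple[float, float]:
    """Least-squares line fit returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return slope, my - slope * mx

def least_deviating_curve(index_groups: List[List[float]],
                          preset: Callable[[float], float]) -> int:
    ys = [float(i) for i in range(len(index_groups))]   # group numbers
    best_col, best_dev = 0, float("inf")
    for col in range(len(index_groups[0])):             # one fitted curve per index
        xs = [row[col] for row in index_groups]
        slope, intercept = fit_line(xs, ys)
        # Mean absolute deviation of the fitted line from the preset curve.
        dev = sum(abs(slope * x + intercept - preset(x)) for x in xs) / len(xs)
        if dev < best_dev:
            best_col, best_dev = col, dev
    return best_col
```
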
Optionally, the transceiver unit 720 is further configured to receive the target algorithm models uploaded by the plurality of user equipments, and the first information and the second information of each target algorithm model;
the processing unit 710 is further configured to generate the algorithm model list according to the uploading time of each target algorithm model;
the transceiver unit 720 is further configured to store the algorithm model list and send the algorithm model list to a display for displaying.
It should be appreciated that the apparatus 700 herein is embodied in the form of functional units. The term "unit" herein may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an optional example, a person skilled in the art may understand that the apparatus 700 may specifically be the electronic device in the foregoing embodiment, and the apparatus 700 may be configured to execute each procedure and/or step corresponding to the electronic device in the foregoing method embodiments; in order to avoid repetition, details are not described here again.
The apparatus 700 of each of the above solutions has the function of implementing the corresponding steps executed by the electronic device in the above method; the function can be realized by hardware, or by hardware executing corresponding software. The hardware or software comprises one or more modules corresponding to the above function; for example, the transceiver unit 720 may be replaced by a transceiver, and the processing unit 710 may be replaced by a processor, which respectively perform the transceiving operations and the related processing operations in the respective method embodiments.
In an embodiment of the present application, the apparatus 700 may also be a chip or a chip system, such as: system on chip (SoC). Correspondingly, the transceiver unit may be a transceiver circuit of the chip, and is not limited herein.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device includes: one or more processors, one or more memories, one or more communication interfaces, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors.
The program includes instructions for performing the steps of:
acquiring first information of a target algorithm model trained by each of a plurality of user equipments, to obtain a plurality of pieces of first information, wherein each piece of first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm model, and the first training result index group comprises a plurality of training result indexes of the target algorithm model;
selecting a target training parameter group from the first training parameter groups according to the first training result index groups;
and sending the target training parameter group to the user equipments to instruct the user equipments to update the target algorithm model respectively.
All relevant contents of each scene related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
It will be appreciated that the memory described above may include both read-only memory and random access memory, and provides instructions and data to the processor. The portion of memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In the embodiments of the present application, the processor of the above apparatus may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It is to be understood that reference to "at least one" in the embodiments of the present application means one or more, and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
And, unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing a plurality of objects, and do not limit the sequence, timing, priority, or importance of the plurality of objects. For example, the first information and the second information are different information only for distinguishing them from each other, and do not indicate a difference in the contents, priority, transmission order, importance, or the like of the two kinds of information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory, and the processor executes the instructions in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, details are not described here.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially or partially contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a TRP, etc.) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for managing an algorithmic model, the method comprising:
acquiring first information of a target algorithm model trained by each of a plurality of user equipments, to obtain a plurality of pieces of first information, wherein each piece of first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm model, and the first training result index group comprises a plurality of training result indexes of the target algorithm model;
selecting a target training parameter group from the first training parameter groups according to the first training result index groups;
and sending the target training parameter group to the user equipments to instruct the user equipments to update the target algorithm model respectively.
2. The method of claim 1, further comprising:
receiving a first request message from a first user equipment, the first request message being used for requesting the target algorithm model, the plurality of user equipments including the first user equipment;
acquiring an algorithm model list according to the first request message, wherein the algorithm model list comprises the first information and the second information of each target algorithm model, and the second information comprises the training time of each target algorithm model;
determining an optimal algorithm model from the plurality of target algorithm models according to the first information and/or the second information;
and sending the storage address of the optimal algorithm model to the first user equipment.
3. The method of claim 2, wherein determining an optimal algorithm model from the plurality of target algorithm models based on the first information and/or the second information comprises:
if the first request message comprises a first index, selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model; otherwise, target training time is selected from the second information, the target training time is the training time closest to the current time, and the target algorithm model corresponding to the target training time is determined as the optimal algorithm model.
4. The method of claim 2, wherein determining an optimal algorithm model from the plurality of target algorithm models based on the first information and/or the second information comprises:
obtaining at least one first algorithm model, wherein the first algorithm model is the target algorithm model uploaded by the first user equipment;
according to the second information of the at least one first algorithm model, comparing the first information of the at least one first algorithm model respectively to obtain a first index;
and selecting the first training result index group with the best first index from the first information, and determining a target algorithm model corresponding to the first training result index group as the best algorithm model.
5. The method according to claim 4, wherein the comparing, according to the second information of the at least one first algorithm model, the first information of the at least one first algorithm model respectively to obtain a first index comprises:
sorting the at least one first algorithm model in ascending order of training time; matching the first training result index group of the i-th first algorithm model with the first training result index group of the (i+1)-th first algorithm model, respectively, to obtain a plurality of candidate indexes, wherein a candidate index is a training result index whose values differ between the i-th first algorithm model and the (i+1)-th first algorithm model; and determining the candidate index occurring most frequently among the plurality of candidate indexes as the first index, wherein i is a positive integer.
6. The method according to any of claims 2-5, wherein said selecting a target training parameter set from a plurality of said first training parameter sets based on a plurality of said first training result indicator sets comprises:
mapping the training result indexes in each first training result index group into a coordinate system, wherein the horizontal axis of the coordinate system is the value of the training result index, and the vertical axis of the coordinate system is the first training result index group;
fitting coordinate points corresponding to the same training result index to obtain a plurality of fitting curves, wherein each training result index corresponds to one fitting curve;
calculating the deviation degree of each fitting curve from a preset curve to obtain a plurality of deviation degrees;
determining the fitted curve with the minimum deviation degree as the target fitted curve;
and determining a first training parameter group corresponding to the target fitting curve as the target training parameter group.
7. The method of claim 6, further comprising:
receiving the target algorithm models uploaded by the plurality of user equipment and the first information and the second information of each target algorithm model;
generating the algorithm model list according to the uploading time of each target algorithm model;
and storing the algorithm model list and sending the algorithm model list to a display for displaying.
8. An apparatus for managing an algorithmic model, the apparatus comprising:
the processing unit is used for acquiring first information of a target algorithm model trained by each of a plurality of user equipments, to obtain a plurality of pieces of first information, wherein each piece of first information comprises a first training parameter group and a first training result index group, the first training parameter group comprises a plurality of training parameters of the target algorithm model, and the first training result index group comprises a plurality of training result indexes of the target algorithm model;
the processing unit is further configured to select a target training parameter group from the plurality of first training parameter groups according to the plurality of first training result index groups;
a transceiver unit, configured to send the target training parameter set to the multiple pieces of user equipment, so as to instruct the multiple pieces of user equipment to update the target algorithm model respectively.
9. An electronic device comprising a processor, a memory and a communication interface, the memory storing one or more programs, and the one or more programs being executable by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the steps of the method according to any one of claims 1-7.
CN202111331431.2A 2021-11-10 2021-11-10 Management method and device of algorithm model Pending CN113988212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111331431.2A CN113988212A (en) 2021-11-10 2021-11-10 Management method and device of algorithm model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111331431.2A CN113988212A (en) 2021-11-10 2021-11-10 Management method and device of algorithm model

Publications (1)

Publication Number Publication Date
CN113988212A true CN113988212A (en) 2022-01-28

Family

ID=79747855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111331431.2A Pending CN113988212A (en) 2021-11-10 2021-11-10 Management method and device of algorithm model

Country Status (1)

Country Link
CN (1) CN113988212A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117035065A (en) * 2023-10-10 2023-11-10 浙江大华技术股份有限公司 Model evaluation method and related device

Similar Documents

Publication Publication Date Title
KR101943986B1 (en) Mobile Terminal and Method to Recommend Application or Content
KR102037412B1 (en) Method for fitting hearing aid connected to Mobile terminal and Mobile terminal performing thereof
US7764811B2 (en) Image mapping to provide visual geographic path
US8595330B2 (en) Method, system and apparatus for uploading and downloading a caption file
CN104821177A (en) Local network media sharing
US20110191456A1 (en) Systems and methods for coordinating data communication between two devices
CN104919843A (en) Wireless access point mapping
US10511935B2 (en) Location based information service application
CN105243119B (en) Determine region to be superimposed, superimposed image, image presentation method and the device of image
WO2014092788A1 (en) Apparatus, system and method of estimating a location of a mobile device
CN110009059B (en) Method and apparatus for generating a model
CN108804130B (en) Program installation package generation method and device
CN110430022B (en) Data transmission method and device
CN106105274A (en) For determining the system and method for the position of radio communication device
US11695679B1 (en) Performance testing using a remotely controlled device
CN111079034A (en) Shared navigation implementation method, terminal equipment and computer equipment
CN113988212A (en) Management method and device of algorithm model
CN113312543A (en) Personalized model training method based on joint learning, electronic equipment and medium
CN105765552A (en) Method and apparatus for identifying media files based upon contextual relationships
US10880601B1 (en) Dynamically determining audience response to presented content using a video feed
US8805027B2 (en) Image mapping to provide visual geographic path
CN113537512A (en) Model training method, device, system, equipment and medium based on federal learning
CN116468917A (en) Image processing method, electronic device and storage medium
CN115408609A (en) Parking route recommendation method and device, electronic equipment and computer readable medium
CN111111188B (en) Game sound effect control method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination