WO2021051920A1 - Model optimization method, apparatus, storage medium and device - Google Patents

Model optimization method, apparatus, storage medium and device

Info

Publication number
WO2021051920A1
Authority
WO
WIPO (PCT)
Prior art keywords
search
model
configuration information
original model
operator
Prior art date
Application number
PCT/CN2020/097973
Other languages
English (en)
French (fr)
Inventor
谢凯源
白小龙
付纹琦
于成辉
周卫民
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Priority claimed from CN202010423371.6A (published as CN112529207A)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20864854.3A (published as EP4012630A4)
Publication of WO2021051920A1
Priority to US17/694,970 (published as US12032571B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2453 Query optimisation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06N3/105 Shells for specifying net layout
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N20/00 Machine learning

Definitions

  • This application relates to the field of artificial intelligence (AI), and in particular to a model optimization method, device, storage medium, and equipment.
  • the performance of the AI model has a strong correlation with the structure of the AI model, the hyperparameters of the model, the training data or inference data, and the loss function.
  • In order to obtain an AI model that meets application requirements, various AI platforms are currently provided in related technologies. According to the needs of users, these AI platforms can perform hyperparameter search, network structure optimization, or other optimization methods on a generated initial network or on an original model provided by the user, so as to output an AI model that meets the user's needs.
  • However, when these AI platforms perform AI model optimization, each optimization provides only a single optimization function, that is, the model is optimized from only one aspect.
  • This application provides a model optimization method, which can perform joint optimization of multiple dimensions on the initial AI model, thereby obtaining an optimized model in a comprehensive space, and improving the performance of the model.
  • the technical solution is as follows:
  • In a first aspect, a model optimization method includes: obtaining a user's original model and search configuration information, where the search configuration information includes multiple search items, and different search items represent different search categories for searching for optimization information of the original model; arranging multiple search operators according to the search configuration information to obtain a combination operator, where each arranged search operator corresponds to a search item, and a search operator represents the algorithm used to execute the corresponding search item; and optimizing the original model according to the combination operator to obtain an optimized model.
  • search configuration information containing multiple search items can be obtained at the same time, and then search operators corresponding to the multiple search items are arranged according to the search configuration information to obtain a combination operator.
  • Subsequently, the original model can be optimized for multiple search items at the same time according to the combination operator. That is, the embodiments of the present application can realize joint optimization of multiple dimensions of the original model at the same time, thereby obtaining an optimized model in the comprehensive search space and improving the performance of the model.
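  • For illustration only, the overall flow described above can be sketched as follows; the operator registry, operator signatures, and returned optimization information are assumptions made for the sketch, not the claimed implementation.
```python
# A minimal sketch, assuming a hypothetical operator registry, of the flow described
# above: obtain search configuration information with several search items, arrange the
# corresponding search operators into a combination operator, and use it to optimize
# the original model. Names and return values are illustrative, not the claimed design.
from typing import Callable, Dict, List

# Hypothetical registry: each search item maps to the operator (algorithm) that executes it.
OPERATOR_REGISTRY: Dict[str, Callable] = {
    "hyperparameter_search": lambda model, space: {"learning_rate": 0.01},
    "network_architecture_search": lambda model, space: {"num_layers": 18},
}

def arrange_operators(search_config: Dict) -> List[Callable]:
    """Look up one search operator per configured search item, in the configured order."""
    return [OPERATOR_REGISTRY[item] for item in search_config["search_items"]]

def optimize(original_model, search_config: Dict) -> Dict:
    """Run the arranged operators as one combination operator over the original model."""
    optimization_info: Dict = {}
    for operator in arrange_operators(search_config):
        # Each operator contributes its part of the optimization information.
        optimization_info.update(operator(original_model, search_config.get("search_space", {})))
    return optimization_info  # would then be applied to produce the optimized model

search_config = {"search_items": ["hyperparameter_search", "network_architecture_search"]}
print(optimize(original_model=object(), search_config=search_config))
```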
  • the multiple search items in the search configuration information may include at least two of hyperparameter search, network architecture search, data enhancement search, loss function search, optimizer search, and model compression strategy search.
  • The hyperparameter search refers to searching a certain search space for hyperparameters suited to the original model through a hyperparameter search algorithm.
  • Hyperparameters are parameters of an AI model (such as a neural network model) that cannot be obtained through model training.
  • For example, hyperparameters include parameters such as the learning rate and the number of iterations.
  • Network architecture search refers to searching, in a given search space, for a network architecture that meets user requirements, based on algorithms such as evolutionary algorithms, reinforcement learning, or differentiable networks. The network architecture represents the basic structure of the AI model.
  • Data enhancement search refers to searching for a data enhancement strategy that meets user requirements according to a specified data set, and then processing samples in the data set through the data enhancement strategy.
  • The data enhancement strategy is an algorithm for preprocessing training data, test data, or inference data. Training, testing, or inference of the AI model with the samples in the processed data set can make the performance of the AI model better.
  • Loss function search refers to searching for a loss function that meets user needs in a given search space, and the loss function is used for model optimization when training the original model.
  • Optimizer search refers to searching for an optimizer that meets the requirements in a given search space. Subsequently, model parameters are learned through the optimizer, which can make the performance of the AI model better.
  • Model compression strategy search refers to searching for a strategy for model compression in a given search space to achieve compression and tailoring of AI models.
  • Optionally, the process of arranging multiple search operators to obtain a combination operator may be: determining, according to the search configuration information, the operation order of the multiple search operators, the number of operations, or the comprehensive search space for each operation; and generating the combination operator according to the operation order of the multiple search operators, the number of operations, or the comprehensive search space for each operation.
  • the comprehensive search space refers to the search space obtained by fusing the search spaces corresponding to different search items, and may also refer to the respective search spaces obtained after the search spaces corresponding to different search items influence each other.
  • In the embodiment of the present application, the final combination operator is obtained by determining the operation order, the number of operations, and the comprehensive search space for each operation of multiple search operators. In this way, when the original model is optimized based on the combination operator, the optimization information of the original model can be searched for in the comprehensive search space. That is, the embodiment of the present application does not search in a single search space corresponding to a certain search item, but searches in a comprehensive search space according to the combination operator, which is equivalent to seeking the optimization information of the model in the comprehensive search space. In this way, the model optimized according to the obtained optimization information is also an optimized model in the comprehensive search space, which improves the performance of the optimized model.
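  • The following sketch illustrates one possible way such a comprehensive search space could be formed by fusing per-item search spaces; the fusion rule and the example values are assumptions for illustration.
```python
# A sketch of one way a comprehensive search space could be formed, assuming each search
# item contributes its own sub-space; the fusion rule and the example values are
# illustrative only.
hyperparameter_space = {"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64, 128]}
architecture_space = {"num_layers": [18, 34, 50], "width_multiplier": [0.5, 1.0, 2.0]}

def fuse_search_spaces(*spaces: dict) -> dict:
    """Fuse the per-item search spaces into a single comprehensive search space."""
    comprehensive: dict = {}
    for space in spaces:
        comprehensive.update(space)
    return comprehensive

comprehensive_space = fuse_search_spaces(hyperparameter_space, architecture_space)
# A combination operator would draw joint candidates from this fused space, so that
# hyperparameter and architecture choices are evaluated together rather than separately.
print(comprehensive_space)
```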
  • Optionally, the search configuration information further includes search item information and a search mode, where each search item corresponds to its own search item information, and the search mode is used to indicate the principle followed when optimizing the original model.
  • The search item information includes information used when searching for the corresponding search item. For example, the search item information may include the search algorithm and the search space of the corresponding search item, where the search space limits the search range when the corresponding search is performed.
  • the search mode includes any one of precision mode, speed mode, economic mode, and resource mode.
  • By constraining the combination operator through the search item information and the search mode in the search configuration information, the optimization result or optimization process of the model better matches the user's requirements.
  • the search configuration information is obtained by the user inputting or selecting on the graphical user interface GUI.
  • Optionally, the resource consumption for optimizing the original model can also be estimated according to the combination operator, and resource scheduling for optimizing the original model can be performed according to the resource consumption.
  • In this way, the embodiment of the present application can realize automatic scheduling of resources according to the resource consumption of running the combination operator and the device's own resource usage.
  • Optionally, this application may also acquire an evaluation index, which represents the performance target that should be achieved after optimizing the original model. Accordingly, when the original model is optimized according to the combination operator, optimization information is searched for in the comprehensive search space according to the combination operator, and the original model is optimized according to the optimization information to obtain an optimized model whose performance meets the evaluation index.
  • Optionally, the evaluation index may include any one or more of the following indexes: the accuracy of the model, the loss of the model, the precision of the model, and the recall of the model.
  • the evaluation index can also be other evaluation indicators, for example, it can be a user-defined indicator, which is not limited in the embodiment of the present application.
  • this application also provides another model optimization method.
  • The method includes: providing a configuration interface to a user, the configuration interface including a search item list for the user to select from; obtaining an original model and search configuration information, where the search configuration information includes multiple search items selected by the user from the search item list, and different search items represent different search categories for searching for optimization information of the original model; optimizing the original model according to the search configuration information; and providing the optimized model to the user.
  • a joint search is performed based on multiple search terms, which realizes joint optimization of multiple dimensions of the original model at the same time, and improves the performance of the model.
  • a configuration interface can be provided to the user, and the user can select multiple search items to be searched, so as to meet the user's requirements for joint search of different search items.
  • the multiple search items selected by the user in the search item list include at least two of hyperparameter search, network architecture search, data enhancement search, loss function search, optimizer search, and model compression strategy search .
  • the meaning of each search item can refer to the related introduction in the foregoing first aspect, and the embodiments of the present application will not be repeated here.
  • the configuration interface further includes a search item information configuration page and a search mode configuration page
  • the search configuration information further includes search item information and search mode configured by the user on the configuration interface.
  • the search mode is used to indicate a principle to be followed when optimizing the original model, and the search mode includes any one of a precision mode, a speed mode, an economic mode, and a resource mode.
  • Optionally, multiple search operators may be arranged according to the search configuration information to obtain a combination operator, and the original model is then optimized according to the combination operator.
  • The combination operator refers to an operator generated after determining, among the multiple search operators or the parts of each search operator, the operation order, the number of operations, and the comprehensive search space for each operation.
  • the comprehensive search space refers to the search space obtained by fusing the search spaces corresponding to different search items, and may also refer to the respective search spaces obtained after the search spaces corresponding to different search items influence each other.
  • Optimizing the original model according to the combination operator can realize the search for the optimization information of the original model in the integrated search space, thereby obtaining a better solution in the integrated space and improving the performance of the model.
  • Optionally, the process of arranging multiple search operators to obtain a combination operator may be: determining, according to the search configuration information, the operation order of the multiple search operators, the number of operations, or the comprehensive search space for each operation; and generating the combination operator according to the operation order of the multiple search operators, the number of operations, or the comprehensive search space for each operation.
  • Optionally, the resource consumption for optimizing the original model can also be estimated according to the combination operator, and resource scheduling for optimizing the original model can be performed according to the resource consumption.
  • the present application also provides a model optimization device.
  • The model optimization device includes: a configuration module, configured to acquire a user's original model and search configuration information, where the search configuration information includes multiple search items, and different search items represent different search categories for searching for optimization information of the original model; an operator arrangement module, configured to arrange multiple search operators according to the search configuration information to obtain a combination operator, where each arranged search operator corresponds to a search item, and a search operator represents the algorithm used to execute the corresponding search item; and a multivariate search module, configured to optimize the original model according to the combination operator to obtain an optimized model.
  • the multiple search items include at least two of hyperparameter search, network architecture search, data enhancement search, loss function search, optimizer search, and model compression strategy search.
  • Optionally, the operator arrangement module is specifically configured to: determine, according to the search configuration information, the operation order, the number of operations, and the comprehensive search space for each operation of the multiple search operators; and generate the combination operator according to the operation order, the number of operations, and the comprehensive search space for each operation of the multiple search operators.
  • the search configuration information further includes: search item information and search mode, where each search item corresponds to its own search item information, and the search mode is used to indicate the principle followed when optimizing the original model .
  • the search mode includes any one of a precision mode, a speed mode, an economic mode, and a resource mode.
  • the search configuration information is obtained by the user inputting or selecting on the graphical user interface GUI.
  • Optionally, the model optimization device further includes a resource management module, configured to: estimate, according to the combination operator, the resource consumption for optimizing the original model; and perform, according to the resource consumption, resource scheduling for the operation of optimizing the original model.
  • Optionally, the configuration module of the model optimization device is also used to obtain an evaluation index, where the evaluation index represents the performance target that should be achieved after optimizing the original model; correspondingly, the multivariate search module is specifically further configured to search for optimization information in the comprehensive search space according to the combination operator, and to optimize the original model according to the optimization information to obtain an optimized model whose performance meets the evaluation index.
  • Optionally, the evaluation index includes any one or more of the following indexes: the accuracy of the model, the loss of the model, the precision of the model, and the recall of the model.
  • the present application also provides another model optimization device.
  • The device includes: a configuration module, configured to provide a configuration interface to a user, the configuration interface including a search item list for the user to select from, and to obtain an original model and search configuration information, where the search configuration information includes multiple search items selected by the user from the search item list, and different search items represent different search categories for searching for optimization information of the original model; a multivariate search module, configured to optimize the original model according to the search configuration information; and a feedback module, configured to provide the optimized model to the user.
  • the multiple search items selected by the user in the search item list include at least two of hyperparameter search, network architecture search, data enhancement search, loss function search, optimizer search, and model compression strategy search .
  • the configuration interface further includes a search item information configuration page and a search mode configuration page
  • the search configuration information further includes search item information and search mode configured by the user on the configuration interface.
  • the search mode is used to indicate a principle to be followed when optimizing the original model, and the search mode includes any one of a precision mode, a speed mode, an economic mode, and a resource mode.
  • the multivariate search module is specifically configured to: arrange multiple search operators according to the search configuration information to obtain a combination operator; and optimize the original model according to the combination operator.
  • Optionally, the multivariate search module is specifically configured to: determine, according to the search configuration information, the operation order, the number of operations, or the comprehensive search space for each operation of the multiple search operators; and generate the combination operator according to the operation order, the number of operations, or the comprehensive search space for each operation of the multiple search operators.
  • Optionally, the model optimization device further includes a resource management module, configured to: estimate, according to the combination operator, the resource consumption for optimizing the original model; and perform, according to the resource consumption, resource scheduling for the operation of optimizing the original model.
  • the present application also provides a computing device.
  • The structure of the computing device includes a processor and a memory. The memory is used to store a program that supports the computing device in performing the model optimization method provided in the first aspect or the second aspect, and to store the data used to realize that method. The processor is configured to execute the program stored in the memory, so as to execute the method provided in the foregoing first aspect or second aspect and their optional implementations.
  • the computing device may also include a communication bus, which is used to establish a connection between the processor and the memory.
  • In another aspect, the present application also provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the model optimization method described in the first aspect or the second aspect and their optional implementations.
  • In another aspect, the present application also provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the model optimization method described in the first aspect or the second aspect.
  • search configuration information containing multiple search items can be obtained at the same time, and then search operators corresponding to the multiple search items are arranged according to the search configuration information to obtain a combination operator.
  • the original model can be optimized for multiple search terms at the same time according to the combination operator. That is, the embodiments of the present application can realize joint optimization of multiple dimensions of the original model, thereby obtaining an optimized model in the integrated space, and improving the performance of the model.
  • FIG. 1 is a schematic structural diagram of a model optimization device provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of the deployment of a model optimization apparatus provided by an embodiment of the present application
  • FIG. 3 is an application schematic diagram of a model optimization device provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of deployment of another model optimization apparatus provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of a model optimization method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a configuration interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another configuration interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an output interface of an optimized model provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of automatic resource scheduling according to a combination operator provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of another model optimization method provided by an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
  • AI models have been widely used in fields such as image recognition, video analysis, speech recognition, natural language translation, and automatic driving control.
  • the AI model represents a mathematical algorithm that can be trained to complete the learning of data features, and then can be used for reasoning.
  • the neural network model is a typical AI model.
  • A neural network model is a mathematical calculation model that imitates the structure and function of a biological neural network (the central nervous system of an animal).
  • a neural network model can include multiple calculation layers with different functions, and each layer includes parameters and calculation formulas. According to different calculation formulas or different functions, different calculation layers in the neural network model have different names.
  • the layer that performs the convolution calculation is called the convolution layer, which can be used for feature extraction of the input image.
  • the AI model is simply referred to as a model.
  • the performance of the neural network model is closely related to the hyperparameter selection of the neural network model, network architecture design, training samples, etc. How to optimize the original AI model from many aspects to obtain a higher performance AI model is the focus of the industry.
  • In order to improve the performance of the AI model, after a developer has written the original model, the developer can use the optimization method provided in the embodiments of this application to perform hyperparameter search, network architecture search, data enhancement search, and other searches on the original model to obtain the optimization information of the original model, and then optimize the original model in terms of hyperparameters, network architecture, training samples, loss function, optimizer, model compression strategy, and so on according to the optimization information.
  • the original model refers to the initial AI model that has not been optimized for performance, and the original model can be expressed in the form of code.
  • Hyperparameter search refers to searching a certain search space for hyperparameters suited to the original model through a hyperparameter search algorithm.
  • Hyperparameters are parameters of an AI model (for example, a neural network model) that cannot be obtained through model training.
  • For example, hyperparameters include parameters such as the learning rate and the number of iterations.
  • the setting of hyperparameters has a greater impact on the performance of the AI model.
  • Network architecture search refers to searching, in a given search space, for a network architecture that meets user requirements, based on algorithms such as evolutionary algorithms, reinforcement learning, or differentiable networks. The network architecture represents the basic structure of the AI model.
  • Data enhancement search refers to searching for a data enhancement strategy that meets user requirements according to a specified data set, and then processing samples in the data set through the data enhancement strategy.
  • The data enhancement strategy is an algorithm for preprocessing training data, test data, or inference data. Training, testing, or inference of the AI model with the samples in the processed data set can make the performance of the AI model better.
  • Loss function search refers to searching for a loss function that meets user needs in a given search space, and the loss function is used for model optimization when training the original model.
  • Optimizer search refers to searching for an optimizer that meets the requirements in a given search space. Subsequently, model parameters are learned through the optimizer, which can make the performance of the AI model better.
  • Model compression strategy search refers to searching for a strategy for model compression in a given search space to achieve compression and tailoring of AI models.
  • the embodiment of the present application provides a model optimization method, which is executed by a model optimization device.
  • the function of the model optimization device can be realized by a software system, can also be realized by a hardware device, or can be realized by a combination of a software system and a hardware device.
  • The model optimization device 100 can be logically divided into multiple modules. Each module can have a different function, and the function of each module is implemented by a processor in a computing device reading and executing instructions in a memory.
  • the structure of the computing device may be the computing device 500 shown in FIG. 5 below.
  • the model optimization device may include a configuration module 101, an operator arrangement module 102, a multivariate search module 103, and a storage module 104.
  • the model optimization apparatus 100 can execute the content described in steps 601-603 and steps 1001-1002 described below, or execute the content described in steps 1101-1104 and steps 1001-1002 described below . It should be noted that the embodiment of the present application only exemplarily divides the structure and functional modules of the model optimization apparatus 100, but does not limit the specific division in any way.
  • the configuration module 101 is used to obtain the original model of the user and search for configuration information.
  • the original model of the user may be uploaded by the user or stored on other devices or equipment.
  • the search configuration information may include multiple search terms configured by the user, and each search term represents a category of optimized information for searching the original model.
  • multiple search items may be hyperparameter search and network architecture search. In this case, it means searching for hyperparameter optimization information and network architecture optimization information of the original model.
  • the multiple search items may also include data enhancement search, loss function search, optimizer search, model compression strategy search, etc., which are not limited in the embodiment of the present application.
  • the search configuration information may further include search item information and search mode, each search item information corresponds to each search item, and each search item information includes the search space corresponding to the corresponding search item.
  • the search mode may include any one of precision mode, speed mode, economy mode, and resource mode.
  • the operator arrangement module 102 is configured to communicate with the configuration module 101, the storage module 104, and the multivariate search module 103, receive search configuration information sent by the configuration module 101, and receive multiple search operators sent by the storage module 104. According to the search configuration information, multiple search operators are arranged to obtain a combination operator.
  • search operators can be stored in the storage module 104, for example, hyperparameter search operators, network architecture search operators, data enhancement search operators, loss function search operators, optimizer search operators, Model compression strategy search operator and user-defined search operator, etc.
  • the search operator refers to an algorithm that implements the corresponding search, or in other words, the search operator is a method of searching for the optimization information corresponding to the corresponding search item.
  • For example, a hyperparameter search operator refers to a search algorithm that implements hyperparameter search, that is, a method of searching for hyperparameters; a network architecture search operator refers to a search algorithm that implements network architecture search, that is, a method of searching for a network architecture.
  • After the operator arrangement module 102 receives the search configuration information sent by the configuration module 101, it can obtain the operator corresponding to each search item from the storage module 104 according to the search items included in the search configuration information. For example, when the multiple search items included in the search configuration information are hyperparameter search and network architecture search, the operator arrangement module 102 can obtain the hyperparameter search operator and the network architecture search operator from the storage module 104 according to these search items. After that, the operator arrangement module 102 can arrange the acquired operators to generate a combination operator.
  • the combination operator refers to an operator generated after determining the order of operations, the number of operations, and the comprehensive search space during each operation among multiple search operators or each part of each search operator. After generating the combination operator, the operator arrangement module 102 may send the combination operator to the multivariate search module 103.
  • The storage module 104 may include multiple search operators corresponding to each search item; different search operators for the same search item correspond to different methods of searching for the optimization information.
  • For example, when the search item is network architecture search, the storage module 104 may store a network architecture search operator A, a network architecture search operator B, and a network architecture search operator C. When the same search item corresponds to multiple search operators, one of the search operators can be selected according to the user's original model, or according to the search item information and the search mode.
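  • A minimal sketch of such a selection, assuming hypothetical operator names and a trait label per operator:
```python
# A sketch of selecting one of several stored operators for the same search item
# according to the configured search mode; the operator names and traits are hypothetical.
NETWORK_ARCHITECTURE_OPERATORS = {
    "network_architecture_search_operator_A": {"trait": "precision"},  # e.g. thorough but slow
    "network_architecture_search_operator_B": {"trait": "speed"},      # e.g. fast, coarser search
    "network_architecture_search_operator_C": {"trait": "economy"},    # e.g. low-cost proxy evaluation
}

def select_operator(operators: dict, search_mode: str) -> str:
    """Pick the operator whose trait matches the search mode, else fall back to the first one."""
    for name, meta in operators.items():
        if meta["trait"] == search_mode:
            return name
    return next(iter(operators))

print(select_operator(NETWORK_ARCHITECTURE_OPERATORS, search_mode="speed"))
```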
  • the multivariate search module 103 is used to communicate with the operator arrangement module 102 and the configuration module 101.
  • The multivariate search module 103 receives the combination operator sent by the operator arrangement module 102 and the user's original model sent by the configuration module 101. After that, the multivariate search module 103 can optimize the user's original model according to the combination operator.
  • the model optimization apparatus 100 may further include a resource scheduling module 105.
  • the resource scheduling module 105 is used to communicate with the operator arrangement module 102 and the multivariate search module 103.
  • The resource scheduling module 105 can receive the combination operator determined by the operator arrangement module 102 and, according to the combination operator, estimate the resource consumption for optimizing the original model; then, according to the resource consumption, it performs resource scheduling for the model optimization operation executed by the multivariate search module 103.
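  • A toy sketch of such resource estimation and scheduling, assuming hypothetical per-operator cost figures and a made-up scheduling rule:
```python
# A toy sketch of resource estimation for a combination operator, assuming per-operator
# cost figures are known in advance; the numbers and the scheduling rule are illustrative.
PER_RUN_COST = {  # hypothetical GPU-hours consumed by one run of each search operator
    "hyperparameter_search": 0.5,
    "network_architecture_search": 2.0,
    "data_enhancement_search": 0.3,
}

def estimate_resources(combination: list) -> float:
    """combination: list of (operator name, number of runs) pairs from the combination operator."""
    return sum(PER_RUN_COST[name] * runs for name, runs in combination)

def schedule_workers(gpu_hours_needed: float, gpu_hours_free: float) -> int:
    """Toy scheduling rule: allocate more parallel workers when spare capacity is large."""
    return max(1, int(gpu_hours_free // max(gpu_hours_needed, 1e-9)))

combination = [("network_architecture_search", 10), ("hyperparameter_search", 20)]
needed = estimate_resources(combination)                      # 10*2.0 + 20*0.5 = 30.0 GPU-hours
print(needed, schedule_workers(needed, gpu_hours_free=120.0)) # e.g. run up to 4 workers in parallel
```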
  • the model optimization device 100 may further include a feedback module 106.
  • the feedback module 106 is used to communicate with the multivariate search module 103.
  • the feedback module 106 can feed back the search results of the multivariate search module 103 and the optimized model to the user.
  • some of the multiple modules included in the model optimization apparatus 100 may also be combined into one module.
  • For example, the above-mentioned operator arrangement module 102 and multivariate search module 103 may be combined into one optimization module, that is, the optimization module integrates the functions of the operator arrangement module 102 and the multivariate search module 103.
  • the model optimization apparatus 100 described above can be deployed flexibly.
  • the model optimization apparatus 100 may be deployed in a cloud environment.
  • the cloud environment is an entity that uses basic resources to provide cloud services to users in the cloud computing mode.
  • the cloud environment includes cloud data centers and cloud service platforms.
  • a cloud data center includes a large number of basic resources (including computing resources, storage resources, and network resources) owned by a cloud service provider, and the computing resources included in a cloud data center may be a large number of computing devices (for example, servers).
  • the model optimization device 100 may be a software device deployed on a server or a virtual machine in a cloud data center.
  • the software device may be used to optimize the AI model.
  • The software device may be deployed on multiple servers in a distributed manner, deployed on multiple virtual machines in a distributed manner, or deployed on virtual machines and servers in a distributed manner. For example, as shown in FIG. 2, the model optimization apparatus 100 is deployed in a cloud environment.
  • The client 110 may send the original model uploaded by the user to the model optimization apparatus 100, or another non-client device 120 may send an original model generated or stored by itself to the model optimization apparatus 100. After receiving the original model, the model optimization apparatus 100 can arrange multiple search operators according to the search configuration information to obtain a combination operator, optimize the original model according to the combination operator to obtain the optimized model, and feed the optimized model back to the client 110 or the other non-client device 120.
  • FIG. 3 is a schematic diagram of an application of the model optimization apparatus 100 in this application.
  • The model optimization apparatus 100 can be deployed in a cloud data center by a cloud service provider; the cloud service provider abstracts the function provided by the model optimization apparatus into a cloud service, and the cloud service platform allows users to consult and purchase this cloud service. After purchasing this cloud service, the user can use the model optimization service provided by the model optimization apparatus 100 in the cloud data center.
  • The model optimization device can also be deployed by a tenant in computing resources of the cloud data center rented by the tenant. The tenant purchases the computing resource cloud service provided by the cloud service provider through the cloud service platform and runs the model optimization device 100 in the purchased computing resources, so that the model optimization device 100 optimizes the AI model.
  • the model optimization apparatus 100 may also be a software apparatus running on an edge computing device in an edge environment or one or more edge computing devices in an edge environment.
  • The so-called edge environment refers to a collection of devices including one or more edge computing devices in a certain application scenario, where the one or more edge computing devices can be computing devices in one data center or computing devices in multiple data centers.
  • the model optimization apparatus 100 is a software device, the model optimization apparatus 100 may be deployed in a distributed manner on multiple edge computing devices, or may be deployed in a centralized manner on one edge computing device.
  • For example, the model optimization apparatus 100 is distributedly deployed on an edge computing device 130 included in the data center of a certain enterprise, and a client 140 in the enterprise can send the original model to the model optimization apparatus 100, and may also send the search configuration information to the model optimization apparatus 100. The model optimization apparatus 100 can then arrange multiple search operators according to the search configuration information to obtain a combination operator, optimize the original model according to the combination operator to obtain an optimized model, and feed the optimized model back to the client 140.
  • FIG. 5 is a schematic structural diagram of a computing device 500 provided by an embodiment of the present application.
  • the computing device 500 includes a processor 501, a communication bus 502, a memory 503, and at least one communication interface 504.
  • the processor 501 may be a general-purpose central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof.
  • the processor 501 may include one or more chips, and the processor 501 may include an AI accelerator, such as a neural network processor (neural processing unit, NPU).
  • the communication bus 502 may include a path for transferring information between various components of the computing device 500 (for example, the processor 501, the memory 503, and the communication interface 504).
  • The memory 503 can be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 503 may exist independently, and is connected to the processor 501 through a communication bus 502.
  • the memory 503 may also be integrated with the processor 501.
  • the memory 503 can store computer instructions. When the computer instructions stored in the memory 503 are executed by the processor 501, the model optimization method of the present application can be implemented.
  • the memory 503 may also store data required by the processor in the process of executing the foregoing method, and generated intermediate data and/or result data.
  • the communication interface 504 uses any device such as a transceiver to communicate with other devices or communication networks, such as Ethernet, radio access network (RAN), wireless local area network (Wireless Local Area Networks, WLAN), etc.
  • the processor 501 may include one or more CPUs.
  • the computer device may include multiple processors.
  • Each of these processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • Fig. 6 is a flowchart of a model optimization method provided by an embodiment of the present application.
  • the model optimization method can be executed by the aforementioned model optimization device 100. Referring to FIG. 6, the method includes the following steps:
  • Step 601 Obtain the user's original model and search configuration information.
  • the search configuration information includes multiple search items, and different search items represent different categories of optimized information search for the original model.
  • the original model of the user may be uploaded by the user in the form of code. That is, the model optimization device can receive the original model in the form of code uploaded by the user. Alternatively, the original model may also be obtained by the model optimization device from other devices according to a designated storage path, or the original model may also be stored in other devices and sent to the model optimization device by other devices.
  • the search configuration information can be input or selected by the user on the GUI.
  • the model optimization device can provide a configuration interface for the user.
  • the configuration interface can include original model configuration items and search configuration information options.
  • The user can enter the storage path of the original model in the original model configuration item, and the model optimization device can then obtain the original model according to the storage path; the search configuration information is configured through the search configuration information option.
  • the search configuration information option may include a list of search items.
  • the search term list includes multiple search terms that can be selected, and each search term represents a category for performing optimized information search on the original model.
  • the search item list may include hyperparameter search, network architecture search, data enhancement search, loss function search, optimizer search, model compression strategy search, and the like.
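  • For illustration only, the search item list above can be represented as a simple enumeration; the identifiers are assumptions, not the actual interface of the apparatus.
```python
# The search item list described above, expressed as a simple enumeration for illustration;
# the identifiers are assumptions, not the actual interface of the apparatus.
from enum import Enum

class SearchItem(Enum):
    HYPERPARAMETER_SEARCH = "hyperparameter_search"             # learning rate, iterations, ...
    NETWORK_ARCHITECTURE_SEARCH = "network_architecture_search"
    DATA_ENHANCEMENT_SEARCH = "data_enhancement_search"          # preprocessing strategy for data sets
    LOSS_FUNCTION_SEARCH = "loss_function_search"
    OPTIMIZER_SEARCH = "optimizer_search"
    MODEL_COMPRESSION_STRATEGY_SEARCH = "model_compression_strategy_search"  # compression / tailoring

selected_items = [SearchItem.HYPERPARAMETER_SEARCH, SearchItem.NETWORK_ARCHITECTURE_SEARCH]
```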
  • hyperparameter search refers to searching for hyperparameters in a given search space that matches the original model.
  • Network architecture search refers to searching for a network architecture that meets user requirements in a given search space.
  • Data enhancement search refers to searching for a data enhancement strategy that meets user requirements on a specified data set, and then processing the samples in the data set through the data enhancement strategy.
  • The data enhancement strategy is an algorithm for preprocessing training data, test data, or inference data. Training, testing, or inference of the AI model through the samples in the processed data set can make the performance of the AI model better.
  • Loss function search refers to searching for a loss function that meets the needs of users in a given search space.
  • Optimizer search refers to searching for an optimizer that meets the requirements in a given search space. Subsequently, model parameters are learned through the optimizer, which can make the performance of the AI model better.
  • Model compression strategy search refers to searching for a strategy for model compression in a given search space to achieve compression and tailoring of the model.
  • When the user selects multiple search items from the search item list on the GUI, the interface also provides a search item information configuration list, in which the user can configure the search item information corresponding to each selected search item.
  • The search item information includes information used when searching for the corresponding search item; for example, the search item information may include the search algorithm and the search space of the corresponding search item.
  • the search space limits the search range for the corresponding search.
  • the interface can display the search item information configuration list corresponding to the hyperparameter search and the search item information configuration list corresponding to the network architecture search.
  • the search item information configuration list corresponding to the hyperparameter search includes the search algorithm configuration item for the hyperparameter search, the parameter name configuration item, and the parameter range configuration item.
  • The parameter range configuration item is equivalent to the search space configuration item corresponding to the hyperparameter search.
  • The search item information configuration list corresponding to the network architecture search includes a default structure configuration item, a search space configuration item, and a delay setting configuration item for the network architecture. It should be noted that FIG. 7 only exemplarily shows several possible pieces of search item information; in some possible implementations, the search item information configuration list corresponding to a selected search item may include less or more search item information.
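  • An illustrative search item information structure for these two search items, with assumed field names and values:
```python
# An illustrative search item information structure for the two selected search items
# (hyperparameter search and network architecture search); the keys and values are
# assumptions for the sketch, not the actual GUI fields.
search_item_info = {
    "hyperparameter_search": {
        "search_algorithm": "bayesian_optimization",           # search algorithm configuration item
        "parameters": {                                        # parameter name -> parameter range
            "learning_rate": {"min": 1e-5, "max": 1e-1, "scale": "log"},
            "momentum": {"min": 0.8, "max": 0.99},
        },
    },
    "network_architecture_search": {
        "default_structure": "resnet18",                       # default structure configuration item
        "search_space": {"num_layers": [18, 34, 50], "kernel_size": [3, 5, 7]},
        "latency_budget_ms": 20,                               # delay setting configuration item
    },
}
```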
  • the search configuration information item may also include a search mode and an evaluation index.
  • the search mode is used to indicate the principles followed when optimizing the original model.
  • the search mode can include precision mode, speed mode, economy mode, and resource mode.
  • the accuracy mode means that the model is optimized with model accuracy as the goal.
  • Speed mode means that the model is optimized to meet a certain optimization speed.
  • The economic mode refers to optimizing the model at the minimum cost to the user.
  • Resource mode refers to model optimization with the goal of minimizing the resources consumed by the model optimization device.
  • the evaluation index mainly refers to the performance index to be achieved by the optimized model.
  • For example, the evaluation index may be one or more of the accuracy of the model (accuracy), the loss of the model (loss), the precision of the model (precision), the recall of the model (recall), and so on, or the evaluation index may also be a user-defined indicator, which is not limited in the embodiment of the present application.
  • the evaluation index can usually be used alone as a condition for stopping model optimization to output the optimized model.
  • For example, if the evaluation index is the accuracy of the model, then when the accuracy of the optimized model reaches the evaluation index, the optimization can be stopped and the optimized model is output.
  • In addition, the evaluation index may also be affected by the search mode, and the model optimization device may combine the search mode and the evaluation index to determine when to stop optimization and output the optimized model. For example, suppose the evaluation index set by the user is a lower limit on the accuracy of the model. If the user selects the precision mode, the model optimization device can continue to optimize the model after this lower limit is reached, seeking a further improvement in accuracy. If the user selects the speed mode, then, out of the pursuit of optimization speed, the optimization can be stopped and the optimized model output immediately after the first model whose accuracy exceeds the evaluation index is obtained.
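  • A minimal sketch of such a stop condition, assuming an accuracy evaluation index and the modes described above; the rule itself is illustrative.
```python
# A sketch of how the evaluation index and the search mode could jointly decide when to
# stop optimization, matching the behaviour described above; the rule is illustrative.
def should_stop(current_accuracy: float, accuracy_target: float,
                search_mode: str, budget_exhausted: bool) -> bool:
    if search_mode == "speed":
        # Speed mode: stop as soon as the first model reaches the evaluation index.
        return current_accuracy >= accuracy_target
    if search_mode == "precision":
        # Precision mode: keep searching for further gains until the budget runs out.
        return budget_exhausted
    # Other modes: treat the evaluation index alone as the stop condition.
    return current_accuracy >= accuracy_target

print(should_stop(0.91, 0.90, "speed", budget_exhausted=False))      # True: output the model now
print(should_stop(0.91, 0.90, "precision", budget_exhausted=False))  # False: keep optimizing
```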
  • the model optimization device can obtain the original model uploaded by the user and the configured search configuration information.
  • the configuration interface may also include more or less search configuration information options.
  • the configuration interface may not include an evaluation index, but a default evaluation index is given by the model optimization device.
  • the configuration interface may further include an operator arrangement mode option, and the operator arrangement mode option may indicate a way of arranging multiple search operators. The embodiment of the application does not limit this.
  • The original model uploaded by the user may be code written by the user that has not undergone any adaptation for the model optimization device.
  • the user only needs to perform a simple configuration through the above configuration interface, and the model optimization device can optimize the original model through the subsequent steps without modifying the code according to the platform requirements, which reduces the user's threshold for use.
  • the user can also add declaration information to the code file of the original model, and the declaration information can indicate the content to be searched.
  • the declaration information may also include other configuration information such as evaluation indicators.
  • the model optimization device may directly receive the original model and configuration file in code form uploaded by the user, and the configuration file may include the user's search configuration information.
  • the user can directly write the search configuration information into the configuration file.
  • the model optimization device can obtain the search configuration information in the configuration file by parsing the configuration file.
  • the content included in the search configuration information is as described above, and details are not described herein again in the embodiment of the present application.
  • the configuration file can support multiple file formats, such as yaml, xml, txt, etc., which are not limited in the embodiment of the present application.
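  • For illustration, a hypothetical YAML configuration file of this kind could be parsed as follows; the field names are assumptions, and parsing here uses the third-party PyYAML package.
```python
# A sketch of the configuration-file route: the search configuration information is written
# into a YAML file and parsed by the model optimization apparatus. The field names are
# assumptions; parsing here uses the third-party PyYAML package (pip install pyyaml).
import yaml

config_text = """
search_items:
  - hyperparameter_search
  - network_architecture_search
search_mode: precision
evaluation_index:
  accuracy: 0.95
"""

search_config = yaml.safe_load(config_text)
print(search_config["search_items"], search_config["evaluation_index"]["accuracy"])
```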
  • Step 602 Arrange multiple search operators according to the search configuration information to obtain a combination operator.
  • Each arranged search operator corresponds to a search term, and the search operator represents an algorithm used to execute the corresponding search term.
  • search operators corresponding to different search items may be stored in the model optimization device.
  • the search operator is an algorithm for searching the optimization information corresponding to the corresponding search item.
  • For example, a hyperparameter search operator corresponds to the hyperparameter search, a network architecture search operator corresponds to the network architecture search, a data enhancement search operator corresponds to the data enhancement search, and so on.
  • In this step, the model optimization device may obtain the multiple search operators corresponding to the multiple search items included in the search configuration information, and then arrange the multiple search operators; arranging the multiple search operators means determining the combined operation mode of the multiple search operators. For example, the model optimization device can determine the operation order, the number of operations, and/or the comprehensive search space for each operation of the multiple search operators according to the search mode included in the search configuration information and the search item information corresponding to each search item, and then generate the combination operator.
  • the model optimization device may obtain the hyperparameter search operator according to the hyperparameter search, and obtain the network architecture search operator according to the network architecture search.
  • the model optimization device can determine the operation order, the number of operations, and the search space for each operation of the plurality of search operators according to the search mode and search item information included in the search configuration information.
  • The search space during each operation can be different, and these search spaces include a comprehensive search space. The comprehensive search space can refer to the search space obtained by fusing the search spaces corresponding to different search items, or it can refer to the respective search spaces obtained after the search spaces corresponding to different search items influence each other.
  • For example, the combination operator obtained by the model optimization device arranging the hyperparameter search operator and the network architecture search operator can be: search for candidate network structures, and perform a hyperparameter search in the comprehensive search space for each network structure to obtain search results, where the search results include the candidate hyperparameters obtained by searching for each network structure. Here, the comprehensive search space refers to the search space obtained by combining the search space corresponding to the network architecture search and the search space corresponding to the hyperparameter search, or refers to the search spaces obtained after the two search spaces influence each other based on the aforementioned network architecture search results and hyperparameter search results.
  • the reference evaluation index may be the aforementioned evaluation index configured by the user. Of course, if the user has not configured the evaluation index, the evaluation index may refer to the default evaluation index.
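  • The four-stage flow above can be outlined, purely for illustration, as the following non-normative sketch; nas_op, hp_op and evaluate are placeholder callables standing in for the real search operators and are not defined by this application:
      def combination_operator_speed(original_model, nas_op, hp_op, evaluate,
                                     hp_space, nas_space, target_metric):
          # (1) NAS operator: generate candidate network structures from the
          #     original model uploaded as code.
          structures = nas_op.generate(original_model, nas_space)

          # (2) HP operator: search hyperparameters for every candidate structure.
          candidates = [(s, hp_op.search(s, hp_space)) for s in structures]

          # (3) NAS operator: evaluate the candidates and keep the best target structure.
          target_structure, target_hp = max(candidates, key=lambda c: evaluate(*c))

          # (4) HP operator again, in a comprehensive space narrowed by the earlier
          #     NAS and HP results, until the reference evaluation index is met.
          comprehensive_space = hp_op.narrow(hp_space, nas_space, candidates)
          for _ in range(10):  # bounded re-search for this sketch
              if evaluate(target_structure, target_hp) >= target_metric:
                  break
              target_hp = hp_op.search(target_structure, comprehensive_space)
          return target_structure, target_hp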
  • As another example, assume that the search items selected by the user are hyperparameter search, network architecture search, and data augmentation search, and the selected search mode is the precision mode. The combination operator obtained by the model optimization device after arranging the hyperparameter search operator, the network architecture search operator, and the data augmentation search operator can be: (1) through the data augmentation search operator, search for a data augmentation policy in the search space corresponding to the data augmentation search for the specified data set, and augment the training samples in the specified data set using the searched policy; (2) with the outer layer calling the hyperparameter search operator and the inner layer calling the network architecture search operator, search in the designated comprehensive search space through a double loop to obtain multiple sets of search results, where each set of search results includes a network architecture and the corresponding hyperparameters. During the search, the designated comprehensive search space is synthesized from the search space of the hyperparameter search and the search space of the network architecture search, and it changes constantly according to the results of each loop iteration. Also during the search, the model can be trained and tested on the data set augmented in (1) to obtain the evaluation index corresponding to each set of search results; (3) from the multiple sets of search results obtained in (2), select and output the results whose evaluation indexes rank in the top N.
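  • Again for illustration only, the double-loop combination operator above might be outlined as follows; all callables (aug_op, hp_op, nas_op, train_and_eval, refine) are hypothetical placeholders:
      def combination_operator_precision(dataset, aug_op, hp_op, nas_op,
                                         train_and_eval, refine, space, top_n=3):
          # (1) Search a data augmentation policy and augment the training samples.
          augmented = aug_op(dataset, space["augmentation"])

          # (2) Outer loop over candidate hyperparameters, inner loop over candidate
          #     architectures; the comprehensive space is updated every iteration and
          #     the augmented data set is used for training and testing.
          results = []
          for hp in hp_op(space):
              for arch in nas_op(space, hp):
                  score = train_and_eval(arch, hp, augmented)
                  results.append((score, arch, hp))
                  space = refine(space, results)

          # (3) Output the top-N groups of results by evaluation index.
          return sorted(results, key=lambda r: r[0], reverse=True)[:top_n]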
  • It can be seen that, when searching for multiple search items, the model optimization device can achieve joint optimization of multiple dimensions by arranging the search operators, and in the search process the search spaces of the multiple search items influence each other or are combined. As a result, the model optimized according to the arranged combination operator is an optimized model within the joint search space formed by the multiple search dimensions, which improves the performance of the optimized model.
  • In the foregoing implementation, the model optimization device arranges the multiple search operators in real time according to the search configuration information to obtain the combination operator. Optionally, in other possible implementations, the model optimization device may arrange the search operators corresponding to different search items in advance to obtain different combination operators, and test the characteristics of the different combination operators when optimizing models; for example, model optimization through one combination operator may have relatively high accuracy, while optimization through another combination operator may be relatively fast.
  • The model optimization device may then store, in correspondence, the combined search items, the corresponding combination operator, and the characteristics of that combination operator. For example, the combined search item of hyperparameter search and network architecture search corresponds to combination operator 1, whose characteristic is high accuracy; the same combined search item may also correspond to combination operator 2, whose characteristic is fast speed.
  • In this case, when the model optimization device obtains the search configuration information, it can first match the same combined search item from the above correspondence according to the multiple search items included in the search configuration information, and then, according to the search mode included in the search configuration information, determine, from the combination operators corresponding to that combined search item, the combination operator whose characteristic matches the search mode. After the matching combination operator is found, the search space of each operation of the combination operator can be configured according to the search item information in the search configuration information.
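  • For illustration only, such matching of a pre-arranged combination operator can be sketched as a simple lookup keyed by the combined search items and the search mode; the registry contents are hypothetical:
      PREARRANGED = {
          (frozenset({"hyperparameter_search", "network_architecture_search"}),
           "precision"): "combination_operator_1",
          (frozenset({"hyperparameter_search", "network_architecture_search"}),
           "speed"): "combination_operator_2",
      }

      def match_combination_operator(search_items, search_mode):
          # Returns None when no pre-arranged operator matches, in which case the
          # operators would be arranged in real time instead.
          return PREARRANGED.get((frozenset(search_items), search_mode))

      # match_combination_operator(["network_architecture_search",
      #                             "hyperparameter_search"], "speed")
      # -> "combination_operator_2"; the search space of each of its operations is
      # then configured from the search item information.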
  • Step 603: Optimize the original model according to the combination operator to obtain an optimized model.
  • After obtaining the combination operator, the model optimization device can optimize the original model through the combination operator. For example, the model optimization device can search for optimization information in the comprehensive search space according to the combination operator, and optimize the original model according to the searched optimization information, so as to obtain the optimized model. The performance of the optimized model meets the evaluation index set by the user or the default evaluation index.
  • The model optimization device may execute each operator in the combination operator in its execution order, so as to search in the comprehensive search space and obtain the optimization information of the original model corresponding to the multiple search items. For example, when the search items are hyperparameter search and network architecture search, the optimization information includes hyperparameter optimization information and network architecture optimization information of the original model.
  • The optimization information may be information that can directly replace the corresponding content in the original model, for example, a hyperparameter that directly replaces a hyperparameter of the original model. Alternatively, it may be difference information used to change the corresponding content in the original model, for example, difference data of a hyperparameter; in that case, the difference data can be added to the hyperparameter of the original model to optimize the hyperparameter of the original model.
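  • A minimal illustration of these two forms of optimization information, assuming hypothetical field names, is the following sketch:
      def apply_hyperparameter_optimization(original_hyperparams, optimization_info):
          updated = dict(original_hyperparams)
          for name, info in optimization_info.items():
              if info.get("form") == "difference":
                  updated[name] = updated[name] + info["value"]   # add difference data
              else:
                  updated[name] = info["value"]                   # direct replacement
          return updated

      # apply_hyperparameter_optimization(
      #     {"learning_rate": 0.010, "batch_size": 32},
      #     {"learning_rate": {"form": "difference", "value": -0.002},
      #      "batch_size": {"form": "replace", "value": 64}})
      # -> {"learning_rate": 0.008, "batch_size": 64}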
  • After obtaining the optimization information, the model optimization device can optimize the original model according to the optimization information to obtain the optimized model, and then output the optimized model. Optionally, the model optimization device can also output the optimization information corresponding to each search item and the performance indicators corresponding to the optimized model, such as accuracy and inference latency. The model optimization device can output the above content in the form of a GUI or an interface file.
  • For example, referring to FIG. 9, when the search items selected by the user are hyperparameter search and network architecture search, after the optimized model is obtained, the GUI can display the storage path of the optimized model, the optimization information corresponding to the hyperparameter search (that is, the searched hyperparameters), the optimization information corresponding to the network architecture search (that is, the searched network architecture), and the accuracy and inference latency of the optimized model.
  • In some possible implementations, the model optimization device may obtain multiple optimized models that meet the set evaluation index during the search process. In this case, the model optimization device can sort the multiple optimized models by their performance indicators in descending order, and then output the optimized models whose performance indicators rank in the top N. Similarly, while outputting the optimized models, it can also output the optimization information and performance indicators of each search item corresponding to each optimized model.
  • In the embodiments of this application, search configuration information containing multiple search items can be obtained at the same time, and the search operators corresponding to the multiple search items are then arranged according to the search configuration information to obtain a combination operator. In this way, the original model can be optimized for the multiple search items at the same time in the comprehensive search space according to the combination operator. That is, the embodiments of the present application can realize joint optimization of multiple dimensions of the original model at the same time, thereby obtaining an optimized model in the comprehensive space and improving the performance of the model.
  • The foregoing describes the implementation process of optimizing the model. Optionally, in the embodiments of this application, before optimizing the model, the model optimization device may also implement automatic scheduling of computing resources according to the combination operator.
  • For example, FIG. 10 shows a flowchart of automatic resource scheduling according to the combination operator. The process includes the following steps:
  • Step 1001: Estimate the resource consumption of optimizing the original model according to the combination operator.
  • The model optimization device can estimate, based on the computing resources required by the operation of each operator in the combination operator, parameters such as the total amount of resources consumed during the operation of the combination operator, the computing resource peak, the fluctuation, and the duration of the computing resource peak; these parameters reflect the resource consumption in the process of optimizing the original model.
  • Optionally, in addition to the resource consumption in the process of optimizing the original model, the model optimization device may also obtain the resource consumption of other optimization jobs running in parallel with the optimization of the original model.
  • Step 1002: Perform resource scheduling for the operation of optimizing the original model according to the resource consumption.
  • After estimating the resource consumption of optimizing the original model, the model optimization device can obtain the current computing resource usage parameters, for example, determine the currently used computing resources, the remaining computing resources, and the resources that may be released during the optimization of the original model. After that, the model optimization device can allocate corresponding computing resources for the operation of each operator in the combination operator during the optimization of the original model, according to the resource consumption of optimizing the original model and the current computing resource usage parameters.
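  • For illustration only, steps 1001 and 1002 might be sketched with the following simplified estimates and allocation rule; a real system would rely on measured profiles and the learned resource management model described below rather than these toy formulas:
      def estimate_resource_consumption(step_costs):
          # step_costs: hypothetical per-step (cores, seconds) estimates of the
          # combination operator's operations.
          total = sum(cores * seconds for cores, seconds in step_costs)
          peak = max(cores for cores, _ in step_costs)
          peak_seconds = sum(seconds for cores, seconds in step_costs if cores == peak)
          return {"total_core_seconds": total, "peak_cores": peak,
                  "peak_seconds": peak_seconds}

      def allocate(consumption, cluster_cores, cores_in_use, cores_to_be_released=0):
          free = cluster_cores - cores_in_use + cores_to_be_released
          # Grant up to the estimated peak demand, but never more than what is free.
          return max(0, min(consumption["peak_cores"], free))

      # estimate_resource_consumption([(8, 600), (2, 120), (8, 300), (2, 60)])
      # -> {'total_core_seconds': 7560, 'peak_cores': 8, 'peak_seconds': 900}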
  • It should be noted that a resource management model for resource allocation may be deployed on the model optimization device. The resource management model may be obtained by using a deep learning algorithm to learn the operating parameters and resource operation feedback data of each optimization, and the operating parameters and resource operation feedback data of each optimization may be shared by model optimization devices in different networks. After obtaining the resource consumption of optimizing the original model and the current computing resource usage parameters, the model optimization device can use these two parameters as the input of the resource management model, obtain resource allocation data through the resource management model, and then allocate the corresponding computing resources for the optimization of the original model based on the resource allocation data.
  • Optionally, when the model optimization device also obtains the resource consumption of other optimization jobs running in parallel with the optimization of the original model, the model optimization device may further optimize the computing resources based on the resource consumption of the original model optimization and the resource consumption of the other parallel optimization jobs. For example, the model optimization device can optimize the computing resources by adjusting the steps of the concurrent jobs based on the resource consumption during the original model optimization and the resource consumption of the other parallel optimization jobs.
  • For example, taking the second combination operator shown in step 602 as an example, assume that two different original models are optimized in parallel using this combination operator, and that the resource consumption of steps 1 and 3 in the combination operator is relatively large while the resource consumption of the other steps is relatively small. When the optimization jobs of the two models are executed concurrently, the execution can be staggered in time according to the resource consumption calculated for the search steps, pairing steps with large resource consumption with steps with small resource consumption, and the steps can be adjusted accordingly; for example, step 1 and step 2 of one of the jobs can be swapped.
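  • A toy illustration of this staggering, with hypothetical step names, is the following sketch:
      job_a = ["step1_heavy", "step2_light", "step3_heavy", "step4_light"]
      job_b = ["step1_heavy", "step2_light", "step3_heavy", "step4_light"]

      def swap_first_two(steps):
          reordered = list(steps)
          reordered[0], reordered[1] = reordered[1], reordered[0]
          return reordered

      job_b_staggered = swap_first_two(job_b)
      # job_a:           step1_heavy, step2_light, step3_heavy, step4_light
      # job_b_staggered: step2_light, step1_heavy, step3_heavy, step4_light
      # The two resource-heavy first steps no longer start at the same time.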
  • Alternatively, the model optimization device can also optimize the computing resources by adjusting the resource consumption of the steps of a single job, based on the resource consumption during the original model optimization and the resource consumption of the other parallel optimization jobs. For example, when computing resources are scarce or multiple requests are sent to the model optimization device at the same time, the resource consumption of a single job is reduced on the premise of meeting user needs, for example by reducing the number of concurrent trainings in the network architecture search or hyperparameter search of a single job, that is, by dynamically adjusting the resource allocation of the sub-jobs in a single search step.
  • It can be seen that, in the embodiments of this application, the model optimization device can implement automatic scheduling of resources according to the resource consumption during the operation of the combination operator and its own resource usage, and, according to the resource consumption of multiple concurrent optimization jobs, it realizes resource optimization by adjusting the steps of the concurrent jobs or by adjusting the resource consumption of a single job.
  • The foregoing embodiments mainly describe the implementation process in which the model optimization device 100 optimizes the original model. The following describes the implementation process of the model optimization method from the perspective of the interaction between the user and the model optimization device 100. For example, referring to FIG. 11, the method includes the following steps:
  • Step 1101: Provide the user with a configuration interface, where the configuration interface includes a list of search items for the user to select.
  • In this embodiment of the application, the model optimization device may send related information of the configuration interface to the client corresponding to the user, and the client displays the configuration interface, so that the configuration interface is provided to the user for configuration of the related information. For the implementation of the configuration interface, refer to the configuration interface described in the embodiment shown in FIG. 6.
  • Step 1102: Obtain the original model and the search configuration information, where the search configuration information includes multiple search items selected by the user in the search item list, and different search items represent different search categories for searching for optimization information of the original model.
  • In this embodiment of the application, the model optimization device can obtain the configuration information, such as the search items, configured by the user in the configuration interface. In addition, for information that the user has not configured in the configuration interface, such as the evaluation index, the model optimization device can use a preset default configuration.
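  • For illustration only, falling back to preset defaults for anything the user did not configure might look like the following sketch, with hypothetical keys and values:
      DEFAULTS = {
          "search_mode": "precision",
          "evaluation_metric": {"name": "accuracy", "target": 0.90},
      }

      def resolve_search_config(user_config):
          # Merge the user's configuration-interface input with the preset defaults.
          resolved = dict(DEFAULTS)
          resolved.update({k: v for k, v in user_config.items() if v is not None})
          return resolved

      # resolve_search_config({"search_items": ["hyperparameter_search",
      #                                         "network_architecture_search"],
      #                        "evaluation_metric": None})
      # keeps the default evaluation metric while adopting the user's search items.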
  • Regarding obtaining the original model, the model optimization device can obtain the original model according to the storage path configured by the user in the configuration interface, receive the original model directly uploaded by the user through the client, or obtain the original model from another device. For the specific method of obtaining the original model and for the representation of the original model, refer to the introduction in the embodiment shown in FIG. 6; details are not described herein again.
  • Step 1103: Optimize the original model according to the search configuration information.
  • After obtaining the search configuration information, the model optimization device can arrange multiple search operators according to the search configuration information to obtain a combination operator, and optimize the original model according to the combination operator to obtain an optimized model. For the implementation process, refer to the related introduction of steps 602 and 603 in the embodiment shown in FIG. 6; details are not described herein again.
  • Step 1104: Provide the optimized model to the user.
  • After the optimized model is obtained, the model optimization device can feed back the optimized model to the client. Optionally, the model optimization device can also feed back to the client the optimization information corresponding to each search item and the performance indicators corresponding to the optimized model, such as accuracy and inference latency. The model optimization device can output the above content to the client in the form of a GUI or an interface file, for example as shown in FIG. 9.
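  • A hypothetical shape of such feedback (all names and values below are illustrative only) is the following sketch:
      feedback_to_client = {
          "optimized_model_path": "/jobs/0001/optimized_model",
          "optimization_info": {
              "hyperparameter_search": {"learning_rate": 0.004, "batch_size": 64},
              "network_architecture_search": {"searched_architecture": "cell_stack_12"},
          },
          "performance": {"accuracy": 0.953, "inference_latency_ms": 21.7},
      }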
  • In some possible implementations, the model optimization device may obtain multiple optimized models that meet the set evaluation index during the search process. In this case, the model optimization device can sort the multiple optimized models by their performance indicators in descending order, and then feed back to the client the optimized models whose performance indicators rank in the top N. Similarly, while feeding back the optimized models, it can also feed back the optimization information and performance indicators of each search item corresponding to each optimized model.
  • In this embodiment of the application, the model optimization device obtains the original model and the search configuration information, and performs a joint search for optimization information on the original model according to the multiple search items included in the search configuration information, thereby realizing joint optimization of multiple dimensions of the original model and improving the performance of the model. The model optimization device can provide the user with a configuration interface, and the user can configure search information such as the search items and the search mode in the configuration interface according to their own needs, so as to meet different optimization needs of the user; this is flexible and easy to operate, and reduces the user's burden of use.
  • It should be noted that, in this embodiment of the application, before the model optimization device optimizes the original model, it can also use the resource scheduling method in the embodiment shown in FIG. 10 to implement automatic resource scheduling and optimization; details are not described herein again.
  • The embodiments of the present application also provide a model optimization device 100 as shown in FIG. 1. The modules and functions included in the model optimization device 100 are as described above and are not repeated here.
  • In some embodiments, the configuration module 101 in the model optimization device 100 is used to perform step 601 in the foregoing embodiments, the operator orchestration module 102 is used to perform step 602, and the multivariate search module 103 is used to perform step 603.
  • Optionally, the model optimization device 100 may further include a resource scheduling module 105, which may be used to perform step 1001 and step 1002 in the foregoing embodiments.
  • In other embodiments, the configuration module 101 in the model optimization device 100 is used to perform steps 1101 and 1102 in the foregoing embodiments, the operator orchestration module 102 and the multivariate search module 103 can be combined into one optimization module used to perform step 1103, and the feedback module 106 is used to perform step 1104.
  • The embodiments of the present application also provide a computing device 500 as shown in FIG. 5. The processor 501 in the computing device 500 reads a set of computer instructions stored in the memory 503 to execute the foregoing model optimization method.
  • Since the modules in the model optimization device 100 provided in the embodiments of the present application can be deployed in a distributed manner on multiple computers in the same environment or in different environments, the present application also provides a computing device (which may also be referred to as a computer system) as shown in FIG. 12. The computer system includes multiple computers 1200, and the structure of each computer 1200 is the same as or similar to that of the computing device 500 in FIG. 5; details are not repeated here.
  • Each of the above-mentioned computers 1200 establishes a communication path through a communication network.
  • Each computer 1200 runs any one or more of the aforementioned configuration module 101, operator arrangement module 102, multivariate search module 103, resource scheduling module 105, and feedback module 106.
  • Any computer 1200 may be a computer (for example, a server) in a cloud data center, or an edge computer, or a terminal computing device.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they can be implemented in whole or in part in the form of a computer program product. The computer program product that implements model optimization includes one or more computer instructions for model optimization. When these computer program instructions are loaded and executed on a computer, the processes or functions described in FIG. 6 and FIG. 10 of the embodiments of the present application are generated in whole or in part, or the processes or functions described in FIG. 11 and FIG. 10 of the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
  • A person of ordinary skill in the art may understand that all or part of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing related hardware. The program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a model optimization method, apparatus, storage medium, and device, belonging to the AI field. In the embodiments of this application, search configuration information containing multiple search items can be obtained at the same time, and the search operators respectively corresponding to the multiple search items are then arranged according to the search configuration information to obtain a combination operator. In this way, the original model can be optimized for the multiple search items at the same time according to the combination operator. That is, the embodiments of this application can realize joint optimization of multiple dimensions of the original model at the same time, thereby obtaining an optimized model in the comprehensive space and improving the performance of the model.

Description

模型优化方法、装置、存储介质及设备 技术领域
本申请涉及人工智能(artificial intelligence,AI)领域,特别涉及一种模型优化方法、装置、存储介质及设备。
背景技术
随着AI模型的应用越来越广泛,不同的应用场景对AI模型的性能要求也不相同,例如:对于应用于智能手机解锁的人脸识别模型,需要该人脸识别模型达到99.2%的准确度,以及较小的推理时延;对于应用于物体分类的模型,需要该分类模型准确率高于95%即可。AI模型的性能与AI模型的结构、模型的超参数、训练数据或推理数据、损失函数等具有强相关关系。
为了获得符合应用需求的AI模型,目前,相关技术中提供了各种AI平台,这些AI平台可以根据用户的需求,对生成的初始网络或者用户提供的原模型进行超参数搜索或网络结构调优等优化方式,以输出满足用户需求的AI模型。然而,这些AI平台在进行AI模型优化时,每次优化只提供单一的优化功能,即仅从一个方面进行模型的优化。
发明内容
本申请提供了一种模型优化方法,可以对初始的AI模型进行多个维度的联合优化,从而得到综合空间内的优化模型,提高了模型性能。所述技术方案如下:
第一方面,提供了一种模型优化方法,所述方法包括:获取用户的原模型和搜索配置信息,所述搜索配置信息包括多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,每个被编排的搜索算子与一个搜索项对应,搜索算子表示执行对应的搜索项使用的算法;根据所述组合算子,对所述原模型进行优化,获得优化后的模型。
在本申请实施例中,可以同时获取包含有多个搜索项的搜索配置信息,进而根据该搜索配置信息对多个搜索项分别对应的搜索算子进行编排,得到组合算子。这样,就可以根据该组合算子,对该原模型同时进行多个搜索项对应的优化。也即,本申请实施例可以实现同时对原模型的多个维度的联合优化,从而得到综合空间内的优化模型,提高了模型的性能。
可选地,上述搜索配置信息中的多个搜索项可以包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
其中,超参搜索是指通过超参搜索算法在一定的搜索空间内搜索符合原模型的超参。超参也称为超参数,是AI模型(例如:神经网络模型)中一些无法通过模型训练得到的参数,例如:超参包括学习率、迭代次数等参数。网络架构搜索是指基于演化算法、强化学习、可微分网络等算法,在给定的搜索空间中,搜索满足用户要求的网络架构,网络架构表示AI模型的基本结构。数据增强搜索是指根据指定的数据集搜索符合用户要求的数据增强策略,进而通过该数据增强策略对该数据集中的样本进行处理,数据增强策略是用于对训练数据、测试数据或者推理数据进行预处理的算法。通过处理后的数据集 中的样本对AI模型进行训练、测试或推理,可以使得AI模型性能更优。损失函数搜索是指在给定的搜索空间内搜索符合用户需求的损失函数,损失函数用于对原模型进行训练时的模型优化。优化器搜索是指在给定的搜索空间内搜索符合要求的优化器,后续,通过该优化器进行模型参数的学习,可以使得AI模型的性能更优。模型压缩策略搜索是指在给定的搜索空间内搜索用于进行模型压缩的策略,以实现对AI模型的压缩裁剪。
可选地,根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子的实现过程可以为:根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
其中,综合搜索空间是指将不同搜索项对应的搜索空间进行融合得到的搜索空间,也可以是指不同搜索项对应的搜索空间彼此相互影响后得到的各自的搜索空间。在本申请实施例中,通过确定多个搜索算子的运算顺序、运算次数和每次运算时的综合搜索空间得到最终的组合算子,这样,在根据该组合算子对原模型进行优化时,就可以在综合搜索空间内搜素该原模型的优化信息。也即,本申请实施例并不是在某个搜索项对应的单一搜索空间内进行搜索,而是根据组合算子在综合搜索空间内进行搜索,相当于是在综合搜索空间内寻求模型的优化信息,这样,根据得到的优化信息优化后的模型也是综合搜索空间内的优化模型,提高了优化后的模型的性能。
可选地,所述搜索配置信息还包括:搜索项信息和搜索模式,其中,每个搜索项对应各自的搜索项信息,所述搜索模式用于表示对所述原模型进行优化时所遵循的原则。
其中,该搜索项信息包括进行相应搜索项的搜索时所用到的一些信息,例如,该搜索项信息中可以包括相应搜索项的搜索算法和搜索空间等,该搜索空间限定了进行相应搜索时的搜索范围。所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
在本申请实施例中,通过搜索配置信息和搜索模式来限定选用的组合算子,从而使得模型的优化结果或过程更符合用户要求。
可选地,所述搜索配置信息由用户在图形用户界面GUI上进行输入或选择得到。
可选地,在根据所述组合算子,对所述原模型进行优化之前,还可以根据所述组合算子,预估对所述原模型进行优化的资源消耗;根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
也即,本申请实施例根据组合算子进行运算时的资源消耗情况以及自身的资源使用情况,可以实现资源的自动调度。
可选地,在获取搜索配置信息的同时,本申请还可以获取评价指标,所述评价指标表示对所述原模型进行优化后应达到的性能目标;相应地,在根据所述组合算子,对所述原模型进行优化时,可以根据所述组合算子在综合搜索空间进行优化信息搜索,根据所述优化信息对所述原模型进行优化,获得优化后的模型,所述优化后的模型的性能满足所述评价指标。
在本申请实施例中,通过设定评价指标,可以输出性能符合要求的优化模型。其中,评价指标可以包括以下指标中的任意一种或多种:模型的准确率、模型的损失、模型的精确度、模型的召回率。当然,还可以为其他的评价指标,例如可以为用户自定义的指标,本申请实施例对此不作限定。
第二方面,本申请还提供了另一种模型优化方法,该方法包括:向用户提供配置界面,所述配置界面包括供所述用户选择的搜索项列表;获取原模型和搜索配置信息,所述搜索配置信息包括所述用户在所述搜索项列表中选择的多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;根据所述搜索配置信息对所述原模型进行优化;向所述用户提供优化后的模型。
在本申请实施例中,根据多个搜索项进行联合搜索,实现了同时对原模型的多个维度的联合优化,提高了模型的性能。另外,可以向用户提供配置界面,由用户来选择所要搜索的多个搜索项,以满足用户对于不同搜索项的联合搜索的需求。
可选地,所述用户在所述搜索项列表中选择的多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。其中,各个搜索项所代表的含义可以参考前述第一方面中的相关介绍,本申请实施例在此不做赘述。
可选地,所述配置界面还包括搜索项信息配置页和搜索模式配置页,所述搜索配置信息还包括所述用户在所述配置界面配置的搜索项信息和搜索模式。
可选地,所述搜索模式用于表示对所述原模型进行优化时遵循的原则,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
可选地,在根据所述搜索配置信息对所述原模型进行优化时,首先可以根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子;之后,根据所述组合算子,对所述原模型进行优化。
其中,组合算子表示对多个搜索算子之间或者每个搜索算子中每部分的运算顺序、运算次数以及每次运算时的综合搜索空间进行确定后生成的算子。综合搜索空间是指将不同搜索项对应的搜索空间进行融合得到的搜索空间,也可以是指不同搜索项对应的搜索空间彼此相互影响后得到的各自的搜索空间。根据组合算子对原模型进行优化可以实现在综合搜索空间内对原模型的优化信息的搜索,从而得到综合空间内的较优解,提高模型性能。
可选地,根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子的实现过程可以为:根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
可选地,在根据所述组合算子,对所述原模型进行优化之前,还可以根据所述组合算子,预估对所述原模型进行优化的资源消耗;根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
第三方面,本申请还提供了一种模型优化装置,该模型优化装置包括:配置模块,用于获取用户的原模型和搜索配置信息,所述搜索配置信息包括多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;算子编排模块,用于根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,每个被编排的搜索算子对应一个搜索项,搜索算子表示执行对应的搜索项使用的算法;多元搜索模块,用于根据所述组合算子,对所述原模型进行优化,获得优化后的模型。
可选地,所述多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
可选地,所述算子编排模块具体用于:根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数以及每次运算时的综合搜索空间;根据所述多个搜索算子的运算顺序、运算次数以及每次运算时的综合搜索空间,生成所述组合算子。
可选地,所述搜索配置信息还包括:搜索项信息和搜索模式,其中,每个搜索项对应各自的搜索项信息,所述搜索模式用于表示对所述原模型进行优化时遵循的原则。
可选地,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
可选地,所述搜索配置信息由用户在图形用户界面GUI上进行输入或选择得到。
可选地,所述模型优化装置还包括:资源管理模块,所述资源管理模块用于:根据所述组合算子,预估对所述原模型进行优化的资源消耗;根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
可选地,所述模型优化装置的配置模块还用于获取评价指标,所述评价指标表示对所述原模型进行优化后应达到的性能目标;所述多元搜索模块具体还用于根据所述组合算子在综合搜索空间进行优化信息搜索,根据所述优化信息对所述原模型进行优化,获得优化后的模型,所述优化后的的模型的性能满足所述评价指标。
可选地,所述评价指标包括以下指标中的任意一种或多种:模型的准确率、模型的损失、模型的精确度、模型的召回率。
第四方面,本申请还提供了另一种模型优化装置,该装置包括:配置模块,用于向用户提供配置界面,所述配置界面包括供所述用户选择的搜索项列表;获取原模型和搜索配置信息,所述搜索配置信息包括所述用户在所述搜索项列表中选择的多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;多元搜索模块,用于根据所述搜索配置信息对所述原模型进行优化;反馈模块,用于向所述用户提供优化后的模型。
可选地,所述用户在所述搜索项列表中选择的多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
可选地,所述配置界面还包括搜索项信息配置页和搜索模式配置页,所述搜索配置信息还包括所述用户在所述配置界面配置的搜索项信息和搜索模式。
可选地,所述搜索模式用于表示对所述原模型进行优化时遵循的原则,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
可选地,所述多元搜索模块具体用于:根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子;根据所述组合算子,对所述原模型进行优化。
可选地,所述多元搜索模块具体用于:根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
可选地,所述模型优化装置还包括:资源管理模块,所述资源管理模块用于:根据所述组合算子,预估对所述原模型进行优化的资源消耗;根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
第五方面,本申请还提供了一种计算设备,所述计算设备的结构中包括处理器和存储器,所述存储器用于存储支持计算设备执行上述第一方面或第二方面所提供的模型优化方法的程序,以及存储用于实现上述第一方面或第二方面所提供的模型优化方法所涉及的数据。所述处理器执行所述存储器中存储的程序执行前述第一方面或第二方面及其可选的实现方式提供的方法。所述计算设备还可以包括通信总线,该通信总线用于该处理器与存储器之间建立连接。
第六方面,本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面或第二方面及其可选的实现方式所述的模型优化方法。
第七方面,本申请还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面或第二方面所述的模型优化方法。
上述第二方面、第三方面、第四方面、第五方面、第六方面和第七方面所获得的技术效果与第一方面中对应的技术手段获得的技术效果近似,在这里不再赘述。
本申请提供的技术方案带来的有益效果至少包括:
在本申请实施例中,可以同时获取包含有多个搜索项的搜索配置信息,进而根据该搜索配置信息对多个搜索项分别对应的搜索算子进行编排,得到组合算子。这样,就可以根据该组合算子,对该原模型同时进行多个搜索项对应的优化。也即,本申请实施例可以实现对原模型的多个维度的联合优化,从而得到综合空间内的优化模型,提高了模型的性能。
附图说明
为了更清楚地说明本申请实施例的技术方法,下面将对实施例中所需使用的附图作以简单地介绍。
图1为本申请实施例提供的一种模型优化装置的结构示意图;
图2是本申请实施例提供的一种模型优化装置的部署示意图;
图3是本申请实施例提供的一种模型优化装置的应用示意图;
图4是本申请实施例提供的另一种模型优化装置的部署示意图;
图5是本申请实施例提供的一种计算设备的结构示意图;
图6是本申请实施例提供的一种模型优化方法流程图;
图7是本申请实施例提供的一种配置界面的示意图;
图8是本申请实施例提供的另一种配置界面的示意图;
图9是本申请实施例提供的一种优化后的模型的输出界面的示意图;
图10是本申请实施例提供的根据组合算子进行资源自动调度的流程图;
图11是本申请实施例提供的另一种模型优化方法的流程图;
图12是本申请实施例提供的一种计算机系统的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式 作进一步地详细描述。
在对本申请实施例进行详细的解释说明之前,先对本申请实施例涉及的应用场景予以介绍。
当前,AI模型已经被广泛地应用于诸如图像识别、视频分析、语音识别、自然语言翻译、自动驾驶控制等领域中。AI模型表示一种可以通过被训练完成对数据特征的学习,进而可以用于进行推理的数学算法。业界存在多种不同类型的AI模型,例如:神经网络模型是一种典型的AI模型。神经网络模型是一类模仿生物神经网络(动物的中枢神经系统)的结构和功能的数学计算模型。一个神经网络模型可以包括多种不同功能的计算层,每层包括参数和计算公式。根据计算公式的不同或功能的不同,神经网络模型中不同的计算层有不同的名称,例如:进行卷积计算的层称为卷积层,可以用于对输入图像进行特征提取。为简洁起见,本申请实施例中一些表述中将AI模型简称为模型。
随着AI模型的应用越来越广泛,对AI模型的性能要求也越来越高。例如:神经网络模型的性能与神经网络模型的超参选择、网络架构设计、训练样本等息息相关。如何从多方面对原始AI模型进行优化,以获得一个性能更高的AI模型是行业关注的重点。
基于此,为了提高AI模型的性能,开发人员在编写好最初的原模型之后,可以采用本申请实施例提供的优化方法,针对该原模型进行超参搜索、网络架构搜索、数据增强搜索等联合搜索,以得到该原模型的优化信息,进而根据该优化信息对原模型进行超参、网络架构、训练样本、损失函数、优化器、模型压缩策略等方面的优化。其中,原模型是指还未进行性能优化的初始AI模型,并且,该原模型可以通过代码的形式表示。
另外,需要说明的是,超参搜索是指通过超参搜索算法在一定的搜索空间内搜索符合原模型的超参。应理解,超参也称为超参数,是AI模型(例如:神经网络模型)中一些无法通过模型训练得到的参数,例如:超参包括学习率、迭代次数等参数。超参的设定对于AI模型的性能影响较大。网络架构搜索是指基于演化算法、强化学习、可微分网络等算法,在给定的搜索空间中,搜索满足用户要求的网络架构,网络架构表示AI模型的基本结构。数据增强搜索是指根据指定的数据集搜索符合用户要求的数据增强策略,进而通过该数据增强策略对该数据集中的样本进行处理,数据增强策略是用于对训练数据、测试数据或者推理数据进行预处理的算法。通过处理后的数据集中的样本对AI模型进行训练、测试或推理,可以使得AI模型性能更优。损失函数搜索是指在给定的搜索空间内搜索符合用户需求的损失函数,损失函数用于对原模型进行训练时的模型优化。优化器搜索是指在给定的搜索空间内搜索符合要求的优化器,后续,通过该优化器进行模型参数的学习,可以使得AI模型的性能更优。模型压缩策略搜索是指在给定的搜索空间内搜索用于进行模型压缩的策略,以实现对AI模型的压缩裁剪。
本申请实施例提供了一种模型优化方法,该方法由模型优化装置来执行。模型优化装置的功能可以由软件系统实现,也可以由硬件设备实现,还可以由软件系统和硬件设备结合来实现。
当模型优化装置为软件装置时,参见图1,该模型优化装置100可以在逻辑上分成多个模块,每个模块可以具有不同的功能,每个模块的功能由计算设备中的处理器读取并执行存储器中的指令来实现,该计算设备结构可以如下文中图5所示的计算设备500。示例性的,该模型优化装置可以包括配置模块101、算子编排模块102、多元搜索模块103和存储模块104。在一种具体实现方式中,模型优化装置100可以执行下文描述的步骤 601-603和步骤1001-1002中描述的内容,或者,执行下文描述的步骤1101-1104和步骤1001-1002中描述的内容。需要说明的是,本申请实施例仅对模型优化装置100的结构和功能模块进行示例性划分,但是并不对其具体划分做任何限定。
配置模块101,用于获取用户的原模型和搜索配置信息。其中,用户的原模型可以是用户上传的,也可以是存储在其他装置或设备上的。搜索配置信息可以包括用户配置的多个搜索项,每个搜索项表示一种搜索原模型的优化信息的类别。例如,多个搜索项可以为超参搜索和网络架构搜索,此时,则表示搜索原模型的超参优化信息和网络架构优化信息。可选地,多个搜索项还可以包括数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索等等,本申请实施例在此不做限定。在一些可能的实现方式中,搜索配置信息还可以包括搜索项信息和搜索模式,每个搜索项信息与每个搜索项对应,每个搜索项信息包括对应的搜索项所对应的搜索空间。搜索模式可以包括精度模式、速度模式、经济模式和资源模式中的任一种。
算子编排模块102,用于与配置模块101、存储模块104以及多元搜索模块103进行通信连接,接收配置模块101发送的搜索配置信息,以及接收存储模块104发送的多个搜索算子。根据该搜索配置信息,对多个搜索算子进行编排,得到组合算子。
需要说明的是,存储模块104中可以存储有多种搜索算子,例如,超参搜索算子、网络架构搜索算子、数据增强搜索算子、损失函数搜索算子、优化器搜索算子、模型压缩策略搜索算子以及用户自定义搜索算子等。其中,搜索算子是指实现相应搜索的算法,或者说,搜索算子是对相应搜索项对应的优化信息进行搜索的方法。例如,超参搜索算子是指实现超参搜索的搜索算法,也即,是指搜索超参的方法;网络架构搜索算子是指实现网络架构搜索的搜索算法,也即,是指进行网络架构的搜索的方法。
算子编排模块102在接收到配置模块101发送的搜索配置信息之后,可以根据搜索配置信息包括的搜索项从存储模块104中获取每个搜索项对应的算子,例如,当搜索配置信息包括的多个搜索项为超参搜索和网络架构搜索时,则算子编排模块102可以根据搜索项从存储模块104中获取超参搜索算子和网络架构搜索算子。之后,算子编排模块102可以对获取到的算子进行编排,从而生成组合算子。组合算子表示对多个搜索算子之间或者每个搜索算子中每部分的运算顺序、运算次数以及每次运算时的综合搜索空间进行确定后生成的算子。在生成组合算子之后,算子编排模块102可以将该组合算子发送至多元搜索模块103。
可选的,存储模块104中可以包括每个搜索项对应的多个搜索算子,不同的搜索算子表示对同一个搜索项对应的优化信息的搜索方式不同,例如:对于搜索项为网络架构搜索,存储器104中可以存储有网络架构搜索算子A、网络架构搜索算子B、网络架构搜索算子C。对于同一搜索项对应多个搜索算子的情况,可以根据用户的原模型或者搜索项信息、搜索模式选择其中一个搜索算子。
多元搜索模块103,用于与算子编排模块102以及配置模块101进行通信连接。接收算子编排模块102发送的组合算子,以及接收配置模块101发送的用户的原模型。之后,多元搜索模块103可以根据组合算子,对用户的原模型进行优化。
可选地,该模型优化装置100还可以包括资源调度模块105。该资源调度模块105用于与算子编排模块102以及多元搜索模块103进行通信连接。该资源调度模块105可以接收算子编排模块102确定的组合算子,并根据该组合算子,预估对原模型进行优化 的资源消耗,进而根据该资源消耗,为多元搜索模块103执行对原模型的优化的操作进行资源调度。
可选地,该模型优化装置100还可以包括反馈模块106。该反馈模块106用于与多元搜索模块103进行通信连接。该反馈模块106可以将多元搜索模块103的搜索结果以及优化后的模型反馈给用户。
另外,在一些可能的情况中,上述的该模型优化装置100包括的多个模块中的部分模块的也可以合并为一个模块,例如,上述的算子编排模块102和多元搜索模块103可以合并为优化模块,也即,该优化模块集合了算子编排模块102和多元搜索模块103的功能。
在本申请实施例中,上述介绍的模型优化装置100可以灵活的部署。例如,该模型优化装置100可以部署在云环境。云环境是云计算模式下利用基础资源向用户提供云服务的实体,云环境包括云数据中心和云服务平台。
云数据中心包括云服务提供商拥有的大量基础资源(包括计算资源、存储资源和网络资源),云数据中心包括的计算资源可以是大量的计算设备(例如服务器)。该模型优化装置100可以是部署在云数据中心中的服务器或者虚拟机上的软件装置,该软件装置可以用于进行AI模型的优化,该软件装置可以分布式地部署在多个服务器上、或者分布式地部署在多个虚拟机上、或者分布式地部署在虚拟机和服务器上。例如,如图2所示,该模型优化装置100部署在云环境中。客户端110可以将用户上传的原模型发送至该模型优化装置100,或者是其他非客户端设备120可以将自身生成或存储的原模型发送至该模型优化装置100,该模型优化装置100在接收到原模型之后,可以根据搜索配置信息对多个搜索算子进行编排,得到组合算子,进而根据组合算子对原模型进行优化,获得优化后的模型,将优化后的模型反馈给客户端110或者是其他非客户端设备120。
示例性地,图3为本申请中的模型优化装置100的一种应用示意图,如图3所示,模型优化装置100可以由云服务提供商部署在云数据中心,云服务提供商将模型优化装置提供的功能抽象成为一项云服务,云服务平台供用户咨询和购买这项云服务。用户购买这项云服务后即可使用云数据中心的该模型优化装置100提供的模型优化服务。该模型优化装置还可以由租户部署在租户租用的云数据中心的计算资源中,租户通过云服务平台购买云服务提供商提供的计算资源云服务,在购买的计算资源中运行该模型优化装置100,使得该模型优化装置100进行AI模型的优化。
可选地,该模型优化装置100还可以是边缘环境中运行在边缘计算设备上的软件装置或者是边缘环境中的一个或多个边缘计算设备。所谓边缘环境是指某个应用场景中包括一个或多个边缘计算设备在内的设备集合,其中,该一个或多个边缘计算设备可以是一个数据中心内的计算设备或者是多个数据中心的计算设备。当模型优化装置100为软件装置时,模型优化装置100可以分布式地部署在多台边缘计算设备,也可以集中地部署在一台边缘计算设备。示例性地,如图4所示,该模型优化装置100分布式地部署在某个企业的数据中心包括的边缘计算设备130中,该企业中的客户端140可以将原模型发送至该模型优化装置100,可选地,客户端140还可以将搜索配置信息发送至模型优化装置100。该模型优化装置100在接收到原模型之后,可以根据搜索配置信息对多个搜索算子进行编排,得到组合算子,进而根据组合算子对原模型进行优化,获得优化后的模型,将优化后的模型反馈给客户端140。
当该模型优化装置为硬件设备时,该模型优化装置可以为任意环境中的一个计算设备,例如,可以为前述介绍的边缘计算设备,也可以为前述介绍的云环境下的计算设备。图5是本申请实施例提供的一种计算设备500的结构示意图。该计算设备500包括处理器501,通信总线502,存储器503以及至少一个通信接口504。
处理器501可以是一个通用中央处理器(Central Processing Unit,CPU),特定应用集成电路(application-specific integrated circuit,ASIC),图形处理器(graphics processing unit,GPU)或其任意组合。处理器501可以包括一个或多个芯片,处理器501可以包括AI加速器,例如:神经网络处理器(neural processing unit,NPU)。
通信总线502可包括在计算设备500各个部件(例如,处理器501、存储器503、通信接口504)之间传送信息的通路。
存储器503可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,随机存取存储器(random access memory,RAM))或者可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器503可以是独立存在,通过通信总线502与处理器501相连接。存储器503也可以和处理器501集成在一起。存储器503可以存储计算机指令,当存储器503中存储的计算机指令被处理器501执行时,可以实现本申请的模型优化方法。另外,存储器503中还可以存储有处理器在执行上述方法的过程中所需的数据以及所产生的中间数据和/或结果数据。
通信接口504,使用任何收发器一类的装置,用于与其它设备或通信网络通信,如以太网,无线接入网(RAN),无线局域网(Wireless Local Area Networks,WLAN)等。
在具体实现中,作为一种实施例,处理器501可以包括一个或多个CPU。
在具体实现中,作为一种实施例,计算机设备可以包括多个处理器。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
接下来对本申请实施例提供的模型优化方法进行介绍。
图6是本申请实施例提供的一种模型优化方法的流程图。该模型优化方法可以由前述的模型优化装置100来执行,参见图6,该方法包括以下步骤:
步骤601:获取用户的原模型和搜索配置信息,搜索配置信息包括多个搜索项,不同搜索项表示对原模型进行优化信息搜索的不同类别。
在本申请实施例中,用户的原模型可以由用户以代码形式上传。也即,模型优化装置可以接收用户上传的代码形式的原模型。或者,该原模型也可以是模型优化装置根据指定的存储路径从其他装置中获取得到,或者,该原模型也可以是其他设备中存储的,由其他设备发送至该模型优化装置。另外,搜索配置信息可以由用户在GUI上进行输入 或选择得到。
在一种可能的实现方式中,模型优化装置可以为用户提供一个配置界面,在该配置界面中可以包括原模型配置项和搜索配置信息选项,用户可以在该原模型配置项中输入原模型的存储路径,模型优化装置可以根据该存储路径获取原模型,并通过搜索配置信息选项配置搜索配置信息。
需要说明的是,搜索配置信息选项中可以包括搜索项列表。示例性地,如图7所示,搜索项列表中包括多个可被选择的搜索项,每个搜索项表示一种对原模型进行优化信息搜索的类别。示例性地,搜索项列表可以包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索等。其中,超参搜索是指在给定的搜索空间内搜索符合原模型的超参。网络架构搜索是指在给定的搜索空间内搜索符合用户要求的网络架构。数据增强搜索是指在指定的数据集上搜索符合用户要求的数据增强策略,进而通过该数据增强策略对该数据集中的样本进行处理,数据增强策略是用于对训练数据、测试数据或推理数据进行预处理的算法。通过处理后的数据集中的样本对AI模型进行训练、测试或者是推理,可以使得AI模型的性能更优。损失函数搜索是指在给定的搜索空间内搜索符合用户需求的损失函数。优化器搜索是指在给定的搜索空间内搜索符合要求的优化器,后续,通过该优化器进行模型参数的学习,可以使得AI模型的性能更优。模型压缩策略搜索是指在给定的搜索空间内搜索用于进行模型压缩的策略,以实现对模型的压缩裁剪。可选的,当用户在GUI界面上选择了搜索项列表中的多种搜索项后,界面还会提供搜索项信息配置列表,在该搜索项信息配置列表中,用户可以配置相应地搜索项对应的一个或多个搜索项信息,该搜索项信息包括进行相应搜索项的搜索时所用到的一些信息,例如,该一个或多个搜索项信息中可以包括相应搜索项的搜索算法和搜索空间等,该搜索空间限定了进行相应搜索时的搜索范围。例如,如图7所示,当选择了超参搜索和网络架构搜索之后,界面中可以显示超参搜索对应的搜索项信息配置列表和网络架构搜索对应的搜索项信息配置列表。其中,超参搜索对应的搜索项信息配置列表中包括超参搜索的搜索算法配置项、参数名称配置项以及参数范围配置项,该参数范围配置项即相当于是超参搜索对应的搜索空间配置项。网络架构搜索对应的搜索项信息配置列表中包括网络架构的默认结构配置项、搜索空间配置项以及延时设定配置项。需要说明的是,图7中仅是示例性的给出了几种可能的搜索项信息,在一些可能的实现方式中,选择的搜索项对应搜索项信息配置列表中可以包括更少或更多的搜索项信息。
可选地,参见图8,搜索配置信息项还可以包括搜索模式、评价指标。其中,搜索模式用于表示对原模型进行优化时遵循的原则。搜索模式可以包括精度模式、速度模式、经济模式和资源模式。其中,精度模式表示以模型精度为目标来进行模型优化。速度模式则表示以满足一定的优化速度为目标来进行模型优化。经济模式是指以用户所需耗费的费用最少为目标来进行模型优化。资源模式则是指以模型优化装置所消耗的资源最少为目标来进行模型优化。
评价指标主要是指优化后的模型所要达到的性能指标。当对原模型优化后的模型的指标达到设定的评价指标时,即可以停止优化。示例性的,评价指标可以为模型的准确率(accuracy)、模型的损失(loss)、模型的精确度(precision)、召回率(recall)等指标中的一种或多种,或者,该评价指标也可以为用户自定义的指标,本申请实施例对此不作限定。
评价指标通常可以单独作为停止模型优化以输出优化后的模型的条件来使用。例如,当评价指标为模型的精确度时,在优化原模型的过程中,如果优化后的模型的精确度达到了该评价指标,则可以停止继续优化,输出该优化后的模型。或者,评价指标也可能受到搜索模式的影响,模型优化装置可以结合搜索模式和评价指标来决定何时停止优化以输出优化后的模型。例如,假设用户设定的评价指标为模型的精确度下限值,如果用户还选择了精度模式,则在对原模型优化的过程中,当优化后的模型精确度达到模型的精确度下限值之后,由于用户还选择了精度模式,而精度模式追求的是模型精度越高越好,因此,模型优化装置还可以对模型继续进行优化,以寻求模型精确度的进一步提高。但是,如果用户选择的是速度模式,则出于对优化速度的追求,在优化得到第一个精确度大于评价指标的模型之后,即可以停止优化,立即输出该优化后的模型。
当用户配置完成之后,模型优化装置可以获取用户上传的原模型以及配置的搜索配置信息。
需要说明的是,上述仅是本申请实施例中给出的一些可能的搜索配置信息选项,并且,在上述搜索配置信息选项中,对于用户而言,有些搜索配置信息是可选地,例如,搜索模式和评价指标。另外,根据实际需要,该配置界面中还可以包括更多或更少的搜索配置信息选项,例如,该配置界面中可以不包括评价指标,而是由该模型优化装置给定默认评价指标。再例如,该配置界面还可以包括算子编排方式选项,该算子编排方式选项可以表示对多个搜索算子进行编排的方式。本申请实施例对此不作限定。
需要说明的是,用户上传的原模型可以是用户编写的未经过任何针对所述模型优化装置进行适应性修改的代码。用户只需通过上述配置界面进行简单的配置,模型优化装置即可以通过后续步骤来对原模型优化,而不需根据平台要求进行代码修改,降低了用户使用门槛。可选地,用户也可以在原模型的代码文件中添加声明信息,该声明信息可以指示出所要搜索的内容。可选地,该声明信息中还可以包括评价指标等其他配置信息。
在另一种可能的实现方式中,模型优化装置可以直接接收用户上传的代码形式的原模型以及配置文件,该配置文件中可以包括用户的搜索配置信息。
在该种实现方式中,用户可以直接将搜索配置信息写入配置文件中。模型优化装置在接收到该配置文件之后,可以通过对该配置文件进行解析来获取其中的搜索配置信息。其中,搜索配置信息包括的内容如前所述,本申请实施例在此不再赘述。配置文件可以支持多种文件格式,如yaml,xml,txt等,本申请实施例对此不作限定。
步骤602:根据搜索配置信息,对多个搜索算子进行编排,得到组合算子,每个被编排的搜索算子对应一个搜索项,搜索算子表示执行对应的搜索项使用的算法。
在本申请实施例中,模型优化装置中可以存储有不同搜索项对应的搜索算子。该搜索算子是搜索相应搜索项对应的优化信息的算法。例如,超参搜索对应的超参搜索算子、网络架构搜索对应的网络架构搜索算子、数据增强搜索对应的数据增强搜索算子等。
在一种可能的实现方式中,模型优化装置在获取到搜索配置信息之后,可以根据搜索配置信息中包括的多个搜索项,获取多个搜索项所对应的多个搜索算子,进而对多个搜索算子进行编排,所述对多个搜索算子进行编排即为确定多个搜索算子组合运行的方式。例如:模型优化装置可以根据搜索配置信息包括的搜索模式和每个搜索项对应的搜索项信息,确定多个搜索算子的运算顺序、运算次数和/或每次运算时的综合搜索空间,进而生成组合算子。
例如,当多个搜索项为超参搜索和网络架构搜索时,模型优化装置可以根据超参搜索获取超参搜索算子,根据网络架构搜索获取网络架构搜索算子。
在获取到多个搜索算子之后,模型优化装置可以根据该搜索配置信息包括的搜索模式和搜索项信息,确定多个搜索算子的运算顺序、运算次数以及每次运算时的搜索空间。其中,每次运算时的搜索空间可以不同,且这些搜索空间中包括综合搜索空间,该综合搜索空间可以是指将不同搜索项对应的搜索空间进行融合得到的搜索空间,也可以是指不同搜索项对应的搜索空间彼此相互影响后得到的各自的搜索空间。
作为一种示例,假设用户选择的搜索项为超参搜索和网络架构搜索,且选择的搜索模式为速度模式,则模型优化装置对超参搜索算子和网络架构搜索算子进行编排后得到的组合算子可以为:
(1)、通过网络架构搜索算子,根据用户以代码形式上传的原模型,生成多种可能的原模型的网络结构。
(2)、通过超参搜索算子,根据用户配置的超参搜索的搜索项信息中的搜索空间以及其他参数,对上述(1)中得到的多种原模型的网络结构中的每种网络结构进行超参搜索,得到搜索结果。该搜索结果包括针对每种网络结构搜索得到的对应的候选超参。
(3)通过网络架构搜索算子,对上述(2)得到的搜索结果进行评估,从中选出效果最好的目标结构。
(4)、通过超参搜索算子,在综合搜索空间上针对该目标结构再次进行超参搜索,得到评价指标符合参考评价指标的该目标结构对应的目标超参。其中,综合搜索空间是指综合网络架构搜索对应的搜索空间和超参搜索对应的搜索空间后得到的搜索空间,或者,综合搜索空间是指根据前述网络架构搜索结果以及超参搜索结果对超参搜索的搜索项信息包括的搜索空间进行调整后得到的搜索空间。其中,参考评价指标可以是前述的用户配置的评价指标,当然,在用户未配置的情况下,该评价指标可以是指默认的评价指标。
作为另一种示例,假设用户选择的搜索项为超参搜索、网络架构搜索和数据增强搜索,且选择的搜索模式为精度模式,则模型优化装置对超参搜索算子、网络架构搜索算子和数据增强搜索算子进行编排后得到的组合算子可以为:
(1)通过数据增强搜索算子,针对指定数据集在数据增强搜索所对应的搜索空间内搜索数据增强策略,通过搜索到的数据增强策略对指定数据集中的训练样本进行数据增强处理。
(2)外层调用超参搜索算子、内层调用网络架构搜索算子,通过双层循环,在指定的综合搜索空间内进行搜索,得到多组搜索结果,每组搜索结果包括一个网络架构和对应的超参。需要说明的是,搜索的过程中,指定的综合搜索空间是根据超参搜索的搜索空间和网络架构搜索的搜索空间综合得到的,并且,指定的综合搜索空间根据每层循环的结果不断变化。并且,在搜索的过程中,可以通过前述(1)中增强处理的数据集对模型进行训练和测试,得到每组搜索结果对应的评价指标。
(3)从上述(2)中得到的多组搜索结果中选择对应的评价指标排在前N个的结果输出。
由此可见,在本申请实施例中,在进行多种搜索项的搜索时,模型优化装置可以通过对搜索算子编排实现多个维度的联合优化,并且,在搜索过程中,多个搜索项的搜索 空间彼此影响或者是综合,这样,根据编排得到的组合算子优化的模型将是多个搜索维度所构成的联合搜索空间内的优化模型,提高了优化后模型的性能。
需要说明的是,上述是本申请实施例给出的几种可能的组合算子的实现方式,用以说明对搜索项对应的多个搜索算子进行编排以得到组合算子的过程,根据搜索项的不同,搜索算子编排的方式也可以完全不同,本申请实施例对此不作限定。
在上述实现方式中,模型优化装置可以根据搜索配置信息对多个搜索算子进行实时的编排,以得到组合算子。可选地,在另一些可能的实现方式中,模型优化装置可以预先根据对不同搜索项对应的搜索算子进行编排,得到不同的组合算子,并测试不同的组合算子在优化模型时所存在的特性,例如,通过某个组合算子进行模型优化的精度比较高,通过另一个组合算子进行优化时速度比较快等。模型优化装置可以将组合的搜索项、对应的组合算子以及该组合算子的特性对应存储。例如,对于超参搜索和网络架构搜索这个组合搜索项,其对应有组合算子1,组合算子1对应的特性是精度高,另外,该组合搜索项还可以对应有组合算子2,该组合算子2对应的特性是速度快。在这种情况下,当模型优化装置获取到搜索配置信息时,首先可以根据搜索配置信息中包括的多个搜索项从上述对应关系中匹配相同的组合搜索项,之后,根据搜索配置信息中包括的搜索模式,从组合搜索项对应的组合算子中确定特性与该搜索模式相匹配的组合算子,在找到匹配的组合算子之后,可以根据搜索配置信息中的搜索项信息对该组合算子的各个运算操作中的搜索空间进行配置。
步骤603:根据组合算子,对原模型进行优化,获得优化后的模型。
模型优化装置在得到组合算子之后,可以通过该组合算子对原模型进行优化。
示例性的,模型优化装置可以根据该组合算子在综合搜索空间进行优化信息搜索,根据搜索到的优化信息对原模型进行优化,从而得到优化后的模型。其中,优化后的模型的性能满足用户设定的评价指标或者是默认的评价指标。
模型优化装置可以按照组合算子中各个算子执行的顺序依次执行各个算子,以在综和搜索空间内进行搜索,得到多个搜索项对应的原模型的优化信息。例如,当搜索项为超参搜索和网络架构搜索时,优化信息包括对原模型的超参优化信息和网络架构优化信息。其中,该优化信息可以是能够直接替换原模型中相应的内容的信息,例如,可以是直接替换原模型的超参的超参。或者,也可以是用于对原模型中相应的内容进行更改的差异信息,例如,可以为超参的差异数据,这样,可以在原模型的超参的基础上加上该差异数据,以实现对原模型的超参的优化。
在得到优化信息之后,模型优化装置可以根据该优化信息对原模型进行优化,获得优化后的模型。之后,模型优化装置可以输出优化后的模型,可选地,模型优化装置还可以输出各个搜索项所对应的优化信息以及优化后的模型所对应的性能指标,例如准确率、推理时延等。其中,模型优化装置可以以GUI的形式或者是接口文件的形式将上述内容输出。
例如,参见图9,当用户选择的搜索项为超参搜索和网络架构搜索时,在得到优化后的模型之后,可以在GUI上显示优化后的模型的存储路径、超参搜索对应的优化信息(也即搜索到的超参)、网络架构搜索对应的优化信息(也即搜索到的网络架构)以及该优化后的模型准确率和推理时延。
在一些可能的实现方式中,模型优化装置在搜索过程中可能会得到多个满足设定的 评价指标的优化后的模型。在这种情况下,模型优化装置可以按照多个优化后的模型的性能指标从大到小进行排序,然后将性能指标排在前N位的优化后的模型输出。同样的,在输出优化后的模型的同时,还可以输出每个优化后的模型所对应的各个搜索项的优化信息以及性能指标。
在本申请实施例中,可以同时获取包含有多个搜索项的搜索配置信息,进而根据该搜索配置信息对多个搜索项分别对应的搜索算子进行编排,得到组合算子。这样,就可以根据该组合算子,在综合搜索空间内对该原模型同时进行多个搜索项对应的优化。也即,本申请实施例可以实现同时对原模型的多个维度的联合优化,从而得到综合空间内的优化模型,提高了模型的性能。
上述实施例中介绍了对模型进行优化的实现过程。可选地,在本申请实施例中,在对模型进行优化之前,模型优化装置还可以根据组合算子实现计算资源的自动调度。
示例性地,参见图10,其示出了根据组合算子进行资源自动调度的流程图,该过程包括以下步骤:
步骤1001:根据组合算子,预估对原模型进行优化的资源消耗。
其中,模型优化装置可以根据组合算子中各个算子运算时所需的运算资源,预估该组合算子运行时所需要消耗的资源总和、计算资源峰值、波动情况以及计算资源峰值持续时长等参数,该参数即可以反映对原模型进行优化的过程中的资源消耗。
可选地,在本申请实施例中,模型优化装置在获取到对原模型进行优化的过程中的资源消耗之外,还可以获取与该原模型的优化并行的其他优化作业的资源消耗。
步骤1002:根据资源消耗,为执行对原模型进行优化的操作进行资源调度。
在预估得到对原模型进行优化的资源消耗之后,模型优化装置可以获取当前的计算资源使用参数,例如,确定当前已使用的计算资源、剩余的计算资源以及在对原模型进行优化过程中可能释放的资源等。之后,模型优化装置可以根据对原模型进行优化的资源消耗和当前的计算资源使用参数,为原模型的优化过程中组合算子中每个算子的运算分配对应的计算资源。
需要说明的是,在本申请实施例中,模型优化装置上可以部署有用于进行资源分配的资源管理模型,该资源管理模型可以是采用深度学习算法,通过学习每次优化时的运行参数和资源运行反馈数据得到,并且,每次优化时的运行参数和资源运行反馈数据可以是由不同网络中的模型优化装置共享的。模型优化装置在获得对原模型进行优化时的资源消耗以及当前的计算资源使用参数之后,可以将这两部分参数作为该资源管理模型的输入,通过该资源管理模型获得资源分配数据,进而根据该资源分配数据为原模型的优化分配对应的计算资源。
可选地,当模型优化装置还获取到了与该原模型的优化并行的其他优化作业的资源消耗时,模型优化装置还可以根据原模型优化时的资源消耗和并行的其他优化作业的资源消耗,对计算资源进行进一步地优化。
其中,模型优化装置也可以根据原模型优化时的资源消耗和并行的其他优化作业的资源消耗,通过调整并发作业的步骤对计算资源进行优化。
示例性地,以前述步骤602中示出的第二种组合算子为例,假设以该种组合算子分别对两个不同的原模型进行并行优化。其中,该组合算子中步骤1和3的资源消耗相对 较大,而其它步骤的资源消耗相对较少。当同时并发执行两个模型的优化作业时,根据搜索步骤中的资源消耗计算,进行时间错峰,把大资源消耗步骤与小资源消耗步骤搭配使用,将步骤进行调整,例如可以将其中一个作业中的步骤1和步骤2进行调换。
或者,模型优化装置也可以根据原模型优化时的资源消耗和并行的其他优化作业的资源消耗,通过调整对单个作业的步骤的资源消耗来对计算资源进行优化。
示例性地,当计算资源较少或者有多个请求同时发送给模型优化装置时,在满足用户需求的前提下,缩小单个作业的资源消耗,例如降低单个作业网络架构搜索或者超参搜索时并发训练的数量,也即,动态调整单个搜索步骤中子作业的资源配置。
由此可见,在本申请实施例中,模型优化装置根据组合算子进行运算时的资源消耗情况以及自身的资源使用情况,可以实现资源的自动调度,并且,根据同时并发的多个优化作业的资源消耗情况,模型优化装置通过调整并发作业的步骤或者是通过调整单个作业的资源消耗,实现了资源的优化。
上述实施例主要介绍了模型优化装置100对原模型进行优化的实现过程,接下来从用户与模型优化装置100进行交互的角度来介绍该模型优化方法的实现流程,示例性地,参见图11,该方法包括以下步骤:
步骤1101:向用户提供配置界面,该配置界面包括供用户选择的搜索项列表。
在本申请实施例中,模型优化装置可以向用户对应的客户端发送配置界面的相关信息,由该客户端显示该配置界面,以将该配置界面提供给用户进行相关信息的配置。
其中,配置界面的实现方式可以参考图6所示的实施例中所介绍的配置界面,本申请实施例在此不再赘述。
步骤1102:获取原模型和搜索配置信息,该搜索配置信息包括用户在搜索项列表中选择的多个搜索项,不同搜索项表示对原模型进行优化信息搜索的不同搜索类别。
在本申请实施例中,模型优化装置可以获取用户在配置界面中配置的诸如搜索项之类的配置信息,除此之外,对于用户未在配置界面中配置的信息,例如评价指标等,模型优化装置可以获取预先设置的默认配置。
另外,关于获取原模型的实现过程,模型优化装置可以根据用户在配置界面中配置的存储路径来获取原模型,或者是接收用户通过客户端直接上传的原模型,或者也可以是从其他设备获取原模型,具体的获取方式可以参考图6所示实施例中获取原模型的方法,并且,关于原模型的表示方式也可以参考图6所示实施例中的介绍,本申请实施例在此不再赘述。
步骤1103:根据搜索配置信息对原模型进行优化。
模型优化装置在获得搜索配置信息之后,可以根据搜索配置信息对多个搜索算子进行编排,得到组合算子,根据该组合算子,对原模型进行优化,从而得到优化后的模型。
其中,根据搜索配置信息对多个搜索算子进行编排,得到组合算子,根据该组合算子,对原模型进行优化的实现过程可以参考图6所示实施例中的步骤602和603的相关介绍,本申请实施例在此不再赘述。
步骤1104:向用户提供优化后的模型。
在得到优化后的模型之后,模型优化装置可以向客户端反馈优化后的模型,可选地,模型优化装置还可以向客户端反馈各个搜索项所对应的优化信息以及优化后的模型所对 应的性能指标,例如准确率、推理时延等。其中,模型优化装置可以以GUI的形式或者是接口文件的形式将上述内容输出至客户端。例如,如图9所示。
在一些可能的实现方式中,模型优化装置在搜索过程中可能会得到多个满足设定的评价指标的优化后的模型。在这种情况下,模型优化装置可以按照多个优化后的模型的性能指标从大到小进行排序,然后将性能指标排在前N位的优化后的模型反馈给客户端。同样的,在反馈优化后的模型的同时,还可以反馈每个优化后的模型所对应的各个搜索项的优化信息以及性能指标。
在本申请实施例中,模型优化装置获取原模型和搜索配置信息,根据搜索配置信息包括的多个搜索项对原模型进行优化信息的联合搜索,从而实现对原模型的多个维度的联合优化,提高了模型的性能。其中,模型优化装置可以向用户提供配置界面,由用户根据自身需求在配置界面中进行诸如搜索项、搜索模式等搜索信息的配置,以满足用户不同的优化需求,灵活且操作简便,降低了用户的使用负担。
需要说明的是,在本申请实施例中,模型优化装置在对原模型进行优化之前,同样可以采用图10所示的实施例中的资源调度方法来实现资源自动调度和优化,本申请实施例对此不再赘述。
本申请实施例还提供了如图1中所示的模型优化装置100,该模型优化装置100包括的模块和功能如前文的描述,在此不再赘述。
在一些实施例中,模型优化装置100中的配置模块101用于执行前述实施例中的步骤601。算子编排模块102用于执行前述实施例中的步骤602。多元搜索模块103用于执行前述实施例中的步骤603。
可选地,该模型优化装置100还可以包括资源调度模块105,该资源调度模块105可以用于执行前述实施例中的步骤1001和步骤1002。
在另一些实施例中,模型优化装置100中的配置模块101用于执行前述实施例中的步骤1101和步骤1102,算子编排模块102和多元搜索模块103可以合并为一个优化模块,该优化模块可以用于执行前述实施例中的步骤1103,反馈模块106用于执行前述实施例中的步骤1104。
本申请实施例还提供了一种如图5所示的计算设备500。计算设备500中的处理器501读取存储器503中存储的一组计算机指令以执行前述的模型优化方法。
由于本申请实施例提供的模型优化装置100中的各个模块可以分布式的部署在同一环境或不同环境的多个计算机上,因此,本申请还提供了一种如图12所示的计算设备(也可以称为计算机系统),该计算机系统包括多个计算机1200,每个计算机1200的结构与前述图5中的计算设备500的结构相同或相似,在此不再赘述。
上述每个计算机1200间通过通信网络建立通信通路。每个计算机1200上运行前述配置模块101、算子编排模块102、多元搜索模块103、资源调度模块105和反馈模块106中的任意一个或多个。任一计算机1200可以为云数据中心中的计算机(例如:服务器),或边缘计算机,或终端计算设备。
上述各个附图对应的流程的描述各有侧重,某个流程中没有详述的部分,可以参见其他流程的相关描述。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。 当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。实现模型优化的计算机程序产品包括一个或多个进行模型优化的计算机指令,在计算机上加载和执行这些计算机程序指令时,全部或部分地产生按照本申请实施例图6和图10所述的流程或功能,或者,全部或部分的产生按照本申请实施例图11和图10所述的流程或功能。
所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如:同轴电缆、光纤、数据用户线(digital subscriber line,DSL))或无线(例如:红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如:软盘、硬盘、磁带)、光介质(例如:数字通用光盘(digital versatile disc,DVD))、或者半导体介质(例如:固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述为本申请提供的实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (34)

  1. 一种模型优化方法,其特征在于,所述方法包括:
    获取原模型和搜索配置信息,所述搜索配置信息包括多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;
    根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,每个被编排的搜索算子与一个搜索项对应,搜索算子表示执行对应的搜索项使用的算法;
    根据所述组合算子,对所述原模型进行优化,获得优化后的模型。
  2. 根据权利要求1所述的方法,其特征在于,所述多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,包括:
    根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;
    根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述搜索配置信息还包括:搜索项信息和搜索模式,其中,每个搜索项对应各自的搜索项信息,所述搜索模式用于表示对所述原模型进行优化时遵循的原则。
  5. 根据权利要求4所述的方法,其特征在于,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述搜索配置信息由用户在图形用户界面GUI上进行输入或选择得到。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,在根据所述组合算子,对所述原模型进行优化之前,所述方法还包括:
    根据所述组合算子,预估对所述原模型进行优化的资源消耗;
    根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    获取评价指标,所述评价指标表示对所述原模型进行优化后应达到的性能目标;
    所述根据所述组合算子,对所述原模型进行优化,获得优化后的模型,包括:
    根据所述组合算子在综合搜索空间进行优化信息搜索,根据所述优化信息对所述原模型进行优化,获得优化后的模型,所述优化后的模型的性能满足所述评价指标。
  9. 根据权利要求8所述的方法,其特征在于,所述评价指标包括以下指标中的任意一种或多种:模型的准确率、模型的损失、模型的精确度、模型的召回率。
  10. 一种模型优化方法,其特征在于,所述方法包括:
    向用户提供配置界面,所述配置界面包括供所述用户选择的搜索项列表;
    获取原模型和搜索配置信息,所述搜索配置信息包括所述用户在所述搜索项列表中选择的多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;
    根据所述搜索配置信息对所述原模型进行优化;
    向所述用户提供优化后的模型。
  11. 根据权利要求10所述的方法,其特征在于,所述用户在所述搜索项列表中选择的多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
  12. 根据权利要求10或11所述的方法,其特征在于,所述配置界面还包括搜索项信息配置页和搜索模式配置页,所述搜索配置信息还包括所述用户在所述配置界面配置的搜索项信息和搜索模式。
  13. 根据权利要求12所述的方法,其特征在于,所述搜索模式用于表示对所述原模型进行优化时遵循的原则,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
  14. 根据权利要求10-13任一项所述的方法,其特征在于,所述根据所述搜索配置信息对所述原模型进行优化,包括:
    根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子;
    根据所述组合算子,对所述原模型进行优化。
  15. 根据权利要求14所述的方法,其特征在于,所述根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,包括:
    根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;
    根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
  16. 根据权利要求14或15所述的方法,其特征在于,在根据所述组合算子,对所述原模型进行优化之前,所述方法还包括:
    根据所述组合算子,预估对所述原模型进行优化的资源消耗;
    根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
  17. 一种模型优化装置,其特征在于,所述装置包括:
    配置模块,用于获取用户的原模型和搜索配置信息,所述搜索配置信息包括多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;
    算子编排模块,用于根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子,每个被编排的搜索算子与一个搜索项对应,搜索算子表示执行对应的搜索项使用的算法;
    多元搜索模块,用于根据所述组合算子,对所述原模型进行优化,获得优化后的模型。
  18. 根据权利要求17所述的装置,其特征在于,所述多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
  19. 根据权利要求17或18所述的装置,其特征在于,所述算子编排模块具体用于:
    根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数以及每次运算时的综合搜索空间;
    根据所述多个搜索算子的运算顺序、运算次数以及每次运算时的综合搜索空间,生成所述组合算子。
  20. 根据权利要求17-19任一项所述的装置,其特征在于,所述搜索配置信息还包括:搜索项信息和搜索模式,其中,每个搜索项对应各自的搜索项信息,所述搜索模式用于表示对所述原模型进行优化时遵循的原则。
  21. 根据权利要求20所述的装置,其特征在于,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
  22. 根据权利要求17-21任一项所述的装置,其特征在于,所述搜索配置信息由用户在图形用户界面GUI上进行输入或选择得到。
  23. 根据权利要求17-22任一项所述的装置,其特征在于,所述装置还包括:资源管理模块,所述资源管理模块用于:
    根据所述组合算子,预估对所述原模型进行优化的资源消耗;
    根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
  24. 根据权利要求17-23任一项所述的装置,其特征在于,
    所述配置模块,还用于获取评价指标,所述评价指标表示对所述原模型进行优化后应达到的性能目标;
    所述多元搜索模块,具体还用于根据所述组合算子在综合搜索空间进行优化信息搜索,根据所述优化信息对所述原模型进行优化,获得优化后的模型,所述优化后的的模型的性能满足所述评价指标。
  25. 根据权利要求24所述的装置,其特征在于,所述评价指标包括以下指标中的任意一种或多种:模型的准确率、模型的损失、模型的精确度、模型的召回率。
  26. 一种模型优化装置,其特征在于,所述装置包括:
    配置模块,用于向用户提供配置界面,所述配置界面包括供所述用户选择的搜索项列表;获取原模型和搜索配置信息,所述搜索配置信息包括所述用户在所述搜索项列表中选择的多个搜索项,不同搜索项表示对所述原模型进行优化信息搜索的不同搜索类别;
    优化模块,用于根据所述搜索配置信息对所述原模型进行优化;
    反馈模块,用于向所述用户提供优化后的模型。
  27. 根据权利要求26所述的装置,其特征在于,所述用户在所述搜索项列表中选择的多个搜索项包括超参搜索、网络架构搜索、数据增强搜索、损失函数搜索、优化器搜索、模型压缩策略搜索中的至少两种。
  28. 根据权利要求26或27所述的装置,其特征在于,所述配置界面还包括搜索项信息配置页和搜索模式配置页,所述搜索配置信息还包括所述用户在所述配置界面配置的搜索项信息和搜索模式。
  29. 根据权利要求28所述的装置,其特征在于,所述搜索模式用于表示对所述原模型进行优化时遵循的原则,所述搜索模式包括精度模式、速度模式、经济模式和资源模式中的任一种。
  30. 根据权利要求26-29任一项所述的装置,其特征在于,所述优化模块具体用于:
    根据所述搜索配置信息,对多个搜索算子进行编排,得到组合算子;
    根据所述组合算子,对所述原模型进行优化。
  31. 根据权利要求30所述的装置,其特征在于,所述优化模块具体用于:
    根据所述搜索配置信息,确定所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间;
    根据所述多个搜索算子的运算顺序、运算次数或每次运算时的综合搜索空间,生成所述组合算子。
  32. 根据权利要求30或31所述的装置,其特征在于,所述装置还包括:资源管理模块,所述资源管理模块用于:
    根据所述组合算子,预估对所述原模型进行优化的资源消耗;
    根据所述资源消耗,为执行对所述原模型进行优化的操作进行资源调度。
  33. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序代码,当所述计算机程序代码被计算设备执行时,所述计算设备执行上述权利要求1至9或10至16中任一项所述的方法。
  34. 一种计算设备,其特征在于,所述计算设备包括处理器和存储器,所述存储器用于存储一组计算机指令,当所述处理器执行所述一组计算机指令时,所述计算设备执行上述权利要求1至9或10至16中任一项所述的方法。
PCT/CN2020/097973 2019-09-17 2020-06-24 模型优化方法、装置、存储介质及设备 WO2021051920A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20864854.3A EP4012630A4 (en) 2019-09-17 2020-06-24 MODEL OPTIMIZATION METHOD AND APPARATUS, STORAGE MEDIA AND DEVICE
US17/694,970 US12032571B2 (en) 2019-09-17 2022-03-15 AI model optimization method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910877331.6 2019-09-17
CN201910877331 2019-09-17
CN202010423371.6 2020-05-19
CN202010423371.6A CN112529207A (zh) 2019-09-17 2020-05-19 模型优化方法、装置、存储介质及设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/694,970 Continuation US12032571B2 (en) 2019-09-17 2022-03-15 AI model optimization method and apparatus

Publications (1)

Publication Number Publication Date
WO2021051920A1 true WO2021051920A1 (zh) 2021-03-25

Family

ID=74883930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097973 WO2021051920A1 (zh) 2019-09-17 2020-06-24 模型优化方法、装置、存储介质及设备

Country Status (3)

Country Link
US (1) US12032571B2 (zh)
EP (1) EP4012630A4 (zh)
WO (1) WO2021051920A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023083058A1 (zh) * 2021-11-12 2023-05-19 中兴通讯股份有限公司 调度参数的调整方法、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021051920A1 (zh) * 2019-09-17 2021-03-25 华为技术有限公司 模型优化方法、装置、存储介质及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160275413A1 (en) * 2015-03-20 2016-09-22 Xingtian Shi Model vector generation for machine learning algorithms
CN106779087A (zh) * 2016-11-30 2017-05-31 福建亿榕信息技术有限公司 一种通用机器学习数据分析平台
CN108399451A (zh) * 2018-02-05 2018-08-14 西北工业大学 一种结合遗传算法的混合粒子群优化算法
CN109447277A (zh) * 2018-10-19 2019-03-08 厦门渊亭信息科技有限公司 一种通用的机器学习超参黑盒优化方法及系统
CN110110862A (zh) * 2019-05-10 2019-08-09 电子科技大学 一种基于适应性模型的超参数优化方法

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070179917A1 (en) * 2006-01-31 2007-08-02 Caterpillar Inc. Intelligent design optimization method and system
US8775332B1 (en) * 2013-06-13 2014-07-08 InsideSales.com, Inc. Adaptive user interfaces
US9443192B1 (en) * 2015-08-30 2016-09-13 Jasmin Cosic Universal artificial intelligence engine for autonomous computing devices and software applications
US10733532B2 (en) * 2016-01-27 2020-08-04 Bonsai AI, Inc. Multiple user interfaces of an artificial intelligence system to accommodate different types of users solving different types of problems with artificial intelligence
KR102532658B1 (ko) 2016-10-28 2023-05-15 구글 엘엘씨 신경 아키텍처 검색
US11138503B2 (en) * 2017-03-22 2021-10-05 Larsx Continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems
US10628527B2 (en) * 2018-04-26 2020-04-21 Microsoft Technology Licensing, Llc Automatically cross-linking application programming interfaces
CN110020667A (zh) 2019-02-21 2019-07-16 广州视源电子科技股份有限公司 神经网络结构的搜索方法、系统、存储介质以及设备
US11687839B2 (en) * 2019-03-14 2023-06-27 Actapio, Inc. System and method for generating and optimizing artificial intelligence models
US20200387818A1 (en) * 2019-06-07 2020-12-10 Aspen Technology, Inc. Asset Optimization Using Integrated Modeling, Optimization, and Artificial Intelligence
US11694124B2 (en) * 2019-06-14 2023-07-04 Accenture Global Solutions Limited Artificial intelligence (AI) based predictions and recommendations for equipment
WO2021051920A1 (zh) * 2019-09-17 2021-03-25 华为技术有限公司 模型优化方法、装置、存储介质及设备
US11893537B2 (en) * 2020-12-08 2024-02-06 Aon Risk Services, Inc. Of Maryland Linguistic analysis of seed documents and peer groups
US20230275715A1 (en) * 2022-02-28 2023-08-31 Qualcomm Incorporated Inter-slot demodulation reference signal patterns

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160275413A1 (en) * 2015-03-20 2016-09-22 Xingtian Shi Model vector generation for machine learning algorithms
CN106779087A (zh) * 2016-11-30 2017-05-31 福建亿榕信息技术有限公司 一种通用机器学习数据分析平台
CN108399451A (zh) * 2018-02-05 2018-08-14 西北工业大学 一种结合遗传算法的混合粒子群优化算法
CN109447277A (zh) * 2018-10-19 2019-03-08 厦门渊亭信息科技有限公司 一种通用的机器学习超参黑盒优化方法及系统
CN110110862A (zh) * 2019-05-10 2019-08-09 电子科技大学 一种基于适应性模型的超参数优化方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4012630A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023083058A1 (zh) * 2021-11-12 2023-05-19 中兴通讯股份有限公司 调度参数的调整方法、设备及存储介质

Also Published As

Publication number Publication date
EP4012630A1 (en) 2022-06-15
US20220197901A1 (en) 2022-06-23
US12032571B2 (en) 2024-07-09
EP4012630A4 (en) 2022-11-23

Similar Documents

Publication Publication Date Title
US11762690B2 (en) Data processing method and related products
US20210326729A1 (en) Recommendation Model Training Method and Related Apparatus
US20220343172A1 (en) Dynamic, automated fulfillment of computer-based resource request provisioning using deep reinforcement learning
CN107003906B (zh) 云计算技术部件的类型到类型分析
US20230206132A1 (en) Method and Apparatus for Training AI Model, Computing Device, and Storage Medium
US20200349161A1 (en) Learned resource consumption model for optimizing big data queries
US20180113746A1 (en) Software service execution apparatus, system, & method
US20200034750A1 (en) Generating artificial training data for machine-learning
CN113821332B (zh) 自动机器学习系统效能调优方法、装置、设备及介质
US12032571B2 (en) AI model optimization method and apparatus
US11544502B2 (en) Management of indexed data to improve content retrieval processing
EP4280107A1 (en) Data processing method and apparatus, device, and medium
WO2022252694A1 (zh) 神经网络优化方法及其装置
US20230334325A1 (en) Model Training Method and Apparatus, Storage Medium, and Device
Markov et al. Looper: An end-to-end ml platform for product decisions
US10891514B2 (en) Image classification pipeline
WO2020237535A1 (en) Systems, methods, and computer readable mediums for controlling federation of automated agents
US12050979B2 (en) Budgeted neural network architecture search system and method
CN112529207A (zh) 模型优化方法、装置、存储介质及设备
Sagaama et al. Automatic parameter tuning for big data pipelines with deep reinforcement learning
Hodak et al. Benchmarking AI inference: where we are in 2020
CN110866605A (zh) 数据模型训练方法、装置、电子设备及可读介质
US11941421B1 (en) Evaluating and scaling a collection of isolated execution environments at a particular geographic location
WO2024065535A1 (en) Methods, apparatus, and articles of manufacture to generate hardware-aware machine learning model architectures for multiple domains without training
CN116415044A (zh) 指针分析算法推荐的方法、装置及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20864854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020864854

Country of ref document: EP

Effective date: 20220311