EP4293540A1 - Model recommendation method and apparatus, and computing device - Google Patents

Model recommendation method and apparatus, and computing device

Info

Publication number
EP4293540A1
Authority
EP
European Patent Office
Prior art keywords
application
model
computing device
hardware parameter
proxy
Prior art date
Legal status
Pending
Application number
EP22773923.2A
Other languages
German (de)
English (en)
Inventor
Fuchun WEI
Yongzhong Wang
Zhongqing OUYANG
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of EP4293540A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G06F17/153 Multidimensional correlation or convolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Definitions

  • a neural network model search algorithm can be used to automatically search for and recommend a neural network model suitable for a current application scenario based on training data. The accuracy of the recommended model may exceed that of a manually selected neural network model and thus meet a user requirement.
  • however, the search takes a long time, a large quantity of computing resources is consumed in the search process, and the selected model may not be fully applicable to the current application scenario. Therefore, how to provide an efficient model recommendation method that is aware of the computing capability of the device used by the user is a technical problem that urgently needs to be resolved.
  • the model recommendation method is run on the computing device, so that the hardware parameter of the computing device running the application may be obtained directly.
  • connection structure includes a branch structure and an input structure.
  • the hardware parameter of the computing device includes a type of a chip included in the computing device, a quantity of cores of the chip, or a clock rate of the chip.
  • this application provides a model recommendation apparatus.
  • the model recommendation apparatus may be located on a computing device or may be an independent device.
  • the model recommendation apparatus includes modules configured to perform the model recommendation method according to any one of the first aspect or the possible implementations of the first aspect.
  • this application provides a computing device.
  • the computing device includes a processor and a memory.
  • the memory is configured to store computer-executable instructions, and when the computing device runs, the processor executes the computer-executable instructions in the memory to perform the operation steps of the method according to any one of the first aspect or the possible implementations of the first aspect by using a hardware resource in the computing device.
  • the computing device 110 is configured to receive data generated in an application scenario and perform various types of calculation processing on the data, and may further include a data collection apparatus 111, a processor 112, and a memory 113.
  • the data collection apparatus 111, the processor 112, and the memory 113 are connected through a bus.
  • the bus may be a data bus, or may be a power bus, a control bus, a status signal bus, or the like.
  • the bus may alternatively be a bus of another type for implementing a connection between components within a device.
  • the computing device 110 may further include a graphics processing unit (graphics processing unit, GPU), and the application 114 may use the GPU to run a particular model.
  • the computing device 110 is an electronic device having a computing capability, and may be a terminal computing device (for example, a notebook computer, a mobile phone, a personal desktop computer, or a community gate), may be a server, or may be a server cluster including several servers, or a cloud computing service center. This is not limited in embodiments of this application. It should be noted that the data collection apparatus 111 may be a hardware component of the computing device 110 or an independent device. As shown in FIG. 1 , the data collection apparatus 111 is integrated within the computing device 110, and in other embodiments, the data collection apparatus 111 may be located in an independent device.
  • the model recommendation apparatus 120 is configured to recommend a model that meets a requirement of the application scenario and the computing capability of the computing device for the application 114, and includes a processor 121 and a memory 122.
  • the memory 122 may be configured to store basic operators for forming different models, may include a read-only memory and a random access memory, and provides instructions and data to the processor 121.
  • the memory 122 may alternatively include a non-volatile random access memory.
  • the memory 122 may further store information about a type of the device.
  • the memory 122 may be a volatile memory or a non-volatile memory, or may include both the volatile memory and the non-volatile memory.
  • the non-volatile memory may be a read-only memory (read-only memory, ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (random access memory, RAM) used as an external cache.
  • many forms of RAMs may be used, such as a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and a direct rambus random access memory (direct rambus RAM, DR RAM).
  • the processor 121 is configured to construct different models from different basic operators and to select, from these models, a model suitable for the application to recommend to the application 114.
  • the processor 121 may be a CPU, another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device, a discrete gate or a transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the model recommendation apparatus 120 may also be a terminal computing device, a server, or a server cluster including several servers, or a cloud computing service center.
  • a wired or wireless transmission mode may be used between the model recommendation apparatus 120 and the computing device 110.
  • the wired transmission mode includes data transmission over Ethernet, an optical fiber, and the like, and the wireless transmission mode includes a wireless fidelity (Wi-Fi) transmission mode, a Bluetooth transmission mode, an infrared transmission mode, and the like.
  • the model recommendation apparatus 120 may also be a hardware component on a hardware apparatus or a set of software apparatus running on a hardware apparatus.
  • the processor 121 and the memory 122 may be a processor and a memory of the hardware apparatus on which the model recommendation apparatus is located.
  • the hardware apparatus may also be a terminal computing device, a server, or a server cluster including several servers, or a cloud computing service center, and in particular, may be the computing device 110 in FIG. 1 .
  • the processor of the hardware apparatus on which the model recommendation apparatus 120 is located is also referred to as the processor of the model recommendation apparatus 120
  • the memory of the hardware apparatus in which the model recommendation apparatus 120 is located is also referred to as the memory of the model recommendation apparatus 120.
  • the processor 121 and the memory 122 forming the model recommendation apparatus 120 may be separately deployed in different systems or hardware apparatuses.
  • FIG. 2 is a schematic flowchart of the model recommendation method according to this application. The method may be performed by the model recommendation apparatus 120 shown in FIG. 1 , and as shown in the figure, the method specifically includes the following steps.
  • the model recommendation apparatus 120 obtains a proxy dataset of an application.
  • the proxy dataset includes proxy input data and a label corresponding to the proxy input data.
  • the proxy input data is a part of data obtained by the application from the data collection apparatus 111 in FIG. 1
  • the label is a real result corresponding to the part of the input data.
  • a data processing capability of the application may be evaluated by inputting the proxy input data to the application and comparing a calculation result obtained through model calculation by the application with the label corresponding to the proxy input data. A smaller difference between the calculation result obtained by the application and the label indicates higher accuracy of processing the data by the application. Shorter time taken by the application to obtain the calculation result indicates higher efficiency of processing the data by the application.
  • the model recommendation apparatus 120 may obtain the proxy input data from the data collection apparatus 111 used by the application 114, and label the proxy input data.
  • the model recommendation apparatus may further use a proxy dataset preset by a user.
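The evaluation described above can be sketched as a small routine: run a candidate model over the proxy input data, compare its outputs with the labels, and time the run. The function name, the toy data, and the mean-absolute-error metric below are illustrative assumptions, not part of the claimed method.

```python
import time

def evaluate_on_proxy(model_fn, proxy_inputs, labels):
    """Score a candidate model on the proxy dataset: a smaller error means
    higher accuracy, and a shorter elapsed time means higher efficiency,
    matching the two criteria described in the text."""
    start = time.perf_counter()
    outputs = [model_fn(x) for x in proxy_inputs]
    elapsed = time.perf_counter() - start
    # Mean absolute difference between model outputs and ground-truth labels.
    error = sum(abs(o - y) for o, y in zip(outputs, labels)) / len(labels)
    return error, elapsed

# Toy model that doubles its input; the proxy labels are the true doubled values.
err, t = evaluate_on_proxy(lambda x: 2 * x, [1, 2, 3], [2, 4, 6])
```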
  • the computing device 110 or the user may send a hardware parameter of the computing device to a cloud data center.
  • the hardware parameter includes a type of a processor used when the application 114 is run, a quantity of cores of the processor, a clock rate of the processor, or a type of the GPU.
  • the data center may obtain hardware parameters of all model recommendation apparatuses in advance, and then specify, based on a hardware parameter sent by the computing device 110 or the user, model recommendation apparatuses with a same hardware parameter to perform the model recommendation method.
  • the data center may further construct, by using a virtualization technology based on the hardware parameter of the computing device sent by the user, a virtual computing device with a same hardware parameter, and run the model recommendation apparatus on the virtual computing device. Then, the computing device 110 or the user may send the proxy dataset to the model recommendation apparatus.
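As a rough illustration of the hardware parameters mentioned above (chip type, core count, clock rate), the following sketch shows how a computing device might report them. The field names are assumptions; the Python standard library exposes no portable clock-rate query, so that field is deliberately left unfilled rather than guessed.

```python
import os
import platform

def collect_hardware_parameters():
    """Gather the kind of hardware parameters the text mentions so that they
    can be sent to a data center for matching or virtualization."""
    return {
        "chip_type": platform.machine() or "unknown",
        "cores": os.cpu_count() or 1,
        # No portable standard-library API reports the clock rate, so this
        # stays an explicitly unknown placeholder in the sketch.
        "clock_rate_mhz": None,
    }

params = collect_hardware_parameters()
```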
  • the model recommendation apparatus determines a set of basic operations suitable for the application.
  • One basic operation may directly include one basic operator.
  • a pooling operation may include a pooling operator with a same step.
  • a 2-step pooling operation includes a 2-step pooling operator, which outputs one piece of data for every two pieces of input data.
  • one basic operation may be formed by combining a plurality of basic operators.
  • a convolution operation may include a convolution operator with a same size of a convolution kernel, a rectified linear unit (rectified linear unit, ReLU) operator, and a batch normalization (batch normalization, BN) operator.
  • FIG. 3 is a schematic structural diagram of a 7*7 convolution operation according to an embodiment of this application. As shown in the figure, the 7*7 convolution operation is formed by linearly connecting a ReLU operator, a 7*7 convolution operator, and a BN operator.
  • Basic operations include but are not limited to: a 1*1 convolution operation, a 3*3 convolution operation, a 5*5 convolution operation, a 7*7 convolution operation, a 2-step pooling operation, and a 3-step pooling operation.
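The candidate set listed above can be represented as a simple registry from which a device-specific subset is drawn. The tuple encoding and the `low_memory` constraint are purely illustrative assumptions; the text only requires that the set of basic operations be chosen based on the hardware parameter.

```python
# Hypothetical registry of the candidate basic operations listed above,
# encoded as (kind, size-or-step) tuples for illustration.
CANDIDATE_OPERATIONS = {
    "conv1x1": ("convolution", 1),
    "conv3x3": ("convolution", 3),
    "conv5x5": ("convolution", 5),
    "conv7x7": ("convolution", 7),
    "pool_step2": ("pooling", 2),
    "pool_step3": ("pooling", 3),
}

def operations_for_device(hardware_parameter):
    """Filter the candidate set by a hypothetical device constraint: here a
    low-memory chip excludes the largest convolution kernels."""
    if hardware_parameter.get("low_memory"):
        return {k: v for k, v in CANDIDATE_OPERATIONS.items() if v[1] <= 5}
    return dict(CANDIDATE_OPERATIONS)
```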
  • the convolution operator in the convolution operation may be implemented in different manners.
  • the 3*3 convolution operator may be a 3*3 standard convolution operator, or may be a 3*3 combined convolution operator formed by combining a 3*1 convolution operator and a 1*3 convolution operator. Both the two operators can implement a 3*3 convolution function.
  • the 3*3*3 convolution operator may be a 3*3*3 standard convolution operator, or may be a 3*3*3 depthwise separable convolution operator formed by combining three 3*3*1 convolution operators and one 1*1*3 convolution operator. Both the two operators can implement a 3*3*3 convolution function.
  • FIG. 4(a) to FIG. 4(d) are schematic diagrams of convolution operators in different implementations according to an embodiment of this application.
  • in FIG. 4(a), after being processed by the middle 3*3 standard convolution operator, the 7*7 to-be-processed data on the left is transformed into the 5*5 data on the right.
  • to-be-processed data may alternatively be processed by a 3*3 combined convolution operator, that is, the to-be-processed data is first processed by a 3*1 convolution operator and then processed by a 1*3 convolution operator, to obtain a same processing result.
  • to-be-processed data may alternatively be processed by a 3*3*3 depthwise separable convolution operator, that is, the to-be-processed data is first processed by three 3*3*1 convolution operators and then processed by a 1*1*3 convolution operator, to obtain a same processing result. Therefore, one convolution operation may be classified into three different convolution operations: a standard convolution operation, a combined convolution operation, and a depthwise separable convolution operation; and the three convolution operations can implement a same convolution function.
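The equivalence between a standard 3*3 convolution and the 3*1-then-1*3 combined operator can be checked numerically. One caveat worth making explicit: the exact equality holds when the 3*3 kernel is separable, i.e. the outer product of a 3*1 and a 1*3 kernel, which is the assumption behind the combined operator. The helper below uses plain "valid" cross-correlation for brevity.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation; enough for the equivalence demo."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7))   # 7*7 to-be-processed data
u = rng.standard_normal((3, 1))   # 3*1 kernel
v = rng.standard_normal((1, 3))   # 1*3 kernel
k33 = u @ v                       # the equivalent separable 3*3 kernel

standard = conv2d_valid(x, k33)                   # one 3*3 standard convolution
combined = conv2d_valid(conv2d_valid(x, u), v)    # 3*1 followed by 1*3
```

Both paths transform the 7*7 input into 5*5 output, matching the figure description above.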
  • pooling operator in the pooling operation may also be implemented in different manners.
  • a 4-step pooling operator may be a 4-step average pooling operator, and an implementation is that a mean value of every four pieces of data is used as a sample value for output
  • a 4-step pooling operator may be a 4-step maximum pooling operator, and an implementation is that a maximum value of every four pieces of data is used as a sample value for output. Therefore, one pooling operation may be classified into an average pooling operation and a maximum pooling operation, and the two operations can implement a same sampling function.
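The two pooling variants can be sketched in a few lines: each window of `step` values is reduced either to its mean (average pooling) or to its maximum (maximum pooling). The flat-sequence representation is an illustrative simplification of real tensor pooling.

```python
def pool(data, step, mode="average"):
    """Step-`step` pooling over a flat sequence: each window of `step`
    values is reduced to one sample value, as described above."""
    windows = [data[i:i + step] for i in range(0, len(data) - step + 1, step)]
    reduce = (lambda w: sum(w) / len(w)) if mode == "average" else max
    return [reduce(w) for w in windows]

avg = pool([1, 3, 2, 6], 2, "average")   # mean of each pair
mx = pool([1, 3, 2, 6], 2, "max")        # maximum of each pair
```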
  • the model recommendation apparatus determines a connection structure suitable for the application.
  • branch structures may be identified by quantities of operation nodes and quantities of branches in the branch structures, and are denoted as m-n branch structures, where m represents the quantity of operation nodes, and n represents the quantity of branches.
  • the quantity of branches may be calculated based on a sum (Q+W+E) of a quantity Q of connections of output nodes of another cell connected to a cell, a maximum quantity W of connections of an operation node in the cell, and a quantity E of connections of an output node.
  • FIG. 5 is a schematic diagram of different branch structures according to this application. For example, one cell includes four operation nodes. As shown in FIG. 5, a solid line box represents an output node, a circle represents an operation node, and a dashed line box represents an output node of another cell connected to the cell. For ease of description, the dashed line box is also referred to as an input node of the cell in FIG. 5. In (a) in FIG. 5, a cell has one input node that is connected to two operation nodes, and a value of Q is 2.
  • each operation node in the cell is connected to only one operation node, and a maximum quantity W of connections is 1.
  • the output node is connected to two operation nodes, and a value of E is 2.
  • a quantity of branches of the cell in (a) in FIG. 5 is 5, and a branch structure is 4-5.
  • in (b) in FIG. 5, a cell has two input nodes that are respectively connected to two operation nodes and one operation node, and a value of Q is 3.
  • a value of W is 1, and a value of E is 2.
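The branch count Q + W + E can be computed from a cell description; the example below reproduces cell (a) in FIG. 5 (Q = 2, W = 1, E = 2, giving the 4-5 branch structure). The argument encoding is an assumption made for illustration.

```python
def branch_count(input_connections, node_connections, output_connections):
    """Q + W + E as defined above: Q is the total connections from other
    cells' output nodes, W is the maximum connections of any operation
    node in the cell, and E is the connections of the output node."""
    q = sum(input_connections)
    w = max(node_connections)
    e = output_connections
    return q + w + e

# Cell (a) in FIG. 5: one input node feeding two operation nodes (Q = 2),
# each of the four operation nodes connected to one node (W = 1), and an
# output node with two connections (E = 2): 5 branches, the 4-5 structure.
branches_a = branch_count([2], [1, 1, 1, 1], 2)
```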
  • an input of each cell is related to the three preceding cells connected to the cell, and this is denoted as a three-input structure.
  • when an input of a cell is related to a plurality of cells, for example, in (b) in FIG. 6, to calculate the current cell, the outputs of the two preceding cells need to be additionally transferred to a memory of a computing unit.
  • a memory architecture used by some computing devices 110 during design is sensitive to the delay caused by multiple memory transfers, and these computing devices 110 are not suitable for running a model with a multi-input structure. Therefore, during actual application, an input structure that has affinity with the computing device running the application should be selected to form the model.
  • FIG. 7 is a schematic diagram of a pareto solution set according to an embodiment of this application.
  • the values of f1 and f2 of A are both less than those of D, indicating that A dominates D; the value of f2 of A is less than that of B, but the value of f1 of A is greater than that of B, indicating that A and B are in a non-dominant relationship.
  • the three points A, B, and C are all pareto solutions, belong to the pareto solution set, and are mutually non-dominant.
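The dominance relations in FIG. 7 can be made concrete with a small pareto-front routine over two objectives that are both minimized. The (f1, f2) values below are invented to match the relations described: A dominates D, while A, B, and C are mutually non-dominated.

```python
def pareto_front(points):
    """Return the names of non-dominated points when every objective is
    minimized: p dominates q if p <= q in every objective and p < q in at
    least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return {name for name, p in points.items()
            if not any(dominates(q, p)
                       for other, q in points.items() if other != name)}

# Hypothetical (f1, f2) values consistent with the relations in FIG. 7.
pts = {"A": (2.0, 1.0), "B": (1.0, 2.0), "C": (3.0, 0.5), "D": (3.0, 2.0)}
front = pareto_front(pts)
```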
  • a model applicable to the computing device running the application and an application scenario in which the application is used may be recommended for the application, to increase a data processing speed of the application and improve accuracy of an output result of the application.
  • FIG. 8 is a schematic flowchart of determining a set of basic operations according to an embodiment of this application. A process may be performed by the model recommendation apparatus 120 shown in FIG. 1 . As shown in the figure, a determining method is specifically as follows:
  • FIG. 9 is a schematic flowchart of a method for scoring a connection structure of basic operators according to an embodiment of this application. The method may be performed by the model recommendation apparatus 120 shown in FIG. 1 . As shown in the figure, a specific process is as follows:
  • different levels may be set based on the processing time. For example, a branch structure whose processing time is less than T1 may be set to a level 1, a branch structure whose processing time is between T1 and T2 may be set to a level 2, and a branch structure whose processing time is greater than T2 may be set to a level 3.
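The level assignment described above is a simple threshold mapping; T1 and T2 stand for whatever thresholds the user or an empirical calibration supplies, and the three-level split mirrors the example in the text.

```python
def latency_level(processing_time, t1, t2):
    """Map a branch structure's measured processing time to a discrete
    level using two thresholds T1 < T2, as in the example above."""
    if processing_time < t1:
        return 1          # faster than T1
    if processing_time <= t2:
        return 2          # between T1 and T2
    return 3              # slower than T2
```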
  • S906. Determine a maximum quantity of cells in the input structure to which an input of one cell may be related.
  • the value may be specified by the user or may be set based on an empirical value.
  • S907. Determine different to-be-scored input structures based on the maximum value. Similar to S906, a same quantity of cells and cells of a same structure may be used in all to-be-scored input structures. The quantity of cells may be any number greater than the maximum value in S906, and may generally be set to the maximum value plus 1.
  • FIG. 10 is a schematic flowchart of an evolution based model search algorithm according to this application.
  • a process may be performed by the model recommendation apparatus 120 shown in FIG. 1 .
  • a specific method is as follows: S1001. Initialize a population.
  • the population is a collective name of a plurality of models.
  • the model search algorithm may be used to randomly combine different types of basic operations into P models by using different connection structures, and the P models are used as a population.
  • a quantity of cells that form each model is the maximum value in S906, and a quantity of nodes that form each cell is the quantity of nodes in S902.
  • the quantity P of models in the initialized population may be set based on an empirical value.
  • input data in the proxy dataset is divided into two parts, a training set and a validation set; a part of the data in the training set and the labels corresponding to that part are used as input data to train each model in the population, to obtain a trained population.
  • an objective function f(Amin, Lmin, Rmax) is used, where A, which is to be minimized, represents a difference between the output data obtained by the model and the actual data; L, which is to be minimized, represents the calculation time for the model to complete data processing; and R, which is to be maximized, represents the score of the connection structure of the model.
  • data in the validation set may be input to each model in the first population, to obtain output data, and time for completing data processing by each model is recorded.
  • the value of A is equal to a difference between a label corresponding to the output data and a label corresponding to the data in the validation set
  • the value of L is equal to the time for completing data processing by the model.
  • a value of R of the model may be obtained based on the scores of the branch structure and the input structure obtained in step S203.
  • the value of R is equal to the sum of the scores of the branch structures of the cells of the model multiplied by the score of the input structure.
  • FIG. 11 is a schematic diagram of calculating a value of R by a model according to this application. It is assumed that scores of a 3-3 branch structure and a 3-5 branch structure are respectively M1 and M2, and scores of a one-input structure and a two-input structure are respectively N1 and N2.
  • a model a and a model b each include three cells, each cell includes three nodes, and each node includes at least one identical or different operation.
  • branch structures of the cells in the model a are a 3-3 branch structure, a 3-5 branch structure, and a 3-3 branch structure, and an input structure is a one-input structure
  • a value of R of the model a is (M1+M2+M1)*N1.
  • branch structures of the cells in the model b are a 3-3 branch structure, a 3-5 branch structure, and a 3-5 branch structure, and an input structure is a two-input structure
  • a value of R of the model b is (M1+M2+M2)*N2.
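The R values of models a and b can be reproduced with the sum-then-multiply rule used above. The numeric scores M1, M2, N1, and N2 are hypothetical placeholders, since the text leaves the concrete score values open.

```python
def structure_score(branch_scores, input_score):
    """R = (sum of the branch-structure scores of all cells) multiplied by
    the input-structure score, matching the worked examples above."""
    return sum(branch_scores) * input_score

# Hypothetical scores for the 3-3 and 3-5 branch structures (M1, M2) and
# the one-input and two-input structures (N1, N2).
M1, M2, N1, N2 = 4.0, 3.0, 2.0, 1.0
r_a = structure_score([M1, M2, M1], N1)   # model a: (M1+M2+M1)*N1
r_b = structure_score([M1, M2, M2], N2)   # model b: (M1+M2+M2)*N2
```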
  • Values (A, L, and R) of three parameters of each model in the first population are calculated, and a model corresponding to a pareto solution of the objective function f is selected.
  • FIG. 12A and FIG. 12B are a schematic diagram of a mutation operation according to an embodiment. As shown in the figure, the mutation operation is not performed on the model a, and models b, c, and d may all be models obtained by performing the mutation operation on the model a.
  • the model obtained through the mutation operation is retrained by using the part of data in the training set in S1001, the trained model is added to the first population, and the model that has existed in the first population for the longest time is deleted, to generate the second population.
  • a quantity of deleted models is equal to a quantity of newly added models.
  • after the iteration stops, the models having an optimal pareto solution in the second population are obtained according to the objective function, and any one of them may be selected as the recommended model.
  • all models having an optimal pareto solution and values of (A, L, and R) of the models may be sent to an application, and the application determines a model as the recommended model.
  • the models having the optimal pareto solution in the second population may continue to be retrained by using the data in all the training sets; then the data in the validation set is input to each retrained model to obtain output data, and the time for each model to complete data processing is recorded. Then, models with the optimal pareto solution are obtained by using the objective function again, and any one of them is selected as the recommended model.
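The overall loop, initializing a population, mutating a selected parent, adding the retrained child, and deleting the longest-lived model, resembles aging evolution and can be sketched as follows. The scalar fitness, the tournament selection of size 2, and the integer "models" are deliberate simplifications of the (A, L, R) multi-objective setup described above.

```python
import random
from collections import deque

def evolve(initial_population, fitness, mutate, iterations,
           rng=random.Random(0)):
    """Aging-evolution sketch of the search loop: sample a parent, mutate
    it, append the child, and remove the oldest member of the population."""
    population = deque(initial_population)
    for _ in range(iterations):
        # Tournament of size 2 stands in for pareto-based parent selection.
        parent = max(rng.sample(list(population), 2), key=fitness)
        population.append(mutate(parent, rng))  # retrained child joins
        population.popleft()                    # longest-lived model leaves
    return max(population, key=fitness)

# Toy search: "models" are integers, larger is fitter, and mutation
# perturbs the parent by +/-1.
best = evolve([0, 1, 2, 3], fitness=lambda m: m,
              mutate=lambda m, rng: m + rng.choice([-1, 1]), iterations=50)
```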
  • when determining the recommended model, the model recommendation apparatus considers the data processing accuracy and time of the model together with the score of the connection structure of the model, and obtains a model suitable for the application by using a multi-objective optimization algorithm.
  • model recommendation method provided in embodiments of this application is described above in detail with reference to FIG. 1 to FIG. 12A and FIG. 12B .
  • the following further describes, with reference to FIG. 13 , a model recommendation apparatus provided in embodiments of this application.
  • FIG. 13 is a schematic diagram of a model recommendation apparatus 120 according to this application.
  • the model recommendation apparatus 120 is configured to implement the model recommendation method shown in FIG. 2 , and includes an obtaining unit 1301 and a recommendation unit 1302.
  • the recommendation unit 1302 is configured to recommend a model suitable for the application based on the proxy dataset and a hardware parameter of the computing device.
  • model recommendation apparatus 120 in this embodiment of this application may be implemented by an application-specific integrated circuit (application-specific integrated circuit, ASIC) or a programmable logic device (programmable logic device, PLD).
  • the PLD may be a complex program logic device (complex programmable logical device, CPLD), a field programmable gate array (field programmable gate array, FPGA), generic array logic (generic array logic, GAL), or any combination thereof.
  • the model recommendation method shown in FIG. 2 may be implemented by using software
  • the model recommendation apparatus 120 and each module of the model recommendation apparatus 120 may be software modules.
  • the model recommendation apparatus 120 may be deployed on the computing device 110 shown in FIG. 1 , or on an independent device.
  • the recommendation unit 1302 is further configured to: determine a set of basic operations suitable for the application based on the hardware parameter of the computing device, where the set includes a plurality of basic operations; determine a connection structure suitable for the application based on the hardware parameter of the computing device, where the connection structure is used to combine the plurality of basic operations into different models; and finally, recommend the model suitable for the application based on the set of basic operations, the connection structure, and the proxy dataset.
  • the recommendation unit may comprehensively consider the usage scenario of the application and the hardware parameter during running, to avoid a case in which the computing device running the application cannot support the data processing performed by the recommended model, which would reduce the data processing accuracy and speed of the application.
  • All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof.
  • the foregoing embodiments may be implemented completely or partially in a form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses.
  • the computer instructions may be stored in a computer-readable storage medium or may be transmitted from one computer-readable storage medium to another computer-readable storage medium.

EP22773923.2A 2021-03-23 2022-02-09 Model recommendation method and apparatus, and computing device Pending EP4293540A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110309016.0A CN114969636B (zh) 2021-03-23 2021-03-23 Model recommendation method and apparatus, and computer device
PCT/CN2022/075585 WO2022199261A1 (fr) 2021-03-23 2022-02-09 Model recommendation method and apparatus, and computing device

Publications (1)

Publication Number Publication Date
EP4293540A1 true EP4293540A1 (fr) 2023-12-20

Family

ID=82974148

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22773923.2A Pending EP4293540A1 (fr) 2021-03-23 2022-02-09 Procédé et appareil de recommandation de modèle, et dispositif informatique

Country Status (4)

Country Link
US (1) US20240013027A1 (fr)
EP (1) EP4293540A1 (fr)
CN (1) CN114969636B (fr)
WO (1) WO2022199261A1 (fr)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556464A (zh) * 2009-05-22 2009-10-14 Tianjin University Automatic recommendation method for urban power load forecasting models based on association rules
US20190138901A1 (en) * 2017-11-06 2019-05-09 The Royal Institution For The Advancement Of Learning/Mcgill University Techniques for designing artificial neural networks
US20200082247A1 (en) * 2018-09-07 2020-03-12 Kneron (Taiwan) Co., Ltd. Automatically architecture searching framework for convolutional neural network in reconfigurable hardware design
CN112036558A (zh) * 2019-06-04 2020-12-04 Beijing Jingdong Shangke Information Technology Co., Ltd. Model management method, electronic device, and medium
CN110276456B (zh) * 2019-06-20 2021-08-20 Shandong University Machine learning model assisted construction method, system, device, and medium
CN112446462B (zh) * 2019-08-30 2024-06-18 Huawei Technologies Co., Ltd. Method and apparatus for generating a target neural network model
CN111428854A (zh) * 2020-01-17 2020-07-17 Huawei Technologies Co., Ltd. Structure search method and structure search apparatus
CN112114892B (zh) * 2020-08-11 2023-07-21 Beijing QIYI Century Science and Technology Co., Ltd. Deep learning model acquisition, loading, and selection methods
CN112001496B (zh) * 2020-08-27 2022-09-27 Spreadtrum Communications (Shanghai) Co., Ltd. Neural network architecture search method and system, electronic device, and storage medium
CN112418392A (zh) * 2020-10-21 2021-02-26 Huawei Technologies Co., Ltd. Neural network construction method and apparatus

Also Published As

Publication number Publication date
CN114969636A (zh) 2022-08-30
CN114969636B (zh) 2023-10-03
US20240013027A1 (en) 2024-01-11
WO2022199261A1 (fr) 2022-09-29

Similar Documents

Publication Publication Date Title
CN109948641B Abnormal group identification method and apparatus
EP3467723A1 Method and apparatus for constructing machine learning based network models
CN111047563B Neural network construction method applied to medical ultrasound images
WO2022142026A1 Classification network construction method, and classification method based on a classification network
CN114168318A Training method for a storage release model, storage release method, and device
CN113761193A Log classification method and apparatus, computer device, and storage medium
EP4293540A1 Model recommendation method and apparatus, and computing device
CN113961765B Search method, apparatus, device, and medium based on a neural network model
CN110705889A Enterprise screening method, apparatus, device, and storage medium
WO2022252694A1 Neural network optimization method and apparatus
CN110544166A Sample generation method, apparatus, and storage medium
CN115203556A Score prediction model training method and apparatus, electronic device, and storage medium
US11676050B2 (en) Systems and methods for neighbor frequency aggregation of parametric probability distributions with decision trees using leaf nodes
CN114610648A Test method, apparatus, and device
CN114037060A Pre-trained model generation method and apparatus, electronic device, and storage medium
CN114357180A Knowledge graph updating method and electronic device
CN109436980A State detection method and system for elevator components
US20230266720A1 (en) Quality aware machine teaching for autonomous platforms
CN117114087B Fault prediction method, computer device, and readable storage medium
US20230140148A1 (en) Methods for community search, electronic device and storage medium
US20230194594A1 (en) Method of generating device model and computing device performing the same
CN117371508A Model compression method and apparatus, electronic device, and storage medium
CN116308715A Method for obtaining a credit evaluation model, and method and apparatus for user credit evaluation
WO2023163831A1 Quality aware machine teaching for autonomous platforms
CN116415075A Interest prediction and model training method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230913

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)