CN110569984B - Configuration information generation method, device, equipment and storage medium - Google Patents

Configuration information generation method, device, equipment and storage medium

Info

Publication number
CN110569984B
CN110569984B (application number CN201910851977.7A)
Authority
CN
China
Prior art keywords
machine learning
learning model
operator
configuration information
implementation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910851977.7A
Other languages
Chinese (zh)
Other versions
CN110569984A (en)
Inventor
谭志鹏
刘耀勇
蒋燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910851977.7A priority Critical patent/CN110569984B/en
Publication of CN110569984A publication Critical patent/CN110569984A/en
Application granted granted Critical
Publication of CN110569984B publication Critical patent/CN110569984B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Stored Programmes (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a configuration information generation method, device, equipment and storage medium. The method comprises the following steps: loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1; determining a target implementation mode from the n implementation modes; and generating configuration information corresponding to the machine learning model, wherein the configuration information is used for indicating that the target implementation mode is to be used when the ith operator of the machine learning model is run. Because the machine learning model supports multiple implementation modes of one operator, the implementation mode corresponding to the operator can be selected according to the actual situation, the implementation mode is determined more flexibly, and the operational performance of the operators included in the machine learning model on a terminal is guaranteed.

Description

Configuration information generation method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of machine learning, in particular to a configuration information generation method, a device, equipment and a storage medium.
Background
A machine learning model is a network model that may provide functions such as computer vision and natural language processing.
In the related art, the terminal may configure the machine learning model using the configuration information, thereby performing various services, such as face detection, voice recognition, image recognition, and the like, through the configured machine learning model.
Disclosure of Invention
The embodiment of the application provides a configuration information generation method, device, equipment and storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for generating configuration information, where the method includes:
loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1;
determining a target implementation mode from the n implementation modes;
generating configuration information corresponding to the machine learning model, wherein the configuration information is used for indicating that the target implementation mode is used when the ith operator of the machine learning model is operated.
In another aspect, an embodiment of the present application provides a device for generating configuration information, where the device includes:
the model loading module is used for loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1;
a mode determination module, configured to determine a target implementation mode from the n implementation modes;
an information generating module, configured to generate configuration information corresponding to the machine learning model, where the configuration information is used to indicate that the target implementation is used when the i-th operator of the machine learning model is run.
In yet another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the configuration information generating method according to the above aspect.
In still another aspect, an embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the configuration information generating method according to the above aspect.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
when an operator in the machine learning model has multiple implementation modes, a target implementation mode is determined from the multiple implementation modes, and the target implementation mode is used when the machine learning model is operated.
Drawings
Fig. 1 is a flowchart of a configuration information generation method according to an embodiment of the present application;
FIG. 2 is a block diagram of a machine learning model provided by an embodiment of the present application;
fig. 3 is a flowchart of a computation time obtaining method according to an embodiment of the present application;
fig. 4 is a block diagram of a configuration information generation apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a configuration information generation apparatus according to another embodiment of the present application;
FIG. 6 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The execution subject of the embodiment of the application may be a computer device. A computer device refers to an electronic device with computing and processing capabilities. The computer device may be a terminal; for example, the terminal may be a mobile phone, a tablet computer, an electronic book reader, a multimedia playing device, a wearable device or another portable electronic device. The computer device may also be a server, which may be a single server or a server cluster. Of course, in other possible implementations, the computer device may also be another electronic device, such as a medical device or a smart home device.
The machine learning model is a network model that can provide functions such as computer vision, natural language processing or social network analysis. For example, when a user wants to unlock a device through face recognition, the terminal performs face detection through the machine learning model to determine whether the user has the unlocking permission; when a user wants to convert speech into Chinese characters, the terminal performs speech recognition through the machine learning model and converts the speech into Chinese characters for display.
Referring to fig. 1, a flowchart of a configuration information generating method according to an embodiment of the present application is shown. The method may include several steps as follows.
Step 101, loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1.
As shown in FIG. 2, a schematic diagram of a machine learning model is shown. An operator is a mapping from one function space to another; in other words, an operator is an algorithm with data processing capabilities. The machine learning model includes at least one operator, such as a convolution operator, a pooling operator, an activation function operator, and the like. Some operators may have more than one implementation, and different implementations of the same operator are different algorithms for realizing the function of that operator. For example, a convolution operator has implementations such as sliding window computation, matrix multiplication, fast Fourier transform, and the like; a pooling operator has implementations such as maximum pooling, average pooling, center pooling, and the like. The machine learning model includes a deep learning model, which may be loaded by the terminal in an exemplary embodiment.
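For illustration only, the operator-with-multiple-implementations structure described above can be sketched as follows in Python; the class names ConvImpl and PoolImpl and the operator keys are assumptions made for the sketch and do not come from the patent:

from enum import Enum

class ConvImpl(Enum):
    SLIDING_WINDOW = "sliding_window"  # direct window-by-window convolution
    MATMUL = "matmul"                  # im2col followed by matrix multiplication
    FFT = "fft"                        # fast-Fourier-transform-based convolution

class PoolImpl(Enum):
    MAX = "max"
    AVERAGE = "average"
    CENTER = "center"

# A loaded model can be viewed as an ordered collection of operators, each of which
# carries the set of implementation manners it supports (n > 1 for some operators).
OPERATOR_IMPLEMENTATIONS = {
    "conv": list(ConvImpl),
    "pool": list(PoolImpl),
}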
And 102, determining a target implementation mode from the n implementation modes.
For an operator with multiple implementation manners, the terminal may determine the target implementation manner from the n implementation manners according to the operation time, the operation precision or another criterion. The terminal determines one of the n implementation manners as the target implementation manner.
And 103, generating configuration information corresponding to the machine learning model, wherein the configuration information is used for indicating that the target implementation mode is used when the ith operator of the machine learning model is operated.
After the target implementation manner corresponding to the operator is determined, the terminal may generate configuration information corresponding to the target implementation manner. The configuration information corresponding to different operators is different, the configuration information can be represented by configuration variables, and the configuration variables corresponding to different operators are different. Illustratively, the configuration variable corresponding to the ith operator may be set as the target implementation manner, and is used for characterizing that the target implementation manner is used when the ith operator in the machine learning model is run. For example, the ConvRunningMethod (a configuration variable corresponding to a convolution operator) may be set as FFT (Fast Fourier transform) for characterizing that the FFT method is used when running the convolution operator in the machine learning model.
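A minimal sketch of generating such configuration information, assuming the "<Operator>RunningMethod" naming pattern suggested by the ConvRunningMethod example; the generate_config helper and the PoolRunningMethod name are hypothetical:

def generate_config(target_impls):
    """Turn the chosen target implementation of each operator into configuration variables.

    target_impls: mapping chosen in step 102, e.g. {"conv": "FFT", "pool": "max"}.
    Returns configuration information such as {"ConvRunningMethod": "FFT", ...}.
    """
    variable_names = {"conv": "ConvRunningMethod", "pool": "PoolRunningMethod"}  # assumed naming pattern
    return {variable_names[op]: impl for op, impl in target_impls.items()}

config_info = generate_config({"conv": "FFT", "pool": "max"})
# -> {"ConvRunningMethod": "FFT", "PoolRunningMethod": "max"}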
To sum up, in the technical scheme provided in the embodiment of the present application, when an operator in a machine learning model has multiple implementation manners, a target implementation manner is determined from the multiple implementation manners, and the target implementation manner is used when the machine learning model is run. In the related art, an operator with multiple implementation manners is fixed to a single implementation manner by the machine learning model. By contrast, the machine learning model provided in the embodiment of the present application supports multiple implementation manners of one operator, so the implementation manner corresponding to the operator can be selected according to the actual situation, the implementation manner corresponding to the operator is determined more flexibly, and the operational performance of the operators included in the machine learning model on a terminal is ensured.
The target implementation of an operator with multiple implementations may be determined as follows:
in one example, the terminal may obtain the operation time corresponding to each of the n implementation manners; and selecting the implementation mode with the minimum operation time as a target implementation mode from the n implementation modes.
As shown in fig. 3, the operation time corresponding to each of the n implementation manners may be obtained by the following steps:
step 301, for the mth implementation manner of the n implementation manners, configuring the ith operator of the machine learning model to use the mth implementation manner, where m is a positive integer less than or equal to n.
For example, the configuration variable corresponding to the ith operator may be set to the mth implementation manner, so that the ith operator of the machine learning model is configured to use the mth implementation manner. By switching the configuration variable corresponding to the ith operator, the ith operator of the machine learning model can be configured with different implementation manners.
Step 302, calling a machine learning model to execute a preset task for a times, wherein a is an integer greater than 1.
The preset task may be determined according to the application field of the machine learning model. For example, if the application field of the machine learning model is face recognition, the preset task may be set as face detection, so that the machine learning model is called to execute face detection a times.
Step 303, obtaining the operation time corresponding to the mth implementation mode of the ith operator in each execution process of the a execution processes to obtain a operation times.
When the machine learning model is called to execute the preset task a times, a operation times corresponding to the mth implementation manner can be obtained.
Optionally, obtaining a starting execution time and an ending execution time of a b-th execution process corresponding to the m-th implementation manner; and determining the time difference between the starting execution time and the ending execution time as the operation time corresponding to the execution process of the b-th time, wherein b is a positive integer less than or equal to a.
Assume that the terminal calls the machine learning model 2 times. For the 1st execution process of the 2 execution processes, the starting execution time corresponding to the mth implementation manner is 40 ms and the ending execution time is 50 ms, so the operation time corresponding to the 1st execution process is determined to be 10 ms; for the 2nd execution process, the starting execution time corresponding to the mth implementation manner is 30 ms and the ending execution time is 45 ms, so the operation time corresponding to the 2nd execution process is determined to be 15 ms.
And 304, calculating the average value of the a operation times to obtain the operation time of the mth implementation mode of the ith operator.
Continuing the above example, the operation time of the mth implementation manner is (10 + 15) / 2 = 12.5 ms.
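Steps 301 to 304 can be sketched as follows. The model methods configure_operator and run are assumed placeholders rather than an API from the patent, and for simplicity the sketch times the whole preset task instead of the single ith operator:

import time

def measure_operation_time(model, operator_id, impl, task_input, a=10):
    """Average operation time of the m-th implementation of the i-th operator (steps 301-304)."""
    model.configure_operator(operator_id, impl)   # step 301: switch the configuration variable
    run_times = []
    for _ in range(a):                            # step 302: execute the preset task a times
        start = time.perf_counter()               # starting execution time of this run
        model.run(task_input)
        end = time.perf_counter()                 # ending execution time of this run
        run_times.append((end - start) * 1000.0)  # step 303: per-run operation time in ms
    return sum(run_times) / len(run_times)        # step 304: average of the a operation times

def pick_fastest(model, operator_id, impls, task_input, a=10):
    """Select the implementation with the minimum average operation time as the target."""
    return min(impls, key=lambda impl: measure_operation_time(model, operator_id, impl, task_input, a))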
Different implementation manners of an operator may perform differently on processing chips with different hardware structures; that is, the same implementation manner of one operator may have different operation times on processing chips with different hardware structures. Because the operation time reflects the operation efficiency, selecting the implementation manner with the minimum operation time as the target implementation manner ensures the operation efficiency of the machine learning model on the processing chip.
Optionally, before the terminal leaves the factory, the operation time corresponding to each of the n implementation manners is obtained.
In another example, the terminal may obtain the operation precision corresponding to each of the n implementation manners; and selecting the implementation mode with the highest operation precision as the target implementation mode from the n implementation modes.
Illustratively, the operation precision can be determined according to the matching degree between the recognition result and the real result of the machine learning model, and the higher the matching degree is, the higher the operation precision is; conversely, the lower the matching degree, the lower the operation precision. The implementation mode with the highest operation precision is selected as the target implementation mode, and the machine learning model can be applied to a business scene with high requirement on operation precision.
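A corresponding sketch for the precision-based selection, again assuming the same hypothetical model interface and a small labeled evaluation set:

def pick_most_accurate(model, operator_id, impls, eval_samples):
    """Select the implementation whose recognition results best match the real results.

    eval_samples is a list of (input, expected) pairs; the matching rate stands in
    for the operation precision described above.
    """
    def precision(impl):
        model.configure_operator(operator_id, impl)
        hits = sum(1 for x, expected in eval_samples if model.run(x) == expected)
        return hits / len(eval_samples)

    return max(impls, key=precision)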
In summary, in the technical scheme provided by the embodiment of the present application, selecting the implementation mode with the minimum operation time as the target implementation mode ensures the operation efficiency of the machine learning model on the processing chip and achieves the effect of accelerating network inference; selecting the implementation mode with the highest operation precision as the target implementation mode allows the machine learning model to be applied to a service scenario with a higher requirement on operation precision. Because the target implementation mode can be determined according to either of these two criteria, its selection is more flexible.
Illustratively, after the terminal generates the configuration information corresponding to the machine learning model, the configuration information generating method may further include the following steps:
1. acquiring a service scene corresponding to a target service;
the target service may be any one of services such as face recognition, scanned word recognition, picture content recognition, text content understanding, voice recognition, machine translation, user profiling, network association analysis, hotspot discovery, and the like. The business scene corresponding to the face recognition can be face unlocking, face payment, face login and the like; the service scene corresponding to the speech recognition may be speech conversion to text display or the like. The embodiment of the application does not limit the type of the target service and the service scene corresponding to the target service.
2. Selecting a configuration file matched with a service scene from at least two configuration files as a target configuration file; wherein any two of the at least two configuration files have different configuration information.
Exemplarily, assume that the service scenario corresponding to the target service is face payment, which has a high requirement on recognition accuracy; the terminal may then select, from the at least two configuration files, the configuration file matched with face payment (for example, the configuration file whose configuration information indicates the implementation manner with the highest operation precision for each operator) as the target configuration file. Assume instead that the service scenario corresponding to the target service is speech translation, which has a high requirement on operation time; the terminal may then select, from the at least two configuration files, the configuration file matched with speech translation (for example, the configuration file whose configuration information indicates the implementation manner with the minimum operation time for each operator) as the target configuration file. A sketch of this scenario-based selection is given after step 5 below.
3. If the target service is required to be executed through the machine learning model, reading configuration information in a target configuration file;
4. configuring the machine learning model according to the configuration information in the target configuration file to obtain a configured machine learning model;
after the terminal reads the configuration information in the target configuration file, a target implementation mode used when any operator of the machine learning model is operated can be obtained. Exemplarily, the terminal configures implementation manners used by operators included in the machine learning model according to configuration information in the target configuration file to obtain the configured machine learning model.
5. Calling the configured machine learning model to execute the target service.
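A minimal sketch of steps 1 to 5 above; the scenario names, the configuration-file contents and the set_config_variable method are illustrative assumptions rather than details taken from the patent:

# Illustrative configuration files, one optimized for precision and one for operation time.
CONFIG_FILES = {
    "high_precision.cfg": {"ConvRunningMethod": "matmul", "PoolRunningMethod": "average"},
    "low_latency.cfg": {"ConvRunningMethod": "FFT", "PoolRunningMethod": "max"},
}

SCENARIO_TO_FILE = {
    "face_payment": "high_precision.cfg",     # accuracy-critical scenario
    "speech_translation": "low_latency.cfg",  # latency-critical scenario
}

def run_target_service(model, service_scenario, service_input):
    """Steps 1-5: match a configuration file to the scenario, configure the model, execute the service."""
    target_file = SCENARIO_TO_FILE[service_scenario]   # steps 1-2: select the target configuration file
    config_info = CONFIG_FILES[target_file]            # step 3: read the configuration information
    for variable, impl in config_info.items():         # step 4: configure the operator implementations
        model.set_config_variable(variable, impl)      # assumed model interface
    return model.run(service_input)                    # step 5: execute the target service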
In summary, in the technical solution provided in the embodiment of the present application, the machine learning model is configured according to the configuration file matched with the service scene corresponding to the target service, so that the machine learning model can meet the requirement when the target service is executed, and thus the final result of the target service is more accurate and the execution efficiency is higher.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 4, a block diagram of a configuration information generating apparatus according to an embodiment of the present application is shown. The device has the functions of realizing the method examples, and the functions can be realized by hardware or by hardware executing corresponding software. The apparatus 400 may include: a model loading module 410, a mode determination module 420, and an information generation module 430.
The model loading module 410 is configured to load a machine learning model, where the machine learning model includes at least one operator, an ith operator in the at least one operator has n implementation manners, where i is a positive integer, and n is an integer greater than 1.
The mode determining module 420 is configured to determine a target implementation mode from the n implementation modes.
The information generating module 430 is configured to generate configuration information corresponding to the machine learning model, where the configuration information is used to indicate that the target implementation manner is used when the ith operator of the machine learning model is run.
To sum up, in the technical scheme provided in the embodiment of the present application, when an operator in the machine learning model has multiple implementation manners, a target implementation manner is determined from the multiple implementation manners, and the target implementation manner is used when the machine learning model is run.
Optionally, as shown in fig. 5, the mode determining module 420 includes: a time acquisition unit 421 and a mode determination unit 422.
The time obtaining unit 421 is configured to obtain the operation time corresponding to each of the n implementation manners.
The mode determining unit 422 is configured to select, from the n implementation modes, an implementation mode with the smallest operation time as the target implementation mode.
Optionally, the time obtaining unit 421 includes: a mode configuration subunit, a model calling subunit, a time acquisition subunit, and a time determination subunit (not shown in the figure).
A mode configuration subunit, configured to configure, for an mth implementation mode of the n implementation modes, the ith operator of the machine learning model to use the mth implementation mode, where m is a positive integer less than or equal to n.
And the model calling subunit is used for calling the machine learning model to execute a preset task a times, wherein a is an integer greater than 1.
And the time acquisition subunit is configured to acquire, in each of the a execution processes, an operation time corresponding to the mth implementation manner of the ith operator to obtain a operation times.
And the time determining subunit is used for calculating the average value of the a operation times to obtain the operation time of the mth implementation mode of the ith operator.
Optionally, the time obtaining subunit is configured to:
acquiring the starting execution time and the ending execution time of the b-th execution process corresponding to the m-th implementation mode;
and determining the time difference between the starting execution time and the ending execution time as the operation time corresponding to the b-th execution process, wherein b is a positive integer less than or equal to a.
Optionally, the mode determining module 420 further includes: a precision acquisition unit 423.
The precision obtaining unit 423 is configured to obtain the operation precision corresponding to each of the n implementation manners.
The mode determining unit 422 is further configured to select an implementation with the highest operation accuracy from the n implementation manners as the target implementation manner.
Optionally, the apparatus 400 further includes: an information reading module 440, a model configuration module 450, and a business execution module 460.
The information reading module 440 is configured to read configuration information in a target configuration file if the target service needs to be executed through the machine learning model.
The model configuration module 450 is configured to configure the machine learning model according to the configuration information in the target configuration file to obtain the configured machine learning model.
The service execution module 460 is configured to invoke the configured machine learning model to execute the target service.
Optionally, the model configuration module 450 is configured to:
and configuring the implementation modes used by the operators included in the machine learning model according to the configuration information in the target configuration file to obtain the configured machine learning model.
Optionally, the apparatus 400 further includes: a scene acquisition module 470 and a file selection module 480.
The scene obtaining module 470 is configured to obtain a service scene corresponding to the target service.
The file selecting module 480 is configured to select a configuration file matched with the service scenario from at least two configuration files as the target configuration file;
wherein any two of the at least two configuration files have different configuration information.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments, which are not described herein again.
Referring to fig. 6, a block diagram of a computer device 600 according to an embodiment of the present application is shown. The computer device 600 refers to an electronic device with computing and processing capabilities, for example, the computer device 600 may be a terminal or a server or other electronic devices, and the terminal may be a mobile phone, a tablet computer, an electronic book reading device, a multimedia playing device, a wearable device or other portable electronic devices.
The computer device 600 in the embodiments of the present application may include one or more of the following components: a processor 610 and a memory 620.
Processor 610 may include one or more processing cores. Using the various interfaces and lines connecting the parts of the computer device, the processor 610 performs the various functions of the computer device and processes data by executing or running instructions, programs, code sets, or instruction sets stored in the memory 620 and by invoking data stored in the memory 620. Alternatively, the processor 610 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 610 may integrate one of, or a combination of, a Central Processing Unit (CPU) and a modem. The CPU mainly handles the operating system, application programs and the like, while the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 610 but may instead be implemented by a separate chip.
Optionally, the processor 610, when executing the program instructions in the memory 620, implements the methods provided by the various method embodiments described above.
The memory 620 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 620 includes a non-transitory computer-readable medium. The memory 620 may be used to store instructions, programs, code sets, or instruction sets. The memory 620 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like; the data storage area may store data created according to the use of the computer device, and the like.
The structure of the computer device described above is only illustrative. In actual implementation, the computer device may include more or fewer components, such as a display screen, which is not limited in this embodiment.
Those skilled in the art will appreciate that the configuration shown in FIG. 6 does not constitute a limitation of the computer device 600, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium is also provided, in which a computer program is stored, which is loaded and executed by a processor of a terminal to implement the respective steps in the above-described method embodiments.
In an exemplary embodiment, a computer program product is also provided, which, when executed, implements the above method.
The above description is only exemplary of the application and should not be taken as limiting the application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the application should be included in the protection scope of the application.

Claims (8)

1. A method for generating configuration information, the method comprising:
loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1;
for an mth implementation of the n implementations, configuring the ith operator of the machine learning model to use the mth implementation, where m is a positive integer less than or equal to n;
calling the machine learning model to execute a preset task a times, wherein a is an integer greater than 1;
acquiring a starting execution time and an ending execution time of a b-th execution process corresponding to the m-th implementation mode, wherein b is a positive integer less than or equal to a;
determining the time difference between the starting execution time and the ending execution time as the operation time corresponding to the b-th execution process;
calculating the average value of a operation times to obtain the operation time of the mth implementation mode of the ith operator;
selecting the implementation mode with the minimum operation time as a target implementation mode from the n implementation modes;
generating configuration information corresponding to the machine learning model, wherein the configuration information is used for indicating that the target implementation mode is used when the ith operator of the machine learning model is operated.
2. The method of claim 1, wherein determining a target implementation from the n implementations comprises:
acquiring the operation precision corresponding to each of the n implementation modes;
and selecting the implementation mode with the highest operation precision as the target implementation mode from the n implementation modes.
3. The method according to claim 1 or 2, wherein after generating the configuration information corresponding to the machine learning model, the method further comprises:
if the target service is required to be executed through the machine learning model, reading configuration information in a target configuration file;
configuring the machine learning model according to the configuration information in the target configuration file to obtain a configured machine learning model;
and calling the configured machine learning model to execute the target service.
4. The method of claim 3, wherein configuring the machine learning model according to the configuration information in the target configuration file to obtain a configured machine learning model comprises:
and configuring the implementation modes used by the operators included in the machine learning model according to the configuration information in the target configuration file to obtain the configured machine learning model.
5. The method of claim 3, wherein before reading the configuration information in the target configuration file, further comprising:
acquiring a service scene corresponding to the target service;
selecting a configuration file matched with the service scene from at least two configuration files as the target configuration file;
wherein any two of the at least two configuration files have different configuration information.
6. An apparatus for generating configuration information, the apparatus comprising:
the model loading module is used for loading a machine learning model, wherein the machine learning model comprises at least one operator, the ith operator in the at least one operator has n implementation modes, i is a positive integer, and n is an integer greater than 1;
a manner determination module, configured to configure, for an mth implementation among the n implementations, the ith operator of the machine learning model to use the mth implementation, where m is a positive integer less than or equal to n; calling the machine learning model to execute a preset task a times, wherein a is an integer greater than 1; acquiring a starting execution time and an ending execution time of a b-th execution process corresponding to the mth implementation, wherein b is a positive integer less than or equal to a; determining the time difference between the starting execution time and the ending execution time as the operation time corresponding to the b-th execution process; calculating the average value of the a operation times to obtain the operation time of the mth implementation of the ith operator; and selecting the implementation with the minimum operation time from the n implementations as a target implementation;
an information generating module, configured to generate configuration information corresponding to the machine learning model, where the configuration information is used to indicate that the target implementation is used when the i-th operator of the machine learning model is run.
7. A computer device, characterized in that it comprises a processor and a memory, said memory storing a computer program which is loaded and executed by said processor to implement the method according to any one of claims 1 to 5.
8. A computer-readable storage medium, in which a computer program is stored which is loaded and executed by a processor to implement the method according to any one of claims 1 to 5.
CN201910851977.7A 2019-09-10 2019-09-10 Configuration information generation method, device, equipment and storage medium Active CN110569984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910851977.7A CN110569984B (en) 2019-09-10 2019-09-10 Configuration information generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910851977.7A CN110569984B (en) 2019-09-10 2019-09-10 Configuration information generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110569984A CN110569984A (en) 2019-12-13
CN110569984B true CN110569984B (en) 2023-04-14

Family

ID=68778712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910851977.7A Active CN110569984B (en) 2019-09-10 2019-09-10 Configuration information generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110569984B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114787793A (en) * 2020-02-18 2022-07-22 Oppo广东移动通信有限公司 Management method of network model and method and device for establishing or modifying session
CN111340237B (en) * 2020-03-05 2024-04-26 腾讯科技(深圳)有限公司 Data processing and model running method, device and computer equipment
CN114970654B (en) * 2021-05-21 2023-04-18 华为技术有限公司 Data processing method and device and terminal
CN117785260A (en) * 2022-09-22 2024-03-29 华为技术有限公司 Operator operation mode configuration method, device and related system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713594B2 (en) * 2015-03-20 2020-07-14 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing machine learning model training and deployment with a rollback mechanism
CN105912500B (en) * 2016-03-30 2017-11-14 百度在线网络技术(北京)有限公司 Machine learning model generation method and device
KR101886373B1 (en) * 2016-07-14 2018-08-09 주식회사 언더핀 Platform for providing task based on deep learning
CN109409533B (en) * 2018-09-28 2021-07-27 深圳乐信软件技术有限公司 Method, device, equipment and storage medium for generating machine learning model

Also Published As

Publication number Publication date
CN110569984A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110569984B (en) Configuration information generation method, device, equipment and storage medium
US11023801B2 (en) Data processing method and apparatus
CN109961780B (en) A man-machine interaction method, device, server and storage medium
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN110458294B (en) Model operation method, device, terminal and storage medium
CN110347863B (en) Speaking recommendation method and device and storage medium
CN109088995A (en) Support the method and mobile phone of global languages translation
CN110941698B (en) Service discovery method based on convolutional neural network under BERT
CN111797294A (en) Visualization method and related equipment
US11526681B2 (en) Dynamic multilingual speech recognition
CN114911465B (en) Method, device and equipment for generating operator and storage medium
CN107807841B (en) Server simulation method, device, equipment and readable storage medium
CN111210005A (en) Equipment operation method and device, storage medium and electronic equipment
CN116842036A (en) Data query method, device, computer equipment and storage medium
CN113626512A (en) Data processing method, device, equipment and readable storage medium
CN114416877A (en) Data processing method, device and equipment and readable storage medium
CN112989733B (en) Circuit analysis method, circuit analysis device, circuit analysis equipment and storage medium
CN114579718A (en) Text feature generation method, device, equipment and storage medium combining RPA and AI
CN108062401B (en) Application recommendation method and device and storage medium
CN110750295B (en) Information processing method, device, electronic equipment and storage medium
CN113468344A (en) Entity relationship extraction method and device, electronic equipment and computer readable medium
CN107680598B (en) Information interaction method, device and equipment based on friend voiceprint address list
CN111898363B (en) Compression method, device, computer equipment and storage medium for long and difficult text sentence
CN110647753B (en) Method, device and equipment for acquiring kernel file and storage medium
CN110413423B (en) Data processing method, related device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant