CN112163677A - Method, device and equipment for applying machine learning model - Google Patents

Method, device and equipment for applying machine learning model

Info

Publication number
CN112163677A
CN112163677A
Authority
CN
China
Prior art keywords
machine learning
learning model
processing function
parameter configuration
output
Prior art date
Legal status
Granted
Application number
CN202011096940.7A
Other languages
Chinese (zh)
Other versions
CN112163677B (en)
Inventor
徐江川
童超
车军
任烨
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202011096940.7A
Publication of CN112163677A
Application granted
Publication of CN112163677B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4482 Procedural

Abstract

The application discloses a method, a device and equipment for applying a machine learning model, and belongs to the technical field of machine learning. The method comprises the following steps: acquiring an execution code, a processing type and parameter configuration information of a target machine learning model; acquiring, based on the processing type, an output processing function of the target machine learning model that has not been parameter-configured; performing parameter configuration on that output processing function based on the parameter configuration information to obtain the output processing function of the target machine learning model; and acquiring target input data to be input into the target machine learning model, obtaining target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data. The method and the device can avoid the problem that a machine learning model cannot be used normally because technicians confuse the machine learning model with its corresponding parameters.

Description

Method, device and equipment for applying machine learning model
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a method, an apparatus, and a device for applying a machine learning model.
Background
With the continuous development of machine learning technology, its application scenarios and application platforms keep increasing, and more and more functions, such as face recognition and voice recognition, can be realized through machine learning. These functions can be realized by a trained machine learning model.
Before using a machine learning model, a technician needs to write an output processing function for the machine learning model according to the parameters corresponding to that model, for example, writing the output processing function according to the processing type of the model. The output processing function determines, from the vector output by the machine learning model, the processing result corresponding to the data input into the model; for example, it determines the classification result of a classification model on its input data from the vector of per-category confidences output by the model.
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
different machine learning models may correspond to different parameters, and when a technician uses a plurality of machine learning models, the models and their corresponding parameters are easily confused, so that the output processing function written by the technician according to the parameters does not match the machine learning model, and the machine learning model cannot be used normally.
Disclosure of Invention
The embodiments of the application provide a method, a device and equipment for applying a machine learning model, which can avoid the problem that a machine learning model cannot be used normally because technicians confuse the machine learning model with its corresponding parameters. The technical scheme is as follows:
in one aspect, a method of applying a machine learning model is provided, the method comprising:
acquiring an execution code, a processing type and parameter configuration information of a target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters;
acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type;
performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model;
acquiring target input data to be input into the target machine learning model, obtaining target output data based on an execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
Optionally, the obtaining of the execution code, the processing type, and the parameter configuration information of the target machine learning model includes:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model.
Optionally, the method further includes:
acquiring an input processing function of the target machine learning model without parameter configuration based on the processing type;
performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model;
obtaining target output data based on the execution code of the target machine learning model and the target input data, including:
and processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
Optionally, the obtaining, based on the processing type, an input processing function of the target machine learning model without parameter configuration includes:
acquiring the input processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function without parameter configuration;
the obtaining of the output processing function of the target machine learning model without parameter configuration based on the processing type includes:
acquiring an output layer identifier of the target machine learning model in the model encapsulation file;
and acquiring the output data processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function without parameter configuration.
Optionally, the preset parameters include input parameters and output parameters;
the performing parameter configuration on the input processing function without parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model includes:
performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model;
the performing parameter configuration on the output processing function without parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model includes:
and performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model.
Optionally, the performing parameter configuration on the input processing function that is not subjected to parameter configuration based on the configuration value of the input parameter to obtain the input data processing function of the target machine learning model includes:
inserting the configuration value of each input parameter into the corresponding insertion position based on the pre-stored insertion position of each input parameter in the input processing function without parameter configuration to obtain the input data processing function of the target machine learning model;
the performing parameter configuration on the output processing function without parameter configuration based on the configuration value of the output parameter to obtain the output data processing function of the target machine learning model includes:
and inserting the configuration value of each output parameter into the corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function without parameter configuration to obtain the output data processing function of the target machine learning model.
Optionally, the configuration values of the preset parameters include a resolution of the input data and a resolution of the output data.
In another aspect, an apparatus for applying a machine learning model is provided, the apparatus comprising:
the acquisition module is used for acquiring an execution code, a processing type and parameter configuration information of the target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters; acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type;
the configuration module is used for carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model;
the processing module is used for acquiring target input data to be input into the target machine learning model, acquiring target output data based on an execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to acquire a processing result corresponding to the target input data.
Optionally, the obtaining module is configured to:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model.
Optionally, the obtaining module is further configured to:
acquiring an input processing function of the target machine learning model without parameter configuration based on the processing type;
the configuration module is further configured to: performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model;
the processing module is further configured to: processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
Optionally, the obtaining module is configured to:
Acquiring the input processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function without parameter configuration;
acquiring an output layer identifier of the target machine learning model in the model encapsulation file;
and acquiring the output data processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function without parameter configuration.
Optionally, the preset parameters include input parameters and output parameters;
the configuration module is configured to: performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model; and performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model.
Optionally, the configuration module is configured to:
inserting the configuration value of each input parameter into the corresponding insertion position based on the pre-stored insertion position of each input parameter in the input processing function without parameter configuration to obtain the input data processing function of the target machine learning model;
and inserting the configuration value of each output parameter into the corresponding insertion position, based on the pre-stored insertion position of each output parameter in the output processing function without parameter configuration, to obtain the output data processing function of the target machine learning model.
In yet another aspect, a computer device is provided that includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for applying a machine learning model as described above.
In yet another aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for applying a machine learning model as described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
After the execution code, the processing type and the parameter configuration information of the target machine learning model are acquired, the output processing function that corresponds to the target machine learning model and has not been parameter-configured can be determined according to the processing type. That output processing function is then parameter-configured according to the parameter configuration information to obtain the output processing function of the target machine learning model, and finally the data output by the target machine learning model is processed by the output processing function to directly obtain the processing result corresponding to the target input data. Because the acquisition of the execution code, the processing type and the parameter configuration information of the target machine learning model, as well as the configuration of the output processing function, are performed automatically by the execution device, the whole process requires no participation of technicians. Therefore, the problem that the machine learning model cannot be used normally because technicians confuse the machine learning model with its corresponding parameters can be avoided.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of a method for applying a machine learning model provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for applying a machine learning model according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an apparatus for applying a machine learning model according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The method for applying the machine learning model provided by the embodiment of the application can be realized by a terminal, and the terminal can be intelligent equipment such as a mobile phone, a tablet computer, a notebook computer and a desktop computer. The device may include a processor and a memory, where the memory may store an execution program corresponding to the method for applying the machine learning model, execution data, and the like, and the processor may execute the execution program stored in the memory to process the execution data, and the like, so as to implement the method for applying the machine learning model provided in the embodiment of the present application.
The machine learning model may be stored in the form of a file after training is completed. Vendors that develop machine learning models can provide the trained models to technicians of other vendors or other departments, and those technicians can build further functions on top of the functions realized by the trained models. For example, if the machine learning model is a gender recognition model, another vendor can recognize the gender of a user from the user's facial image with this model and then apply gender-specific beautification. With the method for applying a machine learning model provided by the embodiments of the application, technicians of other vendors or departments can apply the machine learning model directly after obtaining it, without configuring the relevant parameters of the model themselves.
Fig. 1 is a flowchart of a method for applying a machine learning model according to an embodiment of the present disclosure. Referring to fig. 1, the embodiment includes:
step 101, obtaining an execution code, a processing type and parameter configuration information of a target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters.
The target machine learning model may be any trained machine learning model, such as a face detection model or an image classification model. The execution code of the target machine learning model is the execution code of the program corresponding to the trained model; the processing type is the task type of the target machine learning model, such as a face detection task or a target classification task; and the parameter configuration information includes the parameter values of the output parameters used in the output processing function corresponding to the target machine learning model. For example, the output processing function corresponding to an image processing model may process the image output by the model to a preset resolution, and the parameter value may be the data resolution (image resolution). The output processing function corresponding to a classification model may process each class output by the model and its confidence, according to the number of categories the model can identify, to obtain the classification result, and the parameter value may be the number of identifiable categories. The output processing function corresponding to a semantic segmentation model may form a matrix of a preset size from the per-pixel class probabilities output by the model and then segment the image to be segmented according to that matrix, and the parameter value may be the data size (the preset size of the matrix).
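By way of illustration only, the parameter configuration information described above could be represented by a small structure such as the following Python sketch; the field names are assumptions for this example, not terms defined by the application:

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ParameterConfiguration:
    # Illustrative only: a possible shape for the parameter configuration information.
    input_resolution: Tuple[int, int]                     # resolution expected for the input data
    input_data_type: str = "U08"                          # e.g. unsigned 8-bit
    output_resolution: Optional[Tuple[int, int]] = None   # for image processing models
    num_categories: Optional[int] = None                  # for classification models
    output_matrix_size: Optional[Tuple[int, int]] = None  # for semantic segmentation models

face_detection_config = ParameterConfiguration(input_resolution=(224, 224), num_categories=1)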
Optionally, the execution code, the processing type, and the parameter configuration information of the target machine learning model may be encapsulated in one file. A technician may obtain the model encapsulation file of the target machine learning model and decapsulate it to obtain the execution code, the processing type, and the parameter configuration information of the target machine learning model.
In implementation, the execution code, the processing type and the parameter configuration information of the target machine learning model may be placed in an encrypted binary file, and a user or technician applying the target machine learning model may obtain them by obtaining this binary file. The binary file may be provided by the party that developed the target machine learning model. After obtaining the encrypted binary file, the technician applying the target machine learning model can decrypt and decapsulate it to obtain the execution code, the processing type, the parameter configuration information, and the like.
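The application does not fix a container format or cipher, so the following Python sketch only illustrates the decapsulation idea under stated assumptions: a layout of a 4-byte configuration length, the structured configuration (JSON here), and then the execution code, with the decryption step left as a placeholder for whichever algorithm the provider used:

import json
import struct

def decrypt(blob: bytes, key: bytes) -> bytes:
    # Placeholder for the decryption step; the cipher (AES, SM4, 3DES, ...) and
    # the key management are chosen by the provider of the binary file.
    raise NotImplementedError

def decapsulate(path: str, key: bytes):
    with open(path, "rb") as f:
        plain = decrypt(f.read(), key)
    # Assumed container layout: 4-byte big-endian length of the structured
    # configuration, then the configuration itself (JSON here), then the
    # execution code of the model.
    (cfg_len,) = struct.unpack(">I", plain[:4])
    config = json.loads(plain[4:4 + cfg_len])
    execution_code = plain[4 + cfg_len:]
    return execution_code, config["taskType"], config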
The binary file may be generated as follows:
and acquiring a model file corresponding to the trained target machine learning model, wherein the model file can be a file for recording an execution code corresponding to the target machine learning model. And obtaining a model configuration file corresponding to the target machine learning model, wherein the model configuration file can comprise auxiliary reasoning information, service application information, binding embedded information and the like. The auxiliary inference information may include parameter values corresponding to data input into the target machine learning model, for example, a resolution of the data input into the target machine learning model (a resolution of an image), a size of the data, a type of the data, and the like. Namely, the target machine learning model can also correspond to an input processing function, and the input processing function can adjust the data to be input into the target machine learning model into the resolution, the size, the type and the like which can be processed by the target machine learning model according to the parameter values of the input parameters in the auxiliary reasoning information. The service application information may include parameter values of the output parameters, such as resolution of the output data, size of the output data, number of recognizable categories, and the like, and may further include a processing type, a type of the operating platform, an output layer identifier, and the like. The binding embedded information may include some description information related to the model, an ID (Identity document, serial number) of the model, a version number of the model, developer information of the model, and the like, and when the corresponding machine learning model is abnormal, error checking may be performed according to the binding embedded information, for example, whether the model is correct is determined according to the ID of the model, the version number of the model, and the like, or error checking may be performed by contacting with the developer according to the developer information of the model. The parameter configuration information is a parameter value corresponding to an output processing function in the service application information, and may also include a corresponding parameter value in the auxiliary inference information. The technical personnel can write the auxiliary reasoning information, the service application information and the binding embedded information of the target machine learning model in a text file according to a preset format, for example, the text format can be txt, json, xml and the like. And then, converting the auxiliary reasoning information, the service application information, the binding embedded information and the like in the text file into structured data according to the corresponding relation. For example, a text file may record "input resolution: 224 × 224 "," input data type: assigned char "," model task type: detection "," detection category: face "," output layer position: last layer "," model provider: XXX "," model ID: face _ det ", the corresponding structured data may be:
InputShape:{1,3,1,224,224};
InputDataType:U08;
taskType:DETECTION;
categoryInfo:FACE;
layerPosition:-1;
modelAuthor:"XXX";
modelId:"Face_det";
a technician may encapsulate the corresponding model file and the structured data corresponding to the model configuration file by using an existing encapsulation protocol to obtain an encapsulated file, and then encrypt the encapsulated file, for example, by using an AES algorithm, an SM4 algorithm, a 3DES algorithm, or the like to obtain an encrypted binary file.
Step 102, acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type.
The output processing function is a function for processing the output vector of the machine learning model. Since the value output by a machine learning model is generally a vector that cannot be used directly, an output processing function for processing the model's output data is generally provided. For example, if the machine learning model is an age recognition model, after a face image is input into the model, the model outputs a vector consisting of age values and the confidence corresponding to each age value; that vector is then input into the corresponding output processing function, which obtains the age value finally recognized for the face image. An output processing function that has not been parameter-configured is one in which the parameter values of the corresponding processing parameters have not yet been set, where the parameters may include the resolution of the data output by the machine learning model, the number of identifiable categories, and the like. Machine learning models with the same processing task may correspond to the same unconfigured output processing function but to different processing parameters. For example, if different models can identify different numbers of categories, the confidences to be calculated differ, and the corresponding parameter may be the number of categories each model can identify.
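To make the age-recognition example concrete, a possible output processing function is sketched below in Python; the rule of simply taking the age value with the highest confidence is an assumption for the sketch, not a rule fixed by the application:

import numpy as np

def age_output_processing(model_output: np.ndarray, age_values: list) -> int:
    # model_output: vector of confidences, one per candidate age value.
    # age_values: the candidate ages, a processing parameter of this function.
    return int(age_values[int(np.argmax(model_output))])

candidate_ages = list(range(0, 101))                        # candidate age values 0..100
predicted_age = age_output_processing(np.random.rand(101), candidate_ages)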
In implementation, the processing type of the target machine learning model is recorded in the model configuration file, where the processing type may be divided into a model task, a detection category, and the like. After the terminal acquires the processing type of the target machine learning model, the corresponding output processing function can be determined according to it. In addition, the model configuration file may further include identification information of the operating platform corresponding to the target machine learning model, where the operating platform may be a GPU (Graphics Processing Unit), an ARM (Advanced RISC Machine) processor, a HiSilicon (HISI) platform, or the like. The identification information of the operating platform indicates the platform on which the target machine learning model runs.
Optionally, the target machine learning model may also correspond to an input processing function that is not subjected to parameter configuration, and the input processing function that is not subjected to parameter configuration of the target machine learning model may be obtained based on the processing type.
The input processing function is a function for processing the data to be input. The input data format of a machine learning model, such as the resolution and data type of the data, is generally fixed and can be set by the technician who trains the model. Therefore, before the data to be processed is input into the target machine learning model, it can be processed by the input processing function so that it meets the input data format of the model, for example, adjusting an image to be input into an image recognition model to a preset image size. In addition, for both image recognition models and voice recognition models, data such as images and speech needs to be decoded into binary code and then converted into an input vector of fixed size; the input processing function can process the binary code into an input matrix of fixed size. An input processing function that has not been parameter-configured is one in which the parameter values of the corresponding processing parameters have not yet been set, where the processing parameters are the data type received by the model, the resolution of the image, the size of the input matrix, and the like. Machine learning models with the same processing task may correspond to the same unconfigured input processing function but to different parameter values.
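As an illustration, an input processing function for an image model might perform the resolution, data-type and layout adjustments described above; the concrete steps in the following Python sketch (an OpenCV resize, an HWC-to-CHW transpose and an added batch dimension) are assumptions for the example:

import cv2  # assumed available for resizing; any image library would do
import numpy as np

def image_input_processing(image: np.ndarray,
                           resolution=(224, 224),
                           dtype=np.uint8) -> np.ndarray:
    resized = cv2.resize(image, resolution)   # adjust the resolution
    blob = resized.astype(dtype)              # adjust the data type
    blob = np.transpose(blob, (2, 0, 1))      # HWC -> CHW layout
    return blob[np.newaxis, ...]              # add a batch dimension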
In implementation, the processing type of the target machine learning model is recorded in the model configuration file, where the processing type may be divided into a model task, a detection category, and the like. After the terminal acquires the processing type of the target machine learning model, the corresponding input processing function can be determined according to it.
Optionally, for the input processing function, the input processing function of the target machine learning model without parameter configuration may be acquired in a preset function library based on the processing type and the corresponding relationship between the processing type and the input processing function without parameter configuration;
after the terminal obtains the processing type of the target machine learning model from the model configuration file, the input processing function which is not subjected to parameter configuration and corresponds to the target machine learning model can be determined according to the corresponding relation between the pre-stored processing type and the input processing function which is not subjected to parameter configuration.
Optionally, the output layer identifier of the target machine learning model may be obtained in the model encapsulation file, and for the output processing function, the output data processing function of the target machine learning model, which is not subjected to parameter configuration, may be obtained in a preset function library based on the processing type, the output layer identifier of the target machine learning model, and the corresponding relationship between the processing type, the output layer identifier, and the output processing function, which is not subjected to parameter configuration.
A machine learning model may contain multiple convolutional neural network layers, where the output of each layer may be the input of the next layer, and the data output by different layers may realize different functions. For example, in an age-and-gender detection model, gender detection and age detection can be determined from different vectors output by the network: gender detection may be realized from the data output by the penultimate layer, and age detection from the data output by the last layer. Different layers are different output layers, so different output layers can correspond to different output processing functions, and different output processing functions, which process the data output by the network, can realize different functions. After the terminal obtains the processing type of the model and the output layer identifier of the target machine learning model from the model configuration file obtained by decapsulating the model encapsulation file, the terminal may search, in the preset function library, for the output data processing function of the target machine learning model that has not been parameter-configured, according to the preset correspondence between the processing type plus output layer identifier and the unconfigured output processing function.
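A very small "preset function library" could be modelled with plain dictionaries keyed by the correspondences just described: the processing type for input processing functions, and the processing type together with the output layer identifier for output processing functions. The names and the dictionary layout in the following sketch are illustrative assumptions:

def detection_input_template(data, **params): ...
def detection_output_template(output, **params): ...
def detection_aux_output_template(output, **params): ...

# Correspondence: processing type -> input processing function without parameter configuration
INPUT_FUNCTION_LIBRARY = {
    "DETECTION": detection_input_template,
}

# Correspondence: (processing type, output layer identifier) -> output processing function
# without parameter configuration
OUTPUT_FUNCTION_LIBRARY = {
    ("DETECTION", -1): detection_output_template,       # -1: last output layer
    ("DETECTION", -2): detection_aux_output_template,   # -2: penultimate output layer
}

task_type, layer_position = "DETECTION", -1  # read from the decapsulated model configuration file
unconfigured_input_fn = INPUT_FUNCTION_LIBRARY[task_type]
unconfigured_output_fn = OUTPUT_FUNCTION_LIBRARY[(task_type, layer_position)]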
Step 103, performing parameter configuration on the output data processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output data processing function of the target machine learning model.
The parameter configuration information includes configuration values of preset parameters, and the preset parameters include output parameters.
In implementation, the output processing function of the target machine learning model may be obtained by performing parameter configuration on the output processing function that is not subjected to parameter configuration based on the configuration value of the output parameter.
The configuration value of each output parameter can be inserted into the corresponding insertion position, based on the pre-stored insertion position of each output parameter in the output processing function that has not been parameter-configured, so as to obtain the output data processing function of the target machine learning model.
In an implementation, the configuration value of each parameter may correspond to a parameter name. For example, for Output Size: 200 × 200, the parameter value is 200 × 200 and the parameter name is Output Size. The terminal can store the correspondence between each parameter name and the position (i.e., insertion position) in the code of the output processing function where the parameter is to be inserted. After the terminal obtains the configuration values of the output parameters, the configuration value of each output parameter can be inserted into the insertion position of the corresponding output processing function, so as to obtain the output data processing function of the target machine learning model.
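The application inserts configuration values at stored positions inside the function's code; in a high-level language the same effect can be obtained by binding named parameters, so the following Python sketch uses functools.partial as a stand-in for that insertion step (the function and parameter names are assumptions):

from functools import partial

import numpy as np

def classification_output_template(model_output: np.ndarray, *, num_categories: int) -> int:
    # Output processing function without parameter configuration: it cannot be
    # used until the output parameter 'num_categories' has been supplied.
    return int(np.argmax(model_output[:num_categories]))

output_parameters = {"num_categories": 80}   # configuration value from the parameter configuration information
configured_output_fn = partial(classification_output_template, **output_parameters)
result = configured_output_fn(np.random.rand(80))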
Optionally, the parameter configuration information includes a configuration value of a preset parameter, and the preset parameter may include an input parameter in addition to the output parameter.
The input processing function that has not been parameter-configured is parameter-configured based on the configuration values of the input parameters to obtain the input data processing function of the target machine learning model; specifically, the configuration value of each input parameter is inserted into the corresponding insertion position, based on the pre-stored insertion position of each input parameter in the unconfigured input processing function, to obtain the input data processing function of the target machine learning model.
In an implementation, the configuration value of each parameter may correspond to a parameter name. For example, for Input Size: 200, the parameter value is 200 and the parameter name is Input Size. The terminal can store the correspondence between each parameter name and the position (i.e., insertion position) in the code of the corresponding input processing function where the parameter is to be inserted. After the terminal obtains the configuration values of the input parameters, the configuration value of each input parameter can be inserted into the insertion position of the corresponding input processing function, so as to obtain the input data processing function of the target machine learning model.
Step 104, acquiring target input data to be input into the target machine learning model, acquiring target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on the output data processing function to acquire a processing result corresponding to the target input data.
In practice, after obtaining the output processing function of the target machine learning model, the target input data may be input into the target machine learning model, and the target machine learning model may output the target output data corresponding to the target input data. And then, processing the target output data according to the output processing function of the target machine learning model to obtain the processing result of the target machine learning model on the target input data.
Alternatively, when the target machine learning model has an input processing function, the corresponding processing may be as follows: the method comprises the steps of processing target input data based on an input processing function to obtain processed target input data, inputting the processed target input data into a target machine learning model to obtain corresponding target output data, and processing the target output data based on an output data processing function to obtain a processing result corresponding to the target input data.
In implementation, after the input processing function and the output processing function of the target machine learning model have been configured, as shown in fig. 2, the target input data to be input into the target machine learning model may first be processed by the input processing function (composed of the unconfigured input processing function determined in the function library and the input parameters in the parameter configuration information) to obtain processed target input data; the processed target input data is then input into the target machine learning model to obtain corresponding target output data; and the target output data is finally processed by the output processing function (composed of the unconfigured output processing function determined in the function library and the output parameters in the parameter configuration information) to obtain the processing result corresponding to the target input data.
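Putting the pieces together, the flow of fig. 2 reduces to three calls in the following sketch; run_model stands for invoking the execution code of the target machine learning model and is a placeholder, not an interface defined by the application:

def apply_machine_learning_model(target_input, run_model, input_fn, output_fn):
    processed_input = input_fn(target_input)    # configured input processing function
    target_output = run_model(processed_input)  # execution code of the target machine learning model
    return output_fn(target_output)             # configured output processing function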
After the execution code, the processing type and the parameter configuration information of the target machine learning model are acquired, the output processing function that corresponds to the target machine learning model and has not been parameter-configured can be determined according to the processing type. That output processing function is then parameter-configured according to the parameter configuration information to obtain the output processing function of the target machine learning model, and finally the data output by the target machine learning model is processed by the output processing function to directly obtain the processing result corresponding to the target input data. Because the acquisition of the execution code, the processing type and the parameter configuration information of the target machine learning model, as well as the configuration of the output processing function, are performed automatically by the execution device, the whole process requires no participation of technicians. Therefore, the problem that the machine learning model cannot be used normally because technicians confuse the machine learning model with its corresponding parameters can be avoided.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 3 is a device for applying a machine learning model according to an embodiment of the present application, where the device may be a terminal in the foregoing embodiment, and the device includes:
an obtaining module 310, configured to obtain an execution code, a processing type, and parameter configuration information of a target machine learning model, where the parameter configuration information includes configuration values of preset parameters; acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type;
a configuration module 320, configured to perform parameter configuration on the output processing function that is not subjected to parameter configuration based on the parameter configuration information, so as to obtain an output processing function of the target machine learning model;
the processing module 330 is configured to obtain target input data to be input to the target machine learning model, obtain target output data based on an execution code of the target machine learning model and the target input data, and process the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
Optionally, the obtaining module 310 is configured to:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model.
Optionally, the obtaining module 310 is further configured to:
acquiring an input processing function of the target machine learning model without parameter configuration based on the processing type;
the configuration module 320 is further configured to: performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model;
the process model 320 is further configured to: and processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
Optionally, the obtaining module 310 is configured to obtain the data of the target object
Acquiring the input processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function without parameter configuration;
acquiring an output layer identifier of the target machine learning model in the model encapsulation file;
and acquiring the output data processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function without parameter configuration.
Optionally, the preset parameters include input parameters and output parameters;
the configuration module 320 is configured to: performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model; and performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model.
Optionally, the configuration module 320 is configured to:
inserting the configuration value of each input parameter into the corresponding insertion position based on the pre-stored insertion position of each input parameter in the input processing function without parameter configuration to obtain the input data processing function of the target machine learning model;
and inserting the configuration value of each output parameter into the corresponding insertion position, based on the pre-stored insertion position of each output parameter in the output processing function without parameter configuration, to obtain the output data processing function of the target machine learning model.
It should be noted that: in the device for applying a machine learning model according to the above embodiment, when the machine learning model is applied, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the apparatus applying the machine learning model and the method embodiment applying the machine learning model provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 4 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present application. The terminal 400 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the method of applying a machine learning model provided by method embodiments herein.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may be an alternating-current source, a direct-current source, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side bezel of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left/right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls operable controls on the UI according to the user's pressure operation on the touch display screen 405. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect a fingerprint of the user, and the processor 401 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is trusted, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or a vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or the vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
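As a purely illustrative sketch of the brightness adjustment described above (the threshold and scaling values are assumed for the example and do not come from this application), the control logic might look like the following Python snippet:

```python
def adjust_brightness(ambient_lux, min_brightness=0.2, max_brightness=1.0, full_scale_lux=1000.0):
    """Map the ambient light intensity collected by the optical sensor to a display
    brightness level between min_brightness and max_brightness (assumed values)."""
    level = min(ambient_lux / full_scale_lux, 1.0)  # higher ambient light -> higher brightness
    return min_brightness + (max_brightness - min_brightness) * level
```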
A proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright screen state to the dark screen state; when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the dark screen state to the bright screen state.
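Likewise, a minimal sketch of the screen-state switching described above, assuming a hypothetical distance threshold:

```python
def update_screen_state(distance_cm, currently_bright, threshold_cm=5.0):
    """Return the new screen state (True = bright, False = dark) given the distance
    reported by the proximity sensor; the 5 cm threshold is an assumption."""
    if currently_bright and distance_cm < threshold_cm:
        return False   # user is approaching: switch from bright to dark screen state
    if not currently_bright and distance_cm >= threshold_cm:
        return True    # user is moving away: switch back to the bright screen state
    return currently_bright
```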
Those skilled in the art will appreciate that the structure shown in Fig. 4 does not constitute a limitation on the terminal 400, which may include more or fewer components than those shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the method of applying a machine learning model in the above embodiments is also provided. The computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method of applying a machine learning model, the method comprising:
acquiring an execution code, a processing type and parameter configuration information of a target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters;
acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type;
performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model;
acquiring target input data to be input into the target machine learning model, obtaining target output data based on an execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
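The following Python sketch (illustrative only, not part of the claims) shows one possible reading of the sequence recited in claim 1. Every name in it, including apply_model, OUTPUT_FUNCTION_LIBRARY, the "classification" processing type and the top_k parameter, is hypothetical and chosen only to make the steps concrete.

```python
from functools import partial

def classification_postprocess(raw_output, top_k):
    """Output processing function without parameter configuration: return the indices
    of the top_k highest scores (top_k is the preset parameter to be configured)."""
    return sorted(range(len(raw_output)), key=lambda i: raw_output[i], reverse=True)[:top_k]

# Processing type -> output processing function without parameter configuration.
OUTPUT_FUNCTION_LIBRARY = {"classification": classification_postprocess}

def apply_model(execution_code, processing_type, parameter_config, target_input):
    # 1. Obtain the output processing function that has not been parameter-configured.
    unconfigured_fn = OUTPUT_FUNCTION_LIBRARY[processing_type]
    # 2. Configure it with the configuration values carried by the parameter configuration information.
    output_fn = partial(unconfigured_fn, top_k=parameter_config["top_k"])
    # 3. Run the model's execution code on the target input data.
    target_output = execution_code(target_input)
    # 4. Process the raw output to obtain the final processing result.
    return output_fn(target_output)

# Example with a stand-in "execution code" that returns three class scores:
# apply_model(lambda x: [0.1, 0.7, 0.2], "classification", {"top_k": 1}, None) -> [1]
```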
2. The method of claim 1, wherein obtaining the execution code, the processing type, and the parameter configuration information of the target machine learning model comprises:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model.
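As a non-authoritative illustration of claim 2, the sketch below assumes the model encapsulation file is a ZIP archive carrying a JSON manifest; the patent does not prescribe any concrete package format, so the file names and manifest keys are invented for the example.

```python
import json
import zipfile

def unpack_model_package(path):
    """Decapsulate a (hypothetical) model encapsulation file into its three parts."""
    with zipfile.ZipFile(path) as package:
        manifest = json.loads(package.read("manifest.json"))
        execution_code = package.read(manifest["model_file"])   # serialized model / execution code
        processing_type = manifest["processing_type"]           # e.g. "classification"
        parameter_config = manifest["parameter_config"]         # configuration values of preset parameters
    return execution_code, processing_type, parameter_config
```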
3. The method of claim 2, further comprising:
acquiring an input processing function of the target machine learning model without parameter configuration based on the processing type;
performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model;
obtaining target output data based on the execution code of the target machine learning model and the target input data, including:
and processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
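A minimal sketch of the input path added by claim 3, under the assumption that the input processing step is a simple resolution adjustment; resize_input, input_width and input_height are illustrative names rather than terms from the claim.

```python
from functools import partial

def resize_input(image, width, height):
    """Input processing function without parameter configuration: crop a nested-list
    'image' to height rows of width values (a stand-in for real preprocessing)."""
    return [row[:width] for row in image[:height]]

def run_with_preprocessing(execution_code, parameter_config, target_input):
    # Configure the input processing function with the preset parameter values.
    input_fn = partial(resize_input,
                       width=parameter_config["input_width"],
                       height=parameter_config["input_height"])
    processed_input = input_fn(target_input)   # processed target input data
    return execution_code(processed_input)     # corresponding target output data
```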
4. The method of claim 3, wherein obtaining the non-parameter configured input processing function of the target machine learning model based on the processing type comprises:
acquiring the input processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function without parameter configuration;
the obtaining of the output processing function of the target machine learning model without parameter configuration based on the processing type includes:
acquiring an output layer identifier of the target machine learning model in the model encapsulation file;
and acquiring the output data processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function without parameter configuration.
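One hedged way to picture the preset function library and the correspondences recited in claim 4 is a pair of lookup tables, one keyed by processing type and one keyed by the combination of processing type and output layer identifier; the keys and functions below are invented for illustration.

```python
# Processing type -> input processing function without parameter configuration.
INPUT_FUNCTION_LIBRARY = {
    "classification": lambda image, width, height: [row[:width] for row in image[:height]],
}

# (Processing type, output layer identifier) -> output processing function
# without parameter configuration.
OUTPUT_FUNCTION_LIBRARY = {
    ("classification", "softmax_out"): lambda scores, top_k: sorted(
        range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k],
}

def lookup_functions(processing_type, output_layer_id):
    input_fn = INPUT_FUNCTION_LIBRARY[processing_type]
    output_fn = OUTPUT_FUNCTION_LIBRARY[(processing_type, output_layer_id)]
    return input_fn, output_fn
```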
5. The method of claim 3, wherein the preset parameters comprise input parameters and output parameters;
the performing parameter configuration on the input processing function without parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model includes:
performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model;
the performing parameter configuration on the output processing function without parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model includes:
and performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model.
6. The method according to claim 5, wherein the performing parameter configuration on the input processing function without parameter configuration based on the configuration value of the input parameter to obtain the input data processing function of the target machine learning model comprises:
inserting the configuration value of each input parameter into the corresponding insertion position based on the pre-stored insertion position of each input parameter in the input processing function without parameter configuration to obtain the input data processing function of the target machine learning model;
the performing parameter configuration on the output processing function without parameter configuration based on the configuration value of the output parameter to obtain the output data processing function of the target machine learning model includes:
and inserting the configuration value of each output parameter into the corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function without parameter configuration to obtain the output data processing function of the target machine learning model.
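Claims 5 and 6 can be read as binding each configuration value into a pre-stored insertion position of the corresponding unconfigured function. The sketch below models an insertion position as a keyword-argument slot; this is only one possible realization, and all names are hypothetical.

```python
from functools import partial

# Pre-stored insertion positions: preset parameter name -> slot in the unconfigured function.
OUTPUT_INSERTION_POSITIONS = {"output_width": "width", "output_height": "height"}

def reshape_output(flat_scores, width, height):
    """Output processing function without parameter configuration: reshape a flat list into rows."""
    return [flat_scores[r * width:(r + 1) * width] for r in range(height)]

def configure(unconfigured_fn, insertion_positions, parameter_config):
    """Insert the configuration value of each parameter into its insertion position."""
    bound = {slot: parameter_config[name] for name, slot in insertion_positions.items()}
    return partial(unconfigured_fn, **bound)

# configure(reshape_output, OUTPUT_INSERTION_POSITIONS,
#           {"output_width": 2, "output_height": 2})([1, 2, 3, 4]) -> [[1, 2], [3, 4]]
```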
7. The method of claim 3, wherein the configuration values of the preset parameters comprise a resolution of the input data and a resolution of the output data.
8. An apparatus for applying a machine learning model, the apparatus comprising:
the obtaining module is used for acquiring an execution code, a processing type and parameter configuration information of the target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters; and acquiring an output processing function of the target machine learning model without parameter configuration based on the processing type;
the configuration module is used for carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model;
the processing module is used for acquiring target input data to be input into the target machine learning model, acquiring target output data based on an execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to acquire a processing result corresponding to the target input data.
9. The apparatus of claim 8, wherein the obtaining module is configured to:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model.
10. The apparatus of claim 9, wherein the obtaining module is further configured to:
acquiring an input processing function of the target machine learning model without parameter configuration based on the processing type;
the configuration module is further configured to: perform parameter configuration on the input processing function which is not subjected to parameter configuration, based on the parameter configuration information, to obtain the input processing function of the target machine learning model;
the processing module is further configured to: process the target input data based on the input processing function to obtain processed target input data, and input the processed target input data into the target machine learning model to obtain corresponding target output data.
11. The apparatus of claim 10, wherein the obtaining module is configured to:
Acquiring the input processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function without parameter configuration;
acquiring an output layer identifier of the target machine learning model in the model encapsulation file;
and acquiring the output data processing function of the target machine learning model without parameter configuration in a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function without parameter configuration.
12. The apparatus of claim 10, wherein the preset parameters comprise input parameters and output parameters;
the configuration module is configured to: perform parameter configuration on the input processing function which is not subjected to parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model; and perform parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model.
13. The apparatus of claim 12, wherein the configuration module is configured to:
inserting the configuration value of each input parameter into the corresponding insertion position based on the pre-stored insertion position of each input parameter in the input processing function without parameter configuration to obtain the input data processing function of the target machine learning model;
and inserting the configuration value of each output parameter into the corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function without parameter configuration to obtain the output data processing function of the target machine learning model.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to perform operations performed by a method of applying a machine learning model according to any one of claims 1 to 7.
CN202011096940.7A 2020-10-14 2020-10-14 Method, device and equipment for applying machine learning model Active CN112163677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011096940.7A CN112163677B (en) 2020-10-14 2020-10-14 Method, device and equipment for applying machine learning model

Publications (2)

Publication Number Publication Date
CN112163677A true CN112163677A (en) 2021-01-01
CN112163677B CN112163677B (en) 2023-09-19

Family

ID=73868220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011096940.7A Active CN112163677B (en) 2020-10-14 2020-10-14 Method, device and equipment for applying machine learning model

Country Status (1)

Country Link
CN (1) CN112163677B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518647A (en) * 2013-07-05 2016-04-20 里索非特德夫公司 Systems and methods for creating and implementing artificially intelligent agent or system
US20180136912A1 (en) * 2016-11-17 2018-05-17 The Mathworks, Inc. Systems and methods for automatically generating code for deep learning systems
CN110363291A (en) * 2018-03-26 2019-10-22 上海寒武纪信息科技有限公司 Operation method, device, computer equipment and the storage medium of neural network
CN110580527A (en) * 2018-06-08 2019-12-17 上海寒武纪信息科技有限公司 method and device for generating universal machine learning model and storage medium
CN110839128A (en) * 2018-08-16 2020-02-25 杭州海康威视数字技术股份有限公司 Photographing behavior detection method and device and storage medium
CN109325541A (en) * 2018-09-30 2019-02-12 北京字节跳动网络技术有限公司 Method and apparatus for training pattern
CN110147251A (en) * 2019-01-28 2019-08-20 腾讯科技(深圳)有限公司 For calculating the framework, chip and calculation method of neural network model
US20200258235A1 (en) * 2019-02-07 2020-08-13 Vysioneer INC. Method and apparatus for automated target and tissue segmentation using multi-modal imaging and ensemble machine learning models
US20200320428A1 (en) * 2019-04-08 2020-10-08 International Business Machines Corporation Fairness improvement through reinforcement learning
CN110163345A (en) * 2019-05-09 2019-08-23 腾讯科技(深圳)有限公司 A kind of Processing with Neural Network method, apparatus, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. Egmont-Petersen et al.: "Image processing with neural networks - a review", Pattern Recognition *
Chen Lu et al.: "Research on change detection methods for urban high-resolution remote sensing images based on deep learning", Application Research of Computers *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570030A (en) * 2021-01-18 2021-10-29 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN113570030B (en) * 2021-01-18 2024-05-10 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112163677B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN109829456B (en) Image identification method and device and terminal
CN110839128B (en) Photographing behavior detection method and device and storage medium
CN108717365B (en) Method and device for executing function in application program
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN110784370B (en) Method and device for testing equipment, electronic equipment and medium
CN111027490A (en) Face attribute recognition method and device and storage medium
CN108734662B (en) Method and device for displaying icons
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN110677713B (en) Video image processing method and device and storage medium
CN110290191B (en) Resource transfer result processing method, device, server, terminal and storage medium
CN109783176B (en) Page switching method and device
CN111753606A (en) Intelligent model upgrading method and device
CN109117466B (en) Table format conversion method, device, equipment and storage medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN111881423A (en) Method, device and system for limiting function use authorization
CN112163677B (en) Method, device and equipment for applying machine learning model
CN110933454A (en) Method, device, equipment and storage medium for processing live broadcast budding gift
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium
CN111128115B (en) Information verification method and device, electronic equipment and storage medium
CN111294320B (en) Data conversion method and device
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN110992954A (en) Method, device, equipment and storage medium for voice recognition
CN111708581A (en) Application starting method, device, equipment and computer storage medium
CN112990424A (en) Method and device for training neural network model
CN112308104A (en) Abnormity identification method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant