CN111797870A - Optimization method and device of algorithm model, storage medium and electronic equipment - Google Patents

Optimization method and device of algorithm model, storage medium and electronic equipment

Info

Publication number
CN111797870A
CN111797870A (application number CN201910282430.XA)
Authority
CN
China
Prior art keywords
task
terminal
algorithm model
algorithm
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282430.XA
Other languages
Chinese (zh)
Inventor
何明
陈仲铭
黄粟
刘耀勇
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282430.XA priority Critical patent/CN111797870A/en
Publication of CN111797870A publication Critical patent/CN111797870A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/285Clustering or classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The application discloses an optimization method and device for an algorithm model, a storage medium and an electronic device. The method includes: acquiring a plurality of terminal tasks in the electronic device and classifying the terminal tasks according to a preset database; learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases; when the electronic device executes a new terminal task, matching a target knowledge base among the plurality of knowledge bases according to the task type of the new terminal task; and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model. In this way, when a new terminal task is executed, a knowledge base matching the characteristics of the task can be selected to optimize the algorithm model, which further improves the algorithm model's processing effect on the task.

Description

Optimization method and device of algorithm model, storage medium and electronic equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an optimization method and device of an algorithm model, a storage medium and electronic equipment.
Background
With the development of electronic technology, electronic devices such as smartphones have become increasingly intelligent. An electronic device may perform data processing through various algorithm models to provide functions to the user. For example, the electronic device may learn the behavior characteristics of the user through an algorithm model and thereby provide personalized services.
In existing approaches, most tasks are trained independently with a single algorithm, without cooperation or combination with other tasks, so the final training effect often falls short of expectations. In fact, there is some commonality or partial collaboration between different tasks; recommendation tasks and prediction tasks, for example, can share models and knowledge. In addition, most task models are trained only once; as user interests and behaviors change over time, a model built on earlier data does not necessarily meet the current new requirements well.
Disclosure of Invention
The application provides an optimization method and device of an algorithm model, a storage medium and electronic equipment, which can improve the task processing effect of the algorithm model.
In a first aspect, an embodiment of the present application provides an optimization method for an algorithm model, including:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
respectively learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
In a second aspect, an embodiment of the present application provides an optimization apparatus for an algorithm model, including: the device comprises a classification module, a learning module, a matching module and an optimization module;
the classification module is used for acquiring a plurality of terminal tasks in the electronic equipment and classifying the terminal tasks according to a preset database;
the learning module is used for learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases;
the matching module is used for matching a target knowledge base in the plurality of knowledge bases according to the task type of a new terminal task when the electronic equipment executes the new terminal task;
and the optimization module is used for training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
In a third aspect, an embodiment of the present application provides a storage medium, on which a computer program is stored, which, when running on a computer, causes the computer to execute the above-mentioned optimization method of an algorithm model.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a plurality of instructions, and the processor loads the instructions in the memory to perform the following steps:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
respectively learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
The optimization method of the algorithm model provided by the application can acquire a plurality of terminal tasks in the electronic device, classify the terminal tasks according to a preset database, and learn the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases. When the electronic device executes a new terminal task, a target knowledge base is matched among the plurality of knowledge bases according to the task type of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base to optimize its parameters. Thus, when a new terminal task is executed, a knowledge base matching the characteristics of the task can be selected to optimize the algorithm model, which further improves the algorithm model's processing effect on the task.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is an application scenario diagram of an optimization method of an algorithm model according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of an optimization method of an algorithm model according to an embodiment of the present disclosure.
Fig. 3 is another schematic flow chart of an optimization method of an algorithm model according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an optimization apparatus of an algorithm model according to an embodiment of the present disclosure.
Fig. 5 is another schematic structural diagram of an optimization apparatus of an algorithm model according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will be referred to, several times, as being performed by a computer, the computer performing operations involving a processing unit of the computer in electronic signals representing data in a structured form. This operation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure that is a physical location of the memory that has particular characteristics defined by the data format. However, while the principles of the application have been described in language specific to above, it is not intended to be limited to the specific form set forth herein, and it will be recognized by those of ordinary skill in the art that various of the steps and operations described below may be implemented in hardware.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an optimization method of an algorithm model according to an embodiment of the present application. The optimization method of the algorithm model is applied to an electronic device in which a panoramic perception architecture is arranged. The panoramic perception architecture is the integration of hardware and software in the electronic device used to implement the optimization method of the algorithm model.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic device itself or information from the external environment. The information perception layer may include a plurality of sensors, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
The distance sensor may be used to detect the distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor may be used to detect light information of that environment. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor based on the Hall effect and may be used to implement automatic control of the electronic device. The position sensor may be used to detect the current geographic location of the electronic device. The gyroscope may be used to detect the angular velocity of the electronic device in various directions. The inertial sensor may be used to detect motion data of the electronic device. The attitude sensor may be used to sense attitude information of the electronic device. The barometer may be used to detect the air pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
The data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information perception layer.
Data cleaning refers to cleaning the large amount of data acquired by the information perception layer to remove invalid and duplicate data. Data integration refers to integrating multiple single-dimensional data acquired by the information perception layer into a higher or more abstract dimension, so that data from multiple single dimensions can be processed together. Data transformation refers to converting the type or format of the data acquired by the information perception layer so that the transformed data meets processing requirements. Data reduction refers to reducing the data volume as much as possible while preserving the original character of the data.
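As an illustrative aid, the sketch below shows one way these four data processing steps could look in Python. It is a minimal sketch under assumed record layouts; the function names and the per-record fields ("sensor", "timestamp", "value") are assumptions for illustration, not part of the original disclosure.

```python
# Hypothetical sketch of the data processing layer: cleaning, integration,
# transformation and reduction over sensor records.

from collections import OrderedDict

def clean(records):
    """Data cleaning: drop invalid (None-valued) and duplicate sensor records."""
    seen, cleaned = set(), []
    for rec in records:
        key = (rec.get("sensor"), rec.get("timestamp"))
        if rec.get("value") is None or key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

def integrate(records):
    """Data integration: merge single-dimension readings taken at the same
    timestamp into one multi-dimensional sample."""
    merged = OrderedDict()
    for rec in records:
        merged.setdefault(rec["timestamp"], {})[rec["sensor"]] = rec["value"]
    return list(merged.values())

def transform(samples):
    """Data transformation: cast every reading to float so later algorithm
    modules receive a uniform format."""
    return [{k: float(v) for k, v in s.items()} for s in samples]

def reduce_volume(samples, keep_every=2):
    """Data reduction: keep every n-th sample to shrink the data volume while
    preserving its overall shape."""
    return samples[::keep_every]
```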
The feature extraction layer is used for extracting features from the data processed by the data processing layer. The extracted features may reflect the state of the electronic device itself, the state of the user, the state of the environment in which the electronic device is located, and so on.
The feature extraction layer may extract features, or process the extracted features, by methods such as filter methods, wrapper methods, or ensemble methods.
A filter method filters the extracted features to remove redundant feature data. A wrapper method is used to screen the extracted features. An ensemble method combines multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further include a plurality of algorithms, each of which can be used to analyze and process data; together these algorithms form an algorithm library. For example, the algorithm library may include a Markov algorithm, latent Dirichlet allocation, a Bayesian classification algorithm, a support vector machine, a deep deterministic policy gradient algorithm, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, recurrent neural networks, and the like.
The embodiment of the present application provides an optimization method of an algorithm model. The execution subject of the method may be the optimization device of the algorithm model provided in the embodiment of the present application, or an electronic device integrated with that optimization device; the optimization device may be implemented in hardware or in software.
The embodiments of the present application will be described from the perspective of an optimization device of an algorithm model, which may be integrated in an electronic device. The optimization method of the algorithm model includes the following steps:
acquiring a plurality of terminal tasks in the electronic device, and classifying the terminal tasks according to a preset database;
learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic device executes a new terminal task, matching a target knowledge base among the plurality of knowledge bases according to the task type of the new terminal task, and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating an optimization method of an algorithm model according to an embodiment of the present disclosure. The optimization method of the algorithm model provided by the embodiment of the application is applied to the electronic equipment, and the specific flow can be as follows:
step 101, acquiring a plurality of terminal tasks in the electronic device, and classifying the terminal tasks according to a preset database.
In one embodiment, the electronic device includes a plurality of terminal tasks, and each terminal task has a corresponding algorithm model for performing that task, such as process scheduling, background cleaning, performance optimization, or song recommendation. The algorithm model may include one or more algorithm modules, each used to process data according to a certain algorithm. For example, the algorithm model may include a Markov algorithm module and a convolutional neural network algorithm module, where the Markov algorithm module processes input data according to a Markov algorithm and the convolutional neural network algorithm module processes input data according to a convolutional neural network algorithm.
In practical applications, the algorithm model may be configured according to the tasks that the electronic device needs to perform. The algorithm model includes a configuration file, and the configuration file declares the algorithm modules to be called. When the electronic device executes the task corresponding to the algorithm model, the main program reads the configuration file to generate an algorithm model graph. The algorithm model graph includes the algorithm modules to be called and the data input/output relations among them. Then the main program calls the declared algorithm modules according to the algorithm model graph and generates an executable program to process the task.
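To make the configuration-driven model graph concrete, here is a minimal sketch of how a configuration file could declare algorithm modules and how a main program might chain them; the JSON config format, module names and registry mechanism are illustrative assumptions rather than the patent's prescribed implementation.

```python
# Hypothetical sketch: a configuration file declares which algorithm modules
# an algorithm model calls, and the main program wires them into a pipeline.

import json

MODULE_REGISTRY = {}          # module name -> callable algorithm module

def register(name):
    def deco(fn):
        MODULE_REGISTRY[name] = fn
        return fn
    return deco

@register("markov")
def markov_module(data):
    # placeholder: process input data with a Markov-style algorithm
    return data

@register("cnn")
def cnn_module(data):
    # placeholder: process input data with a convolutional neural network
    return data

def build_model_graph(config_text):
    """Read the configuration file and return the ordered list of modules
    (the 'algorithm model graph') to execute."""
    config = json.loads(config_text)   # e.g. '{"modules": ["markov", "cnn"]}'
    return [MODULE_REGISTRY[name] for name in config["modules"]]

def run_task(config_text, task_input):
    output = task_input
    for module in build_model_graph(config_text):
        output = module(output)        # output of one module feeds the next
    return output
```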
In an embodiment, the electronic device processes the terminal task through the algorithm model to obtain an output result, where the output may include the result obtained after the algorithm modules process the terminal task. For example, when the terminal task is a data cleaning task, its input data may include raw data collected by the information perception layer, and its output data may include the cleaned data.
In an embodiment, the electronic device acquires a plurality of terminal tasks in the electronic device and classifies them according to a preset database, obtaining a task classification list in which related tasks are grouped together. The preset database may be expert data, i.e. data from an expert system database that contains a large amount of expert-level knowledge and experience in a given field and can solve problems in that field using the knowledge and problem-solving methods of human experts. For example, terminal tasks with similar algorithm models may be classified together, or terminal tasks with stronger relevance to each other may be classified together.
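As a minimal sketch of this classification step, the snippet below groups terminal tasks by a category looked up in an expert-data mapping; the mapping contents, task names and function name are assumptions made for illustration only.

```python
# Illustrative sketch: classify terminal tasks against a preset "expert"
# database that maps task names to categories. The mapping is hypothetical.

EXPERT_DATABASE = {
    "song_recommendation":  "recommendation",
    "movie_recommendation": "recommendation",
    "background_cleaning":  "cleaning",
    "photo_cleaning":       "cleaning",
    "face_recognition":     "recognition",
    "image_recognition":    "recognition",
}

def classify_tasks(terminal_tasks):
    """Group terminal tasks by the category recorded in the expert data."""
    classified = {}
    for task in terminal_tasks:
        category = EXPERT_DATABASE.get(task, "unknown")
        classified.setdefault(category, []).append(task)
    return classified

# classify_tasks(["song_recommendation", "photo_cleaning", "face_recognition"])
# -> {"recommendation": [...], "cleaning": [...], "recognition": [...]}
```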
Step 102, learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases.
In one embodiment, the preset algorithm may be a supervised learning algorithm. A supervised learning algorithm learns a function (model parameters) from a given training data set, and when new data arrives, it can predict the result using that function. The training set for supervised learning includes inputs and outputs, also referred to as features and targets, where the targets are labeled by humans. Supervised learning trains an optimal model on existing training samples, maps inputs to corresponding outputs with that model, and judges the outputs so as to achieve classification. The supervised learning algorithm may include KNN (K-nearest-neighbor classification) and SVM (support vector machine). Different knowledge bases are obtained by applying the supervised learning algorithm to the terminal tasks under different classifications. A knowledge base, as used in large-scale knowledge processing, natural language understanding, knowledge management, automatic question answering, reasoning and similar fields, is a rule set used in expert system design together with the facts and data associated with those rules.
Further, after the plurality of knowledge bases are obtained, each may be annotated with a corresponding task label. In an embodiment, the labeling may follow the classification label of the terminal tasks, where the task label is, for example, recognition, prediction, or classification.
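One way this step could be realized is sketched below, assuming scikit-learn and per-category labeled training data: a supervised learner (here an SVM) is fitted for each task category, and the fitted model plus its category label plays the role of one knowledge base. The data layout and dictionary structure are assumptions for illustration.

```python
# Sketch of building one knowledge base per task category with a supervised
# learner (an SVM via scikit-learn); the data layout is assumed.

from sklearn.svm import SVC

def build_knowledge_bases(classified_task_data):
    """classified_task_data: {category: (features, targets)} gathered from
    the terminal tasks grouped under that category.
    Returns {category: knowledge_base}, where each knowledge base holds the
    fitted model and its task label."""
    knowledge_bases = {}
    for category, (features, targets) in classified_task_data.items():
        model = SVC(kernel="rbf", probability=True)
        model.fit(features, targets)          # supervised learning step
        knowledge_bases[category] = {"model": model, "label": category}
    return knowledge_bases
```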
Step 103, when the electronic device executes a new terminal task, matching the target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task.
In an embodiment, when the electronic device needs to complete a new terminal task, it does not yet store a knowledge base corresponding to that task, so the target knowledge base may be matched among the plurality of knowledge bases according to the task type of the new terminal task, for example by analyzing related attributes of the new terminal task such as data quality and application type. Specifically, the task type similarity between each existing terminal task and the new terminal task can be calculated, and the knowledge base corresponding to the terminal task with the highest similarity is selected.
For example, the terminal tasks in the electronic device include song recommendation, movie recommendation, process cleaning, image recognition, and the like, and the new terminal task is novel (e-book) recommendation. Since the new terminal task is highly similar in type to song recommendation and movie recommendation, the knowledge bases corresponding to song recommendation and movie recommendation can be selected.
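A minimal sketch of such matching follows; the similarity measure (token overlap between task-type descriptions) and function names are illustrative assumptions, not the metric prescribed by the patent.

```python
# Sketch of matching a target knowledge base by task-type similarity.

def type_similarity(type_a, type_b):
    """Jaccard similarity between two task-type descriptions."""
    a, b = set(type_a.split()), set(type_b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def match_knowledge_base(new_task_type, knowledge_bases):
    """knowledge_bases: {task_type: knowledge_base}. Return the knowledge
    base whose task type is most similar to the new terminal task's type."""
    best_type = max(knowledge_bases,
                    key=lambda t: type_similarity(new_task_type, t))
    return knowledge_bases[best_type]

# match_knowledge_base("novel recommendation",
#                      {"song recommendation": kb1, "image recognition": kb2})
# would return kb1, because "recommendation" overlaps.
```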
Step 104, training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
In an embodiment, the target knowledge base selected in step 103 is fused into the algorithm model of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base to optimize its parameters. In this way, the learning of the new task can fully combine prior knowledge from other related tasks, helping the new terminal task to learn better and improving its completion efficiency and accuracy.
At this point the electronic device has adjusted the algorithm model of the new terminal task, thereby optimizing the algorithm model, so that the adjusted model's handling of the new terminal task can be closer to the user's usage habits and the accuracy of task processing is improved.
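The sketch below illustrates one way to fuse the matched knowledge base into the new task's model: extract a training set (prior knowledge) from the target knowledge base, warm-start from its model, and continue training on the new task's own data. The knowledge-base fields and the sklearn-style fit interface are assumptions for illustration.

```python
# Sketch of optimizing the new task's algorithm model with the matched
# knowledge base.

import copy

def optimize_model(target_knowledge_base, new_task_features, new_task_targets):
    """Extract a training set from the target knowledge base, initialize the
    new task's model from it, and continue training on the new task data."""
    # 1. Extract a training set (prior knowledge) from the knowledge base.
    prior_X = target_knowledge_base["features"]
    prior_y = target_knowledge_base["targets"]

    # 2. Start from the knowledge base's model rather than from scratch.
    model = copy.deepcopy(target_knowledge_base["model"])

    # 3. Train on prior knowledge plus the new task's samples, so the final
    #    parameters reflect both sources.
    model.fit(list(prior_X) + list(new_task_features),
              list(prior_y) + list(new_task_targets))
    return model
```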
As can be seen from the above, the optimization method of the algorithm model provided in the embodiment of the present application acquires a plurality of terminal tasks in the electronic device, classifies them according to a preset database, and learns the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases. When the electronic device executes a new terminal task, a target knowledge base is matched among the plurality of knowledge bases according to the task type of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base to optimize its parameters. Thus, when a new terminal task is executed, a knowledge base matching the characteristics of the task can be selected to optimize the algorithm model, further improving the algorithm model's processing effect on the task.
The optimization method of the present application will be further described below on the basis of the method described in the above embodiment. Referring to fig. 3, fig. 3 is another schematic flow chart of an optimization method of an algorithm model according to an embodiment of the present application, where the optimization method of the algorithm model includes:
step 201, acquiring a plurality of terminal tasks in the electronic device, and classifying the terminal tasks according to expert data.
In one embodiment, the electronic device includes a plurality of terminal tasks, each terminal task including a corresponding algorithmic model for performing different terminal tasks, such as process scheduling, background cleaning, performance optimization, song recommendation, and the like. The algorithm model may include one or more algorithm modules, each for processing data according to a certain algorithm.
In an embodiment, the electronic device obtains a plurality of terminal tasks in the electronic device, and classifies the terminal tasks according to expert data to obtain a task classification list with certain relevance.
Step 202, acquiring task type information of the terminal tasks, and sequencing the classified terminal tasks according to the task type information.
The terminal tasks are sorted based on the classification result of step 201. In an embodiment, the terminal tasks may be sorted according to their task types, where the task type may reflect the usage of the terminal task; for example, the terminal tasks on the electronic device may be classified into cleaning, prediction, recognition, and similar types according to their usage. The cleaning type may include terminal tasks such as background cleaning, photo cleaning, and address book cleaning; the prediction type may include terminal tasks such as music prediction, movie prediction, and application prediction; the recognition type may include terminal tasks such as character recognition, face recognition, and image recognition.
After the task type information of the terminal tasks is obtained, the classified terminal tasks are further sorted according to the task type information and the order among the terminal tasks is adjusted, yielding a task sequence list. For example, arranging a recommendation task next to a prediction task facilitates collaboration and knowledge sharing between them; likewise, arranging an image classification task next to an in-image object recognition task also facilitates collaboration and knowledge sharing.
Step 203, dividing the terminal tasks into a plurality of sets according to the sorting.
After the terminal tasks are sorted, they may be further divided into a plurality of sets according to the sorting. In an embodiment, a preset number of terminal tasks may be placed in each set in order, for example every 5 terminal tasks form one set, so that each set contains the same number of tasks. In other embodiments, the number of terminal tasks in each set may differ; for example, the first 10 terminal tasks form one set, the first 20 terminal tasks form the next set, and so on.
Step 204, learning the terminal tasks in the sets respectively through a preset algorithm to obtain a plurality of knowledge bases.
In an embodiment, the preset algorithm may be a lifelong supervised learning algorithm. For example, a lifelong supervised learning algorithm is used to learn, in order, the terminal tasks in the sets obtained in step 203, yielding knowledge bases for different stages. For example, the first 10 tasks are learned in sequence and the resulting knowledge base is taken as the knowledge base of the first stage, with corresponding task labels such as classification or clustering attached; then the first 20 tasks are learned in sequence and the resulting knowledge base is taken as the knowledge base of the second stage, and so on. Finally, a staged knowledge base sequence is obtained, and corresponding task labels can be attached, for example according to the classification labels of the terminal tasks, such as recognition, prediction, or classification.
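A minimal sketch of this staged learning follows: the sorted tasks are split into cumulative sets (first 10, first 20, ...) and each set is learned in turn to produce one knowledge base per stage. The learner callback, data access and stage size are assumptions made for illustration.

```python
# Sketch of staged ("lifelong") learning over cumulative task sets.

def staged_knowledge_bases(sorted_tasks, task_data, learn, stage_size=10):
    """sorted_tasks: task names ordered by type; task_data: {task: (X, y)};
    learn: callable that fits a model on (X, y) and returns it.
    Returns a list of (stage_label, knowledge_base) pairs."""
    stages = []
    for end in range(stage_size, len(sorted_tasks) + 1, stage_size):
        X, y = [], []
        for task in sorted_tasks[:end]:          # cumulative: first 10, 20, ...
            task_X, task_y = task_data[task]
            X.extend(task_X)
            y.extend(task_y)
        knowledge_base = {"model": learn(X, y), "tasks": sorted_tasks[:end]}
        stages.append((f"stage_{end // stage_size}", knowledge_base))
    return stages
```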
Step 205, when the electronic device executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task.
In an embodiment, when the electronic device needs to complete a new terminal task, it does not yet store a knowledge base corresponding to that task, so the target knowledge base may be matched among the plurality of knowledge bases according to the task type of the new terminal task, for example by analyzing related attributes of the new terminal task such as data quality and application type. Specifically, the task type similarity between each existing terminal task and the new terminal task can be calculated, and the knowledge base corresponding to the terminal task with the highest similarity is selected.
Step 206, training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
The target knowledge base selected in the above steps is fused into the algorithm model of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base so as to optimize its parameters.
In an embodiment, training the algorithm model of the new terminal task according to the target knowledge base to optimize parameters of the algorithm model includes:
extracting a training set from the target knowledge base;
and training the algorithm model of the new terminal task according to the training set so as to optimize the parameters of the algorithm model.
Step 207, after the electronic device executes the new terminal task through the optimized algorithm model, extracting task data in the new terminal task and adding the task data to the target knowledge base.
In an embodiment, after the electronic device executes the new terminal task through the optimized algorithm model, the newly learned knowledge can be extracted and added to the target knowledge base selected in step 205, so that the old knowledge base is kept up to date. When another new terminal task arrives, the above steps are repeated.
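The incremental update of the knowledge base could look like the short sketch below; the knowledge-base record layout ("features"/"targets" lists) is a hypothetical structure carried over from the earlier sketches.

```python
# Sketch of updating the target knowledge base after the optimized model has
# executed the new terminal task.

def update_knowledge_base(target_knowledge_base, new_task_features, new_task_outputs):
    """Append the task data produced by the new terminal task so that later
    tasks of the same type can reuse it."""
    target_knowledge_base.setdefault("features", []).extend(new_task_features)
    target_knowledge_base.setdefault("targets", []).extend(new_task_outputs)
    return target_knowledge_base
```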
From the above, the optimization method of the algorithm model provided in the embodiment of the present application acquires a plurality of terminal tasks in the electronic device and classifies them according to expert data; obtains the task type information of the terminal tasks and sorts the classified terminal tasks according to that information; divides the terminal tasks into a plurality of sets according to the sorting; learns the terminal tasks in the sets respectively through a preset algorithm to obtain a plurality of knowledge bases; when the electronic device executes a new terminal task, matches a target knowledge base among the plurality of knowledge bases according to the task type of the new terminal task and trains the algorithm model of the new terminal task according to the target knowledge base to optimize its parameters; and, after the electronic device executes the new terminal task through the optimized algorithm model, extracts the task data of the new terminal task and adds it to the target knowledge base. Thus, when a new terminal task is executed, a knowledge base matching the characteristics of the task can be selected to optimize the algorithm model, further improving the algorithm model's processing effect on the task.
The embodiment of the present application further provides a preset algorithm, which may be a supervised learning algorithm. A plurality of terminal tasks in the electronic device are acquired and classified according to a preset database, and the classified terminal tasks are then learned through the supervised learning algorithm to obtain a plurality of knowledge bases. When the electronic device executes a new terminal task, a target knowledge base is matched among the plurality of knowledge bases according to the task type of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base to optimize its parameters.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an optimization apparatus of an algorithm model according to an embodiment of the present disclosure. Wherein the optimization device 30 of the algorithm model comprises a classification module 301, a learning module 302, a matching module 303 and an optimization module 304;
the classification module 301 is configured to obtain a plurality of terminal tasks in the electronic device, and classify the terminal tasks according to a preset database.
In one embodiment, the electronic device includes a plurality of terminal tasks, and each terminal task has a corresponding algorithm model for performing that task, such as process scheduling, background cleaning, performance optimization, or song recommendation. The algorithm model may include one or more algorithm modules, each used to process data according to a certain algorithm. For example, the algorithm model may include a Markov algorithm module and a convolutional neural network algorithm module, where the Markov algorithm module processes input data according to a Markov algorithm and the convolutional neural network algorithm module processes input data according to a convolutional neural network algorithm.
In an embodiment, the electronic device acquires a plurality of terminal tasks in the electronic device and classifies them according to a preset database. The preset database may be expert data, from which a task classification list with certain relevance is obtained.
The learning module 302 is configured to learn the classified terminal tasks through a preset algorithm, so as to obtain a plurality of knowledge bases.
In an embodiment, the preset algorithm may be a supervised learning algorithm, and different knowledge bases are obtained by applying it to the terminal tasks under different classifications. Further, after the plurality of knowledge bases are obtained, each may be annotated with a corresponding task label; in an embodiment, the labeling may follow the classification label of the terminal tasks, where the task label is, for example, recognition, prediction, or classification.
The matching module 303 is configured to, when the electronic device executes a new terminal task, match a target knowledge base among the plurality of knowledge bases according to a task type of the new terminal task.
In an embodiment, when the electronic device needs to complete a new terminal task, it does not yet store a knowledge base corresponding to that task, so the target knowledge base may be matched among the plurality of knowledge bases according to the task type of the new terminal task, for example by analyzing related attributes of the new terminal task such as data quality and application type.
The optimization module 304 is configured to train the algorithm model of the new terminal task according to the target knowledge base, so as to optimize parameters of the algorithm model.
In one embodiment, the selected target knowledge base is fused into the algorithm model of the new terminal task, and the algorithm model of the new terminal task is trained according to the target knowledge base to optimize its parameters. In this way, the learning of the new task can fully combine prior knowledge from other related tasks, helping the new terminal task to learn better and improving its completion efficiency and accuracy.
In an embodiment, please refer to fig. 5, fig. 5 is another schematic structural diagram of an optimization apparatus of an algorithm model provided in the embodiment of the present application, wherein the apparatus 30 may further include: an acquisition module 305 and a sorting module 306;
the obtaining module 305 is configured to obtain task type information of the terminal task after the classifying module 301 classifies the terminal task according to a preset database;
the sorting module 306 is configured to sort the classified terminal tasks according to the task type information.
With continued reference to fig. 5, in an embodiment, the apparatus 30 may further include: an extraction module 307 and an addition module 308;
the extracting module 307 is configured to extract task data in a new terminal task after the electronic device executes the new terminal task through the optimized algorithm model;
the adding module 308 is configured to add the task data to the target knowledge base.
Therefore, the optimization device of the algorithm model in the embodiment of the present application can acquire a plurality of terminal tasks in the electronic device, classify them according to a preset database, and learn the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases. When the electronic device executes a new terminal task, it matches a target knowledge base among the plurality of knowledge bases according to the task type of the new terminal task and trains the algorithm model of the new terminal task according to the target knowledge base to optimize its parameters. Thus, when a new terminal task is executed, a knowledge base matching the characteristics of the task can be selected to optimize the algorithm model, further improving the algorithm model's processing effect on the task.
In the embodiment of the present application, the optimization device of the algorithm model and the optimization method of the algorithm model in the above embodiments belong to the same concept, any method provided in the optimization method embodiment of the algorithm model may be run on the optimization device of the algorithm model, and the specific implementation process thereof is detailed in the embodiment of the optimization method of the algorithm model, and is not described herein again.
The term "module" as used herein may be considered a software object executing on the computing system. The different components, modules, engines, and services described herein may be considered as implementation objects on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The embodiment of the present application further provides a storage medium, on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the optimization method of the algorithm model.
The embodiment of the application also provides an electronic device, such as a tablet computer or a mobile phone. The processor in the electronic device loads instructions corresponding to the processes of one or more application programs into the memory according to the following steps, and runs the application programs stored in the memory to implement various functions:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
respectively learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 6, the electronic device 400 includes a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400; it connects the various parts of the entire electronic device using various interfaces and lines, performs the various functions of the electronic device 400 and processes data by running or loading the computer program stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the computer program required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the electronic device. Further, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions, as follows:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
respectively learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
Referring also to fig. 7, in some embodiments, the electronic device 400 may further include: a display 403, radio frequency circuitry 404, audio circuitry 405, and a power supply 406. The display 403, the rf circuit 404, the audio circuit 405, and the power source 406 are electrically connected to the processor 401.
The display 403 may be used to display information entered by or provided to the user as well as various graphical user interfaces, which may be made up of graphics, text, icons, video, and any combination thereof. The display 403 may include a display panel; in some embodiments, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with network devices or other electronic devices, and to exchange signals with those devices.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker, microphone.
The power supply 406 may be used to power various components of the electronic device 400. In some embodiments, power supply 406 may be logically coupled to processor 401 via a power management system, such that functions to manage charging, discharging, and power consumption management are performed via the power management system.
Although not shown in fig. 7, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for the optimization method of the algorithm model in the embodiment of the present application, a person skilled in the art can understand that all or part of the process of implementing the method can be completed by a computer program controlling the relevant hardware. The computer program can be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; its execution can include the process of the embodiment of the optimization method of the algorithm model. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
For the optimization device of the algorithm model in the embodiment of the present application, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The method, device, storage medium, and electronic device for optimizing an algorithm model provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principle and implementation of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for optimizing an algorithmic model, the method comprising the steps of:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
respectively learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base in the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
2. The method for optimizing an algorithmic model as defined in claim 1, wherein after classifying the terminal tasks according to a predetermined database, the method further comprises:
acquiring task type information of the terminal task;
and sequencing the classified terminal tasks according to the task type information.
3. The method for optimizing an algorithm model according to claim 2, wherein learning the classified terminal tasks through a preset algorithm to obtain a plurality of knowledge bases comprises:
dividing the terminal tasks into a plurality of sets according to the sorting;
and respectively learning the terminal tasks in the sets through a preset algorithm to obtain a plurality of knowledge bases.
4. The method for optimizing an algorithm model according to claim 1, wherein training the algorithm model of the new terminal task according to the target knowledge base to optimize parameters of the algorithm model comprises:
extracting a training set from the target knowledge base;
and training the algorithm model of the new terminal task according to the training set so as to optimize the parameters of the algorithm model.
5. The method of optimizing an algorithmic model as defined in claim 1, wherein after optimizing the parameters of the algorithmic model, the method further comprises:
after the electronic equipment executes a new terminal task through the optimized algorithm model, extracting task data in the new terminal task;
adding the task data to the target knowledge base.
6. An apparatus for optimizing an algorithmic model, the apparatus comprising: the device comprises a classification module, a learning module, a matching module and an optimization module;
the classification module is used for acquiring a plurality of terminal tasks in the electronic equipment and classifying the terminal tasks according to a preset database;
the learning module is used for learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases;
the matching module is used for matching a target knowledge base in the plurality of knowledge bases according to the task type of a new terminal task when the electronic equipment executes the new terminal task;
and the optimization module is used for training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
7. The apparatus for optimizing an algorithmic model as defined in claim 6, wherein the apparatus further comprises: the device comprises an acquisition module and a sorting module;
the acquisition module is used for acquiring task type information of the terminal task after the classification module classifies the terminal task according to a preset database;
and the sequencing module is used for sequencing the classified terminal tasks according to the task type information.
8. The apparatus for optimizing an algorithmic model as defined in claim 6, wherein the apparatus further comprises: an extraction module and an addition module;
the extraction module is used for extracting task data in the new terminal task after the electronic equipment executes the new terminal task through the optimized algorithm model;
the adding module is used for adding the task data into the target knowledge base.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the method for optimizing an algorithm model as defined in any one of claims 1 to 5.
10. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions, wherein the instructions in the memory are loaded by the processor to perform the following steps:
acquiring a plurality of terminal tasks in the electronic equipment, and classifying the terminal tasks according to a preset database;
learning the classified terminal tasks respectively through a preset algorithm to obtain a plurality of knowledge bases;
when the electronic equipment executes a new terminal task, matching a target knowledge base from among the plurality of knowledge bases according to the task type of the new terminal task;
and training the algorithm model of the new terminal task according to the target knowledge base so as to optimize the parameters of the algorithm model.
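
For readers less familiar with claim language, the following Python sketch restates the flow of claims 1 to 5 above in procedural form. It is an illustration only: the data structures (TerminalTask, KnowledgeBase), the reduction of the preset-database lookup to a simple task_type field, and the scikit-learn-style fit() call are assumptions made for this example, not details taken from the patent.

```python
# Minimal sketch of the claimed flow: classify tasks, build per-class knowledge
# bases, match a base to a new task, retrain the model, feed the data back.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TerminalTask:
    task_type: str          # e.g. "image", "text" (hypothetical type labels)
    features: List[float]   # feature vector describing the task
    label: int              # supervision signal collected when the task ran


@dataclass
class KnowledgeBase:
    task_type: str
    samples: List[TerminalTask] = field(default_factory=list)

    def training_set(self):
        """Extract a training set (features, labels) from the knowledge base."""
        X = [t.features for t in self.samples]
        y = [t.label for t in self.samples]
        return X, y


def classify_tasks(tasks: List[TerminalTask]) -> Dict[str, List[TerminalTask]]:
    """Group terminal tasks by task type (the preset-database lookup is
    reduced here to reading a task_type field)."""
    groups: Dict[str, List[TerminalTask]] = defaultdict(list)
    for task in tasks:
        groups[task.task_type].append(task)
    return groups


def build_knowledge_bases(tasks: List[TerminalTask]) -> Dict[str, KnowledgeBase]:
    """Learn each class of tasks separately to obtain one knowledge base per class."""
    bases: Dict[str, KnowledgeBase] = {}
    for task_type, group in classify_tasks(tasks).items():
        kb = KnowledgeBase(task_type)
        kb.samples.extend(group)       # a real system would distil/learn here
        bases[task_type] = kb
    return bases


def optimize_model_for_new_task(model, new_task: TerminalTask,
                                bases: Dict[str, KnowledgeBase]):
    """Match the target knowledge base by task type, retrain the model on its
    training set, then add the new task's data back into the base."""
    target = bases.get(new_task.task_type)
    if target is None or not target.samples:
        return model                   # nothing to transfer from
    X, y = target.training_set()
    model.fit(X, y)                    # parameter optimization step
    target.samples.append(new_task)    # enrich the knowledge base afterwards
    return model
```

With a scikit-learn-style estimator such as sklearn.svm.SVC(), calling optimize_model_for_new_task(SVC(), new_task, build_knowledge_bases(old_tasks)) would retrain the classifier on whichever knowledge base matches the new task's type and then fold the new task's data back into that base; this is one possible reading of the claimed flow, not the patented implementation.
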
CN201910282430.XA 2019-04-09 2019-04-09 Optimization method and device of algorithm model, storage medium and electronic equipment Pending CN111797870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282430.XA CN111797870A (en) 2019-04-09 2019-04-09 Optimization method and device of algorithm model, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282430.XA CN111797870A (en) 2019-04-09 2019-04-09 Optimization method and device of algorithm model, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111797870A (en) 2020-10-20

Family

ID=72805334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282430.XA Pending CN111797870A (en) 2019-04-09 2019-04-09 Optimization method and device of algorithm model, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111797870A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506775A (en) * 2016-06-14 2017-12-22 北京陌上花科技有限公司 model training method and device
US20170372221A1 (en) * 2016-06-23 2017-12-28 International Business Machines Corporation Cognitive machine learning classifier generation
CN107844634A (en) * 2017-09-30 2018-03-27 平安科技(深圳)有限公司 Polynary universal model platform modeling method, electronic equipment and computer-readable recording medium
CN108009593A (en) * 2017-12-15 2018-05-08 清华大学 A kind of transfer learning optimal algorithm choosing method and system
CN109086268A (en) * 2018-07-13 2018-12-25 上海乐言信息科技有限公司 A kind of field syntax learning system and method based on transfer learning
CN109325599A (en) * 2018-08-14 2019-02-12 重庆邂智科技有限公司 A kind of data processing method, server and computer-readable medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988337A (en) * 2021-03-01 2021-06-18 创新奇智(上海)科技有限公司 Task processing system, method, device, electronic equipment and storage medium
CN113703928A (en) * 2021-08-31 2021-11-26 南开大学 Social media multitasking method and system
EP4152176A1 (en) * 2021-09-18 2023-03-22 Beijing Baidu Netcom Science And Technology Co. Ltd. Intelligent question-answering processing method and system, electronic device and storage medium
WO2023125855A1 (en) * 2021-12-30 2023-07-06 维沃移动通信有限公司 Model updating method and communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination