WO2019062414A1 - Application management and control method and device, storage medium, and electronic device - Google Patents

Application management and control method and device, storage medium, and electronic device

Info

Publication number
WO2019062414A1
WO2019062414A1 PCT/CN2018/102254 CN2018102254W WO2019062414A1 WO 2019062414 A1 WO2019062414 A1 WO 2019062414A1 CN 2018102254 W CN2018102254 W CN 2018102254W WO 2019062414 A1 WO2019062414 A1 WO 2019062414A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
sample
loss function
prediction
training
Prior art date
Application number
PCT/CN2018/102254
Other languages
English (en)
French (fr)
Inventor
曾元清
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019062414A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/285 Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5022 Mechanisms to release resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • The present application relates to the field of communications technologies, and in particular to an application management and control method, device, storage medium, and electronic device.
  • The present application provides an application management and control method, device, storage medium, and electronic device, which can improve the intelligence and accuracy of application control.
  • an embodiment of the present application provides an application management method, including:
  • an application management device including:
  • An acquisition module configured to collect multi-dimensional feature information of the application as a sample, and construct a sample set of the application
  • a building module configured to extract feature information from the sample set according to a preset rule, and construct a plurality of training sets
  • a training module configured to train the logistic regression model according to the plurality of training sets to obtain a trained prediction model
  • the control module is configured to acquire the current multi-dimensional feature information of the application and use it as a prediction sample, generate a prediction result according to the prediction sample and the trained prediction model, and control the application according to the prediction result.
  • an embodiment of the present application provides a storage medium on which a computer program is stored, and when the computer program runs on a computer, causes the computer to execute the application management and control method described above.
  • an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the foregoing application management and control method by calling the computer program.
  • FIG. 1 is a schematic diagram of a system of an application management device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control device according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another application scenario of an application management device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • FIG. 7 is another schematic structural diagram of an application management device according to an embodiment of the present application.
  • FIG. 8 is still another schematic structural diagram of an application management device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 10 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The term "module" as used herein may be taken to mean a software object that is executed on the computing system.
  • the different components, modules, engines, and services described herein can be viewed as implementation objects on the computing system.
  • the apparatus and method described herein may be implemented in software, and may of course be implemented in hardware, all of which are within the scope of the present application.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application.
  • The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • In the prior art, applications in the background are usually cleaned up according to the memory usage of the electronic device and the priority of each application, in order to release memory.
  • Some applications are important to users, or users need to use certain applications again within a short period of time. If these applications are cleaned up, users will need the electronic device to reload them when they use them again,
  • and reloading the application processes consumes a lot of time and memory resources.
  • the electronic device may be a smart phone, a tablet computer, a desktop computer, a notebook computer, or a handheld computer.
  • FIG. 1 is a schematic diagram of a system for controlling an application program according to an embodiment of the present application.
  • The application management and control device is mainly configured to: collect multi-dimensional feature information of the application as samples and construct a sample set of the application; extract feature information from the sample set according to a preset rule and construct a plurality of training sets; train the logistic regression model according to the plurality of training sets to obtain a trained prediction model; obtain the current multi-dimensional feature information of the application and use it as a prediction sample; generate a prediction result according to the prediction sample and the trained prediction model; and control the application according to the prediction result, for example, by cleaning or freezing it.
  • FIG. 2 is a schematic diagram of an application scenario of an application management device according to an embodiment of the present application.
  • When the application management and control device receives a control request, it detects that the applications running in the background of the electronic device include application a, application b, and application c. It then obtains the multi-dimensional feature information corresponding to application a, application b, and application c respectively, and uses the logistic regression model to predict the probability that each application needs to be used, obtaining probability a', probability b', and probability c'.
  • According to probability a', probability b', and probability c', the device controls application a, application b, and application c running in the background; for example, the application b with the lowest probability is closed.
  • The embodiment of the present application provides an application management and control method. The execution subject of the method may be the application management and control device provided by the embodiment of the present application, or an electronic device integrated with the application management and control device, where the application management and control device may be implemented in hardware or software.
  • An embodiment of the present application provides an application management and control method, including:
  • the step of extracting feature information from the sample set according to a preset rule to construct multiple training sets includes:
  • randomly extracting, with replacement, a preset number of feature information items from the multi-dimensional feature information of each sample to form a corresponding sub-sample, the plurality of sub-samples constituting one training set;
  • the step of extracting feature information from the sample set according to a preset rule to construct multiple training sets includes:
  • a plurality of training sets are constructed according to the plurality of first training sets and the second training set.
  • the step of training the logistic regression model according to the multiple training sets includes:
  • the first loss function of the logistic regression model is obtained according to a preset formula and the plurality of first training sets.
  • the preset formula is:
  • where i and k are positive integers, y_ik is the predicted probability distribution, N_s is the batch size of the training classification, C is the number of categories, and y_i is the one-hot code that characterizes the sample category.
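  • The preset formula itself is not reproduced in this text (it appears only as an image in the original publication). Given the variable definitions above and the later statement that the categorical cross entropy is used as the classification loss, a standard form consistent with those definitions, stated here as an assumption rather than as the patent's exact notation, is:
    L_1 = -\frac{1}{N_s} \sum_{i=1}^{N_s} \sum_{k=1}^{C} y_i^{(k)} \log y_{ik}
  • where y_i^{(k)} denotes the k-th component of the one-hot label y_i and y_{ik} is the predicted probability of class k for sample i.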
  • the step of generating a target loss function according to the first loss function and the second loss function comprises:
  • a weighted sum of the first loss function and the second loss function is calculated to obtain the target loss function.
  • the step of estimating model parameters in the logistic regression model according to the target loss function comprises:
  • the target loss function is calculated based on a gradient descent method to obtain model parameters in the logistic regression model.
  • the prediction result includes: a first probability that the application can be cleaned, and a second probability that the application cannot be cleaned;
  • the step of controlling the application according to the prediction result includes:
  • the step of outputting, according to the comparison result, the first prediction result indicating that the application can be cleaned or the second prediction result indicating that it cannot be cleaned includes:
  • FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
  • the application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
  • Step 101 Collect multi-dimensional feature information of the application as a sample, and build a sample set of the application.
  • the preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
  • The multi-dimensional feature information of the application has dimensions of a certain length, and the parameter in each dimension corresponds to one item of feature information characterizing the application; that is, the multi-dimensional feature information is composed of a plurality of feature information items.
  • The plurality of feature information items may include feature information related to the application itself.
  • For example, 30 features may be collected to form a 30-dimensional feature vector, such as:
  • the accumulated screen-off duration from the last time the application was switched to the background until the present;
  • the daily duration for which the application is in the foreground (counted separately for working days and rest days);
  • the first bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 0-5 minutes);
  • the second bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 5-10 minutes);
  • the third bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 10-15 minutes);
  • the fourth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 15-20 minutes);
  • the fifth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 20-25 minutes);
  • the sixth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 25-30 minutes);
  • the seventh bin of the target application's background dwell-time histogram (the proportion of counts corresponding to more than 30 minutes);
  • the manner in which the target application was switched to the background, distinguished as switching via the home key, switching via the recents key, or switching to another application;
  • whether the screen is currently on or off;
  • the time-period index of the current time within the day;
  • the number of times the target (background) application was opened immediately after the current foreground application, without distinguishing working days and rest days;
  • the number of times the target (background) application was opened immediately after the current foreground application, counted separately for working days and rest days;
  • the screen-off duration from when the current foreground application enters the background until the target application enters the foreground, averaged by day.
  • the sample set of the application may include multiple samples collected at a preset frequency during the historical time period.
  • The historical time period may be, for example, the past 7 days or the past 10 days; the preset frequency may be, for example, once every 10 minutes or once every half hour. It can be understood that the multi-dimensional feature data of the application acquired at one time constitutes one sample, and a plurality of samples constitute the sample set.
  • Each sample in the sample set can be marked to obtain a sample label for each sample. Since this embodiment predicts whether the application can be cleaned up, the sample labels include "cleanable" and "not cleanable". Specifically, the marking may be based on the user's historical usage habits for the application. For example, if the user closes the application after it has been in the background for 30 minutes, the sample is marked as "cleanable"; if the user switches the application back to the foreground after it has been in the background for 3 minutes, the sample is marked as "not cleanable". Specifically, the value "1" can be used to indicate "cleanable" and the value "0" to indicate "not cleanable", or vice versa.
  • Feature information in the application's multi-dimensional feature information that is not directly represented by a numerical value may be quantified with specific numerical values. For example, for the feature information of the wireless network connection state of the electronic device, the value 1 may be used to represent the normal (connected) state and the value 0 the abnormal (disconnected) state, or vice versa; likewise, for the feature information of whether the electronic device is charging, the value 1 can be used to indicate the charging state and the value 0 the non-charging state, or vice versa.
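  • A minimal Python sketch of this collection, quantization, and labeling step is shown below. It is an illustration only, not code from the patent: the snapshot field names, the helper functions, and the example values are assumptions, and only a few of the 30 feature dimensions are shown; the 30-minute threshold follows the labeling example above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    features: List[float]   # one multi-dimensional feature vector (one sample)
    label: int              # 1 = "cleanable", 0 = "not cleanable"

def quantize(flag: bool) -> float:
    """Encode a categorical state (e.g. Wi-Fi connected, charging) as 1.0 / 0.0."""
    return 1.0 if flag else 0.0

def collect_sample(snapshot: dict) -> Sample:
    """Build one sample from a hypothetical snapshot of the application's state."""
    features = [
        snapshot["screen_off_minutes_since_backgrounded"],   # accumulated screen-off time
        snapshot["foreground_minutes_today"],                 # daily foreground duration
        quantize(snapshot["wifi_connected"]),                 # network state quantized to 1/0
        quantize(snapshot["charging"]),                       # charging state quantized to 1/0
        # ... remaining dimensions, up to the 30 listed in the text
    ]
    # Label from historical behaviour: closed after 30 minutes or more in the
    # background counts as "cleanable" (1); otherwise "not cleanable" (0).
    label = 1 if snapshot["minutes_in_background_before_close"] >= 30 else 0
    return Sample(features, label)

sample_set = [collect_sample(s) for s in [
    {"screen_off_minutes_since_backgrounded": 42, "foreground_minutes_today": 15,
     "wifi_connected": True, "charging": False, "minutes_in_background_before_close": 35},
    {"screen_off_minutes_since_backgrounded": 2, "foreground_minutes_today": 60,
     "wifi_connected": False, "charging": True, "minutes_in_background_before_close": 3},
]]
```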
  • Step 102 Extract feature information from the sample set according to a preset rule, and construct a plurality of training sets.
  • Specifically, a preset number of feature information items may be randomly extracted, with replacement, from the multi-dimensional feature information of each sample to form a corresponding sub-sample; the plurality of sub-samples constitute one training set. After multiple extractions, multiple training sets are constructed. The preset number can be customized according to actual needs.
  • The training set can be divided into two parts. One part consists of single samples x, each marked with whether the target application is used next: if so, it is marked as 1, otherwise as 0; its form can be (x_i, y_i), where y_i ∈ {0, 1}.
  • The other part consists of triples: two samples (x_i, x_j) are drawn, and if the two sample labels are the same, the pair is recorded as 1, while if the labels are inconsistent, it is recorded as -1; its form is (x_i, x_j, y_ij), where y_ij ∈ {1, -1}.
  • the step of extracting feature information from the sample set according to a preset rule to construct a plurality of training sets may include:
  • extracting two samples from the sample set, generating a second label for the two samples according to the first labels respectively corresponding to the two samples, forming a second training set according to the two samples and the corresponding second label, and extracting multiple times to obtain multiple second training sets;
  • a plurality of training sets are constructed according to the plurality of first training sets and the second training set.
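  • As an illustrative sketch only (not the patent's code), the two kinds of training sets described above could be assembled as follows; the number of training sets, the number of sampled pairs, and the function names are assumptions, while the with-replacement sampling and the {1, -1} pair labels follow the text.

```python
import random
from typing import List, Tuple

Features = List[float]

def build_first_training_sets(samples: List[Features], labels: List[int],
                              n_sets: int = 5) -> List[List[Tuple[Features, int]]]:
    """First training sets: single samples (x_i, y_i) with y_i in {0, 1}."""
    sets = []
    for _ in range(n_sets):
        drawn = [random.randrange(len(samples)) for _ in range(len(samples))]  # with replacement
        sets.append([(samples[i], labels[i]) for i in drawn])
    return sets

def build_second_training_sets(samples: List[Features], labels: List[int],
                               n_sets: int = 5, pairs_per_set: int = 100
                               ) -> List[List[Tuple[Features, Features, int]]]:
    """Second training sets: triples (x_i, x_j, t), t = 1 if the first labels match, -1 otherwise."""
    sets = []
    for _ in range(n_sets):
        triples = []
        for _ in range(pairs_per_set):
            i, j = random.randrange(len(samples)), random.randrange(len(samples))
            t = 1 if labels[i] == labels[j] else -1    # second label derived from the first labels
            triples.append((samples[i], samples[j], t))
        sets.append(triples)
    return sets
```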
  • Step 103 Train the logistic regression model according to the multiple training sets to obtain the trained prediction model.
  • According to its classification ability, a classifier can be divided into a weak classifier and a strong classifier; the classifier here generally refers to the aforementioned logistic regression model.
  • the corresponding logistic regression model can be trained by using the training set to obtain a corresponding predicted model after training.
  • The neural network in this embodiment is a shallow neural network whose structure has only two layers, namely an embedded layer and a fully connected layer.
  • The embedded-layer parameters are trained with the single samples and the triples at the same time, and classification is then performed on the embedded layer through the fully connected layer, which greatly improves the accuracy.
  • the step of training the logistic regression model according to the plurality of training sets comprises:
  • a target loss function is generated according to the first loss function and the second loss function, and the model parameters in the logistic regression model are estimated according to the target loss function.
  • the step of generating the target loss function according to the first loss function and the second loss function includes:
  • a weighted sum of the first loss function and the second loss function is calculated to obtain a target loss function.
  • the target loss function can be calculated based on the gradient descent method to obtain the model parameters in the logistic regression model.
  • First, an embedded value is calculated for each data point x_i in the training set; this process is implemented by a neural network hidden layer composed of 8 neurons.
  • Then, the embedded layer output is classified by logistic regression, using the categorical cross entropy as the loss function:
  • N_s is the batch size of the training classification,
  • C is the number of categories,
  • y_i is the one-hot code that characterizes the sample category, and
  • W is the weight of the fully connected layer; by minimizing this loss function, the embedded layer is obtained through training.
  • N_g is the batch size of the training triples, with which the learned embedded layer is further trained.
  • The loss function is used to estimate the degree of inconsistency between the predicted value f(x) of the model and the true value Y. It is a non-negative real-valued function, usually denoted L(Y, f(x)) or L(w); the smaller the loss function, the better the robustness of the model.
  • the loss function is the core part of the empirical risk function and an important part of the structural risk function.
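  • The patent text does not give the network or the loss functions in runnable form. The following Python sketch (using PyTorch) shows one plausible realization under stated assumptions: the ReLU activation, the contrastive form of the pair loss and its margin, the loss weights alpha and beta, and the SGD hyperparameters are illustrative choices rather than details taken from the patent; the 8-neuron embedded layer, the fully connected classification layer, the categorical cross entropy, the weighted sum of the two losses, and the gradient-descent training follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowPredictor(nn.Module):
    """Two-layer network: an embedded (hidden) layer of 8 neurons followed by a
    fully connected layer that performs the logistic-regression style classification."""
    def __init__(self, n_features=30, embed_dim=8, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_features, embed_dim)   # embedded layer
        self.fc = nn.Linear(embed_dim, n_classes)        # fully connected classification layer

    def forward(self, x):
        e = torch.relu(self.embed(x))                    # embedded value for each sample
        return e, self.fc(e)                             # (embedding, class logits)

def pair_loss(e_i, e_j, t, margin=1.0):
    """Second loss over sample pairs: pull embeddings together when t = +1 (same label),
    push them at least `margin` apart when t = -1 (different labels). Assumed form."""
    d = F.pairwise_distance(e_i, e_j)
    return torch.where(t > 0, d.pow(2), F.relu(margin - d).pow(2)).mean()

def train(model, singles, pairs, alpha=0.5, beta=0.5, epochs=100, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)      # gradient descent
    x, y = singles                                         # (N_s, 30) floats, (N_s,) labels in {0, 1}
    xi, xj, t = pairs                                      # (N_g, 30), (N_g, 30), (N_g,) labels in {+1, -1}
    for _ in range(epochs):
        opt.zero_grad()
        e, logits = model(x)
        loss_cls = F.cross_entropy(logits, y)              # first loss: categorical cross entropy
        e_i, _ = model(xi)
        e_j, _ = model(xj)
        loss_pair = pair_loss(e_i, e_j, t)                  # second loss from the pair training set
        loss = alpha * loss_cls + beta * loss_pair          # weighted sum = target loss function
        loss.backward()
        opt.step()
    return model

# Illustrative usage with random stand-in data (30 features per sample).
x = torch.randn(64, 30)
y = torch.randint(0, 2, (64,))                  # 1 = cleanable, 0 = not cleanable
xi, xj = torch.randn(100, 30), torch.randn(100, 30)
t = (torch.rand(100) > 0.5).float() * 2 - 1     # pair labels in {+1, -1}
model = train(ShallowPredictor(), (x, y), (xi, xj, t))
probs = F.softmax(model(x)[1], dim=1)           # per-sample class probabilities
```
  • Applying a softmax to the fully connected layer's output then gives, for each sample, two class probabilities that can serve as the first probability (cleanable) and second probability (not cleanable) discussed below.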
  • Step 104 Acquire the current multi-dimensional feature information of the application and use it as a prediction sample, generate a prediction result according to the prediction sample and the trained prediction model, and control the application according to the prediction result.
  • the multidimensional features of the application can be collected as prediction samples based on the predicted time.
  • the prediction time can be set according to requirements, such as the current time.
  • a multi-dimensional feature of an application can be acquired as a prediction sample at a predicted time point.
  • The foregoing prediction result may include "clean" or "do not clean". If it is necessary to determine whether the current background application is cleanable, the current multi-dimensional feature information of the application, such as application usage information and current feature information of the electronic device, is obtained and input into the prediction model; the prediction model can then perform calculation according to the model parameters to obtain the prediction result, thereby judging whether the application needs to be cleaned up.
  • the training process of the predictive model can be completed on the server side or on the electronic device side.
  • If the training process and the actual prediction process of the prediction model are both completed on the server side, then when the trained prediction model needs to be used, the feature information of the application's current multiple dimensions may be input to the server; after the server completes the actual prediction, the prediction result is sent to the electronic device, which then controls the application based on the prediction result.
  • If the training process is completed on the server side and the actual prediction process is completed on the electronic device side, the current multi-dimensional feature information of the application may be input into the prediction model on the electronic device; after the electronic device completes the actual prediction, the electronic device controls the application based on the prediction result.
  • In summary, the application management and control method collects the multi-dimensional feature information of the application as samples and constructs a sample set of the application, extracts feature information from the sample set according to a preset rule to construct multiple training sets, trains the logistic regression model according to the multiple training sets to obtain a trained prediction model, obtains the current multi-dimensional feature information of the application as a prediction sample, generates a prediction result according to the prediction sample and the trained prediction model, and controls the application according to the prediction result. The present application can improve the accuracy of prediction for the application, thereby improving the intelligence and accuracy of controlling applications that enter the background.
  • the application management method includes:
  • The multi-dimensional feature information of the application has dimensions of a certain length, and the parameter in each dimension corresponds to one item of feature information characterizing the application; that is, the multi-dimensional feature information is composed of a plurality of feature information items.
  • The plurality of feature information items may include feature information related to the application itself.
  • For example, 30 features may be collected to form a 30-dimensional feature vector, such as:
  • the accumulated screen-off duration from the last time the application was switched to the background until the present;
  • the daily duration for which the application is in the foreground (counted separately for working days and rest days);
  • the first bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 0-5 minutes);
  • the second bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 5-10 minutes);
  • the third bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 10-15 minutes);
  • the fourth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 15-20 minutes);
  • the fifth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 20-25 minutes);
  • the sixth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 25-30 minutes);
  • the seventh bin of the target application's background dwell-time histogram (the proportion of counts corresponding to more than 30 minutes);
  • the manner in which the target application was switched to the background, distinguished as switching via the home key, switching via the recents key, or switching to another application;
  • whether the screen is currently on or off;
  • the time-period index of the current time within the day;
  • the number of times the target (background) application was opened immediately after the current foreground application, without distinguishing working days and rest days;
  • the number of times the target (background) application was opened immediately after the current foreground application, counted separately for working days and rest days;
  • the screen-off duration from when the current foreground application enters the background until the target application enters the foreground, averaged by day.
  • The training set can be divided into two parts. One part consists of single samples x, each marked with whether the target application is used next: if so, it is marked as 1, otherwise as 0; its form can be (x_i, y_i), where y_i ∈ {0, 1}.
  • The other part consists of triples: two samples (x_i, x_j) are drawn, and if the two sample labels are the same, the pair is recorded as 1, while if the labels are inconsistent, it is recorded as -1; its form is (x_i, x_j, y_ij), where y_ij ∈ {1, -1}.
  • the corresponding logistic regression model can be trained by using the training set to obtain a corresponding predicted model after training.
  • The neural network in this embodiment is a shallow neural network whose structure has only two layers, namely an embedded layer and a fully connected layer.
  • The embedded-layer parameters are trained with the single samples and the triples at the same time, and classification is then performed on the embedded layer through the fully connected layer, which greatly improves the accuracy.
  • the corresponding prediction probability is outputted, and multiple prediction probabilities are obtained.
  • A logistic regression model outputs a prediction probability that includes a first probability that the application can be cleaned up and a second probability that the application cannot be cleaned up.
  • When the first probability is greater than the second probability, the first prediction result indicating that the application can be cleaned is output; when the first probability is not greater than the second probability, the second prediction result indicating that the application cannot be cleaned is output.
  • That is, when P(Y=1|x) is not greater than P(Y=0|x), the second prediction result indicating that the application cannot be cleaned is output.
  • For example, the pre-trained logistic regression model can be used to predict whether multiple applications running in the background can be cleaned. As shown in Table 1, it is determined that the application A1 and the application A3 running in the background can be cleaned, while the application A2 keeps running in the background with its status unchanged.
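  • A minimal sketch of this control step follows; the probability values are hypothetical and chosen only to reproduce the Table 1 outcome described above (A1 and A3 cleanable, A2 kept running), and the function name is an assumption.

```python
def control_background_apps(predictions: dict) -> None:
    """predictions maps app name -> (first probability: cleanable, second probability: not cleanable)."""
    for app, (p_cleanable, p_not_cleanable) in predictions.items():
        if p_cleanable > p_not_cleanable:
            print(f"{app}: first prediction result - clean up (e.g. close or freeze)")
        else:
            print(f"{app}: second prediction result - keep running in the background")

# Hypothetical probabilities consistent with the Table 1 outcome described above.
control_background_apps({"A1": (0.83, 0.17), "A2": (0.28, 0.72), "A3": (0.66, 0.34)})
```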
  • In summary, the application management and control method collects the multi-dimensional feature information of the application as samples and constructs a sample set of the application, extracts feature information from the sample set according to a preset rule to construct multiple training sets, trains the logistic regression model according to the multiple training sets to obtain a trained prediction model, obtains the current multi-dimensional feature information of the application as a prediction sample, generates a prediction result according to the prediction sample and the trained prediction model, and controls the application according to the prediction result. The present application can improve the accuracy of prediction for the application, thereby improving the intelligence and accuracy of controlling applications that enter the background.
  • FIG. 5 is a schematic diagram of another application scenario of an application management device according to an embodiment of the present disclosure.
  • the training process of the predictive model is completed on the server side, and the actual prediction process of the predictive model is completed on the electronic device side
  • When the trained prediction model needs to be used, the current multi-dimensional feature information of the application can be input into the prediction model on the electronic device; after the electronic device completes the actual prediction, the electronic device manages the application based on the prediction result.
  • Specifically, the trained prediction model file (model file) can be ported to the smart device. If it is necessary to determine whether the current background application can be cleaned up, the current sample set is updated and input into the trained prediction model file (model file), and the prediction value can be obtained by calculation.
  • the method may further include:
  • Detect whether the application enters the background; if it enters the background, obtain the current multi-dimensional feature information of the application, input it into the prediction model to generate the prediction result, and control the application according to the prediction result.
  • the method may further include:
  • the preset time is obtained, and if the current system time reaches the preset time, the current multi-dimensional feature information of the application is obtained.
  • the preset time can be a time point in the day, such as 9 am, or several time points in the day, such as 9 am, 6 pm, and the like. It can also be one or several time points in multiple days. Then input the prediction model to generate the prediction results, and control the application according to the prediction results.
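  • A small sketch of the two trigger conditions described above (the application entering the background, or the system time reaching a preset time) is given below; the concrete preset times and the minute-level matching are illustrative assumptions.

```python
import datetime

PRESET_TIMES = [datetime.time(9, 0), datetime.time(18, 0)]   # e.g. 9 am and 6 pm (illustrative)

def should_run_prediction(app_entered_background: bool, now: datetime.datetime) -> bool:
    """Trigger prediction when an application just entered the background,
    or when the current system time reaches one of the preset times."""
    if app_entered_background:
        return True
    return any(now.hour == t.hour and now.minute == t.minute for t in PRESET_TIMES)

if should_run_prediction(False, datetime.datetime.now()):
    # Collect the application's current multi-dimensional feature information,
    # feed it to the trained prediction model, then control the application
    # according to the prediction result.
    pass
```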
  • the application also provides an application management device, including:
  • An acquisition module configured to collect multi-dimensional feature information of the application as a sample, and construct a sample set of the application
  • a building module configured to extract feature information from the sample set according to a preset rule, and construct a plurality of training sets
  • a training module configured to train the logistic regression model according to the plurality of training sets to obtain a trained prediction model
  • the control module is configured to acquire the current multi-dimensional feature information of the application and use it as a prediction sample, generate a prediction result according to the prediction sample and the trained prediction model, and control the application according to the prediction result.
  • In an embodiment, the building module is specifically configured to randomly extract, with replacement, a preset number of feature information items from the multi-dimensional feature information of each sample to form a corresponding sub-sample, the plurality of sub-samples constituting one training set, and to perform multiple extractions to obtain multiple training sets.
  • the building module specifically includes:
  • a marking submodule for marking the samples in the sample set to obtain a first label of each sample
  • a first extraction sub-module configured to extract a single sample from the sample set, form a first training set according to the sample and the corresponding first label, and extract multiple times to obtain a plurality of first training sets;
  • a second extraction sub-module configured to extract two samples from the sample set, generate a second label for the two samples according to the first labels respectively corresponding to the two samples, form a second training set according to the two samples and the corresponding second label, and extract multiple times to obtain a plurality of second training sets;
  • the training module specifically includes:
  • a first function obtaining submodule configured to acquire a first loss function of the logistic regression model according to the plurality of first training sets
  • a second function obtaining submodule configured to acquire a second loss function of the logistic regression model according to the plurality of second training sets
  • a parameter estimation submodule configured to generate a target loss function according to the first loss function and the second loss function, and estimate a model parameter in the logistic regression model according to the target loss function.
  • the first function acquisition sub-module is specifically configured to acquire a first loss function of the logistic regression model according to a preset formula and the plurality of first training sets.
  • the preset formula is:
  • N_s is the batch size of the training classification,
  • C is the number of categories, and
  • y_i is the one-hot code that characterizes the sample category.
  • the parameter estimation sub-module is specifically configured to separately obtain weight values of the first loss function and the second loss function, and to calculate a weighted sum of the first loss function and the second loss function to obtain the target loss function.
  • the parameter estimation sub-module is further configured to calculate the target loss function based on a gradient descent method to obtain a model parameter in the logistic regression model.
  • the prediction result includes: a first probability that the application can be cleaned, and a second probability that the application cannot be cleaned
  • the control module includes:
  • the output sub-module is configured to compare the first probability that the application can be cleaned with the second probability that it cannot be cleaned to obtain a comparison result, and to output, according to the comparison result, a first prediction result indicating that the application can be cleaned or a second prediction result indicating that it cannot be cleaned;
  • the determining submodule is configured to determine whether the application is cleanable according to the quantity of the first prediction result and the quantity of the second prediction result.
  • the output sub-module is configured to output the first prediction result indicating that the application can be cleaned when the first probability is greater than the second probability
  • FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • The application management and control device 300 is applied to an electronic device, and the application management and control device 300 includes a collecting module 301, a building module 302, a training module 303, and a control module 304.
  • the collecting module 301 is configured to collect multi-dimensional feature information of the application as a sample, and construct a sample set of the application.
  • the preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
  • The multi-dimensional feature information of the application has dimensions of a certain length, and the parameter in each dimension corresponds to one item of feature information characterizing the application; that is, the multi-dimensional feature information is composed of a plurality of feature information items.
  • The plurality of feature information items may include feature information related to the application itself.
  • For example, 30 features may be collected to form a 30-dimensional feature vector, such as:
  • the accumulated screen-off duration from the last time the application was switched to the background until the present;
  • the daily duration for which the application is in the foreground (counted separately for working days and rest days);
  • the first bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 0-5 minutes);
  • the second bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 5-10 minutes);
  • the third bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 10-15 minutes);
  • the fourth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 15-20 minutes);
  • the fifth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 20-25 minutes);
  • the sixth bin of the target application's background dwell-time histogram (the proportion of counts corresponding to 25-30 minutes);
  • the seventh bin of the target application's background dwell-time histogram (the proportion of counts corresponding to more than 30 minutes);
  • the manner in which the target application was switched to the background, distinguished as switching via the home key, switching via the recents key, or switching to another application;
  • whether the screen is currently on or off;
  • the time-period index of the current time within the day;
  • the number of times the target (background) application was opened immediately after the current foreground application, without distinguishing working days and rest days;
  • the number of times the target (background) application was opened immediately after the current foreground application, counted separately for working days and rest days;
  • the screen-off duration from when the current foreground application enters the background until the target application enters the foreground, averaged by day.
  • the building module 302 is configured to extract feature information from the sample set according to a preset rule, and construct a plurality of training sets.
  • Specifically, a preset number of feature information items may be randomly extracted, with replacement, from the multi-dimensional feature information of each sample to form a corresponding sub-sample; the plurality of sub-samples constitute one training set. After multiple extractions, multiple training sets are constructed. The preset number can be customized according to actual needs.
  • The training set can be divided into two parts. One part consists of single samples x, each marked with whether the target application is used next: if so, it is marked as 1, otherwise as 0; its form can be (x_i, y_i), where y_i ∈ {0, 1}.
  • The other part consists of triples: two samples (x_i, x_j) are drawn, and if the two sample labels are the same, the pair is recorded as 1, while if the labels are inconsistent, it is recorded as -1; its form is (x_i, x_j, y_ij), where y_ij ∈ {1, -1}.
  • the training module 303 is configured to train the logistic regression model according to the multiple training sets to obtain the trained prediction model.
  • According to its classification ability, a classifier can be divided into a weak classifier and a strong classifier; the classifier here generally refers to the aforementioned logistic regression model.
  • the corresponding logistic regression model can be trained by using the training set to obtain a corresponding predicted model after training.
  • The neural network in this embodiment is a shallow neural network whose structure has only two layers, namely an embedded layer and a fully connected layer.
  • The embedded-layer parameters are trained with the single samples and the triples at the same time, and classification is then performed on the embedded layer through the fully connected layer, which greatly improves the accuracy.
  • First, an embedded value is calculated for each data point x_i in the training set; this process is implemented by a neural network hidden layer composed of 8 neurons.
  • Then, the embedded layer output is classified by logistic regression, using the categorical cross entropy as the loss function:
  • N_s is the batch size of the training classification,
  • C is the number of categories,
  • y_i is the one-hot code representing the sample category, and
  • W is the weight of the fully connected layer
  • N_g is the batch size of the training triples, with which the learned embedded layer is further trained.
  • the control module 304 is configured to acquire the current multi-dimensional feature information of the application and use it as a prediction sample, generate a prediction result according to the prediction sample and the trained prediction model, and control the application according to the prediction result.
  • the multidimensional features of the application can be collected as prediction samples based on the predicted time.
  • the prediction time can be set according to requirements, such as the current time.
  • a multi-dimensional feature of an application can be acquired as a prediction sample at a predicted time point.
  • The foregoing prediction result may include "clean" or "do not clean". If it is necessary to determine whether the current background application is cleanable, the current multi-dimensional feature information of the application, such as application usage information and current feature information of the electronic device, is obtained and input into the prediction model; the prediction model can then perform calculation according to the model parameters to obtain the prediction result, thereby judging whether the application needs to be cleaned up.
  • the training process of the predictive model can be completed on the server side or on the electronic device side.
  • If the training process and the actual prediction process of the prediction model are both completed on the server side, then when the trained prediction model needs to be used, the feature information of the application's current multiple dimensions may be input to the server; after the server completes the actual prediction, the prediction result is sent to the electronic device, which then controls the application based on the prediction result.
  • If the training process is completed on the server side and the actual prediction process is completed on the electronic device side, the current multi-dimensional feature information of the application may be input into the prediction model on the electronic device; after the electronic device completes the actual prediction, the electronic device controls the application based on the prediction result.
  • the foregoing building module 302 may specifically include:
  • a marking sub-module 3021 configured to mark the samples in the sample set to obtain a first label of each sample
  • a first extraction sub-module 3022 configured to extract a single sample from the sample set, form a first training set according to the sample and the corresponding first label, and extract multiple times to obtain a plurality of first training sets;
  • a second extraction sub-module 3023 configured to extract two samples from the sample set, generate a second label for the two samples according to the first labels corresponding to the two samples, form a second training set according to the two samples and the corresponding second label, and extract multiple times to obtain a plurality of second training sets;
  • the construction sub-module 3024 is configured to construct a plurality of training sets according to the plurality of first training sets and the second training set.
  • the training module 303 specifically includes:
  • a first function obtaining submodule 3031 configured to obtain a first loss function of the logistic regression model according to the plurality of first training sets
  • the second function obtaining submodule 3032 is configured to obtain a second loss function of the logistic regression model according to the plurality of second training sets;
  • the parameter estimation sub-module 3033 is configured to generate a target loss function according to the first loss function and the second loss function, and estimate a model parameter in the logistic regression model according to the target loss function.
  • the parameter estimation sub-module 3033 is specifically configured to separately obtain weight values of the first loss function and the second loss function, and calculate a weighted sum of the first loss function and the second loss function to obtain a target loss function.
  • the parameter estimation sub-module 3033 is further configured to calculate a target loss function based on a gradient descent method to obtain a model parameter in the logistic regression model.
  • the prediction result includes: a first probability that the application can be cleaned, and a second probability that the application cannot be cleaned
  • the control module 304 includes:
  • the output sub-module is configured to compare the first probability that the application can be cleaned with the second probability that it cannot be cleaned to obtain a comparison result, and to output, according to the comparison result, a first prediction result indicating that the application can be cleaned or a second prediction result indicating that it cannot be cleaned;
  • the determining submodule is configured to determine whether the application is cleanable according to the quantity of the first prediction result and the quantity of the second prediction result.
  • the output sub-module is specifically configured to output the first prediction result indicating that the application can be cleaned when the first probability is greater than the second probability;
  • The application management and control device of the embodiment of the present application collects the multi-dimensional feature information of the application as samples and constructs a sample set of the application, extracts feature information from the sample set according to a preset rule to construct a plurality of training sets, trains the logistic regression model according to the plurality of training sets to obtain a trained prediction model, obtains the current multi-dimensional feature information of the application as a prediction sample, generates a prediction result according to the prediction sample and the trained prediction model, and manages the application according to the prediction result.
  • This application can improve the accuracy of the prediction of the application, thereby improving the intelligence and accuracy of the control of the application entering the background.
  • The application management and control device is based on the same concept as the application management and control method in the above embodiments. Any method provided in the embodiments of the application management and control method can be run on the application management and control device; for details of its specific implementation process, refer to the embodiments of the application management and control method, which are not described here again.
  • the electronic device 400 includes a processor 401 and a memory 402.
  • the processor 401 is electrically connected to the memory 402.
  • The processor 401 is the control center of the electronic device 400. It connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or loading a computer program stored in the memory 402 and invoking data stored in the memory 402.
  • the memory 402 can be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running computer programs and modules stored in the memory 402.
  • the memory 402 can mainly include a storage program area and a storage data area, wherein the storage program area can store an operating system, a computer program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area can be stored according to Data created by the use of electronic devices, etc.
  • The memory 402 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 can also include a memory controller to provide the processor 401 with access to the memory 402.
  • In this embodiment, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and the processor 401 runs the computer programs stored in the memory 402
  • to implement various functions, as follows:
  • collect multi-dimensional feature information of the application as samples and construct a sample set of the application; extract feature information from the sample set according to a preset rule and construct a plurality of training sets; train the logistic regression model according to the plurality of training sets to obtain the trained prediction model; obtain the current multi-dimensional feature information of the application and use it as a prediction sample; generate a prediction result according to the prediction sample and the trained prediction model; and control the application according to the prediction result.
  • This application can improve the accuracy of the prediction of the application, thereby improving the intelligence and accuracy of the control of the application entering the background.
  • the electronic device 400 may further include: a display 403, a radio frequency circuit 404, an audio circuit 405, and a power source 406.
  • the display 403, the radio frequency circuit 404, the audio circuit 405, and the power source 406 are electrically connected to the processor 401, respectively.
  • Display 403 can be used to display information entered by a user or information provided to a user, as well as various graphical user interfaces, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display 403 can include a display panel.
  • the display panel can be configured in the form of a liquid crystal display (LCD), or an organic light-emitting diode (OLED).
  • the radio frequency circuit 404 can be used to transmit and receive radio frequency signals to establish wireless communication with network devices or other electronic devices through wireless communication, and to transmit and receive signals with network devices or other electronic devices.
  • the audio circuit 405 can be used to provide an audio interface between the user and the electronic device through the speaker and the microphone.
  • Power source 406 can be used to power various components of electronic device 400.
  • the power supply 406 can be logically coupled to the processor 401 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
  • the electronic device 400 may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the embodiment of the present application further provides a storage medium storing a computer program, and when the computer program runs on the computer, causing the computer to execute the application management and control method in any of the above embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM).
  • The computer program can be stored in a computer-readable storage medium, such as in a memory of the electronic device, and executed by at least one processor in the electronic device; the execution process can include, for example, the flow of an embodiment of the application management and control method.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory, a random access memory, or the like.
  • each functional module may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • An integrated module, if implemented in the form of a software functional module and sold or used as a standalone product, may also be stored in a computer readable storage medium such as a read only memory, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An application management and control method, device, storage medium, and electronic device. The method includes: collecting multi-dimensional feature information of an application as samples and constructing a sample set of a preset background application (101); extracting feature information from the sample set according to a preset rule and constructing a plurality of training sets (102); training a logistic regression model according to the plurality of training sets to obtain a trained prediction model (103); and obtaining current multi-dimensional feature information of the application as a prediction sample, generating a prediction result according to the prediction sample and the trained prediction model, and controlling the application according to the prediction result (104).

Description

应用程序管控方法、装置、存储介质及电子设备
本申请要求于2017年09月30日提交中国专利局、申请号为CN 201710940355.2、申请名称为“应用程序管控方法、装置、存储介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于通信技术领域,尤其涉及一种应用程序管控方法、装置、存储介质及电子设备。
背景技术
随着电子技术的发展,人们通常在电子设备上安装很多应用程序。当用户在电子设备中打开多个应用程序时,若用户退回电子设备的桌面或者停留在某一应用程序的应用界面或者锁定电子设备屏幕,则用户打开的多个应用程序依然会在电子设备的后台运行。
申请内容
本申请提供一种应用程序管控方法、装置、存储介质及电子设备,能够提升对应用程序进行管控的智能化和准确性。
第一方面,本申请实施例提供一种应用程序管控方法,包括:
采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
按照预设规则从所述样本集中提取特征信息,构建多个训练集;
根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
第二方面,本申请实施例提供一种应用程序管控装置,包括:
采集模块,用于采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
构建模块,用于按照预设规则从所述样本集中提取特征信息,构建多个训练集;
训练模块,用于根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
管控模块,用于获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
第三方面,本申请实施例提供一种存储介质,其上存储有计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行上述的应用程序管控方法。
第四方面,本申请实施例提供一种电子设备,包括处理器和存储器,所述存储器存储有计算机程序,所述处理器通过调用所述计算机程序,用于执行上述的应用程序管控方法。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的应用程序管控装置的系统示意图。
图2为本申请实施例提供的应用程序管控装置的应用场景示意图。
图3为本申请实施例提供的应用程序管控方法的流程示意图。
图4为本申请实施例提供的应用程序管控方法的另一流程示意图。
图5为本申请实施例提供的应用程序管控装置的另一应用场景示意图。
图6为本申请实施例提供的应用程序管控装置的结构示意图。
图7为本申请实施例提供的应用程序管控装置的另一结构示意图。
图8为本申请实施例提供的应用程序管控装置的又一结构示意图。
图9为本申请实施例提供的电子设备的结构示意图。
图10为本申请实施例提供的电子设备的另一结构示意图。
具体实施方式
请参照图式,其中相同的组件符号代表相同的组件,本申请的原理是以实施在一适当的运算环境中来举例说明。以下的说明是基于所例示的本申请具体实施例,其不应被视为限制本申请未在此详述的其它具体实施例。
在以下的说明中,本申请的具体实施例将参考由一部或多部计算机所执行的步骤及符号来说明,除非另有述明。因此,这些步骤及操作将有数次提到由计算机执行,本文所指的计算机执行包括了由代表了以一结构化型式中的数据的电子信号的计算机处理单元的操作。此操作转换该数据或将其维持在该计算机的内存系统中的位置处,其可重新配置或另外以本领域测试人员所熟知的方式来改变该计算机的运作。该数据所维持的数据结构为该内存的实体位置,其具有由该数据格式所定义的特定特性。但是,本申请原理以上述文字来说明,其并不代表为一种限制,本领域测试人员将可了解到以下所述的多种步骤及操作亦可实施在硬件当中。
本文所使用的术语“模块”可看作为在该运算系统上执行的软件对象。本文所述的不同组件、模块、引擎及服务可看作为在该运算系统上的实施对象。而本文所述的装置及方法可以以软件的方式进行实施,当然也可在硬件上进行实施,均在本申请保护范围之内。
本申请中的术语“第一”、“第二”和“第三”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或模块的过程、方法、系统、产品或设备没有限定于已列出的步骤或模块,而是某些实施例还包括没有列出的步骤或模块,或某些实施例还包括对于这些过程、方法、产品或设备固有的其它步骤或模块。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
在现有技术中,对后台的应用程序进行管控时,通常直接根据电子设备的内存占用情况以及各应用程序的优先级,对后台的部分应用程序进行清理,以释放内存。有些应用程序对用户很重要、或者用户在短时间内需要再次使用某些应用程序,若在对后台进行清理时将这些应用程序清理掉,则用户再次使用这些应用程序时需要电子设备重新加载这些应用程序的进程,需要耗费大量时间及内存资源。
处于后台的很多应用程序用户一段时间内并不会使用,但是这些后台运行的应用程序会严重地占用电子设备的内存,使得中央处理器(central processing unit,CPU)占用率过高,导致电子设备出现运行速度变慢、卡顿、耗电过快等问题。其中,该电子设备可以是智能手机、平板电脑、台式电脑、笔记本电脑、或者掌上电脑等设备。
请参阅图1,图1为本申请实施例提供的应用程序管控装置的系统示意图。该应用程序管控装置主要用于:采集应用程序的多维特征信息作为样本,构建应用程序的样本集,按照预设规则从样本集中提取特征信息,构建多个训练集,根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进 行管控,例如清理、或者冻结等。
具体的,请参阅图2,图2为本申请实施例提供的应用程序管控装置的应用场景示意图。比如,应用程序管控装置在接收到管控请求时,检测到在电子设备的后台运行的应用程序包括应用程序a、应用程序b以及应用程序c。然后分别获取应用程序a对应的多维特征信息、应用程序b对应的多维特征信息以及应用程序c对应的多维特征信息,通过逻辑回归模型对应用程序a是否需要被使用的概率进行预测,得到概率a’,通过逻辑回归模型对应用程序b是否需要被使用的概率进行预测,得到概率b’,通过逻辑回归模型对应用程序c是否需要被使用的概率进行预测,得到概率c’;根据概率a’、概率b’以及概率c’对后台运行的应用程序a、应用程序b以及应用程序c进行管控,例如将概率最低的应用程序b关闭。
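The selection logic of this scenario — score every background application with the trained model and act on the one least likely to be needed again — can be sketched in a few lines of Python. This is a minimal illustration rather than the patented implementation; `predict_keep_probability` and `kill_app` are hypothetical stand-ins for the trained prediction model and the device's cleanup mechanism.

```python
# Minimal sketch: score each background app and stop the lowest-scoring one.
from typing import Callable, Dict, List

def manage_background_apps(
    apps: Dict[str, List[float]],                 # app name -> current multi-dimensional features
    predict_keep_probability: Callable[[List[float]], float],
    kill_app: Callable[[str], None],
) -> None:
    scores = {name: predict_keep_probability(feats) for name, feats in apps.items()}
    victim = min(scores, key=scores.get)          # app with the lowest probability of being used again
    kill_app(victim)                              # e.g. close or freeze it
```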
本申请实施例提供一种应用程序管控方法,该应用程序管控方法的执行主体可以是本申请实施例提供的应用程序管控装置,或者集成了该应用程序管控装置的电子设备,其中该应用程序管控装置可以采用硬件或者软件的方式实现。
本申请实施例提供一种应用程序管控方法,包括:
采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
按照预设规则从所述样本集中提取特征信息,构建多个训练集;
根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
在一实施例中,所述按照预设规则从所述样本集中提取特征信息,构建多个训练集的步骤包括:
从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,所述多个子样本构成一个训练集;
多次提取以得到多个训练集。
在一实施例中,所述按照预设规则从所述样本集中提取特征信息,构建多个训练集的步骤包括:
对所述样本集中的样本进行标记,得到每个样本的第一标签;
从所述样本集中提取单个样本,根据所述样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
从所述样本集中提取两个样本,根据所述两个样本分别对应的第一标签生成所述两个样本的第二标签,根据所述两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
根据所述多个第一训练集和所述第二训练集构建多个训练集。
在一实施例中,所述根据所述多个训练集对逻辑回归模型进行训练的步骤,包括:
根据所述多个第一训练集获取所述逻辑回归模型的第一损失函数;
根据所述多个第二训练集获取所述逻辑回归模型的第二损失函数;
根据所述第一损失函数和第二损失函数生成目标损失函数,并根据所述目标损失函数估计所述逻辑回归模型中的模型参数。
在一实施例中,根据预设公式以及所述多个第一训练集获取所述逻辑回归模型的第一损失函数,其中所述预设公式为:
$$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
其中:i、k均为正整数,$\hat{y}_{ik}$为预测概率分布,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码。
在一实施例中,根据所述第一损失函数和第二损失函数生成目标损失函数的步骤,包括:
分别获取所述第一损失函数和第二损失函数的权重值;
计算所述第一损失函数和第二损失函数的加权和,以得到所述目标损失函数。
在一实施例中,根据所述目标损失函数估计所述逻辑回归模型中的模型参数的步骤,包括:
基于梯度下降法对所述目标损失函数计算,以得到所述逻辑回归模型中的模型参数。
在一实施例中,所述预测结果包括:所述应用程序可清理的第一概率、和不可清理的第二概率;
根据所述预测结果对所述应用程序进行管控的步骤,包括:
对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果;
根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果;
根据第一预测结果的数量和第二预测结果的数量,确定所述应用程序是否可清理。
在一实施例中,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果的步骤,包括:
当所述第一概率大于所述第二概率时,输出可清理的第一预测结果;
当所述第一概率不大于所述第二概率时,输出不可清理的第二预测结果。
请参阅图3,图3为本申请实施例提供的应用程序管控方法的流程示意图。本申请实施例提供的应用程序管控方法应用于电子设备,具体流程可以如下:
步骤101,采集应用程序的多维特征信息作为样本,构建应用程序的样本集。
其中,预设应用程序可以是安装在电子设备中的任意应用程序,例如通讯应用程序、多媒体应用程序、游戏应用程序、资讯应用程序、或者购物应用程序等等。
应用的多维特征信息具有一定长度的维度,其每个维度上的参数均对应表征应用的一种特征信息,即该多维特征信息由多个特征信息构成。该多个特征信息可以包括应用自身相关的特征信息。在一实施例当中,可以搜集设备的30个特征,构成一个30维向量,该30个特征例如为:
应用上一次切入后台到现在的时长;
应用上一次切入后台到现在的期间中,累计屏幕关闭时间长度;
应用上一次在前台被使用时长;
应用上上一次在前台被使用时长;
应用上上上一次在前台被使用时长;
应用一天里(按每天统计)进入前台的次数;
应用一天里(休息日按工作日、休息日分开统计)进入前台的次数;
应用一天中(按每天统计)处于前台的时间;
应用一天中(休息日按工作日、休息日分开统计)处于前台的时间;
目标应用每天在8:00-12:00这个时段被使用的时间长度;
目标应用在后台停留时间直方图第一个bin(0-5分钟对应的次数占比);
目标应用在后台停留时间直方图第二个bin(5-10分钟对应的次数占比);
目标应用在后台停留时间直方图第三个bin(10-15分钟对应的次数占比);
目标应用在后台停留时间直方图第四个bin(15-20分钟对应的次数占比);
目标应用在后台停留时间直方图第五个bin(20-25分钟对应的次数占比);
目标应用在后台停留时间直方图第六个bin(25-30分钟对应的次数占比);
目标应用在后台停留时间直方图第七个bin(30分钟以后对应的次数占比);
目标应用一级类型;
目标应用二级类型;
目标应用被切换的方式,分为被home键切换、被recent键切换、被其他应用切换;
屏幕亮灭时间;
当前屏幕亮灭状态;
当前是否有在充电;
当前的电量;
当前wifi状态;
当前时间所处当天的时间段index;
该后台应用紧跟当前前台应用后被打开次数,不分工作日休息日统计所得;
该后台应用紧跟当前前台应用后被打开次数,分工作日休息日统计;
当前前台应用进入后台到目标应用进入前台按每天统计的平均间隔时间;
当前前台应用进入后台到目标应用进入前台期间按每天统计的平均屏幕熄灭时间。
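As a rough illustration of how the 30 features enumerated above could be assembled into one sample, the sketch below packs per-app usage statistics and device state into a fixed-order vector. It assumes Python with NumPy; the key names are hypothetical, since the text fixes only the meaning of each dimension, not any API.

```python
import numpy as np

def build_feature_vector(stats: dict) -> np.ndarray:
    """Assemble one multi-dimensional sample from collected usage statistics."""
    histogram_bins = stats["background_dwell_histogram"]      # 7 ratios: 0-5 min ... >30 min
    vector = [
        stats["seconds_since_last_backgrounded"],
        stats["screen_off_seconds_since_backgrounded"],
        stats["last_foreground_duration"],
        # ... remaining usage-count, duration, category and switch-type features ...
        float(stats["is_charging"]),                           # booleans quantised to 1.0 / 0.0
        float(stats["wifi_connected"]),
        stats["time_slot_index"],
    ] + list(histogram_bins)
    return np.asarray(vector, dtype=np.float32)
```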
应用的样本集中,可以包括在历史时间段内,按照预设频率采集的多个样本。历史时间段,例如可以是过去7天、10天;预设频率,例如可以是每10分钟采集一次、每半小时采集一次。可以理解的是,一次采集的应用的多维特征数据构成一个样本,多个样本,构成所述样本集。
在构成样本集之后,可以对样本集中的每个样本进行标记,得到每个样本的样本标签,由于本实施要实现的是预测应用是否可以清理,因此,所标记的样本标签包括可清理和不可清理。具体可根据用户对应用的历史使用习惯进行标记,例如:当应用进入后台30分钟后,用户关闭了该应用,则标记为“可清理”;再例如,当应用进入后台3分钟之后,用户将应用切换到了前台运行,则标记为“不可清理”。具体地,可以用数值“1”表示“可清理”,用数值“0”表示“不可清理”,反之亦可。
为便于分类、训练,可以将应用的多维特征信息中,未用数值直接表示的特征信息用具体的数值量化出来,例如针对电子设备的无线网连接状态这个特征信息,可以用数值1表示正常的状态,用数值0表示异常的状态(反之亦可);再例如,针对电子设备是否在充电状态这个特征信息,可以用数值1表示充电状态,用数值0表示未充电状态(反之亦可)。
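A minimal sketch of the labelling rule just described: the 30-minute and returned-to-foreground cases follow the examples above, the 1/0 encoding follows the stated convention, and the handling of any other case is an assumption made only for illustration.

```python
def label_sample(seconds_in_background: float, next_event: str) -> int:
    """Label one historical sample: 1 = cleanable, 0 = not cleanable."""
    if next_event == "closed_by_user" and seconds_in_background >= 30 * 60:
        return 1          # user closed the app after half an hour in the background
    if next_event == "switched_to_foreground":
        return 0          # user came back to the app, so it must not be cleaned
    return 0              # conservative default for other cases (assumption)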
步骤102,按照预设规则从样本集中提取特征信息,构建多个训练集。
在一实施例中,可以每次从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,多个子样本构成一个训练集,多次提取后,构建多个训练集,预设数目可根据实际需要自定义取值。
在一实施例中,训练集可以分为两部分,一部分是单体样本x,并标记此时目标应用接下来是否使用,若是则可以标记为1,若否则标记为0,形式可以为(x_i, y_i),其中y_i∈{0,1}。另一部分为三元组,即通过采样两个样本(x_i, x_j),若两个样本标签一致,记为1,标签不一致,记为-1,形式为(x_i, x_j, γ),其中γ∈{1,-1}。
因此,上述按照预设规则从所述样本集中提取特征信息,构建多个训练集的步骤可以包括:
对样本集中的样本进行标记,得到每个样本的第一标签;
从样本集中提取单个样本,根据样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
从样本集中提取两个样本,根据两个样本分别对应的第一标签生成两个样本的第二标签,根据两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
根据多个第一训练集和第二训练集构建多个训练集。
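The two constructions just listed — a first training set of single labelled samples and a second training set of pairs labelled by whether their first labels agree — could be sketched as follows. The sampling counts and the use of Python's `random` module are illustrative choices, not part of the described method.

```python
import random

def build_training_sets(samples, labels, n_single=1000, n_pair=1000):
    """samples: list of feature vectors; labels: the first labels (0/1)."""
    single_set = []                                   # (x_i, y_i) with y_i in {0, 1}
    for _ in range(n_single):
        i = random.randrange(len(samples))
        single_set.append((samples[i], labels[i]))

    pair_set = []                                     # (x_i, x_j, gamma) with gamma in {1, -1}
    for _ in range(n_pair):
        i, j = random.randrange(len(samples)), random.randrange(len(samples))
        gamma = 1 if labels[i] == labels[j] else -1   # second label derived from the first labels
        pair_set.append((samples[i], samples[j], gamma))

    return single_set, pair_set
```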
步骤103,根据多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型。
逻辑回归(Logistic Regression,LR)模型是机器学习中的一种分类模型,由于算法的简单和高效,在实际中应用非常广泛。逻辑回归主要通过构造一个重要的指标:发生比来判定因变量的类别。其引入概率的概念,把事件(如应用可清理)发生定义为Y=1,事件(如应用不可清理)未发生定义为Y=0,那么事件发生的概率为p,事件未发生的概率为1-p,把p看成x的线性函数。
在实际应用中,逻辑回归模型的表现形式有多种,比如以分类器的形式出现;按照分类器的分类能力,可以将分类器划分成弱分类器和强分类器。所以,本文所述的分类器一般指的就是逻辑回归模型。
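For reference, the probability/odds relationship that logistic regression relies on can be written out in a few lines; this is a generic sketch in Python, and the weights below are arbitrary example values rather than trained parameters.

```python
import math

def logistic_probability(x, w, b):
    """p(Y=1 | x): the log-odds log(p / (1 - p)) is the linear function w.x + b."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

p = logistic_probability([0.2, 1.0, 0.0], w=[0.5, -1.2, 0.3], b=0.1)
odds = p / (1.0 - p)      # the "occurrence ratio" (发生比) used to judge the class
```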
本申请实施例,可以利用训练集对相应的逻辑回归模型进行训练,得到相应的训练后的预测模型。本发明中的神经网络为浅层神经网络,网络结构仅为两层,即嵌入层和全连接层,通过单样本、三元组同时来训练得到嵌入层参数,嵌入层经过全连接层后进行分类,大大提高了准确率。
在一实施例中,根据所述多个训练集对逻辑回归模型进行训练的步骤,包括:
根据多个第一训练集获取所述逻辑回归模型的第一损失函数;
根据多个第二训练集获取所述逻辑回归模型的第二损失函数;
根据第一损失函数和第二损失函数生成目标损失函数,并根据目标损失函数估计所述逻辑回归模型中的模型参数。
其中,上述根据第一损失函数和第二损失函数生成目标损失函数的步骤,包括:
分别获取第一损失函数和第二损失函数的权重值;
计算第一损失函数和第二损失函数的加权和,以得到目标损失函数。
在得到目标损失函数后,可以基于梯度下降法对目标损失函数计算,以得到逻辑回归模型中的模型参数。
例如,在构建多个训练集之后,对于训练集中的每个数据x_i计算一个嵌入值,该过程通过一个由8个神经元组成的神经网络隐藏层实现。
对于单个样本,对嵌入层通过逻辑回归做分类,采用类别交叉熵作为损失函数:
$$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
其中,$\hat{y}_i = \mathrm{softmax}(W e_i)$为预测概率分布($e_i$为样本$x_i$经嵌入层得到的嵌入向量),i、k均为正整数,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码,W为全连接层的权重;通过最小化该损失函数,训练得到嵌入层。
对于三元组样本$(x_i, x_j, \gamma)$,其中γ为采样的标签,如果一致为1,不一致为-1,通过余弦距离:
$$\cos(x_i, x_j) = \frac{e_i \cdot e_j}{\lVert e_i \rVert\,\lVert e_j \rVert}$$
计算两个节点在嵌入层上的相似度($e_i$、$e_j$为两个样本的嵌入向量),通过最小化逻辑回归损失函数:
$$L_u = \frac{1}{N_g}\sum_{(i,j)} \log\!\left(1 + e^{-\gamma\,\cos(x_i, x_j)}\right)$$
其中,$N_g$为训练三元组的批量大小,进一步训练学习得到的嵌入层。
最终优化的目标损失函数为上述两项加权和,即L = L_s + λL_u,λ为权重,用以调节单个样本和三元组损失函数的相对比例;通过自适应学习率的梯度下降方法,得到最终的嵌入层。
其中,损失函数(loss function)是用来估量模型的预测值f(x)与真实值Y的不一致程度,它是一个非负实值函数,通常使用L(Y,f(x)),或者L(w)来表示,损失函数越小,模型的鲁棒性就越好。损失函数是经验风险函数的核心部分,也是结构风险函数重要组成部分。
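Putting these pieces together, the sketch below (assuming PyTorch; the 8-unit embedding size follows the text, while the tanh activation, λ, and the optimiser settings are illustrative assumptions) trains an embedding layer jointly with the categorical cross-entropy L_s on single samples and the cosine-similarity logistic loss L_u on sampled pairs, combining them as L = L_s + λL_u and minimising with an adaptive-learning-rate gradient method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CleanupPredictor(nn.Module):
    def __init__(self, in_dim=30, embed_dim=8, num_classes=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)     # embedding layer (8 hidden neurons)
        self.fc = nn.Linear(embed_dim, num_classes)   # fully connected classification layer

    def forward(self, x):
        e = torch.tanh(self.embed(x))                 # activation choice is an assumption
        return e, self.fc(e)

def training_step(model, optimizer, x_s, y_s, x_i, x_j, gamma, lam=1.0):
    _, logits = model(x_s)
    loss_s = F.cross_entropy(logits, y_s)             # L_s: categorical cross-entropy on single samples

    e_i, _ = model(x_i)
    e_j, _ = model(x_j)
    cos = F.cosine_similarity(e_i, e_j, dim=1)        # similarity of the two embeddings
    loss_u = F.softplus(-gamma * cos).mean()          # log(1 + exp(-gamma * cos)) on sampled pairs

    loss = loss_s + lam * loss_u                      # weighted sum L = L_s + lambda * L_u
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = CleanupPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive learning rate
```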
步骤104,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。
比如,可以根据预测时间采集应用的多维特征作为预测样本。其中,预测时间可以根据需求设定,如可以为当前时间等。譬如,可以在预测时间点采集应用的多维特征作为预测样本。
上述预测结果可以包含清理或不清理,若需要判断当前后台应用是否可清理,获取应用程序的当前多维度特征信息,比如应用程序使用信息和电子设备当前的多个特征信息等,以输入到预测模型,预测模型根据模型参数计算即可得到预测结果,从而判断应用程序是否需要清理。
需要说明的是,预测模型的训练过程可以在服务器端也可以在电子设备端完成。当预测模型的训练过程、实际预测过程都在服务器端完成时,需要使用训练后的预测模型时,可以将应用程序的当前多个维度的特征信息输入到服务器,服务器实际预测完成后,将预测结果发送至电子设备端,电子设备再根据预测结果管控该应用程序。
当预测模型的训练过程、实际预测过程都在电子设备端完成时,需要使用训练后的预测模型时,可以将应用程序的当前多维特征信息输入到电子设备,电子设备实际预测完成后,电子设备根据预测结果管控该应用程序。
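On the device side, inference and the resulting control action could look like the sketch below, which reuses the `CleanupPredictor` network from the earlier training sketch; the class ordering and the `freeze_app` callback are assumptions made for illustration only.

```python
import torch

def control_app(model, current_features, freeze_app, app_name):
    """Run the trained model on the app's current features and act on the result."""
    model.eval()
    with torch.no_grad():
        x = torch.tensor(current_features, dtype=torch.float32).unsqueeze(0)
        _, logits = model(x)
        probs = torch.softmax(logits, dim=1)[0]
    p_cleanable, p_keep = probs[1].item(), probs[0].item()   # class order is an assumption
    if p_cleanable > p_keep:
        freeze_app(app_name)                                  # clean up / freeze the background app
```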
由上可知,本申请实施例提供的应用程序管控方法采集应用程序的多维特征信息作为样本,构建应用程序的样本集,按照预设规则从样本集中提取特征信息,构建多个训练集,根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。本申请可以提高对应用程序进行预测的准确性,从而提升对进入后台的应用程序进行管控的智能化和准确性。
下面将在上述实施例描述的方法基础上,对本申请的清理方法做进一步介绍。参考图4,该应用程序管控方法包括:
201,采集应用程序的多维特征信息作为样本,构建应用程序的样本集。
应用的多维特征信息具有一定长度的维度,其每个维度上的参数均对应表征应用的一种特征信息,即该多维特征信息由多个特征信息构成。该多个特征信息可以包括应用自身相关的特征信息。在一实施例当中,可以搜集设备的30个特征,构成一个30维向量,该30个特征例如为:
应用上一次切入后台到现在的时长;
应用上一次切入后台到现在的期间中,累计屏幕关闭时间长度;
应用上一次在前台被使用时长;
应用上上一次在前台被使用时长;
应用上上上一次在前台被使用时长;
应用一天里(按每天统计)进入前台的次数;
应用一天里(休息日按工作日、休息日分开统计)进入前台的次数;
应用一天中(按每天统计)处于前台的时间;
应用一天中(休息日按工作日、休息日分开统计)处于前台的时间;
目标应用每天在8:00-12:00这个时段被使用的时间长度;
目标应用在后台停留时间直方图第一个bin(0-5分钟对应的次数占比);
目标应用在后台停留时间直方图第二个bin(5-10分钟对应的次数占比);
目标应用在后台停留时间直方图第三个bin(10-15分钟对应的次数占比);
目标应用在后台停留时间直方图第四个bin(15-20分钟对应的次数占比);
目标应用在后台停留时间直方图第五个bin(20-25分钟对应的次数占比);
目标应用在后台停留时间直方图第六个bin(25-30分钟对应的次数占比);
目标应用在后台停留时间直方图第七个bin(30分钟以后对应的次数占比);
目标应用一级类型;
目标应用二级类型;
目标应用被切换的方式,分为被home键切换、被recent键切换、被其他应用切换;
屏幕亮灭时间;
当前屏幕亮灭状态;
当前是否有在充电;
当前的电量;
当前wifi状态;
当前时间所处当天的时间段index;
该后台应用紧跟当前前台应用后被打开次数,不分工作日休息日统计所得;
该后台应用紧跟当前前台应用后被打开次数,分工作日休息日统计;
当前前台应用进入后台到目标应用进入前台按每天统计的平均间隔时间;
当前前台应用进入后台到目标应用进入前台期间按每天统计的平均屏幕熄灭时间。
202,按照预设规则从样本集中提取特征信息,构建多个训练集。
在一实施例中,训练集可以分为两部分,一部分是单体样本x,并标记此时目标应用接下来是否使用,若是则可以标记为1,若否则标记为0,形式可以为(x_i, y_i),其中y_i∈{0,1}。另一部分为三元组,即通过采样两个样本(x_i, x_j),若两个样本标签一致,记为1,标签不一致,记为-1,形式为(x_i, x_j, γ),其中γ∈{1,-1}。
203,根据多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型。
本申请实施例,可以利用训练集对相应的逻辑回归模型进行训练,得到相应的训练后的预测模型。本发明中的神经网络为浅层神经网络,网络结构仅为两层,即嵌入层和全连接层,通过单样本、三元组同时来训练得到嵌入层参数,嵌入层经过全连接层后进行分类,大大提高了准确率。
204,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成应用程序可清理的第一概率、和不可清理的第二概率。
根据预测集样本及其对应的训练后逻辑回归模型,输出相应的预测概率,得到多个预测概率。一个逻辑回归模型输出一个包含应用可清理的第一概率、和应用不可清理的第二概率的预测概率。
205,对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果。
206,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果。
具体的,当第一概率大于第二概率时,输出可清理的第一预测结果,当第一概率不大于第二概率时,输出不可清理的第二预测结果。
比如,对于某个预测概率P,如果Y=1表示应用可清理、Y=0表示应用不可清理,假设P(Y=1|x)大于P(Y=0|x),此时,输出预测应用可清理的第一预测结果;假设P(Y=1|x)不大于P(Y=0|x),此时,输出预测应用不可清理的第二预测结果。
207,根据第一预测结果的数量和第二预测结果的数量,确定应用程序是否可清理。
当第一预测结果的数量大于第二预测结果的数量时,确定应用可清理;
当第一预测结果的数量不大于第二预测结果的数量时,确定应用不可清理。
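The comparison-and-vote logic of steps 205-207 can be condensed into one short helper; in this sketch each tuple stands for the (first probability, second probability) output of one trained prediction model, and the majority count decides whether the application can be cleaned.

```python
def decide_cleanable(prediction_probs):
    """prediction_probs: list of (p_cleanable, p_not_cleanable) pairs, one per model."""
    first_results = sum(1 for p1, p2 in prediction_probs if p1 > p2)    # "cleanable" votes
    second_results = len(prediction_probs) - first_results              # "not cleanable" votes
    return first_results > second_results

decide_cleanable([(0.8, 0.2), (0.4, 0.6), (0.7, 0.3)])   # -> True: the app can be cleaned
```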
在一个具体的例子中,可以利用预先训练的逻辑回归模型预测后台运行的多个应用是否可清理,如表1所示,则确定可以清理后台运行的应用A1和应用A3,而保持应用A2在后台运行的状态不变。
应用 预测结果
应用A1 可清理
应用A2 不可清理
应用A3 可清理
表1
由上可知,本申请实施例提供的应用程序管控方法采集应用程序的多维特征信息作为样本,构建应用程序的样本集,按照预设规则从样本集中提取特征信息,构建多个训练集,根据多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。本申请可以提高对应用程序进行预测的准确性,从而提升对进入后台的应用程序进行管控的智能化和准确性。
请参阅图5,图5为本申请实施例提供的应用程序管控装置的另一应用场景示意图。当预测模型的训练过程在服务器端完成,预测模型的实际预测过程在电子设备端完成时,需要使用优化后的预测模型时,可以将应用程序当前的多维特征信息输入到电子设备,电子设备实际预测完成后,电子设备根据预测结果管控该应用程序。可选的,可以将训练好的预测模型文件(model文件)移植到智能设备上,若需要判断当前后台应用是否可清理,更新当前的样本集,输入到训练好的预测模型文件(model文件),计算即可得到预测值。
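The server-trained / device-executed split described here could be sketched as follows, again assuming PyTorch and the `CleanupPredictor` network from the earlier sketch; the file name and the way the model file reaches the device are illustrative assumptions.

```python
import torch

# --- server side: export the trained model file ---
torch.save(model.state_dict(), "cleanup_predictor.pt")

# --- device side: load the ported model file and run prediction only ---
device_model = CleanupPredictor()
device_model.load_state_dict(torch.load("cleanup_predictor.pt"))
device_model.eval()                                   # no training happens on the device
```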
在一些实施例中,在获取应用程序当前的多维特征信息的步骤之前,还可以包括:
检测应用程序是否进入后台,若进入后台,则获取应用程序当前的多维特征信息。然后输入预测模型生成预测结果,并根据预测结果对应用程序进行管控。
在一些实施例中,在获取应用程序当前的多维特征信息的步骤之前,还可以包括:
获取预设时间,若当前系统时间到达预设时间时,则获取应用程序当前的多维特征信息。其中预设时间可以为一天中的一个时间点,如上午9点,也可以为一天中的几个时间点,如上午9点、下午6点等。也可以为多天中的一个或几个时间点。然后输入预测模型生成预测结果,并根据预测结果对应用程序进行管控。
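Both trigger conditions described above — the application has just entered the background, or the current system time has reached a preset time — can be combined in a small check before the current multi-dimensional feature information is collected; the preset hours below correspond to the 9:00 / 18:00 examples in the text.

```python
import datetime

def should_run_prediction(app_entered_background: bool, preset_hours=(9, 18)) -> bool:
    """Trigger prediction when the app is backgrounded or a preset time is reached."""
    now = datetime.datetime.now()
    at_preset_time = now.hour in preset_hours and now.minute == 0
    return app_entered_background or at_preset_time
```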
上述所有的技术方案,可以采用任意结合形成本申请的可选实施例,在此不再一一赘述。
本申请还提供一种应用程序管控装置,包括:
采集模块,用于采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
构建模块,用于按照预设规则从所述样本集中提取特征信息,构建多个训练集;
训练模块,用于根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
管控模块,用于获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
在一实施例中,所述构建模块,具体用于从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,所述多个子样本构成一个训练集,多次提取以得到多个训练集。
在一实施例中,所述构建模块具体包括:
标记子模块,用于对所述样本集中的样本进行标记,得到每个样本的第一标签;
第一提取子模块,用于从所述样本集中提取单个样本,根据所述样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
第二提取子模块,用于从所述样本集中提取两个样本,根据所述两个样本分别对应的第一标签生成所述两个样本的第二标签,根据所述两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
构建子模块,用于根据所述多个第一训练集和所述第二训练集构建多个训练集。
在一实施例中,所述训练模块具体包括:
第一函数获取子模块,用于根据所述多个第一训练集获取所述逻辑回归模型的第一损失函数;
第二函数获取子模块,用于根据所述多个第二训练集获取所述逻辑回归模型的第二损失函数;
参数估计子模块,用于根据所述第一损失函数和第二损失函数生成目标损失函数,并根据所述目标损失函数估计所述逻辑回归模型中的模型参数。
在一实施例中,所述第一函数获取子模块,具体用于根据预设公式以及所述多个第一训练集获取所述逻辑回归模型的第一损失函数,其中所述预设公式为:
$$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
其中:i、k均为正整数,$\hat{y}_{ik}$为预测概率分布,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码。
在一实施例中,所述参数估计子模块,具体用于分别获取所述第一损失函数和第二损失函数的权重值,计算所述第一损失函数和第二损失函数的加权和,以得到所述目标损失函数。
在一实施例中,所述参数估计子模块,还具体用于基于梯度下降法对所述目标损失函数计算,以得到所述逻辑回归模型中的模型参数。
在一实施例中,所述预测结果包括:所述应用程序可清理的第一概率、和不可清理的第二概率,所述管控模块,包括:
输出子模块,用于对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果;
确定子模块,用于根据第一预测结果的数量和第二预测结果的数量,确定所述应用程序是否可清理。
在一实施例中,所述输出子模块,具体用于当所述第一概率大于所述第二概率时,输出可清理的第一预测结果;
当所述第一概率不大于所述第二概率时,输出不可清理的第二预测结果。
请参阅图6,图6为本申请实施例提供的应用程序管控装置的结构示意图。其中该应用程序管控装置300应用于电子设备,该应用程序管控装置300包括采集模块301、构建模块302、训练模块303以及管控模块304。
其中,采集模块301,用于采集应用程序的多维特征信息作为样本,构建应用程序的样本集。
其中,预设应用程序可以是安装在电子设备中的任意应用程序,例如通讯应用程序、多媒体应用程序、游戏应用程序、资讯应用程序、或者购物应用程序等等。
应用的多维特征信息具有一定长度的维度,其每个维度上的参数均对应表征应用的一种特征信息,即该多维特征信息由多个特征信息构成。该多个特征信息可以包括应用自身相关的特征信息。在一实施例当中,可以搜集设备的30个特征,构成一个30维向量,该30个特征例如为:
应用上一次切入后台到现在的时长;
应用上一次切入后台到现在的期间中,累计屏幕关闭时间长度;
应用上一次在前台被使用时长;
应用上上一次在前台被使用时长;
应用上上上一次在前台被使用时长;
应用一天里(按每天统计)进入前台的次数;
应用一天里(休息日按工作日、休息日分开统计)进入前台的次数;
应用一天中(按每天统计)处于前台的时间;
应用一天中(休息日按工作日、休息日分开统计)处于前台的时间;
目标应用每天在8:00-12:00这个时段被使用的时间长度;
目标应用在后台停留时间直方图第一个bin(0-5分钟对应的次数占比);
目标应用在后台停留时间直方图第二个bin(5-10分钟对应的次数占比);
目标应用在后台停留时间直方图第三个bin(10-15分钟对应的次数占比);
目标应用在后台停留时间直方图第四个bin(15-20分钟对应的次数占比);
目标应用在后台停留时间直方图第五个bin(20-25分钟对应的次数占比);
目标应用在后台停留时间直方图第六个bin(25-30分钟对应的次数占比);
目标应用在后台停留时间直方图第七个bin(30分钟以后对应的次数占比);
目标应用一级类型;
目标应用二级类型;
目标应用被切换的方式,分为被home键切换、被recent键切换、被其他应用切换;
屏幕亮灭时间;
当前屏幕亮灭状态;
当前是否有在充电;
当前的电量;
当前wifi状态;
当前时间所处当天的时间段index;
该后台应用紧跟当前前台应用后被打开次数,不分工作日休息日统计所得;
该后台应用紧跟当前前台应用后被打开次数,分工作日休息日统计;
当前前台应用进入后台到目标应用进入前台按每天统计的平均间隔时间;
当前前台应用进入后台到目标应用进入前台期间按每天统计的平均屏幕熄灭时间。
构建模块302,用于按照预设规则从样本集中提取特征信息,构建多个训练集。
在一实施例中,可以每次从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,多个子样本构成一个训练集,多次提取后,构建多个训练集,预设数目可根据实际需要自定义取值。
在一实施例中,训练集可以分为两部分,一部分是单体样本x,并标记此时目标应用接下来是否使用,若是则可以标记为1,若否则标记为0,形式可以为(x_i, y_i),其中y_i∈{0,1}。另一部分为三元组,即通过采样两个样本(x_i, x_j),若两个样本标签一致,记为1,标签不一致,记为-1,形式为(x_i, x_j, γ),其中γ∈{1,-1}。
训练模块303,用于根据多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型。
逻辑回归(Logistic Regression,LR)模型是机器学习中的一种分类模型,由于算法的简单和高效,在实际中应用非常广泛。逻辑回归主要通过构造一个重要的指标:发生比来判定因变量的类别。其引入概率的概念,把事件(如应用可清理)发生定义为Y=1,事件(如应用不可清理)未发生定义为Y=0,那么事件发生的概率为p,事件未发生的概率为1-p,把p看成x的线性函数。
在实际应用中,逻辑回归模型的表现形式有多种,比如以分类器的形式出现;按照分类器的分类能力,可以将分类器划分成弱分类器和强分类器。所以,本文所述的分类器一般指的就是逻辑回归模型。
本申请实施例,可以利用训练集对相应的逻辑回归模型进行训练,得到相应的训练后的预测模型。本发明中的神经网络为浅层神经网络,网络结构仅为两层,即嵌入层和全连接层,通过单样本、三元组同时来训练得到嵌入层参数,嵌入层经过全连接层后进行分类,大大提高了准确率。
在一实施例中,在构建多个训练集之后,对于训练集中的每个数据x_i计算一个嵌入值,该过程通过一个由8个神经元组成的神经网络隐藏层实现。
对于单个样本,对嵌入层通过逻辑回归做分类,采用类别交叉熵作为损失函数:
$$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
其中,$\hat{y}_{ik}$为预测概率分布,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码,W为全连接层的权重;通过最小化该损失函数,训练得到嵌入层。
对于三元组样本$(x_i, x_j, \gamma)$,其中γ为采样的标签,如果一致为1,不一致为-1,通过余弦距离:
$$\cos(x_i, x_j) = \frac{e_i \cdot e_j}{\lVert e_i \rVert\,\lVert e_j \rVert}$$
计算两个节点在嵌入层上的相似度($e_i$、$e_j$为两个样本的嵌入向量),通过最小化逻辑回归损失函数:
$$L_u = \frac{1}{N_g}\sum_{(i,j)} \log\!\left(1 + e^{-\gamma\,\cos(x_i, x_j)}\right)$$
其中,$N_g$为训练三元组的批量大小,进一步训练学习得到的嵌入层。
最终优化的目标损失函数为上述两项加权和,即L = L_s + λL_u,λ为权重,用以调节单个样本和三元组损失函数的相对比例;通过自适应学习率的梯度下降方法,得到最终的嵌入层。
管控模块304,用于获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。
比如,可以根据预测时间采集应用的多维特征作为预测样本。其中,预测时间可以根据需求设定,如可以为当前时间等。譬如,可以在预测时间点采集应用的多维特征作为预测样本。
上述预测结果可以包含清理或不清理,若需要判断当前后台应用是否可清理,获取应用程序的当前多维度特征信息,比如应用程序使用信息和电子设备当前的多个特征信息等,以输入到预测模型,预测模型根据模型参数计算即可得到预测结果,从而判断应用程序是否需要清理。
需要说明的是,预测模型的训练过程可以在服务器端也可以在电子设备端完成。当预测模型的训练过程、实际预测过程都在服务器端完成时,需要使用训练后的预测模型时,可以将应用程序的当前多个维度的特征信息输入到服务器,服务器实际预测完成后,将预测结果发送至电子设备端,电子设备再根据预测结果管控该应用程序。
当预测模型的训练过程、实际预测过程都在电子设备端完成时,需要使用训练后的预测模型时,可以将应用程序的当前多维特征信息输入到电子设备,电子设备实际预测完成后,电子设备根据预测结果管控该应用程序。
请参阅图7,上述构建模块302可以具体包括:
标记子模块3021,用于对样本集中的样本进行标记,得到每个样本的第一标签;
第一提取子模块3022,用于从样本集中提取单个样本,根据样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
第二提取子模块3023,用于从样本集中提取两个样本,根据两个样本分别对应的第一标签生成两个样本的第二标签,根据两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
构建子模块3024,用于根据多个第一训练集和第二训练集构建多个训练集。
继续参阅图8,上述训练模块303具体包括:
第一函数获取子模块3031,用于根据多个第一训练集获取逻辑回归模型的第一损失函数;
第二函数获取子模块3032,用于根据多个第二训练集获取逻辑回归模型的第二损失函数;
参数估计子模块3033,用于根据第一损失函数和第二损失函数生成目标损失函数,并根据目标损失函数估计逻辑回归模型中的模型参数。
在一实施例中,参数估计子模块3033具体用于分别获取第一损失函数和第二损失函数的权重值,计算第一损失函数和第二损失函数的加权和,以得到目标损失函数。
上述参数估计子模块3033,还具体用于基于梯度下降法对目标损失函数计算,以得到逻辑回归模型中的模型参数。
在一实施例中,预测结果包括:应用程序可清理的第一概率、和不可清理的第二概率,管控模块304,包括:
输出子模块,用于对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果;
确定子模块,用于根据第一预测结果的数量和第二预测结果的数量,确定所述应用程序是否可清理。
其中,上述输出子模块,具体用于当第一概率大于所述第二概率时,输出可清理的第一预测结果;
当第一概率不大于所述第二概率时,输出不可清理的第二预测结果。
上述所有的技术方案,可以采用任意结合形成本申请的可选实施例,在此不再一一赘述。
由上述可知,本申请实施例的应用程序管控装置,通过采集应用程序的多维特征信息作为样本,构建应用程序的样本集,按照预设规则从样本集中提取特征信息,构建多个训练集,根据多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。本申请可以提高对应用程序进行预测的准确性,从而提升对进入后台的应用程序进行管控的智能化和准确性。
本申请实施例中,应用程序管控装置与上文实施例中的应用程序管控方法属于同一构思,在应用程序管控装置上可以运行应用程序管控方法实施例中提供的任一方法,其具体实现过程详见应用程序管控方法的实施例,此处不再赘述。
本申请实施例还提供一种电子设备。请参阅图9,电子设备400包括处理器401以及存储器402。其中,处理器401与存储器402电性连接。
处理器401是电子设备400的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或加载存储在存储器402内的计算机程序,以及调用存储在存储器402内的数据,执行电子设备400的各种功能并处理数据,从而对电子设备400进行整体监控。
存储器402可用于存储软件程序以及模块,处理器401通过运行存储在存储器402的计算机程序以及模块,从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的计算机程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器402可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。相应地,存储器402还可以包括存储器控制器,以提供处理器401对存储器402的访问。
在本申请实施例中,电子设备400中的处理器401会按照如下的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401运行存储在存储器402中的计算机程序,从而实现各种功能,如下:
采集应用程序的多维特征信息作为样本,构建应用程序的样本集,按照预设规则从样本集中提取特征信息,构建多个训练集,根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型,获取应用程序当前的多维特征信息并作为预测样本,根据预测样本和训练后的预测模型生成预测结果,并根据预测结果对应用程序进行管控。本申请可以提高对应用程序进行预测的准确性,从而提升对进入后台的应用程序进行管控的智能化和准确性。
请一并参阅图10,在一些实施方式中,电子设备400还可以包括:显示器403、射频电路404、音频电路405以及电源406。其中,显示器403、射频电路404、音频电路405以及电源406分别与处理器401电性连接。
显示器403可以用于显示由用户输入的信息或提供给用户的信息以及各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示器403可以包括显示面板,在一些实施方式中,可以采用液晶显示器(Liquid Crystal Display, LCD)、或者有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板。
射频电路404可以用于收发射频信号,以通过无线通信与网络设备或其他电子设备建立无线通讯,与网络设备或其他电子设备之间收发信号。
音频电路405可以用于通过扬声器、传声器提供用户与电子设备之间的音频接口。
电源406可以用于给电子设备400的各个部件供电。在一些实施例中,电源406可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图10中未示出,电子设备400还可以包括摄像头、蓝牙模块等,在此不再赘述。
本申请实施例还提供一种存储介质,存储介质存储有计算机程序,当计算机程序在计算机上运行时,使得计算机执行上述任一实施例中的应用程序管控方法。
在本申请实施例中,存储介质可以是磁碟、光盘、只读存储器(Read Only Memory,ROM)、或者随机存取记忆体(Random Access Memory,RAM)等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对本申请实施例的应用程序管控方法而言,本领域普通技术人员可以理解实现本申请实施例应用程序管控方法的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,计算机程序可存储于一计算机可读取存储介质中,如存储在电子设备的存储器中,并被该电子设备内的至少一个处理器执行,在执行过程中可包括如应用程序管控方法的实施例的流程。其中,的存储介质可为磁碟、光盘、只读存储器、随机存取记忆体等。
对本申请实施例的应用程序管控装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种应用程序管控方法、装置、存储介质及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种应用程序管控方法,其中,所述方法包括以下步骤:
    采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
    按照预设规则从所述样本集中提取特征信息,构建多个训练集;
    根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
    获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
  2. 根据权利要求1所述的应用程序管控方法,其中,所述按照预设规则从所述样本集中提取特征信息,构建多个训练集的步骤包括:
    从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,所述多个子样本构成一个训练集;
    多次提取以得到多个训练集。
  3. 根据权利要求1所述的应用程序管控方法,其中,所述按照预设规则从所述样本集中提取特征信息,构建多个训练集的步骤包括:
    对所述样本集中的样本进行标记,得到每个样本的第一标签;
    从所述样本集中提取单个样本,根据所述样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
    从所述样本集中提取两个样本,根据所述两个样本分别对应的第一标签生成所述两个样本的第二标签,根据所述两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
    根据所述多个第一训练集和所述第二训练集构建多个训练集。
  4. 根据权利要求3所述的应用程序管控方法,其中,所述根据所述多个训练集对逻辑回归模型进行训练的步骤,包括:
    根据所述多个第一训练集获取所述逻辑回归模型的第一损失函数;
    根据所述多个第二训练集获取所述逻辑回归模型的第二损失函数;
    根据所述第一损失函数和第二损失函数生成目标损失函数,并根据所述目标损失函数估计所述逻辑回归模型中的模型参数。
  5. 根据权利要求4所述的应用程序管控方法,其中,根据预设公式以及所述多个第一训练集获取所述逻辑回归模型的第一损失函数,其中所述预设公式为:
    $$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
    其中:i、k均为正整数,$\hat{y}_{ik}$为预测概率分布,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码。
  6. 根据权利要求4所述的应用程序管控方法,其中,根据所述第一损失函数和第二损失函数生成目标损失函数的步骤,包括:
    分别获取所述第一损失函数和第二损失函数的权重值;
    计算所述第一损失函数和第二损失函数的加权和,以得到所述目标损失函数。
  7. 根据权利要求4所述的应用程序管控方法,其中,根据所述目标损失函数估计所述逻辑回归模型中的模型参数的步骤,包括:
    基于梯度下降法对所述目标损失函数计算,以得到所述逻辑回归模型中的模型参数。
  8. 根据权利要求1所述的应用程序管控方法,其中,所述预测结果包括:所述应用 程序可清理的第一概率、和不可清理的第二概率;
    根据所述预测结果对所述应用程序进行管控的步骤,包括:
    对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果;
    根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果;
    根据第一预测结果的数量和第二预测结果的数量,确定所述应用程序是否可清理。
  9. 根据权利要求8所述的应用程序管控方法,其中,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果的步骤,包括:
    当所述第一概率大于所述第二概率时,输出可清理的第一预测结果;
    当所述第一概率不大于所述第二概率时,输出不可清理的第二预测结果。
  10. 一种应用程序管控装置,其中,所述装置包括:
    采集模块,用于采集应用程序的多维特征信息作为样本,构建所述应用程序的样本集;
    构建模块,用于按照预设规则从所述样本集中提取特征信息,构建多个训练集;
    训练模块,用于根据所述多个训练集对逻辑回归模型进行训练,以得到训练后的预测模型;
    管控模块,用于获取所述应用程序当前的多维特征信息并作为预测样本,根据所述预测样本和训练后的预测模型生成预测结果,并根据所述预测结果对所述应用程序进行管控。
  11. 根据权利要求10所述的应用程序管控装置,其中,
    所述构建模块,具体用于从每个样本的多维特征信息中,有放回地随机提取预设数目的特征信息,构成对应的子样本,所述多个子样本构成一个训练集,多次提取以得到多个训练集。
  12. 根据权利要求10所述的应用程序管控装置,其中,所述构建模块具体包括:
    标记子模块,用于对所述样本集中的样本进行标记,得到每个样本的第一标签;
    第一提取子模块,用于从所述样本集中提取单个样本,根据所述样本以及对应的第一标签构成第一训练集,多次提取以得到多个第一训练集;
    第二提取子模块,用于从所述样本集中提取两个样本,根据所述两个样本分别对应的第一标签生成所述两个样本的第二标签,根据所述两个样本以及对应的第二标签构成第二训练集,多次提取以得到多个第二训练集;
    构建子模块,用于根据所述多个第一训练集和所述第二训练集构建多个训练集。
  13. 根据权利要求12所述的应用程序管控装置,其中,所述训练模块具体包括:
    第一函数获取子模块,用于根据所述多个第一训练集获取所述逻辑回归模型的第一损失函数;
    第二函数获取子模块,用于根据所述多个第二训练集获取所述逻辑回归模型的第二损失函数;
    参数估计子模块,用于根据所述第一损失函数和第二损失函数生成目标损失函数,并根据所述目标损失函数估计所述逻辑回归模型中的模型参数。
  14. 根据权利要求13所述的应用程序管控装置,其中,
    所述第一函数获取子模块,具体用于根据预设公式以及所述多个第一训练集获取所述逻辑回归模型的第一损失函数,其中所述预设公式为:
    $$L_s = -\frac{1}{N_s}\sum_{i=1}^{N_s}\sum_{k=1}^{C} y_{ik}\,\log \hat{y}_{ik}$$
    其中:i、k均为正整数,$\hat{y}_{ik}$为预测概率分布,$N_s$为训练分类的批量大小,C为类别个数,$y_i$为表征样本类别的独热码。
  15. 根据权利要求13所述的应用程序管控装置,其中,
    所述参数估计子模块,具体用于分别获取所述第一损失函数和第二损失函数的权重值,计算所述第一损失函数和第二损失函数的加权和,以得到所述目标损失函数。
  16. 根据权利要求13所述的应用程序管控装置,其中,
    所述参数估计子模块,还具体用于基于梯度下降法对所述目标损失函数计算,以得到所述逻辑回归模型中的模型参数。
  17. 根据权利要求10所述的应用程序管控装置,其中,所述预测结果包括:所述应用程序可清理的第一概率、和不可清理的第二概率,所述管控模块,包括:
    输出子模块,用于对应用程序可清理的第一概率与不可清理的第二概率进行比较,得到比较结果,根据比较结果输出应用程序可清理的第一预测结果、或者不可清理的第二预测结果;
    确定子模块,用于根据第一预测结果的数量和第二预测结果的数量,确定所述应用程序是否可清理。
  18. 根据权利要求17所述的应用程序管控装置,其中,
    所述输出子模块,具体用于当所述第一概率大于所述第二概率时,输出可清理的第一预测结果;
    当所述第一概率不大于所述第二概率时,输出不可清理的第二预测结果。
  19. 一种存储介质,其上存储有计算机程序,其中,当所述计算机程序在计算机上运行时,使得所述计算机执行如权利要求1至9任一项所述的应用程序管控方法。
  20. 一种电子设备,包括处理器和存储器,所述存储器存储有计算机程序,其中,所述处理器通过调用所述计算机程序,用于执行如权利要求1至9任一项所述的应用程序管控方法。
PCT/CN2018/102254 2017-09-30 2018-08-24 应用程序管控方法、装置、存储介质及电子设备 WO2019062414A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710940355.2A CN107678845B (zh) 2017-09-30 2017-09-30 应用程序管控方法、装置、存储介质及电子设备
CN201710940355.2 2017-09-30

Publications (1)

Publication Number Publication Date
WO2019062414A1 true WO2019062414A1 (zh) 2019-04-04

Family

ID=61140234

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102254 WO2019062414A1 (zh) 2017-09-30 2018-08-24 应用程序管控方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN107678845B (zh)
WO (1) WO2019062414A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232403A (zh) * 2019-05-15 2019-09-13 腾讯科技(深圳)有限公司 一种标签预测方法、装置、电子设备及介质
CN110442516A (zh) * 2019-07-12 2019-11-12 上海陆家嘴国际金融资产交易市场股份有限公司 信息处理方法、设备及计算机可读存储介质
CN110796513A (zh) * 2019-09-25 2020-02-14 北京三快在线科技有限公司 多任务学习方法、装置、电子设备及存储介质
CN111124632A (zh) * 2019-12-06 2020-05-08 西安易朴通讯技术有限公司 移动终端的优化方法、装置、终端设备及存储介质
CN111144473A (zh) * 2019-12-23 2020-05-12 中国医学科学院肿瘤医院 训练集构建方法、装置、电子设备及计算机可读存储介质
CN111460150A (zh) * 2020-03-27 2020-07-28 北京松果电子有限公司 一种分类模型的训练方法、分类方法、装置及存储介质
CN111797861A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 信息处理方法、装置、存储介质及电子设备
CN111797423A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 模型训练方法、数据授权方法、装置、存储介质及设备
CN111949530A (zh) * 2020-08-07 2020-11-17 北京灵汐科技有限公司 测试结果的预测方法、装置、计算机设备及存储介质
CN112396445A (zh) * 2019-08-16 2021-02-23 京东数字科技控股有限公司 用于识别用户身份信息的方法和装置
CN112506556A (zh) * 2020-11-19 2021-03-16 杭州云深科技有限公司 应用程序分类方法、装置、计算机设备及存储介质
CN112651534A (zh) * 2019-10-10 2021-04-13 顺丰科技有限公司 一种预测资源供应链需求量的方法、装置及存储介质
CN113034260A (zh) * 2019-12-09 2021-06-25 中国移动通信有限公司研究院 一种信用评估方法、模型构建方法、显示方法及相关设备
CN113239799A (zh) * 2021-05-12 2021-08-10 北京沃东天骏信息技术有限公司 训练方法、识别方法、装置、电子设备和可读存储介质
CN113342335A (zh) * 2021-05-11 2021-09-03 北京大学 快应用页面选择方法、装置、设备及存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337358B (zh) * 2017-09-30 2020-01-14 Oppo广东移动通信有限公司 应用清理方法、装置、存储介质及电子设备
CN107678845B (zh) * 2017-09-30 2020-03-10 Oppo广东移动通信有限公司 应用程序管控方法、装置、存储介质及电子设备
CN110163460B (zh) * 2018-03-30 2023-09-19 腾讯科技(深圳)有限公司 一种确定应用分值的方法及设备
CN111797880A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 数据处理方法、装置、存储介质及电子设备
CN112770002B (zh) * 2019-10-17 2022-04-19 荣耀终端有限公司 一种心跳管控的方法和电子设备
CN113325998B (zh) * 2020-02-29 2022-09-06 杭州海康存储科技有限公司 读写速度控制方法、装置
CN112130991A (zh) * 2020-08-28 2020-12-25 北京思特奇信息技术股份有限公司 一种基于机器学习的应用程序控制方法和系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298549A (zh) * 2014-09-30 2015-01-21 北京金山安全软件有限公司 移动终端中应用程序的清理方法、装置和移动终端
CN104991803A (zh) * 2015-07-10 2015-10-21 上海斐讯数据通信技术有限公司 对android应用程序在特定条件下自启动的管控系统及方法
CN107133436A (zh) * 2016-02-26 2017-09-05 阿里巴巴集团控股有限公司 一种多重抽样模型训练方法及装置
CN107643948A (zh) * 2017-09-30 2018-01-30 广东欧珀移动通信有限公司 应用程序管控方法、装置、介质及电子设备
CN107678799A (zh) * 2017-09-30 2018-02-09 广东欧珀移动通信有限公司 应用程序管控方法、装置、存储介质及电子设备
CN107678845A (zh) * 2017-09-30 2018-02-09 广东欧珀移动通信有限公司 应用程序管控方法、装置、存储介质及电子设备
CN107704364A (zh) * 2017-09-30 2018-02-16 广东欧珀移动通信有限公司 后台应用程序管控方法、装置、存储介质及电子设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102917B (zh) * 2014-07-03 2017-05-10 中国石油大学(北京) 域自适应分类器的构造及数据分类的方法和装置
CN106093612B (zh) * 2016-05-26 2019-03-19 国网江苏省电力公司电力科学研究院 一种电力变压器故障诊断方法
CN106096538B (zh) * 2016-06-08 2019-08-23 中国科学院自动化研究所 基于定序神经网络模型的人脸识别方法及装置
CN106993083B (zh) * 2017-02-21 2020-12-04 北京奇虎科技有限公司 一种推荐智能终端操作提示信息的方法和装置
CN107133094B (zh) * 2017-06-05 2021-11-02 努比亚技术有限公司 应用管理方法、移动终端及计算机可读存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298549A (zh) * 2014-09-30 2015-01-21 北京金山安全软件有限公司 移动终端中应用程序的清理方法、装置和移动终端
CN104991803A (zh) * 2015-07-10 2015-10-21 上海斐讯数据通信技术有限公司 对android应用程序在特定条件下自启动的管控系统及方法
CN107133436A (zh) * 2016-02-26 2017-09-05 阿里巴巴集团控股有限公司 一种多重抽样模型训练方法及装置
CN107643948A (zh) * 2017-09-30 2018-01-30 广东欧珀移动通信有限公司 应用程序管控方法、装置、介质及电子设备
CN107678799A (zh) * 2017-09-30 2018-02-09 广东欧珀移动通信有限公司 应用程序管控方法、装置、存储介质及电子设备
CN107678845A (zh) * 2017-09-30 2018-02-09 广东欧珀移动通信有限公司 应用程序管控方法、装置、存储介质及电子设备
CN107704364A (zh) * 2017-09-30 2018-02-16 广东欧珀移动通信有限公司 后台应用程序管控方法、装置、存储介质及电子设备

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797861A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 信息处理方法、装置、存储介质及电子设备
CN111797423A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 模型训练方法、数据授权方法、装置、存储介质及设备
CN110232403A (zh) * 2019-05-15 2019-09-13 腾讯科技(深圳)有限公司 一种标签预测方法、装置、电子设备及介质
CN110442516A (zh) * 2019-07-12 2019-11-12 上海陆家嘴国际金融资产交易市场股份有限公司 信息处理方法、设备及计算机可读存储介质
CN110442516B (zh) * 2019-07-12 2024-02-09 未鲲(上海)科技服务有限公司 信息处理方法、设备及计算机可读存储介质
CN112396445A (zh) * 2019-08-16 2021-02-23 京东数字科技控股有限公司 用于识别用户身份信息的方法和装置
CN110796513A (zh) * 2019-09-25 2020-02-14 北京三快在线科技有限公司 多任务学习方法、装置、电子设备及存储介质
CN112651534A (zh) * 2019-10-10 2021-04-13 顺丰科技有限公司 一种预测资源供应链需求量的方法、装置及存储介质
CN111124632A (zh) * 2019-12-06 2020-05-08 西安易朴通讯技术有限公司 移动终端的优化方法、装置、终端设备及存储介质
CN111124632B (zh) * 2019-12-06 2024-02-13 西安易朴通讯技术有限公司 移动终端的优化方法、装置、终端设备及存储介质
CN113034260A (zh) * 2019-12-09 2021-06-25 中国移动通信有限公司研究院 一种信用评估方法、模型构建方法、显示方法及相关设备
CN111144473A (zh) * 2019-12-23 2020-05-12 中国医学科学院肿瘤医院 训练集构建方法、装置、电子设备及计算机可读存储介质
CN111144473B (zh) * 2019-12-23 2024-04-23 中国医学科学院肿瘤医院 训练集构建方法、装置、电子设备及计算机可读存储介质
CN111460150A (zh) * 2020-03-27 2020-07-28 北京松果电子有限公司 一种分类模型的训练方法、分类方法、装置及存储介质
CN111460150B (zh) * 2020-03-27 2023-11-10 北京小米松果电子有限公司 一种分类模型的训练方法、分类方法、装置及存储介质
CN111949530A (zh) * 2020-08-07 2020-11-17 北京灵汐科技有限公司 测试结果的预测方法、装置、计算机设备及存储介质
CN111949530B (zh) * 2020-08-07 2024-02-20 北京灵汐科技有限公司 测试结果的预测方法、装置、计算机设备及存储介质
CN112506556A (zh) * 2020-11-19 2021-03-16 杭州云深科技有限公司 应用程序分类方法、装置、计算机设备及存储介质
CN112506556B (zh) * 2020-11-19 2023-08-25 杭州云深科技有限公司 应用程序分类方法、装置、计算机设备及存储介质
CN113342335A (zh) * 2021-05-11 2021-09-03 北京大学 快应用页面选择方法、装置、设备及存储介质
CN113239799A (zh) * 2021-05-12 2021-08-10 北京沃东天骏信息技术有限公司 训练方法、识别方法、装置、电子设备和可读存储介质

Also Published As

Publication number Publication date
CN107678845B (zh) 2020-03-10
CN107678845A (zh) 2018-02-09

Similar Documents

Publication Publication Date Title
WO2019062414A1 (zh) 应用程序管控方法、装置、存储介质及电子设备
CN108337358B (zh) 应用清理方法、装置、存储介质及电子设备
CN108280458B (zh) 群体关系类型识别方法及装置
CN106650780B (zh) 数据处理方法及装置、分类器训练方法及系统
WO2019062413A1 (zh) 应用程序管控方法、装置、存储介质及电子设备
WO2017219548A1 (zh) 用户属性预测方法及装置
WO2019120019A1 (zh) 用户性别预测方法、装置、存储介质及电子设备
CN108228325B (zh) 应用管理方法和装置、电子设备、计算机存储介质
WO2019062418A1 (zh) 应用清理方法、装置、存储介质及电子设备
WO2019062342A9 (zh) 后台应用清理方法、装置、存储介质及电子设备
WO2017107422A1 (zh) 一种用户性别识别方法及装置
CN107832132B (zh) 应用控制方法、装置、存储介质及电子设备
CN107870810B (zh) 应用清理方法、装置、存储介质及电子设备
CN110163376B (zh) 样本检测方法、媒体对象的识别方法、装置、终端及介质
WO2019120023A1 (zh) 性别预测方法、装置、存储介质及电子设备
WO2019062405A1 (zh) 应用程序的处理方法、装置、存储介质及电子设备
WO2019062460A1 (zh) 应用控制方法、装置、存储介质以及电子设备
US11269966B2 (en) Multi-classifier-based recommendation method and device, and electronic device
CN107894827B (zh) 应用清理方法、装置、存储介质及电子设备
CN107943582B (zh) 特征处理方法、装置、存储介质及电子设备
CN111898675B (zh) 信贷风控模型生成方法、装置、评分卡生成方法、机器可读介质及设备
CN107885545B (zh) 应用管理方法、装置、存储介质及电子设备
US11809505B2 (en) Method for pushing information, electronic device
WO2019062416A1 (zh) 应用清理方法、装置、存储介质及电子设备
WO2019085754A1 (zh) 应用清理方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18862520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18862520

Country of ref document: EP

Kind code of ref document: A1