WO2019062413A1 - Application management and control method, device, storage medium, and electronic device - Google Patents
Application management and control method, device, storage medium, and electronic device
- Publication number
- WO2019062413A1 (PCT/CN2018/102239)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- application
- sample set
- neural network
- network model
- prediction
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44594—Unloading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- The present application relates to the field of communications technologies, and in particular to an application management and control method, device, storage medium, and electronic device.
- The present application provides an application management and control method, device, storage medium, and electronic device, which can improve the intelligence and accuracy of application control.
- an embodiment of the present application provides an application management method, including:
- Collecting multi-dimensional feature information of the application as samples to construct a sample set of the application, where the sample set includes a first sample set and a second sample set, the first sample set including feature information of the application and the second sample set including feature information of the electronic device;
- Training a cyclic neural network model (i.e., a recurrent neural network) and a stack self-coding neural network model (i.e., a stacked autoencoder) with the first sample set and the second sample set respectively, to obtain a trained prediction model;
- Acquiring current multi-dimensional feature information of the application as a prediction sample; and
- Generating a prediction result according to the prediction sample and the trained prediction model, and controlling the application according to the prediction result.
- an application management device including:
- An acquisition module configured to collect multi-dimensional feature information of the application as samples and construct a sample set of the application, where the sample set includes a first sample set and a second sample set, the first sample set including feature information of the application and the second sample set including feature information of the electronic device;
- a training module configured to train the cyclic neural network model and the stack self-coding neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
- an obtaining module configured to acquire current multi-dimensional feature information of the application as a prediction sample;
- a control module configured to generate a prediction result according to the prediction sample and the trained prediction model, and to control the application according to the prediction result.
- an embodiment of the present application provides a storage medium on which a computer program is stored, and when the computer program runs on a computer, causes the computer to execute the application management and control method described above.
- an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the foregoing application management and control method by calling the computer program.
- FIG. 1 is a schematic diagram of a system of an application management device according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of an application scenario of an application management and control device according to an embodiment of the present disclosure.
- FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
- FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of another application scenario of an application management device according to an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
- FIG. 7 is another schematic structural diagram of an application management device according to an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- FIG. 9 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
- The term "module" as used herein may be taken to mean a software object that is executed on the computing system.
- the different components, modules, engines, and services described herein can be viewed as implementation objects on the computing system.
- the apparatus and method described herein may be implemented in software, and may of course be implemented in hardware, all of which are within the scope of the present application.
- References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application.
- The appearances of this phrase in various places in the specification do not necessarily refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
- Applications in the background are usually cleaned according to the memory usage of the electronic device and the priority of each application, in order to release memory.
- Some applications are important to users, or users may need to use them again within a short period of time. If these applications are cleaned up, users will need to reload them when they use them again, and reloading an application takes considerable time and memory resources.
- the electronic device may be a smart phone, a tablet computer, a desktop computer, a notebook computer, or a handheld computer.
- FIG. 1 is a schematic diagram of a system for controlling an application program according to an embodiment of the present application.
- The application management device is mainly configured to: collect the application behavior sequence of the application as a first sample set, and collect static device features of the electronic device as a second sample set; acquire a prediction model, where the prediction model includes a cyclic neural network model and a stack self-coding neural network model; input the first sample set and the second sample set as training data into the cyclic neural network model and the stack self-coding neural network model respectively, and learn the optimized parameters to generate a trained prediction model; and acquire the current usage information of the application and the current feature information of the electronic device, generate a prediction result according to the prediction model, and determine from the prediction result whether the application still needs to be used, so as to control the application, for example by cleaning or freezing it.
- FIG. 2 is a schematic diagram of an application scenario of an application management device according to an embodiment of the present application.
- When the application management device receives a control request, it detects that the applications running in the background of the electronic device include application a, application b, and application c. The multi-dimensional feature information corresponding to application a, application b, and application c is then obtained respectively, and the probability that application a needs to be used is predicted by the prediction model, obtaining probability a'.
- The embodiment of the present application provides an application management and control method; the execution subject of the method may be the application management device provided by the embodiment of the present application, or an electronic device integrated with the application management device, where the device can be implemented in hardware or in software.
- the embodiment of the present application will be described from the perspective of an application management device, and the application management device may be specifically integrated in an electronic device.
- The application management method includes: collecting a first sample set of the application and a second sample set of the electronic device; acquiring a prediction model, where the prediction model includes a cyclic neural network model and a stack self-coding neural network model; inputting the first sample set and the second sample set as training data into the cyclic neural network model and the stack self-coding neural network model respectively, whereby the optimized parameters of the prediction model are obtained and the trained prediction model is generated; and acquiring the current usage information of the application and the current feature information of the electronic device, generating a prediction result according to the prediction model, and controlling the application according to the prediction result.
- An embodiment of the present application provides an application management and control method, including:
- Collecting multi-dimensional feature information of the application as samples to construct a sample set of the application, where the sample set includes a first sample set and a second sample set, the first sample set including feature information of the application and the second sample set including feature information of the electronic device;
- Training the cyclic neural network model and the stack self-coding neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
- Acquiring current multi-dimensional feature information of the application as a prediction sample, generating a prediction result according to the prediction sample and the trained prediction model, and controlling the application according to the prediction result.
- the step of training the cyclic neural network model and the stack self-coding neural network model with the first sample set and the second sample set respectively, to obtain the trained prediction model, includes:
- the first sample set and the second sample set are respectively input as training data into a cyclic neural network model and a stack self-coding neural network model for training to generate optimization parameters;
- a trained prediction model is generated according to the optimization parameter and the cyclic neural network model and the stack self-coding neural network model.
- the step of inputting the first sample set and the second sample set as training data into the cyclic neural network model and the stack self-coding neural network model for training, to generate the optimized parameters, includes:
- Training is performed according to the loss value to generate the optimization parameter.
- the step of training according to the loss value includes:
- Training is performed using a stochastic gradient descent method based on the loss value.
- the step of inputting the intermediate values into the fully connected layer to obtain probabilities corresponding to the plurality of prediction results includes computing, for each category k:
- p_k = exp(z_k) / Σ_{j=1}^{C} exp(z_j), where z_k is the k-th intermediate value, C is the number of categories of the prediction result, and z_j is the j-th intermediate value.
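The softmax step above can be sketched in a few lines; this is an illustrative implementation of the standard formula, not code from the patent:

```python
import math

# Map intermediate values z_1..z_C from the fully connected layer to
# probabilities over the C prediction categories (softmax).
def softmax(z):
    m = max(z)                               # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0])                  # C = 2 categories
```

The probabilities sum to one, and the category with the larger intermediate value receives the larger probability.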
- the step of obtaining a loss value according to the plurality of the prediction results and a plurality of the probabilities corresponding thereto includes:
- the prediction result includes a first predicted value 1 and a second predicted value 0;
- the step of controlling the application according to the prediction result includes:
- when the prediction result is the first predicted value 1, the application is cleaned; when the prediction result is the second predicted value 0, the state of the application is kept unchanged.
- the cyclic neural network model is used for time series analysis on a background application
- the hidden layer size is 64
- the stack self-encoding neural network model is used to encode a static feature
- the hidden layer size is 64
- the activation function adopts the Sigmoid function: σ(x) = 1 / (1 + e^(−x)).
- FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
- the application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
- Step 101: Collect multi-dimensional feature information of the application as samples and construct a sample set of the application, where the sample set includes a first sample set and a second sample set, the first sample set including feature information of the application and the second sample set including feature information of the electronic device.
- the sample of the first sample set may include usage information of the preset application
- the sample of the second sample set may include at least one of status information, time information, and location information of the electronic device.
- the usage information of the application may be the usage status of the background application, recorded every five minutes: being in use is recorded as 0 and not in use as 1, so that the data of each application is stored as a binary vector.
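A minimal sketch (illustrative names, not code from the patent) of encoding an application's usage as such a binary vector, sampled once every five minutes, with in-use recorded as 0 and not-in-use as 1 as described above:

```python
def usage_vector(usage_events, start_min, end_min, interval=5):
    """One binary entry per 5-minute slot in [start_min, end_min).

    usage_events: list of (start, end) minute intervals when the app was in use.
    """
    vector = []
    for t in range(start_min, end_min, interval):
        in_use = any(s <= t < e for s, e in usage_events)
        vector.append(0 if in_use else 1)    # in use -> 0, not in use -> 1
    return vector

# App used from minute 0-10 and 25-30, observed over one hour:
vec = usage_vector([(0, 10), (25, 30)], 0, 60)
```

Each application thus contributes a fixed-length binary vector per observation window.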
- the status information of the electronic device may include, for example, screen brightness, charging state, remaining battery, WIFI status, the time period in which the current time falls, and the like. It may also include features related to the application, such as the type of the target application and the way the target application was switched out, where the switching mode may include being switched out by the home key, being switched by the reset key, being switched by another APP, and the like.
- the time information may include, for example, a current time point, a work day, and the like.
- the location information may include, for example, GPS positioning, base station positioning, WIFI positioning, and the like.
- a plurality of pieces of feature information are collected as samples, forming the first sample set of the preset application and the second sample set of the electronic device.
- the preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
- Step 102 The cyclic neural network model and the stack self-coding neural network model are respectively trained by using the first sample set and the second sample set, and the trained prediction model is obtained.
- in the time dimension this is a cyclic process: the prediction for time T+1 must incorporate the data of the past n moments (here n is chosen as 5), and during training the state at time T+1 also serves as the label information.
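The windowing described above can be sketched as follows; the helper name and data layout are illustrative assumptions, not specified verbatim in the text. Each training sample is the past n = 5 time steps, and the label is the state at time T+1:

```python
def windowed_samples(sequence, n=5):
    """Turn a binary usage sequence into (past-n-window, label) pairs."""
    samples = []
    for t in range(n, len(sequence)):
        samples.append((sequence[t - n:t], sequence[t]))  # (window, label at T+1)
    return samples

pairs = windowed_samples([0, 0, 1, 1, 1, 0, 1, 1], n=5)
```

A sequence of length L yields L − n training pairs.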
- the above stack self-coding neural network model is used to encode static features; the hidden layer size is 64, there are two layers in total, and the activation function uses the Sigmoid function: σ(x) = 1 / (1 + e^(−x)).
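A minimal sketch, with illustrative names and randomly initialized weights (assumptions, not values from the patent), of a two-layer stack self-coding (stacked autoencoder) network with 64-unit hidden layers and Sigmoid activations; only the forward encoding pass is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StackedEncoder:
    def __init__(self, in_dim, hidden=64, n_layers=2):
        dims = [in_dim] + [hidden] * n_layers
        self.weights = [rng.normal(0.0, 0.1, (dims[i], dims[i + 1]))
                        for i in range(n_layers)]
        self.biases = [np.zeros(dims[i + 1]) for i in range(n_layers)]

    def encode(self, x):
        # Each layer applies an affine map followed by the Sigmoid activation.
        for w, b in zip(self.weights, self.biases):
            x = sigmoid(x @ w + b)
        return x

encoder = StackedEncoder(in_dim=10)
code = encoder.encode(rng.normal(size=10))   # 64-dimensional static-feature encoding
```

The Sigmoid keeps every component of the encoding in (0, 1).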
- the prediction model includes an input layer, a hidden layer, a fusion layer, and a fully connected layer that are sequentially connected, and the prediction model may further include a classifier.
- the prediction model mainly includes a network structure part and a network training part, wherein the network structure part comprises an input layer, a hidden layer, a fusion layer and a full connection layer which are sequentially connected.
- the network training portion may include a classifier, and the classifier may be a Softmax classifier.
- FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
- the training method specifically includes sub-steps:
- Sub-step 1021 inputting the output values of the cyclic neural network model and the stack self-encoding neural network model into the fusion layer to obtain an intermediate value.
- Sub-step 1022 the intermediate value is input to the fully connected layer to obtain a probability corresponding to the plurality of prediction results.
- the output result of the fully connected layer can be input to the Softmax classifier to obtain the probability of corresponding multiple prediction results.
- the output result of the fully connected layer includes the output of the cyclic neural network model and the stack self-encoding neural network model.
- the step of inputting the output values of the fully connected layer into the classifier may be performed by combining the output values according to different weights before they enter the classifier, i.e., by taking a weighted sum of the output values of the cyclic neural network model and the stack self-coding neural network model.
- the specific formula is as follows: Z_K = w1 · Z_K^APP + w2 · Z_K^Device, where
- Z_K^APP is the output value of the cyclic neural network model,
- Z_K^Device is the output value of the stack self-coding neural network model, and w1, w2 are the fusion weights.
- the probability of each prediction result may be obtained based on the first preset formula: the output values of the cyclic neural network model and the stack self-coding neural network model, combined through the fully connected layer, are input into the classifier to obtain the probability of each corresponding prediction result. The first preset formula is:
- p_k = exp(Z_k) / Σ_{j=1}^{C} exp(Z_j), where
- Z_k is the combined value of the output values of the cyclic neural network model and the stack self-coding neural network model, C is the number of categories of the prediction results, and Z_j is the j-th combined value.
- the loss value is obtained according to the plurality of prediction results and the plurality of probabilities corresponding thereto.
- obtaining the loss value may be done according to the second preset formula from the plurality of prediction results and the plurality of probabilities corresponding thereto, where the second preset formula is the cross-entropy loss L = −Σ_{k=1}^{C} y_k · log(p_k), with y_k the label of the k-th category and p_k the corresponding predicted probability.
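A sketch assuming the loss is the standard cross-entropy paired with a Softmax classifier (the exact formula is elided in the text): for a one-hot label, the loss reduces to the negative log-probability assigned to the true category.

```python
import math

def cross_entropy(probs, label):
    """Negative log-probability of the true class (one-hot cross-entropy)."""
    return -math.log(probs[label])

loss = cross_entropy([0.8, 0.2], label=0)    # confident, correct prediction
```

A confident correct prediction yields a small loss; the same probabilities with the other label would be penalized much more heavily.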
- Sub-step 1024 training according to the loss value, to obtain optimized parameters.
- Training can be performed using a stochastic gradient descent method based on the loss value.
- the method uses mini-batches, with a batch size of 128 and a maximum of 100 iterations.
- the optimized parameters can also be obtained by training according to the batch gradient descent method or the standard gradient descent method.
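A sketch of mini-batch stochastic gradient descent with the settings stated above (batch size 128, at most 100 iterations); the linear least-squares objective here is an illustrative stand-in for the network's loss, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 3))               # toy training inputs
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                               # targets from known parameters

w = np.zeros(3)                              # parameters to optimize
lr = 0.1
for _ in range(100):                         # maximum number of iterations
    idx = rng.choice(len(X), size=128)       # mini-batch of size 128
    xb, yb = X[idx], y[idx]
    grad = 2.0 * xb.T @ (xb @ w - yb) / len(xb)   # gradient of mean squared error
    w -= lr * grad                           # stochastic gradient step
```

Each step uses only a random batch of 128 samples, which is the key difference from batch gradient descent, where every step uses the whole sample set.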
- Step 103 Acquire current multidimensional feature information of the application and use as a prediction sample.
- Step 104 Generate a prediction result according to the predicted sample and the trained prediction model, and control the application according to the prediction result.
- the prediction result includes a first predicted value 1 and a second predicted value 0, and the step of controlling the application according to the prediction result may specifically include:
- when the prediction result is the first predicted value 1, the application is cleaned; when the prediction result is the second predicted value 0, the state of the application is kept unchanged.
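The control rule above can be sketched directly; the function name and the returned action strings are illustrative, not from the patent:

```python
def control_application(app_name, predicted_value):
    """Predicted value 1 -> clean the app; 0 -> keep its state unchanged."""
    if predicted_value == 1:
        return "clean " + app_name           # release the app's resources
    return "keep " + app_name                # leave the app untouched

actions = [control_application("app_a", 1), control_application("app_b", 0)]
```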
- the training process of the predictive model can be completed on the server side or on the electronic device side.
- when the training process and the actual prediction process of the prediction model are both completed on the server side, and the trained prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be sent to the server; after the server completes the actual prediction, the prediction result is sent to the electronic device, and the electronic device controls the application according to the prediction result.
- when the training process and the actual prediction process of the prediction model are both completed on the electronic device, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input on the electronic device; after the electronic device completes the actual prediction, it controls the application according to the prediction result.
- FIG. 5 is a schematic diagram of another application scenario of an application management device according to an embodiment of the present disclosure.
- when the training process of the prediction model is completed on the server side and the actual prediction process is completed on the electronic device, the current usage information of the application and the current feature information of the electronic device may be input into the prediction model on the device, and the electronic device controls the application according to the prediction result.
- the trained prediction model file can be transplanted to the smart device; if it is necessary to determine whether a current background application can be cleaned up, the current sample set is updated and input into the trained prediction model file (the model file), and the calculation yields the predicted value.
- the method may further include:
- detect whether the application enters the background; if it enters the background, obtain the current usage information of the application and the current feature information of the electronic device. Then, based on the prediction model and the optimized parameters, a prediction result is generated, and the application is controlled according to the prediction result.
- the method may further include:
- a preset time is obtained; if the current system time reaches the preset time, the current usage information of the application and the current feature information of the electronic device are obtained.
- the preset time can be a time point in the day, such as 9 am, or several time points in the day, such as 9 am, 6 pm, and the like. It can also be one or several time points in multiple days. Then, the prediction result is generated according to the prediction model and the optimization parameter, and the application is controlled according to the prediction result.
- the application management and control method constructs a sample set of an application by collecting multi-dimensional feature information of the application as samples, where the sample set includes a first sample set of the application and a second sample set of the electronic device.
- the first sample set and the second sample set are respectively input into the cyclic neural network model and the stack self-coding neural network model as training data to obtain the trained prediction model, and the current multi-dimensional feature information of the application is acquired as a prediction sample.
- the prediction result is generated according to the prediction sample and the trained prediction model, and the application is controlled according to the prediction result.
- This application can improve the accuracy of the prediction of the application, thereby improving the intelligence and accuracy of the control of the application entering the background.
- the application also provides an application management device, including:
- An acquisition module configured to collect multi-dimensional feature information of the application as samples and construct a sample set of the application, where the sample set includes a first sample set and a second sample set, the first sample set including feature information of the application and the second sample set including feature information of the electronic device;
- a training module configured to train the cyclic neural network model and the stack self-coding neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
- an obtaining module configured to acquire current multi-dimensional feature information of the application as a prediction sample;
- a control module configured to generate a prediction result according to the prediction sample and the trained prediction model, and to control the application according to the prediction result.
- the training module is specifically configured to input the first sample set and the second sample set as training data into the cyclic neural network model and the stack self-coding neural network model respectively for training, to generate the optimized parameters;
- a trained prediction model is generated according to the optimization parameter and the cyclic neural network model and the stack self-coding neural network model.
- the training module specifically includes: a fusion layer, a full connection layer, a loss value calculator, and a training submodule;
- the fusion layer is configured to input the output values of the cyclic neural network model and the stack self-encoding neural network model into the fusion layer to obtain an intermediate value;
- the fully connected layer is configured to input the intermediate value into the fully connected layer to obtain a probability corresponding to the plurality of prediction results
- the loss value calculator is configured to obtain a loss value according to the plurality of the prediction results and a plurality of the probabilities corresponding thereto;
- the training submodule is configured to perform training according to the loss value to obtain the optimization parameter.
- the training sub-module is specifically configured to perform training by using a stochastic gradient descent method according to the loss value.
- the training module is specifically configured to calculate, according to a first preset formula, the output result of the fully connected layer to obtain probabilities corresponding to the plurality of prediction results, where the first preset formula is:
- p_k = exp(z_k) / Σ_{j=1}^{C} exp(z_j), where z_k is the k-th intermediate value, C is the number of categories of the prediction result, and z_j is the j-th intermediate value.
- the training module is specifically configured to:
- the prediction result includes a first predicted value 1 and a second predicted value 0;
- the control module is specifically configured to: when the prediction result is the first predicted value 1, clean the application; when the prediction result is the second predicted value 0, keep the state of the application unchanged.
- the cyclic neural network model is used for time series analysis on a background application
- the hidden layer size is 64
- the stack self-encoding neural network model is used to encode a static feature
- the hidden layer size is 64
- the activation function adopts the Sigmoid function: σ(x) = 1 / (1 + e^(−x)).
- FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
- the application management device 300 is applied to an electronic device, and the application management device 300 includes a collecting module 301, a training module 302, an obtaining module 303, and a control module 304.
- the collecting module 301 is configured to collect multi-dimensional feature information of the application as a sample, and construct a sample set of the application, where the sample set includes a first sample set of the application, and a second sample set of the electronic device.
- the sample of the first sample set may include usage information of the preset application
- the sample of the second sample set may include at least one of status information, time information, and location information of the electronic device.
- the usage information of the application may be the usage status of the background application, recorded every five minutes: being in use is recorded as 0 and not in use as 1, so that the data of each application is stored as a binary vector.
- the status information of the electronic device may include, for example, screen brightness, charging state, remaining battery, WIFI status, the time period in which the current time falls, and the like. It may also include features related to the application, such as the type of the target application and the way the target application was switched out, where the switching mode may include being switched out by the home key, being switched by the reset key, being switched by another APP, and the like.
- the time information may include, for example, a current time point, a work day, and the like.
- the location information may include, for example, GPS positioning, base station positioning, WIFI positioning, and the like.
- a plurality of pieces of feature information are collected as samples, forming the first sample set of the preset application and the second sample set of the electronic device.
- the preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
- the training module 302 is configured to input the first sample set and the second sample set as training data into the cyclic neural network model and the stack self-coding neural network model respectively to obtain a trained prediction model.
- the recurrent neural network model is used to perform time-series analysis on background applications; the hidden layer size is 64, with two layers in total, and the activation function is the linear function y = x. In the recurrent neural network, the time dimension is processed recurrently: the data of the past n time steps (n is chosen as 5) are all included in the prediction at time T+1, and during training the state at time T+1 also serves as the label information. Training uses a sliding window, i.e. each prediction is based on the states of the previous five time steps.
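The sliding-window construction described above can be sketched as follows (a minimal illustration with our own function names; the real inputs would be the usage vectors described earlier):

```python
# Minimal sketch of the sliding-window construction above: each training
# example pairs the states of the previous five time steps with the state
# at time T+1, which also serves as the label.

def sliding_windows(series, n=5):
    """Yield (past_n_states, label) pairs from a usage time series."""
    for t in range(len(series) - n):
        yield series[t:t + n], series[t + n]

windows = list(sliding_windows([0, 1, 1, 0, 1, 0, 0], n=5))
```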
- the above stacked autoencoder neural network model is used to encode static features; the hidden layer size is 64, with two layers in total, and the activation function is the Sigmoid function:
- the first sample set and the second sample set are input as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, and learning is performed to obtain the optimized parameters of the trained prediction model.
- the prediction model includes an input layer, a hidden layer, a fusion layer, and a fully connected layer that are sequentially connected, and the prediction model may further include a classifier.
- the prediction model mainly includes a network structure part and a network training part, wherein the network structure part comprises an input layer, a hidden layer, a fusion layer and a full connection layer which are sequentially connected.
- the network training portion may include a classifier, and the classifier may be a Softmax classifier.
- FIG. 7 is another schematic structural diagram of an application management and control apparatus according to an embodiment of the present application.
- the training module 302 can specifically include a fusion layer 3021, a fully connected layer 3022, a loss calculator 3023, and a training submodule 3024.
- the fusion layer 3021 is configured to receive the output values of the recurrent neural network model and the stacked autoencoder neural network model and to produce an intermediate value.
- the fully connected layer 3022 is configured to receive the intermediate value and to obtain probabilities corresponding to a plurality of prediction results.
- it should be noted that the output result of the fully connected layer can be input into the Softmax classifier to obtain the probabilities corresponding to the plurality of prediction results.
- the output result of the fully connected layer includes the outputs of the recurrent neural network model and the stacked autoencoder neural network model, that is, the output value of the recurrent neural network model and the output value of the stacked autoencoder neural network model.
- specifically, the step of inputting the output values of the fully connected layer into the classifier may combine those output values with different weights before they enter the classifier, that is, take a weighted sum of the output values of the recurrent neural network model and the stacked autoencoder neural network model.
- the specific formula is: Z_K = Z_K^APP + λ·Z_K^Device, where λ is a weight, Z_K^APP is the output value of the recurrent neural network model, and Z_K^Device is the output value of the stacked autoencoder neural network model.
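The weighted-sum fusion above can be sketched as follows (the λ value used here is illustrative; the patent does not state it):

```python
# Sketch of the weighted-sum fusion above: Z_K = Z_K_APP + lam * Z_K_Device,
# applied element-wise to the two models' output vectors (lam here is
# illustrative; the patent does not state its value).

def fuse(z_app, z_device, lam=0.5):
    return [a + lam * d for a, d in zip(z_app, z_device)]

z = fuse([1.0, 2.0], [0.4, 0.8], lam=0.5)  # approximately [1.2, 2.4]
```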
- the probabilities of the prediction results may be obtained based on the first preset formula: the output values of the recurrent neural network model and the stacked autoencoder neural network model, after passing through the fully connected layer, are combined and input into the classifier, and the probabilities corresponding to the prediction results are obtained. Consistent with the Softmax classifier named above, the first preset formula takes the form P_j = e^(Z_j) / Σ_{j=1..C} e^(Z_j), where Z_K is the composite value of the output values of the recurrent neural network model and the stacked autoencoder neural network model, C is the number of categories of the prediction results, and Z_j is the j-th composite value.
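The Softmax step can be sketched as follows (the exact "first preset formula" image is not reproduced in the text, so standard softmax over the C composite values is assumed):

```python
import math

# Sketch of the Softmax step named above (the exact "first preset formula"
# image is not reproduced in the text; standard softmax over the C
# composite values is assumed).

def softmax(z):
    """Map C composite values to probabilities that sum to 1."""
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0])  # probabilities over C = 2 prediction categories
```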
- the loss value calculator 3023 can be used to obtain a loss value based on a plurality of prediction results and a plurality of probabilities corresponding thereto.
- the loss value may be obtained from the plurality of prediction results and the plurality of probabilities corresponding to them according to a second preset formula, where the second preset formula is:
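A hedged sketch of the loss: the patent's "second preset formula" is an image and is not reproduced in the text, but a standard cross-entropy loss matches the quantities it names (C categories, true values y_k, and the predicted probabilities), so that form is assumed here.

```python
import math

# Hedged sketch: a standard cross-entropy loss is assumed, matching the
# quantities the text names (C categories, true values y_k, predicted
# probabilities p_k); the patent's exact formula is not reproduced.

def cross_entropy(p, y):
    """p: predicted probabilities over C categories; y: one-hot true values."""
    return -sum(yk * math.log(pk) for pk, yk in zip(p, y))

loss = cross_entropy([0.8, 0.2], [1, 0])  # -log(0.8)
```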
- the training sub-module 3024 can be used to train according to the loss value to obtain optimized parameters.
- Training can be performed using a stochastic gradient descent method based on the loss value.
- mini-batches are used, with a batch size of 128 and a maximum of 100 iterations.
- the optimized parameters may also be obtained by training with the batch gradient descent method or the gradient descent method.
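The mini-batch stochastic gradient descent procedure above can be sketched as follows (the parameter vector, gradient function, and data here are stand-ins, not the patent's actual networks; only the batch size and iteration cap come from the text):

```python
import random

# Illustrative mini-batch SGD loop with the stated hyperparameters
# (batch size 128, at most 100 iterations); the parameter vector, gradient
# function, learning rate, and data are stand-ins.

def sgd(params, grad_fn, data, batch_size=128, max_iters=100, lr=0.01):
    for _ in range(max_iters):
        batch = random.sample(data, min(batch_size, len(data)))
        grads = grad_fn(params, batch)  # gradient of the loss on this batch
        params = [p - lr * g for p, g in zip(params, grads)]
    return params
```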
- the obtaining module 303 is configured to acquire current multi-dimensional feature information of the application and use as a prediction sample.
- the control module 304 is configured to generate a prediction result according to the predicted sample and the trained prediction model, and control the application according to the prediction result.
- the prediction result includes a first predicted value 1 and a second predicted value 0, and the step of managing the application according to the prediction result may specifically include:
- when the prediction result is the first predicted value 1, the application is cleaned; when the prediction result is the second predicted value 0, the state of the application is kept unchanged.
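The control rule above reduces to a two-way decision, sketched here (function and label names are ours):

```python
# Sketch of the control rule above: predicted value 1 -> clean the
# application, predicted value 0 -> keep its state unchanged.

def control(prediction):
    return "clean" if prediction == 1 else "keep"
```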
- the training process of the predictive model can be completed on the server side or on the electronic device side.
- when the training process and the actual prediction process of the prediction model are both completed on the server side, and the trained prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the server; after the server completes the prediction, it sends the prediction result to the electronic device, and the electronic device manages the application according to the prediction result.
- when the training process and the actual prediction process of the prediction model are both completed on the electronic device, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the electronic device; after the electronic device completes the prediction, it manages the application according to the prediction result.
- the control module 304 is further configured to detect whether the application enters the background; if it does, the module acquires the current usage information of the application and the current feature information of the electronic device, then performs prediction according to the prediction model and the optimized parameters, generates the prediction result, and manages the application according to the prediction result.
- the control module 304 is further configured to acquire a preset time; if the current system time reaches the preset time, the current usage information of the application and the current feature information of the electronic device are acquired.
- the preset time can be a single time point in the day, such as 9 a.m., or several time points in the day, such as 9 a.m. and 6 p.m.; it can also be one or several time points across multiple days. The prediction result is then generated according to the prediction model and the optimized parameters, and the application is managed according to the prediction result.
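The preset-time trigger described above can be sketched as follows (9 a.m. and 6 p.m. are the examples given in the text; the function name is ours):

```python
from datetime import datetime, time

# Illustrative preset-time check (9 a.m. and 6 p.m. are the examples given
# in the text; the function name is ours).

PRESET_TIMES = [time(9, 0), time(18, 0)]

def should_collect(now: datetime) -> bool:
    """True when the current system time reaches one of the preset times."""
    return any(now.hour == t.hour and now.minute == t.minute
               for t in PRESET_TIMES)
```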
- the application management and control apparatus of the embodiment of the present application collects multi-dimensional feature information of the application as samples to construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; inputs the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively to obtain the trained prediction model; acquires the current multi-dimensional feature information of the application as a prediction sample; generates a prediction result based on the prediction sample and the trained prediction model; and manages the application according to the prediction result.
- This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
- the application management and control apparatus belongs to the same concept as the application management and control method in the above embodiments; any method provided in the method embodiments can run on the apparatus, and its specific implementation process is detailed in the method embodiments, which is not repeated here.
- the electronic device 400 includes a processor 401 and a memory 402.
- the processor 401 is electrically connected to the memory 402.
- the processor 401 is the control center of the electronic device 400; it connects various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 400 and processes data by running or loading the computer program stored in the memory 402 and recalling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
- the memory 402 can be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running computer programs and modules stored in the memory 402.
- the memory 402 can mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the computer programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area can store data created according to the use of the electronic device, and the like.
- memory 402 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 402 can also include a memory controller to provide processor 401 access to memory 402.
- the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and runs the computer programs stored in the memory 402, thereby implementing various functions, as follows:
- Collect multi-dimensional feature information of the application as samples and construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model; acquire the current multi-dimensional feature information of the application as a prediction sample; generate a prediction result according to the prediction sample and the trained prediction model; and manage the application according to the prediction result. This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
- in some implementations, when inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training to obtain the trained prediction model, the processor 401 is further configured to perform the following steps:
- the first sample set and the second sample set are input as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to generate optimized parameters;
- a trained prediction model is generated according to the optimized parameters, the recurrent neural network model, and the stacked autoencoder neural network model.
- when inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training to generate the optimized parameters, the processor 401 is further configured to perform the following steps:
- the output values of the recurrent neural network model and the stacked autoencoder neural network model are input into the fusion layer to obtain an intermediate value;
- the intermediate value is input into the fully connected layer to obtain probabilities corresponding to a plurality of prediction results;
- a loss value is obtained according to the plurality of prediction results and the plurality of probabilities corresponding to them;
- training is performed according to the loss value to generate the optimized parameters.
- when the training is performed according to the loss value, the processor 401 is further configured to perform the following step:
- Training is performed using a stochastic gradient descent method based on the loss value.
- when the intermediate value is input into the fully connected layer to obtain probabilities corresponding to the plurality of prediction results, the processor 401 is further configured to perform the following step:
- the output result of the fully connected layer is calculated based on the first preset formula to obtain the probabilities corresponding to the plurality of prediction results, where in the first preset formula Z_K is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value.
- when the loss value is obtained according to the plurality of prediction results and the plurality of probabilities corresponding to them, the processor 401 is further configured to perform the following step:
- the loss value is obtained from the plurality of prediction results and the corresponding probabilities based on the second preset formula, where C is the number of categories of the prediction result and y_k is the true value.
- the electronic device of the embodiment of the present application collects multi-dimensional feature information of the application as samples to construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; inputs the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively to obtain the trained prediction model; acquires the current multi-dimensional feature information of the application as a prediction sample; generates a prediction result based on the prediction sample and the trained prediction model; and manages the application according to the prediction result.
- This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
- the electronic device 400 may further include: a display 403, a radio frequency circuit 404, an audio circuit 405, and a power source 406.
- the display 403, the radio frequency circuit 404, the audio circuit 405, and the power source 406 are electrically connected to the processor 401, respectively.
- Display 403 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces, which can be comprised of graphics, text, icons, video, and any combination thereof.
- the display 403 can include a display panel.
- the display panel can be configured in the form of a liquid crystal display (LCD), or an organic light-emitting diode (OLED).
- the radio frequency circuit 404 can be used to transmit and receive radio frequency signals to establish wireless communication with network devices or other electronic devices through wireless communication, and to transmit and receive signals with network devices or other electronic devices.
- the audio circuit 405 can be used to provide an audio interface between the user and the electronic device through the speaker and the microphone.
- Power source 406 can be used to power various components of electronic device 400.
- the power supply 406 can be logically coupled to the processor 401 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
- the electronic device 400 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
- the embodiment of the present application further provides a storage medium storing a computer program, and when the computer program runs on the computer, causing the computer to execute the application management and control method in any of the above embodiments.
- the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM).
- the computer program can be stored in a computer readable storage medium, such as the memory of the electronic device, and executed by at least one processor in the electronic device; the execution process can include the flow of an embodiment of the application management and control method.
- the storage medium may be a magnetic disk, an optical disk, a read only memory, a random access memory, or the like.
- each functional module may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- An integrated module, if implemented in the form of a software functional module and sold or used as a standalone product, may also be stored in a computer readable storage medium such as a read only memory, a magnetic disk or an optical disk.
Abstract
An application management and control method, comprising: collecting multi-dimensional feature information of an application as samples and constructing a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of an electronic device (101); training a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model (102); acquiring current multi-dimensional feature information of the application as a prediction sample (103); and generating a prediction result according to the prediction sample and the trained prediction model, and managing the application according to the prediction result (104). An application management and control apparatus, a storage medium, and an electronic device are also disclosed.
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 30, 2017, with application number CN 201710920013.4 and the title "Application management and control method, apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
This application belongs to the field of communication technology, and in particular relates to an application management and control method, apparatus, storage medium, and electronic device.
With the development of electronic technology, people usually install many applications on their electronic devices. When a user opens multiple applications on an electronic device, if the user returns to the device's home screen, stays in the interface of one application, or turns off the device's screen, the applications the user opened continue to run in the background of the electronic device.
Summary of the application
This application provides an application management and control method, apparatus, storage medium, and electronic device, which can improve the intelligence and accuracy of managing applications.
In a first aspect, an embodiment of the present application provides an application management and control method, comprising:
collecting multi-dimensional feature information of an application as samples and constructing a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of an electronic device;
training a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
acquiring current multi-dimensional feature information of the application as a prediction sample;
generating a prediction result according to the prediction sample and the trained prediction model, and managing the application according to the prediction result.
In a second aspect, an embodiment of the present application provides an application management and control apparatus, comprising:
a collecting module, configured to collect multi-dimensional feature information of an application as samples and construct a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of an electronic device;
a training module, configured to train a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
an acquiring module, configured to acquire current multi-dimensional feature information of the application as a prediction sample;
a control module, configured to generate a prediction result according to the prediction sample and the trained prediction model, and manage the application according to the prediction result.
In a third aspect, an embodiment of the present application provides a storage medium storing a computer program; when the computer program runs on a computer, the computer is caused to execute the above application management and control method.
In a fourth aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory, the memory storing a computer program, and the processor being configured to execute the above application management and control method by calling the computer program.
To explain the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings required in the description of the embodiments. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a system schematic diagram of an application management and control apparatus provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of an application scenario of the application management and control apparatus provided by an embodiment of the present application.
FIG. 3 is a schematic flowchart of an application management and control method provided by an embodiment of the present application.
FIG. 4 is another schematic flowchart of the application management and control method provided by an embodiment of the present application.
FIG. 5 is a schematic diagram of another application scenario of the application management and control apparatus provided by an embodiment of the present application.
FIG. 6 is a schematic structural diagram of the application management and control apparatus provided by an embodiment of the present application.
FIG. 7 is another schematic structural diagram of the application management and control apparatus provided by an embodiment of the present application.
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
FIG. 9 is another schematic structural diagram of the electronic device provided by an embodiment of the present application.
Reference is made to the drawings, in which identical reference numerals represent identical components; the principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present application and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the present application are described with reference to steps and symbols executed by one or more computers, unless otherwise stated. These steps and operations are therefore referred to several times as being computer-executed; computer execution as referred to herein includes operations by a computer processing unit on electronic signals representing data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which reconfigures or otherwise alters the operation of the computer in a manner well known to those skilled in the art. The data structures maintained by the data are physical locations in the memory having particular properties defined by the data format. However, while the principles of the present application are described in the above terms, this is not meant as a limitation; those skilled in the art will appreciate that various of the steps and operations described below may also be implemented in hardware.
The term "module" as used herein may be regarded as a software object executed on the computing system. The different components, modules, engines, and services described herein may be regarded as objects implemented on the computing system. The apparatus and methods described herein may be implemented in software, and may of course also be implemented in hardware, both of which fall within the protection scope of the present application.
The terms "first", "second", and "third" in this application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or modules is not limited to the listed steps or modules; some embodiments also include steps or modules that are not listed, or other steps or modules inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
In the prior art, when background applications are managed, some of them are usually cleaned directly according to the memory usage of the electronic device and the priority of each application, so as to free memory. Some applications are important to the user, or the user may need to use certain applications again within a short time; if these applications are cleaned during background cleanup, then when the user uses them again the electronic device must reload their processes, which consumes considerable time and memory resources.
However, many background applications will not be used by the user for some time, yet these background applications severely occupy the memory of the electronic device and accelerate its power consumption. The electronic device may be a smartphone, tablet computer, desktop computer, notebook computer, palmtop computer, or other device.
Referring to FIG. 1, FIG. 1 is a system schematic diagram of the application management and control apparatus provided by an embodiment of the present application. The apparatus is mainly configured to: collect the application behavior sequence of an application as a first sample set, and collect the static device features of the electronic device as a second sample set; obtain a prediction model comprising a recurrent neural network model and a stacked autoencoder neural network model, input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively, and learn to obtain the optimized parameters of the trained prediction model and generate the prediction model; acquire the current usage information of the application and the current feature information of the electronic device, generate a prediction result according to the prediction model, and judge from the prediction result whether the application needs to be used, so as to manage the application, for example by cleaning or freezing it.
Specifically, referring to FIG. 2, FIG. 2 is a schematic diagram of an application scenario of the application management and control apparatus provided by an embodiment of the present application. For example, upon receiving a control request, the apparatus detects that the applications running in the background of the electronic device include application a, application b, and application c. It then obtains the multi-dimensional feature information corresponding to each of applications a, b, and c, and uses the prediction model to predict the probability that each application needs to be used, obtaining probabilities a', b', and c' respectively; according to probabilities a', b', and c', it manages the background applications a, b, and c, for example closing application b, which has the lowest probability.
An embodiment of the present application provides an application management and control method. The execution subject of the method may be the application management and control apparatus provided by the embodiment of the present application, or an electronic device integrating the apparatus, where the apparatus may be implemented in hardware or in software.
The embodiments of the present application are described from the perspective of the application management and control apparatus, which may specifically be integrated in an electronic device. The method includes: collecting a first sample set of an application and a second sample set of an electronic device; obtaining a prediction model comprising a recurrent neural network model and a stacked autoencoder neural network model; inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively; obtaining the optimized parameters of the prediction model after training and generating the prediction model; acquiring the current usage information of the application and the current feature information of the electronic device; generating a prediction result according to the prediction model; and managing the application according to the prediction result.
An embodiment of the present application provides an application management and control method, comprising:
collecting multi-dimensional feature information of an application as samples and constructing a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of an electronic device;
training a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
acquiring current multi-dimensional feature information of the application as a prediction sample;
generating a prediction result according to the prediction sample and the trained prediction model, and managing the application according to the prediction result.
In one embodiment, the step of training the recurrent neural network model and the stacked autoencoder neural network model with the first sample set and the second sample set respectively to obtain a trained prediction model comprises:
inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to generate optimized parameters;
generating the trained prediction model according to the optimized parameters, the recurrent neural network model, and the stacked autoencoder neural network model.
In one embodiment, the step of inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training to generate optimized parameters comprises:
inputting the output values of the recurrent neural network model and the stacked autoencoder neural network model into a fusion layer to obtain an intermediate value;
inputting the intermediate value into a fully connected layer to obtain probabilities corresponding to a plurality of prediction results;
obtaining a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them;
training according to the loss value to generate the optimized parameters.
In one embodiment, the step of training according to the loss value comprises:
training with the stochastic gradient descent method according to the loss value.
In one embodiment, the step of inputting the intermediate value into the fully connected layer to obtain probabilities corresponding to the plurality of prediction results comprises:
calculating the output result of the fully connected layer based on a first preset formula to obtain the probabilities corresponding to the plurality of prediction results, where the first preset formula is:
In one embodiment, the step of obtaining a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them comprises:
obtaining the loss value from the plurality of prediction results and the corresponding probabilities based on a second preset formula, where the second preset formula is:
where C is the number of categories of the prediction result and y_k is the true value.
In one embodiment, the prediction result comprises a first predicted value 1 and a second predicted value 0;
the step of managing the application according to the prediction result comprises:
cleaning the application when the prediction result is the first predicted value 1, and keeping the state of the application unchanged when the prediction result is the second predicted value 0.
In one embodiment, the recurrent neural network model is used to perform time-series analysis on background applications; the hidden layer size is 64, with two layers in total, and the activation function is the linear function y = x.
In one embodiment, the stacked autoencoder neural network model is used to encode static features; the hidden layer size is 64, with two layers in total, and the activation function is the Sigmoid function:
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the application management and control method provided by an embodiment of the present application. The method provided by the embodiment of the present application is applied to an electronic device, and the specific flow may be as follows:
Step 101: collect multi-dimensional feature information of the application as samples and construct a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of the electronic device.
Specifically, the samples of the first sample set may include usage information of a preset application, and the samples of the second sample set may include at least one of status information, time information, and location information of the electronic device.
The usage information of the application may be the usage status of the background application, recorded every five minutes: in use is recorded as 0 and not in use as 1, and the data for each application is stored as a binary vector. The status information of the electronic device may include, for example, screen brightness, charging state, remaining battery level, WIFI status, and the time period in which the current time falls, and may also include features related to the application, such as the type of the target application and the way the target application was switched away, where the switching modes may include being switched by the home key, being switched by the recent key, being switched by another APP, and the like. The time information may include, for example, the current time point, whether it is a workday, and the like. The location information may include, for example, GPS positioning, base station positioning, WIFI positioning, and the like.
A plurality of feature information items are collected as samples, forming the first sample set of the preset application and the second sample set of the electronic device.
The preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
Step 102: train the recurrent neural network model and the stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain the trained prediction model.
In one embodiment, the recurrent neural network model is used to perform time-series analysis on background applications; the hidden layer size is 64, with two layers in total, and the activation function is the linear function y = x. In the recurrent neural network, the time dimension is processed recurrently: the data of the past n time steps (we choose n = 5) are all included in the prediction at time T+1, and during training the state at time T+1 also serves as the label information. We train with a sliding window, i.e. each prediction is based on the historical states of the previous five time steps.
The stacked autoencoder neural network model is used to encode static features; the hidden layer size is 64, with two layers in total, and the activation function is the Sigmoid function:
The prediction model comprises an input layer, a hidden layer, a fusion layer, and a fully connected layer connected in sequence, and may further comprise a classifier. Specifically, the prediction model mainly comprises a network structure part and a network training part, where the network structure part comprises the input layer, hidden layer, fusion layer, and fully connected layer connected in sequence.
In one embodiment, the network training part may comprise a classifier, and the classifier may be a Softmax classifier.
Referring also to FIG. 4, FIG. 4 is another schematic flowchart of the application management and control method provided by an embodiment of the present application. The training method specifically comprises the following sub-steps:
Sub-step 1021: input the output values of the recurrent neural network model and the stacked autoencoder neural network model into the fusion layer to obtain an intermediate value.
Sub-step 1022: input the intermediate value into the fully connected layer to obtain probabilities corresponding to a plurality of prediction results.
It should be noted that the output result of the fully connected layer can be input into the Softmax classifier to obtain the probabilities corresponding to the plurality of prediction results, where the output result of the fully connected layer includes the output results of the recurrent neural network model and the stacked autoencoder neural network model. Specifically, the step of inputting the output values of the fully connected layer into the classifier may combine those output values with different weights before they enter the classifier, that is, take a weighted sum of the output values of the recurrent neural network model and the stacked autoencoder neural network model. The specific formula is:
Z_K = Z_K^APP + λ·Z_K^Device,
where λ is a weight, Z_K^APP is the output value of the recurrent neural network model, and Z_K^Device is the output value of the stacked autoencoder neural network model.
In some embodiments, the probabilities of the prediction results may be obtained based on the first preset formula by combining the output values of the recurrent neural network model and the stacked autoencoder neural network model after the fully connected layer and inputting them into the classifier, where the first preset formula is:
Sub-step 1023: obtain a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them.
In some embodiments, the loss value may be obtained from the plurality of prediction results and the corresponding probabilities based on the second preset formula, where the second preset formula is:
where C is the number of categories of the prediction result and y_k is the true value.
Sub-step 1024: train according to the loss value to obtain the optimized parameters.
Training may be performed with the stochastic gradient descent method according to the loss value, using mini-batches with a batch size of 128 and a maximum of 100 iterations; the optimal parameters may also be obtained by training with the batch gradient descent method or the gradient descent method.
Step 103: acquire the current multi-dimensional feature information of the application as a prediction sample.
Step 104: generate a prediction result according to the prediction sample and the trained prediction model, and manage the application according to the prediction result.
If it is necessary to judge whether the current background application can be cleaned, the current usage information of the application and the current feature information of the electronic device are acquired and input into the prediction model as the first sample set and the second sample set; the prediction model computes the predicted value, from which it is judged whether the application needs to be cleaned.
The prediction result includes a first predicted value 1 and a second predicted value 0, and the step of managing the application according to the prediction result may specifically include:
when the prediction result is the first predicted value 1, cleaning the application; when the prediction result is the second predicted value 0, keeping the state of the application unchanged.
It should be noted that the training process of the prediction model can be completed on the server side or on the electronic device side. When both the training process and the actual prediction process of the prediction model are completed on the server side, and the trained prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the server; after the server completes the prediction, it sends the prediction result to the electronic device, and the electronic device manages the application according to the prediction result.
When both the training process and the actual prediction process of the prediction model are completed on the electronic device, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the electronic device; after the electronic device completes the prediction, it manages the application according to the prediction result.
Referring to FIG. 5, FIG. 5 is a schematic diagram of another application scenario of the application management and control apparatus provided by an embodiment of the present application. When the training process of the prediction model is completed on the server side and the actual prediction process is completed on the electronic device, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the electronic device; after the electronic device completes the prediction, it manages the application according to the prediction result. Optionally, the trained prediction model file (model file) can be ported to the smart device; if it is necessary to judge whether the current background application can be cleaned, the current sample set is updated and input into the trained prediction model file (model file), and the predicted value is obtained by computation.
In some embodiments, before the step of acquiring the current feature information of the application and the electronic device, the method may further include:
detecting whether the application enters the background; if it does, acquiring the current usage information of the application and the current feature information of the electronic device, then performing prediction according to the prediction model and the optimized parameters, generating the prediction result, and managing the application according to the prediction result.
In some embodiments, before the step of acquiring the current usage information of the application and the current feature information of the electronic device, the method may further include:
acquiring a preset time; if the current system time reaches the preset time, acquiring the current usage information of the application and the current feature information of the electronic device. The preset time can be a single time point in the day, such as 9 a.m., or several time points in the day, such as 9 a.m. and 6 p.m.; it can also be one or several time points across multiple days. The prediction result is then generated according to the prediction model and the optimized parameters, and the application is managed according to the prediction result.
All of the above technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described here one by one.
As can be seen from the above, the application management and control method provided by the embodiment of the present application collects multi-dimensional feature information of the application as samples to construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; inputs the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model; acquires the current multi-dimensional feature information of the application as a prediction sample; generates a prediction result according to the prediction sample and the trained prediction model; and manages the application according to the prediction result. This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
The present application further provides an application management and control apparatus, comprising:
a collecting module, configured to collect multi-dimensional feature information of an application as samples and construct a sample set of the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application, and the second sample set comprising feature information of an electronic device;
a training module, configured to train a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set respectively, to obtain a trained prediction model;
an acquiring module, configured to acquire current multi-dimensional feature information of the application as a prediction sample;
a control module, configured to generate a prediction result according to the prediction sample and the trained prediction model, and manage the application according to the prediction result.
In one embodiment, the training module is specifically configured to input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to generate optimized parameters;
and to generate the trained prediction model according to the optimized parameters, the recurrent neural network model, and the stacked autoencoder neural network model.
In one embodiment, the training module specifically comprises a fusion layer, a fully connected layer, a loss value calculator, and a training sub-module;
the fusion layer is configured to receive the output values of the recurrent neural network model and the stacked autoencoder neural network model to obtain an intermediate value;
the fully connected layer is configured to receive the intermediate value to obtain probabilities corresponding to a plurality of the prediction results;
the loss value calculator is configured to obtain a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them;
the training sub-module is configured to train according to the loss value to obtain the optimized parameters.
In one embodiment, the training sub-module is specifically configured to train with the stochastic gradient descent method according to the loss value.
In one embodiment, the training module is specifically configured to calculate the output result of the fully connected layer based on a first preset formula to obtain the probabilities corresponding to the plurality of prediction results, where the first preset formula is:
In one embodiment, the training module is specifically configured to:
obtain the loss value from the plurality of prediction results and the corresponding probabilities based on a second preset formula, where the second preset formula is:
where C is the number of categories of the prediction result and y_k is the true value.
In one embodiment, the prediction result comprises a first predicted value 1 and a second predicted value 0;
the control module is specifically configured to: clean the application when the prediction result is the first predicted value 1, and keep the state of the application unchanged when the prediction result is the second predicted value 0.
In one embodiment, the recurrent neural network model is used to perform time-series analysis on background applications; the hidden layer size is 64, with two layers in total, and the activation function is the linear function y = x.
In one embodiment, the stacked autoencoder neural network model is used to encode static features; the hidden layer size is 64, with two layers in total, and the activation function is the Sigmoid function:
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of the application management and control apparatus provided by an embodiment of the present application. The application management and control apparatus 300 is applied to an electronic device and comprises a collecting module 301, a training module 302, an acquiring module 303, and a control module 304.
The collecting module 301 is configured to collect multi-dimensional feature information of the application as samples and construct a sample set of the application, the sample set comprising a first sample set of the application and a second sample set of the electronic device.
Specifically, the samples of the first sample set may include usage information of a preset application, and the samples of the second sample set may include at least one of status information, time information, and location information of the electronic device.
The usage information of the application may be the usage status of the background application, recorded every five minutes: in use is recorded as 0 and not in use as 1, and the data for each application is stored as a binary vector. The status information of the electronic device may include, for example, screen brightness, charging state, remaining battery level, WIFI status, and the time period in which the current time falls, and may also include features related to the application, such as the type of the target application and the way the target application was switched away, where the switching modes may include being switched by the home key, being switched by the recent key, being switched by another APP, and the like. The time information may include, for example, the current time point, whether it is a workday, and the like. The location information may include, for example, GPS positioning, base station positioning, WIFI positioning, and the like.
A plurality of feature information items are collected as samples, forming the first sample set of the preset application and the second sample set of the electronic device.
The preset application may be any application installed in the electronic device, such as a communication application, a multimedia application, a game application, a news application, or a shopping application.
The training module 302 is configured to input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model.
In one embodiment, the recurrent neural network model is used to perform time-series analysis on background applications; the hidden layer size is 64, with two layers in total, and the activation function is the linear function y = x. In the recurrent neural network, the time dimension is processed recurrently: the data of the past n time steps (we choose n = 5) are all included in the prediction at time T+1, and during training the state at time T+1 also serves as the label information. We train with a sliding window, i.e. each prediction is based on the historical states of the previous five time steps.
The stacked autoencoder neural network model is used to encode static features; the hidden layer size is 64, with two layers in total, and the activation function is the Sigmoid function:
The first sample set and the second sample set are input as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, and learning is performed to obtain the optimized parameters of the trained prediction model.
The prediction model comprises an input layer, a hidden layer, a fusion layer, and a fully connected layer connected in sequence, and may further comprise a classifier. Specifically, the prediction model mainly comprises a network structure part and a network training part, where the network structure part comprises the input layer, hidden layer, fusion layer, and fully connected layer connected in sequence.
In one embodiment, the network training part may comprise a classifier, and the classifier may be a Softmax classifier.
Referring also to FIG. 7, FIG. 7 is another schematic structural diagram of the application management and control apparatus provided by an embodiment of the present application. In some implementations, the training module 302 may specifically comprise a fusion layer 3021, a fully connected layer 3022, a loss calculator 3023, and a training sub-module 3024.
The fusion layer 3021 is configured to receive the output values of the recurrent neural network model and the stacked autoencoder neural network model to obtain an intermediate value.
The fully connected layer 3022 is configured to receive the intermediate value to obtain probabilities corresponding to a plurality of prediction results.
It should be noted that the output result of the fully connected layer can be input into the Softmax classifier to obtain the probabilities corresponding to the plurality of prediction results, where the output result of the fully connected layer includes the output results of the recurrent neural network model and the stacked autoencoder neural network model, that is, the output value of the recurrent neural network model and the output value of the stacked autoencoder neural network model. Specifically, the step of inputting the output values of the fully connected layer into the classifier may combine those output values with different weights before they enter the classifier, that is, take a weighted sum of the output values of the recurrent neural network model and the stacked autoencoder neural network model. The specific formula is:
Z_K = Z_K^APP + λ·Z_K^Device,
where λ is a weight, Z_K^APP is the output value of the recurrent neural network model, and Z_K^Device is the output value of the stacked autoencoder neural network model.
In some embodiments, the probabilities of the prediction results may be obtained based on the first preset formula by combining the output values of the recurrent neural network model and the stacked autoencoder neural network model after the fully connected layer and inputting them into the classifier, where the first preset formula is:
The loss value calculator 3023 may be configured to obtain a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them.
In some embodiments, the loss value may be obtained from the plurality of prediction results and the corresponding probabilities based on the second preset formula, where the second preset formula is:
where C is the number of categories of the prediction result and y_k is the true value.
The training sub-module 3024 may be configured to train according to the loss value to obtain the optimized parameters.
Training may be performed with the stochastic gradient descent method according to the loss value, using mini-batches with a batch size of 128 and a maximum of 100 iterations; the optimal parameters may also be obtained by training with the batch gradient descent method or the gradient descent method.
The acquiring module 303 is configured to acquire the current multi-dimensional feature information of the application as a prediction sample.
The control module 304 is configured to generate a prediction result according to the prediction sample and the trained prediction model, and manage the application according to the prediction result.
If it is necessary to judge whether the current background application can be cleaned, the current usage information of the application and the current feature information of the electronic device are acquired and input into the prediction model as the first sample set and the second sample set; the prediction model computes the predicted value, from which it is judged whether the application needs to be cleaned.
The prediction result includes a first predicted value 1 and a second predicted value 0, and the step of managing the application according to the prediction result may specifically include:
when the prediction result is the first predicted value 1, cleaning the application; when the prediction result is the second predicted value 0, keeping the state of the application unchanged.
It should be noted that the training process of the prediction model can be completed on the server side or on the electronic device side. When both the training process and the actual prediction process of the prediction model are completed on the server side, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the server; after the server completes the prediction, it sends the prediction result to the electronic device, and the electronic device manages the application according to the prediction result.
When both the training process and the actual prediction process of the prediction model are completed on the electronic device, and the optimized prediction model needs to be used, the current usage information of the application and the current feature information of the electronic device can be input to the electronic device; after the electronic device completes the prediction, it manages the application according to the prediction result.
In some embodiments, the control module 304 is further configured to detect whether the application enters the background; if it does, the module acquires the current usage information of the application and the current feature information of the electronic device, then performs prediction according to the prediction model and the optimized parameters, generates the prediction result, and manages the application according to the prediction result.
In some embodiments, the control module 304 is further configured to acquire a preset time; if the current system time reaches the preset time, the current usage information of the application and the current feature information of the electronic device are acquired. The preset time can be a single time point in the day, such as 9 a.m., or several time points in the day, such as 9 a.m. and 6 p.m.; it can also be one or several time points across multiple days. The prediction result is then generated according to the prediction model and the optimized parameters, and the application is managed according to the prediction result.
All of the above technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described here one by one.
As can be seen from the above, the application management and control apparatus of the embodiment of the present application collects multi-dimensional feature information of the application as samples to construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; inputs the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model; acquires the current multi-dimensional feature information of the application as a prediction sample; generates a prediction result according to the prediction sample and the trained prediction model; and manages the application according to the prediction result. This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
In the embodiments of the present application, the application management and control apparatus belongs to the same concept as the application management and control method in the above embodiments; any method provided in the method embodiments can run on the apparatus, and its specific implementation process is detailed in the method embodiments, which is not repeated here.
An embodiment of the present application further provides an electronic device. Referring to FIG. 8, the electronic device 400 comprises a processor 401 and a memory 402, the processor 401 being electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400; it connects various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 400 and processes data by running or loading the computer program stored in the memory 402 and recalling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
The memory 402 can be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the computer programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area can store data created according to the use of the electronic device, and the like. In addition, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
In the embodiment of the present application, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the following steps, and runs the computer programs stored in the memory 402, thereby implementing various functions, as follows:
collect multi-dimensional feature information of the application as samples and construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model; acquire the current multi-dimensional feature information of the application as a prediction sample; generate a prediction result according to the prediction sample and the trained prediction model; and manage the application according to the prediction result. This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
In some implementations, when inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training to obtain the trained prediction model, the processor 401 is further configured to perform the following steps:
input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to generate optimized parameters;
generate the trained prediction model according to the optimized parameters, the recurrent neural network model, and the stacked autoencoder neural network model.
In some implementations, when inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training to generate the optimized parameters, the processor 401 is further configured to perform the following steps:
input the output values of the recurrent neural network model and the stacked autoencoder neural network model into the fusion layer to obtain an intermediate value;
input the intermediate value into the fully connected layer to obtain probabilities corresponding to a plurality of prediction results;
obtain a loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them;
train according to the loss value to generate the optimized parameters.
In some implementations, when training according to the loss value, the processor 401 is further configured to perform the following step:
train with the stochastic gradient descent method according to the loss value.
In some implementations, when inputting the intermediate value into the fully connected layer to obtain probabilities corresponding to the plurality of prediction results, the processor 401 is further configured to perform the following step:
calculate the output result of the fully connected layer based on the first preset formula to obtain the probabilities corresponding to the plurality of prediction results, where the first preset formula is:
In some implementations, when obtaining the loss value according to the plurality of prediction results and the plurality of probabilities corresponding to them, the processor 401 is further configured to perform the following step:
obtain the loss value from the plurality of prediction results and the corresponding probabilities based on the second preset formula, where the second preset formula is:
where C is the number of categories of the prediction result and y_k is the true value.
As can be seen from the above, the electronic device provided by the embodiment of the present application collects multi-dimensional feature information of the application as samples to construct a sample set of the application, the sample set including a first sample set of the application and a second sample set of the electronic device; inputs the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model respectively for training, to obtain the trained prediction model; acquires the current multi-dimensional feature information of the application as a prediction sample; generates a prediction result according to the prediction sample and the trained prediction model; and manages the application according to the prediction result. This application can improve the accuracy of predicting application usage, thereby improving the intelligence and accuracy of managing applications that enter the background.
Referring also to FIG. 9, in some implementations the electronic device 400 may further include a display 403, a radio-frequency circuit 404, an audio circuit 405, and a power supply 406, each of which is electrically connected to the processor 401.
The display 403 may be used to display information entered by the user or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display 403 may include a display panel, which in some implementations may be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The radio-frequency circuit 404 may be used to transmit and receive radio-frequency signals so as to establish wireless communication with network devices or other electronic devices, and to exchange signals with them.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power supply 406 may be used to supply power to the components of the electronic device 400. In some embodiments, the power supply 406 may be logically connected to the processor 401 through a power management system, so that functions such as charge management, discharge management, and power-consumption management are implemented through the power management system.
Although not shown in FIG. 9, the electronic device 400 may further include a camera, a Bluetooth module, and the like, which are not described in detail here.
An embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to perform the application management method of any of the above embodiments.
In this embodiment of the present application, the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
It should be noted that, for the application management method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the flow of the application management method of the embodiments may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, for example in the memory of the electronic device, and executed by at least one processor in the electronic device, and its execution may include the flow of the embodiments of the application management method. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
For the application management apparatus of the embodiments of the present application, its functional modules may be integrated into one processing chip, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented as software functional modules and sold or used as an independent product, they may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The application management method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art will, in accordance with the idea of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (20)
- An application management method, wherein the method comprises: collecting multi-dimensional feature information of an application as samples and constructing a sample set for the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application and the second sample set comprising feature information of an electronic device; training a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set, respectively, to obtain a trained prediction model; acquiring current multi-dimensional feature information of the application as a prediction sample; and generating a prediction result from the prediction sample and the trained prediction model, and managing the application according to the prediction result.
- The application management method according to claim 1, wherein the step of training the recurrent neural network model and the stacked autoencoder neural network model with the first sample set and the second sample set, respectively, to obtain the trained prediction model comprises: inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model, respectively, for training, to generate optimized parameters; and generating the trained prediction model from the optimized parameters together with the recurrent neural network model and the stacked autoencoder neural network model.
- The application management method according to claim 2, wherein the step of inputting the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model, respectively, for training to generate the optimized parameters comprises: inputting the output values of the recurrent neural network model and the stacked autoencoder neural network model into a fusion layer to obtain an intermediate value; inputting the intermediate value into a fully connected layer to obtain probabilities corresponding to a plurality of prediction results; obtaining a loss value from the plurality of prediction results and the corresponding plurality of probabilities; and training according to the loss value to generate the optimized parameters.
- The application management method according to claim 3, wherein the step of training according to the loss value comprises: training with stochastic gradient descent according to the loss value.
- The application management method according to claim 1, wherein the prediction result comprises a first prediction value 1 and a second prediction value 0, and the step of managing the application according to the prediction result comprises: cleaning up the application when the prediction result is the first prediction value 1; and keeping the state of the application unchanged when the prediction result is the second prediction value 0.
- The application management method according to claim 1, wherein the recurrent neural network model is used to perform time-series analysis on background applications, has two hidden layers each of size 64, and uses the linear function y = x as the activation function.
- An application management apparatus, wherein the apparatus comprises: a collection module configured to collect multi-dimensional feature information of an application as samples and construct a sample set for the application, the sample set comprising a first sample set and a second sample set, the first sample set comprising feature information of the application and the second sample set comprising feature information of an electronic device; a training module configured to train a recurrent neural network model and a stacked autoencoder neural network model with the first sample set and the second sample set, respectively, to obtain a trained prediction model; an acquisition module configured to acquire current multi-dimensional feature information of the application as a prediction sample; and a management module configured to generate a prediction result from the prediction sample and the trained prediction model, and to manage the application according to the prediction result.
- The application management apparatus according to claim 10, wherein the training module is specifically configured to input the first sample set and the second sample set as training data into the recurrent neural network model and the stacked autoencoder neural network model, respectively, for training to generate optimized parameters, and to generate the trained prediction model from the optimized parameters together with the recurrent neural network model and the stacked autoencoder neural network model.
- The application management apparatus according to claim 11, wherein the training module specifically comprises a fusion layer, a fully connected layer, a loss-value calculator, and a training submodule; the fusion layer is configured to receive the output values of the recurrent neural network model and the stacked autoencoder neural network model to obtain an intermediate value; the fully connected layer is configured to receive the intermediate value to obtain the probabilities corresponding to the plurality of prediction results; the loss-value calculator is configured to obtain a loss value from the plurality of prediction results and the corresponding plurality of probabilities; and the training submodule is configured to train according to the loss value to obtain the optimized parameters.
- The application management apparatus according to claim 12, wherein the training submodule is specifically configured to train with stochastic gradient descent according to the loss value.
- The application management apparatus according to claim 10, wherein the prediction result comprises a first prediction value 1 and a second prediction value 0, and the management module is specifically configured to: clean up the application when the prediction result is the first prediction value 1; and keep the state of the application unchanged when the prediction result is the second prediction value 0.
- The application management apparatus according to claim 10, wherein the recurrent neural network model is used to perform time-series analysis on background applications, has two hidden layers each of size 64, and uses the linear function y = x as the activation function.
- A storage medium having a computer program stored thereon, wherein, when the computer program is run on a computer, the computer is caused to perform the application management method according to any one of claims 1 to 9.
- An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to invoke the computer program to perform the application management method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710920013.4 | 2017-09-30 | ||
CN201710920013.4A CN107678799B (zh) | 2017-09-30 | 2017-09-30 | Application management method and apparatus, storage medium, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019062413A1 (zh) | 2019-04-04 |
Family
ID=61139518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/102239 WO2019062413A1 (zh) | 2017-09-30 | 2018-08-24 | 应用程序管控方法、装置、存储介质及电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107678799B (zh) |
WO (1) | WO2019062413A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110502269A (zh) * | 2019-07-24 | 2019-11-26 | Shenzhen OneConnect Smart Technology Co., Ltd. | Application optimization method, device, storage medium, and apparatus |
CN111460732A (zh) * | 2020-03-31 | 2020-07-28 | Shenzhen University | Method for constructing a nonlinear model of a planar motor |
CN111832993A (zh) * | 2020-07-13 | 2020-10-27 | Shenzhen Today International Logistics Technology Co., Ltd. | Predictive maintenance method for a warehousing and logistics system, and related components |
CN111950503A (zh) * | 2020-06-16 | 2020-11-17 | Institute of Geology and Geophysics, Chinese Academy of Sciences | Airborne transient electromagnetic data processing method, apparatus, and computing device |
CN112748941A (zh) * | 2020-08-06 | 2021-05-04 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for updating a target application based on feedback information |
CN113342474A (zh) * | 2021-06-29 | 2021-09-03 | Agricultural Bank of China | Customer-traffic prediction and model-training method, device, and storage medium |
CN113837227A (zh) * | 2021-08-26 | 2021-12-24 | Beijing Smartchip Microelectronics Technology Co., Ltd. | Load prediction method, apparatus, chip, electronic device, and storage medium |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678799B (zh) | 2017-09-30 | 2019-10-25 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application management method and apparatus, storage medium, and electronic device |
CN107678845B (zh) | 2017-09-30 | 2020-03-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application management method and apparatus, storage medium, and electronic device |
CN108595228B (zh) | 2018-05-10 | 2021-03-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for establishing an application prediction model, storage medium, and mobile terminal |
CN108595227A (zh) | 2018-05-10 | 2018-09-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application preloading method and apparatus, storage medium, and mobile terminal |
CN108710513B (zh) | 2018-05-15 | 2020-07-21 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application launching method and apparatus, storage medium, and terminal |
CN108804157A (zh) | 2018-06-05 | 2018-11-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application preloading method and apparatus, storage medium, and terminal |
CN111274118B (zh) * | 2018-12-05 | 2024-05-14 | Alibaba Group Holding Ltd. | Application optimization processing method, apparatus, and system |
CN111797866A (zh) * | 2019-04-09 | 2020-10-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Feature extraction method and apparatus, storage medium, and electronic device |
CN110263029B (zh) * | 2019-05-06 | 2023-06-23 | Ping An Technology (Shenzhen) Co., Ltd. | Method, apparatus, terminal, and medium for generating test data from a database |
CN111079053A (zh) * | 2019-12-19 | 2020-04-28 | Beijing Antutu Technology Co., Ltd. | Product information display method and apparatus, electronic device, and storage medium |
CN112633473A (zh) * | 2020-12-18 | 2021-04-09 | Spreadtrum Communications (Shanghai) Co., Ltd. | AI-based wearable device and its application data processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150363687A1 (en) * | 2014-06-13 | 2015-12-17 | International Business Machines Corporation | Managing software bundling using an artificial neural network |
CN106484077A (zh) * | 2016-10-19 | 2017-03-08 | Shanghai Qingcheng Industrial Co., Ltd. | Mobile terminal and power-saving method based on application software classification |
CN106900070A (zh) * | 2017-01-09 | 2017-06-27 | Beijing University of Posts and Telecommunications | Method for optimizing data-transmission energy consumption of multiple applications on a mobile device |
CN107678799A (zh) * | 2017-09-30 | 2018-02-09 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Application management method and apparatus, storage medium, and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508701A (zh) * | 2011-10-18 | 2012-06-20 | Beijing Bainaweier Technology Co., Ltd. | Method for automatically controlling application running and user terminal |
CN107133094B (zh) * | 2017-06-05 | 2021-11-02 | Nubia Technology Co., Ltd. | Application management method, mobile terminal, and computer-readable storage medium |
- 2017-09-30 CN CN201710920013.4A patent/CN107678799B/zh active Active
- 2018-08-24 WO PCT/CN2018/102239 patent/WO2019062413A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN107678799A (zh) | 2018-02-09 |
CN107678799B (zh) | 2019-10-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18861334 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18861334 Country of ref document: EP Kind code of ref document: A1 |