WO2019085750A1 - Application management and control method, device, medium, and electronic device - Google Patents

Application management and control method, device, medium, and electronic device

Info

Publication number
WO2019085750A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
value
probability value
training model
feature information
Prior art date
Application number
PCT/CN2018/110519
Other languages
English (en)
French (fr)
Inventor
梁昆
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP18873772.0A priority Critical patent/EP3706043A4/en
Publication of WO2019085750A1 publication Critical patent/WO2019085750A1/zh
Priority to US16/848,270 priority patent/US20200241483A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0205Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system
    • G05B13/026Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system using a predictor
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/561Adding application-functional data or data for application control, e.g. adding metadata
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32193Ann, neural base quality management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application relates to the field of electronic device terminals, and in particular, to an application management method, device, medium, and electronic device.
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • An embodiment of the present application provides an application management and control method, which is applied to an electronic device, where the application management method includes the following steps:
  • the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application;
  • the Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate the first training model, and the nonlinear support vector machine algorithm is used to generate the second training model.
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the embodiment of the present application further provides an application management method device, where the device includes:
  • An obtaining module configured to obtain the application sample vector set, where the sample vector in the sample vector set includes historical feature information x i of multiple dimensions of the application;
  • a generating module configured to calculate a sample vector set by using a BP neural network algorithm, generate a first training model, and generate a second training model by using a nonlinear support vector machine algorithm;
  • a calculation module configured to: when the application enters the background, input the current feature information s of the application into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within a hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described above.
  • the embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the electronic device is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following step:
  • the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application;
  • the BP neural network algorithm is used to calculate the sample vector set to generate the first training model, and the second training model is generated by the nonlinear support vector machine algorithm.
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • FIG. 1 is a schematic diagram of a system of an application management device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control device according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • An application management method is applied to an electronic device, wherein the application management method includes:
  • the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application;
  • the Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate the first training model, and the nonlinear support vector machine algorithm is used to generate the second training model;
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the BP neural network algorithm is used to calculate a sample vector set to generate a first training model, including:
  • the sample vector set is brought into the network structure for calculation to obtain the first training model.
  • the defined network structure includes:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x i ;
  • the hidden layer including M nodes
  • the classification layer adopts a softmax function, and the softmax function is p = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the jth intermediate value;
  • the output layer comprising 2 nodes
  • the activation function adopting a sigmoid function, and the sigmoid function is f(x) = 1 / (1 + e^(-x)), wherein the range of f(x) is 0 to 1;
  • the batch size is A
  • the learning rate is set, and the learning rate is B.
  • the sample vector set is brought into a network structure for calculation, and the first training model is obtained, including:
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y.
  • when p_1 is greater than p_2, y = [1 0]^T.
  • the network structure is modified according to the predicted result value y to obtain a first training model.
  • the generating the second training model by using the nonlinear support vector machine algorithm includes:
  • the second training model is obtained by defining a Gaussian kernel function.
  • the method further includes: when the first closing probability value is outside the hesitation interval, determining whether the first closing probability value is smaller than a minimum value of the hesitation interval or greater than a maximum value of the hesitation interval.
  • in the application management method, when the first closing probability value is less than the minimum value of the hesitation interval, the application is retained; when the first closing probability value is greater than the maximum value of the hesitation interval, the application is closed.
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the first closing probability value is smaller than a minimum value of the hesitation interval or greater than a maximum value of the hesitation interval.
  • An application management device comprising:
  • An obtaining module configured to obtain the application sample vector set, where the sample vector in the sample vector set includes historical feature information x i of multiple dimensions of the application;
  • a generating module is configured to calculate a sample vector set by using a BP neural network algorithm, generate a first training model, and generate a second training model by using a nonlinear support vector machine algorithm;
  • a calculation module configured to: when the application enters the background, input the current feature information s of the application into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within a hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • An electronic device comprising: a processor and a memory, the electronic device being electrically connected to the memory, the memory for storing instructions and data, the processor for performing:
  • the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application;
  • the BP neural network algorithm is used to calculate the sample vector set to generate the first training model, and the nonlinear support vector machine algorithm is used to generate the second training model;
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • when calculating the sample vector set by using the BP neural network algorithm to generate the first training model, the processor further performs:
  • the sample vector set is brought into the network structure for calculation to obtain the first training model.
  • the processor further performs:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x i ;
  • the hidden layer including M nodes
  • the classification layer adopts a softmax function, and the softmax function is p = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the jth intermediate value;
  • the output layer comprising 2 nodes
  • the activation function adopting a sigmoid function, and the sigmoid function is f(x) = 1 / (1 + e^(-x)), wherein the range of f(x) is 0 to 1;
  • the batch size is A
  • the learning rate is set, and the learning rate is B.
  • when the sample vector set is brought into the network structure for calculation to obtain the first training model, the processor further performs:
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y.
  • when p_1 is greater than p_2, y = [1 0]^T.
  • the network structure is modified according to the predicted result value y to obtain a first training model.
  • when generating the second training model by using the nonlinear support vector machine algorithm, the processor further performs:
  • the second training model is obtained by defining a Gaussian kernel function.
  • the processor further performs: when the first closing probability value is outside the hesitation interval, determining whether the first closing probability value is less than a minimum value of the hesitation interval or greater than a maximum value of the hesitation interval.
  • the application when the first closing probability value is less than the minimum value of the hesitation interval, the application is retained; when the first closing probability value is greater than the maximum value of the hesitation interval, the application is closed.
  • the processor executes:
  • the first closing probability value is smaller than a minimum value of the hesitation interval or greater than a maximum value of the hesitation interval.
  • the application management method provided by the present application is mainly applied to electronic devices such as a wristband, a smartphone or tablet based on the Apple or Android system, or a smart mobile electronic device such as a Windows- or Linux-based notebook computer.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • FIG. 1 is a schematic diagram of a system for controlling an application program according to an embodiment of the present application.
  • the application management device is mainly configured to: obtain historical feature information x i of the application from a database, calculate the historical feature information x i by an algorithm to obtain a training model, and then input the current feature information s of the application into the training model for calculation, where the calculation result is used to judge whether the application can be closed, so as to manage the preset application, for example by closing or freezing it.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control method according to an embodiment of the present application.
  • the historical feature information x i of the application is obtained from the database, and then the historical feature information x i is calculated by an algorithm to obtain a training model, and secondly, when the application control device detects that the application enters When the electronic device is in the background, the current feature information s of the application is input into the training model for calculation, and the calculation result determines whether the application can be closed.
  • the historical feature information x i of application a is obtained from the database, and the historical feature information x i is calculated by an algorithm to obtain a training model. When the application control device detects that application a enters the background of the electronic device, the current feature information s of application a is input into the training model for calculation; the calculation result determines that application a can be closed, and application a is closed. When the application control device detects that application b enters the background of the electronic device, the current feature information s of application b is input into the training model for calculation; the calculation result determines that application b needs to be retained, and application b is retained.
  • the embodiment of the present application provides an application management method, and the execution entity of the application management method may be the application management device provided by an embodiment of the present application, or an electronic device integrated with the application management device, where the application management device can be implemented in hardware or software.
  • FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
  • the application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
  • Step S11 Acquire the application sample vector set, wherein the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application.
  • the feature information of the multiple dimensions may refer to Table 1.
  • the feature information of the ten dimensions shown in Table 1 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 1, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • WiFi: whether WiFi is turned on; for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
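  • As a minimal illustration (not part of the original disclosure) of how such a six-dimensional sample vector might be encoded, the sketch below names only the WiFi dimension from the text above; the other five dimension names are hypothetical placeholders.

```python
# Hypothetical sketch: encoding one historical sample vector x_i.
# Only the WiFi on/off dimension (1 = on, 0 = off) comes from the text;
# the other five dimension names are illustrative placeholders.

def encode_sample(wifi_on: bool,
                  screen_on: bool,          # placeholder dimension
                  charging: bool,           # placeholder dimension
                  battery_level: float,     # placeholder, 0.0-1.0
                  hour_of_day: int,         # placeholder, 0-23
                  minutes_in_background: float) -> list:
    """Return a 6-dimensional feature vector for one observation."""
    return [
        1.0 if wifi_on else 0.0,
        1.0 if screen_on else 0.0,
        1.0 if charging else 0.0,
        battery_level,
        hour_of_day / 23.0,                 # simple normalization
        minutes_in_background,
    ]

# One sample vector; a label (close or retain) would be attached when
# the sample vector set is built for training.
x_i = encode_sample(wifi_on=True, screen_on=False, charging=False,
                    battery_level=0.62, hour_of_day=21,
                    minutes_in_background=4.5)
```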
  • step S12 the BP neural network algorithm is used to calculate the sample vector set to generate the first training model, and the nonlinear support vector machine algorithm is used to generate the second training model.
  • FIG. 4 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application.
  • the step S12 includes steps S121 and S122, wherein step S121 is to calculate a sample vector set by using a BP neural network algorithm to generate a first training model, and step S122 is to generate a second training model by using a nonlinear support vector machine algorithm.
  • the order of step S121 and step S122 can be reversed.
  • the step S121 may include:
  • Step S1211 defining a network structure
  • Step S1212 Bring the sample vector set into the network structure for calculation to obtain the first training model.
  • step S1211 the defining the network structure includes:
  • Step S1211a setting an input layer, the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x i .
  • the dimension of the historical feature information x i is less than 10, and the number of nodes of the input layer is less than 10 to simplify the operation process.
  • the historical feature information x i has a dimension of 6 dimensions, and the input layer includes 6 nodes.
  • Step S1211b setting a hidden layer, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10 to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes
  • the second hidden layer includes 5 nodes
  • the third hidden layer includes 5 nodes.
  • Step S1211c setting a classification layer, the classification layer adopts a softmax function, and the softmax function is p = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the jth intermediate value.
  • step S1211d an output layer is set, and the output layer includes two nodes.
  • Step S1211e setting an activation function, the activation function adopting a sigmoid function, and the sigmoid function is f(x) = 1 / (1 + e^(-x)), wherein the range of f(x) is 0 to 1.
  • step S1211f the batch size is set, and the batch size is A.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • step S1211g the learning rate is set, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
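  • For illustration only, the NumPy sketch below summarizes the structure defined in steps S1211a to S1211g: a 6-node input layer, hidden layers of 10, 5 and 5 nodes with the sigmoid activation, a softmax classification layer and a 2-node output, with batch size 128 and learning rate 0.9. The random weights, the sample input, and the use of the sigmoid on every hidden layer are assumptions, and the weight update of step S1212e is omitted here.

```python
import numpy as np

# Layer sizes from the described structure: 6-dimensional input,
# hidden layers of 10, 5 and 5 nodes, 2 output categories (close / retain).
LAYER_SIZES = [6, 10, 5, 5, 2]
BATCH_SIZE = 128        # batch size A = 128
LEARNING_RATE = 0.9     # learning rate B = 0.9 (used by the omitted update step)

def sigmoid(x):
    # Activation function: f(x) = 1 / (1 + e^(-x)), range 0 to 1.
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Classification layer: p_k = e^(Z_k) / sum_{j=1..C} e^(Z_j).
    z = z - z.max(axis=-1, keepdims=True)     # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Randomly initialized weights and biases (illustrative only).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """Forward pass: input layer -> hidden layers (sigmoid) -> softmax."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)                # hidden-layer outputs
    z = h @ weights[-1] + biases[-1]          # intermediate values Z_1, Z_2
    return softmax(z)                         # [p1, p2]: close / retain

# Hypothetical 6-dimensional current feature vector (placeholder values).
x = np.array([1.0, 0.0, 0.0, 0.62, 21 / 23, 4.5])
p1, p2 = forward(x)
y = [1, 0] if p1 > p2 else [0, 1]             # predicted result value y
```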
  • step S1212, bringing the sample vector set into the network structure for calculation to obtain the first training model, may include:
  • step S1212a the sample vector set is input at the input layer for calculation, and an output value of the input layer is obtained.
  • Step S1212b inputting an output value of the input layer in the hidden layer to obtain an output value of the hidden layer.
  • the output value of the input layer is an input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output of the input layer is the input value of the first hidden layer.
  • the output value of the first hidden layer is an input value of the second hidden layer.
  • the output value of the second hidden layer is an input value of the third hidden layer, and so on.
  • Step S1212c inputting the output value of the hidden layer into the classification layer for calculation, to obtain the predicted probability value [p_1 p_2]^T, where p_1 is the predicted close probability value and p_2 is the predicted retention probability value.
  • the output value of the hidden layer is an input value of the classification layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the last hidden layer is the input value of the classification layer.
  • Step S1212d the predicted probability value is brought into the output layer for calculation, and the predicted result value y is obtained; when p_1 is greater than p_2, y = [1 0]^T.
  • the output value of the classification layer is an input value of the output layer.
  • Step S1212e the network structure is modified according to the predicted result value y to obtain a first training model.
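  • Step S1212e corresponds to the usual backpropagation weight update. The PyTorch sketch below is an assumption-laden illustration of such a training loop (the random placeholder data, the SGD optimizer and the cross-entropy loss are not specified by this application); nn.CrossEntropyLoss applies the softmax of the classification layer internally, so the model outputs the raw intermediate values Z.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# 6-10-5-5 hidden structure with sigmoid activations and a 2-node output;
# the softmax classification layer is folded into CrossEntropyLoss below.
model = nn.Sequential(
    nn.Linear(6, 10), nn.Sigmoid(),
    nn.Linear(10, 5), nn.Sigmoid(),
    nn.Linear(5, 5), nn.Sigmoid(),
    nn.Linear(5, 2),
)

# Placeholder sample vector set: random features and 0/1 labels
# (0 = close, 1 = retain); real training would use the historical
# feature information x_i and its labelled results.
features = torch.rand(1000, 6)
labels = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=128, shuffle=True)        # batch size A = 128

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.9)  # learning rate B = 0.9

for epoch in range(10):
    for xb, yb in loader:
        optimizer.zero_grad()
        logits = model(xb)
        loss = criterion(logits, yb)
        loss.backward()           # backpropagate the prediction error
        optimizer.step()          # modify the network weights (step S1212e)
```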
  • the step S122 may include:
  • Step S1221 labeling the sample vectors in the sample vector set to generate a label result y i of each sample vector
  • Step S1222 A second training model is obtained by defining a Gaussian kernel function.
  • step S1221 the sample vectors in the sample vector set are marked to generate a label result y i for each sample vector.
  • step S1222 a second training model is obtained by defining a Gaussian kernel function.
  • the kernel function is a Gaussian kernel function K(x, x_i) = exp(-||x - x_i||^2 / (2σ^2)), where ||x - x_i|| is the Euclidean distance between any point x in space and a certain center x_i, and σ is the width parameter of the Gaussian kernel function.
  • the step of obtaining a training model by defining a Gaussian kernel function may be: defining a model function and a classification decision function according to the Gaussian kernel function, defining a target optimization function through the model function and the classification decision function, and obtaining the optimal solution of the target optimization function by a sequential minimal optimization (SMO) algorithm to obtain the second training model, wherein the target optimization function is minimized over the parameters (α_1, α_2, ..., α_m), one α_i corresponds to one sample (x_i, y_i), and the total number of variables is equal to the training sample capacity m.
  • the optimal solution can be written as α* = (α_1*, α_2*, ..., α_m*)^T, and the second training model is the decision function g(x) = Σ_{i=1}^{m} α_i* y_i K(x, x_i) + b*, where g(x) is the training model output value and the output value is used as the second closing probability value.
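  • The exact formulas for the target optimization function and the decision function do not survive in this text; for reference, the standard dual form of a soft-margin support vector machine with a Gaussian kernel, which the description above appears to follow, is given below. The penalty parameter C and the bias b* belong to the standard formulation and are assumptions here.

```latex
% Gaussian (RBF) kernel:
\[ K(x, x_i) = \exp\!\left( -\frac{\lVert x - x_i \rVert^{2}}{2\sigma^{2}} \right) \]

% Target optimization function (standard soft-margin dual), minimized over
% \alpha = (\alpha_1, \alpha_2, \dots, \alpha_m):
\[ \min_{\alpha}\; \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m}
      \alpha_i \alpha_j y_i y_j K(x_i, x_j) \;-\; \sum_{i=1}^{m} \alpha_i
   \quad \text{s.t.} \quad \sum_{i=1}^{m} \alpha_i y_i = 0, \quad
   0 \le \alpha_i \le C \]

% Second training model: decision value computed from the optimal solution
% \alpha^{*} = (\alpha_1^{*}, \dots, \alpha_m^{*})^{T} found by SMO:
\[ g(x) = \sum_{i=1}^{m} \alpha_i^{*} \, y_i \, K(x, x_i) + b^{*} \]
```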
  • Step S13 when the application enters the background, the current feature information s of the application is input into the first training model for calculation, and the first closing probability value is obtained, and when the first closing probability value is within the hesitation interval, The current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the step S13 may include:
  • Step S131 Collect current feature information s of the application.
  • the dimension of the current feature information s of the collected application is the same as the dimension of the collected historical feature information x i of the application.
  • Step S132 Bring the current feature information s into the first training model for calculation to obtain a first closing probability.
  • the current feature information s is input into the first training model to calculate the probability value [p_1' p_2']^T of the classification layer, where p_1' is the first closing probability value and p_2' is the first retention probability value.
  • Step S133 Determine whether the first closing probability value is within the hesitation interval.
  • the hesitation interval is 0.4-0.6.
  • the minimum value of the hesitation interval is 0.4.
  • the maximum value of the hesitation interval is 0.6.
  • when the first closing probability value is within the hesitation interval, step S134 and step S135 are performed, and when the first closing probability value is outside the hesitation interval, step S136 is performed.
  • Step S134 input current feature information s of the application into the second training model for calculation, to obtain a second closing probability value.
  • Step S135 determining whether the second closing probability value is greater than the determination value.
  • the determination value may be set to zero.
  • when g(s) > 0, the application is closed; when g(s) ≤ 0, the application is retained.
  • step S136 it is determined whether the first closing probability value is smaller than the minimum value of the hesitation interval or greater than the maximum value of the hesitation interval.
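  • Putting steps S131 to S136 together, the control flow of this embodiment can be sketched as follows; the function and model names are hypothetical placeholders, while the hesitation interval [0.4, 0.6] and the determination value 0 are the values given above.

```python
HESITATION_MIN = 0.4       # minimum value of the hesitation interval
HESITATION_MAX = 0.6       # maximum value of the hesitation interval
DETERMINATION_VALUE = 0.0  # determination value for the second model

def should_close(s, first_model, second_model) -> bool:
    """Two-stage decision for an application whose current feature
    information is s. `first_model` and `second_model` are hypothetical
    placeholders: the first returns the first closing probability value
    p1', the second returns the decision value g(s)."""
    p1 = first_model(s)                   # step S132
    if p1 < HESITATION_MIN:               # step S136: below the interval
        return False                      # retain the application
    if p1 > HESITATION_MAX:               # step S136: above the interval
        return True                       # close the application
    g = second_model(s)                   # step S134
    return g > DETERMINATION_VALUE        # step S135: close if g(s) > 0
```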
  • the application management and control method provided by the application obtains the historical feature information x i , generates a first training model by using a BP neural network algorithm, and generates a second training model by using a nonlinear support vector machine algorithm.
  • when it is detected that the application enters the background, the current feature information s of the application is brought into the first training model to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, thereby determining whether the application needs to be closed and closing the application intelligently.
  • FIG. 5 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 includes an acquisition module 31, a generation module 32 and a calculation module 33.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the obtaining module 31 is configured to obtain the application sample vector set, wherein the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application.
  • FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 further includes a detection module 34 for detecting that the application enters the background.
  • the device 30 can also include a storage module 35.
  • the storage module 35 is configured to store historical feature information x i of the application .
  • the feature information of the multiple dimensions may refer to Table 2.
  • the feature information of the ten dimensions shown in Table 2 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 2, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • WiFi: whether WiFi is turned on; for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
  • the generating module 32 is configured to calculate a sample vector set by using a BP neural network algorithm to generate a first training model, and generate a second training model by using a nonlinear support vector machine algorithm.
  • the generating module 32 includes a first generating module 321 and a second generating module 322.
  • the first generating module 321 is configured to calculate a sample vector set by using a BP neural network algorithm to generate a first training model.
  • the second generation module 322 is configured to generate a second training model by using a nonlinear support vector machine algorithm.
  • the first generation module 321 includes a definition module 3211 and a first solution module 3212.
  • the definition module 3211 is used to define a network structure.
  • the definition module 3211 may include an input layer definition module 3211a, an implicit layer definition module 3211b, a classification layer definition module 3211c, an output layer definition module 3211d, an activation function definition module 3211e, a batch size definition module 3211f, and a learning rate definition module 3211g.
  • the input layer definition module 3211a is configured to set an input layer, the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x i .
  • the dimension of the historical feature information x i is less than 10, and the number of nodes of the input layer is less than 10 to simplify the operation process.
  • the historical feature information x i has a dimension of 6 dimensions, and the input layer includes 6 nodes.
  • the hidden layer definition module 3211b is configured to set a hidden layer, and the hidden layer includes M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10 to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes
  • the second hidden layer includes 5 nodes
  • the third hidden layer includes 5 nodes.
  • the classification layer definition module 3211c is configured to set a classification layer, the classification layer adopts a softmax function, and the softmax function is p = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the jth intermediate value.
  • the output layer definition module 3211d is configured to set an output layer, and the output layer includes 2 nodes.
  • the activation function definition module 3211e is configured to set an activation function, the activation function adopts a sigmoid function, and the sigmoid function is f(x) = 1 / (1 + e^(-x)), wherein the range of f(x) is 0 to 1.
  • the batch size definition module 3211f is configured to set a batch size, and the batch size is A.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • the learning rate definition module 3211g is configured to set a learning rate, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
  • the input layer set by the input layer definition module 3211a, the hidden layer set by the hidden layer definition module 3211b, the classification layer set by the classification layer definition module 3211c, the output layer set by the output layer definition module 3211d, the activation function set by the activation function definition module 3211e, the batch size set by the batch size definition module 3211f, and the learning rate set by the learning rate definition module 3211g can all be flexibly adjusted.
  • the first solution module 3212 is configured to bring the sample vector set into the network structure for calculation to obtain a first training model.
  • the first solution module 3212 may include a first solution sub-module 3212a, a second solution sub-module 3212b, a third solution sub-module 3212c, a fourth solution sub-module 3212d, and a correction module 3212e.
  • the first solution sub-module 3212a is configured to input the sample vector set at the input layer for calculation to obtain an output value of the input layer.
  • the second solution sub-module 3212b is configured to input an output value of the input layer at the hidden layer to obtain an output value of the hidden layer.
  • the output value of the input layer is an input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output of the input layer is the input value of the first hidden layer.
  • the output value of the first hidden layer is an input value of the second hidden layer.
  • the output value of the second hidden layer is an input value of the third hidden layer, and so on.
  • the third solution sub-module 3212c is configured to input the output value of the hidden layer into the classification layer for calculation, to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is an input value of the classification layer.
  • the fourth solution sub-module 3212d is configured to bring the predicted probability value into the output layer for calculation to obtain a predicted result value y.
  • when p_1 is greater than p_2, y = [1 0]^T; when p_1 is less than or equal to p_2, y = [0 1]^T.
  • the output value of the classification layer is an input value of the output layer.
  • the modification module 3212e is configured to modify the network structure according to the prediction result value y to obtain a first training model.
  • the second generation module 322 includes a training module 3221 and a second solution module 3222.
  • the training module 3221 is configured to mark the sample vectors in the sample vector set to generate a labeled result y i for each sample vector.
  • the second solution module 3222 is configured to obtain a second training model by defining a Gaussian kernel function.
  • the kernel function is a Gaussian kernel function K(x, x_i) = exp(-||x - x_i||^2 / (2σ^2)), where ||x - x_i|| is the Euclidean distance between any point x in space and a certain center x_i, and σ is the width parameter of the Gaussian kernel function.
  • the second solving module 3222 can be used to define a model function and a classification decision function according to the Gaussian kernel function, define a target optimization function through the model function and the classification decision function, and obtain the optimal solution of the target optimization function by the sequential minimal optimization (SMO) algorithm to obtain the second training model, wherein the target optimization function is minimized over the parameters (α_1, α_2, ..., α_m), one α_i corresponds to one sample (x_i, y_i), and the total number of variables is equal to the training sample capacity m.
  • the optimal solution can be written as α* = (α_1*, α_2*, ..., α_m*)^T, and the second training model is the decision function g(x) = Σ_{i=1}^{m} α_i* y_i K(x, x_i) + b*, where g(x) is the training model output value and the output value is used as the second closing probability value.
  • the calculating module 33 is configured to: when the application enters the background, input the current feature information s of the application into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within a hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the calculation module 33 may include an acquisition module 330 , a first calculation module 331 , and a second calculation module 332 .
  • the collecting module 330 is configured to collect the current feature information s of the application when the application enters the background;
  • the dimension of the current feature information s of the collected application is the same as the dimension of the collected historical feature information x i of the application.
  • the first calculating module 331 is configured to input the current feature information s of the application into the first training model for calculation when the application enters the background, to obtain a first closing probability value.
  • the current feature information s is input into the first training model to calculate the probability value [p_1' p_2']^T of the classification layer, where p_1' is the first closing probability value and p_2' is the first retention probability value.
  • the calculation module 33 further includes a first determination module 333.
  • the first determining module 333 is configured to determine whether the first closing probability value is in a hesitation interval.
  • the hesitation interval is 0.4-0.6.
  • the minimum value of the hesitation interval is 0.4.
  • the maximum value of the hesitation interval is 0.6.
  • the second calculating module 332 is configured to input the current feature information s of the application into the second training model for calculation when the first closing probability value is within a hesitation interval, to obtain a second closing probability value.
  • the calculation module 33 further includes a second determination module 334.
  • the second determining module 334 is configured to determine whether the second closing probability value is greater than the determining value.
  • the determination value may be set to zero.
  • when g(s) > 0, the application is closed; when g(s) ≤ 0, the application is retained.
  • the calculation module 33 further includes a third determination module 335.
  • the third determining module 335 is configured to determine, when the first closing probability value is outside the hesitation interval, whether the first closing probability value is smaller than a minimum value of the hesitation interval or greater than a maximum value of the hesitation interval.
  • the collection module 330 is further configured to collect the current feature information s at a predetermined acquisition time, and store the current feature information s in the storage module 35.
  • the collection module 330 is further configured to collect the current feature information s corresponding to the time point at which the application is detected to enter the background, and the current feature information s is input into the calculation module 33 to be brought into the training model for calculation.
  • the apparatus 30 can also include a shutdown module 36 for shutting down the application when it is determined that the application needs to be closed.
  • the application management apparatus provided by the present application acquires the historical feature information x i , generates the first training model by using the BP neural network algorithm, and generates the second training model by using the nonlinear support vector machine algorithm. When it is detected that the application enters the background, the current feature information s of the application is brought into the first training model to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, thereby determining whether the application needs to be closed and closing the application intelligently.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 includes a processor 501 and a memory 502.
  • the processor 501 is electrically connected to the memory 502.
  • the processor 501 is the control center of the electronic device 500; it connects various parts of the entire electronic device 500 through various interfaces and lines, and, by running or loading an application stored in the memory 502 and calling data stored in the memory 502, executes the various functions of the electronic device and processes data, thereby performing overall monitoring of the electronic device 500.
  • in this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more applications into the memory 502 according to the following steps, and the processor 501 runs the applications stored in the memory 502, thereby implementing various functions:
  • the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application;
  • the BP neural network algorithm is used to calculate the sample vector set to generate the first training model, and the nonlinear support vector machine algorithm is used to generate the second training model;
  • the current feature information s of the application is input into the first training model for calculation to obtain a first closing probability value; when the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, and when the second closing probability value is greater than the determination value, the application is closed.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x i of the plurality of dimensions of the application.
  • the feature information of the multiple dimensions may refer to Table 3.
  • the feature information of the ten dimensions shown in Table 3 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 3, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • WiFi: whether WiFi is turned on; for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
  • the processor 501 calculates a sample vector set by using a BP neural network algorithm, and the generating the first training model further includes:
  • the sample vector set is brought into the network structure for calculation to obtain the first training model.
  • the defined network structure includes:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x i ;
  • the dimension of the historical feature information x i is less than 10, and the number of nodes of the input layer is less than 10 to simplify the operation process.
  • the historical feature information x i has a dimension of 6 dimensions, and the input layer includes 6 nodes.
  • a hidden layer is set, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10 to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes
  • the second hidden layer includes 5 nodes
  • the third hidden layer includes 5 nodes.
  • the classification layer adopts a softmax function, and the softmax function is p = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the jth intermediate value.
  • An output layer is set, the output layer comprising 2 nodes.
  • the activation function adopting a sigmoid function, and the sigmoid function is f(x) = 1 / (1 + e^(-x)), wherein the range of f(x) is 0 to 1.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • the learning rate is set, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
  • the step of bringing the sample vector set into the network structure for calculation, and obtaining the first training model may include:
  • the sample vector set is input at the input layer for calculation to obtain an output value of the input layer.
  • An output value of the input layer is input to the hidden layer to obtain an output value of the hidden layer.
  • the output value of the input layer is an input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output of the input layer is the input value of the first hidden layer.
  • the output value of the first hidden layer is an input value of the second hidden layer.
  • the output value of the second hidden layer is an input value of the third hidden layer, and so on.
  • the output value of the hidden layer is input at the classification layer for calculation, and the predicted probability value [p_1 p_2]^T is obtained.
  • the output value of the hidden layer is an input value of the classification layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the last hidden layer is the input value of the classification layer.
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y.
  • when p_1 is greater than p_2, y = [1 0]^T.
  • the output value of the classification layer is an input value of the output layer.
  • the network structure is modified according to the predicted result value y to obtain a first training model.
  • the processor 501 calculates a sample vector set by using a nonlinear support vector machine algorithm, and the generating the second training model further includes:
  • the second training model is obtained by defining a Gaussian kernel function.
  • the kernel function is a Gaussian kernel function K(x, x_i) = exp(-||x - x_i||^2 / (2σ^2)), where ||x - x_i|| is the Euclidean distance between any point x in space and a certain center x_i, and σ is the width parameter of the Gaussian kernel function.
  • the step of obtaining the second training model by defining a Gaussian kernel function may be: defining a model function and a classification decision function according to the Gaussian kernel function, defining a target optimization function through the model function and the classification decision function, and obtaining the optimal solution of the target optimization function by the sequential minimal optimization (SMO) algorithm to obtain the second training model, wherein the target optimization function is minimized over the parameters (α_1, α_2, ..., α_m), one α_i corresponds to one sample (x_i, y_i), and the total number of variables is equal to the training sample capacity m.
  • The optimal solution can be written as α* = (α_1*, α_2*, ..., α_m*)^T.
  • The second training model is g(x) = Σ_i α_i* y_i K(x, x_i) + b*.
  • g(x) is the training model output value, and the output value is the second closing probability value (a kernel and model-output sketch is given after this item).
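The Gaussian kernel and the resulting model output g(x) can be sketched as below. The support vectors, multipliers and bias are assumed to have been obtained beforehand by an SMO-style solver; in practice an off-the-shelf RBF-kernel SVM (for example scikit-learn's SVC with kernel='rbf', whose decision_function plays the role of g) could supply an equivalent value. The argument names are illustrative only.

```python
def gaussian_kernel(x, x_i, sigma=1.0):
    # K(x, x_i) = exp(-||x - x_i||^2 / (2 * sigma^2)); sigma is the width parameter.
    d = np.asarray(x, dtype=float) - np.asarray(x_i, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def second_model_output(s, support_x, support_y, alphas, bias, sigma=1.0):
    """g(s) = sum_i alpha_i * y_i * K(s, x_i) + b, i.e. the output of the
    second training model (the second closing probability value)."""
    return sum(a * y * gaussian_kernel(s, x, sigma)
               for a, y, x in zip(alphas, support_y, support_x)) + bias
```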
  • The processor 501 inputs the current feature information s of the application into the training models for calculation as follows:
  • The current feature information s of the application is collected; the dimension of the collected current feature information s is the same as the dimension of the collected historical feature information x i of the application.
  • The current feature information s is brought into the first training model for calculation to obtain a first closing probability value.
  • Specifically, the current feature information s is input into the first training model to calculate the predicted probability value [p1' p2']^T of the classification layer, where p1' is the first closing probability value and p2' is the first retention probability value.
  • The hesitation interval is 0.4-0.6: the minimum value of the hesitation interval is 0.4 and the maximum value of the hesitation interval is 0.6.
  • When the first closing probability value falls within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value g(s).
  • The determination value may be set to zero: when g(s) > 0, the application is closed; when g(s) < 0, the application is retained.
  • When the first closing probability value lies outside the hesitation interval, it is judged whether it is smaller than the minimum value or greater than the maximum value of the hesitation interval: if it is smaller than the minimum value, the application is retained; if it is greater than the maximum value, the application is closed (a decision-logic sketch is given after this item).
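Putting the two models together, the decision flow described above can be sketched as follows. The callables first_model and second_model stand for the trained predictors (returning p1' and g(s) respectively) and are assumptions for illustration; the thresholds are those named in this application (hesitation interval 0.4-0.6, determination value 0).

```python
def manage_application(s, first_model, second_model,
                       hesitation=(0.4, 0.6), determination_value=0.0):
    """Decide whether a background application should be closed or retained.

    first_model(s)  -> first closing probability value p1'
    second_model(s) -> second closing probability value g(s)
    """
    p1 = first_model(s)
    lo, hi = hesitation
    if p1 < lo:                  # below the hesitation interval: keep the app
        return "retain"
    if p1 > hi:                  # above the hesitation interval: close the app
        return "close"
    # Inside the hesitation interval: defer to the second training model.
    return "close" if second_model(s) > determination_value else "retain"
```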
  • Memory 502 can be used to store applications and data.
  • The program stored in the memory 502 contains instructions executable by the processor.
  • The program can constitute various functional modules.
  • The processor 501 executes various functional applications and data processing by running the program stored in the memory 502.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509.
  • the processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
  • the radio frequency circuit 503 is configured to transceive radio frequency signals to communicate with a server or other electronic device over a wireless communication network.
  • the display screen 504 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
  • the control circuit 505 is electrically connected to the display screen 504 for controlling the display screen 504 to display information.
  • the input unit 506 can be configured to receive input digits, character information, or user characteristic information (eg, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function controls.
  • the audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
  • Sensor 508 is used to collect external environmental information.
  • Sensor 508 can include one or more of ambient brightness sensors, acceleration sensors, gyroscopes, and the like.
  • Power source 509 is used to power various components of electronic device 500.
  • the power supply 509 can be logically coupled to the processor 501 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
  • the electronic device 500 may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • The electronic device provided by the present application acquires historical feature information x i, generates a first training model by using a BP neural network algorithm, and generates a second training model by using a nonlinear support vector machine algorithm.
  • When it detects that an application enters the background, it brings the current feature information s of the application into the first training model to obtain a first closing probability value.
  • When the first closing probability value is within the hesitation interval, the current feature information s of the application is input into the second training model for calculation to obtain a second closing probability value, so as to determine whether the application needs to be closed and to close the application intelligently.
  • the embodiment of the present invention further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described in any of the above embodiments.
  • the application management method, the device, the medium, and the electronic device provided by the embodiments of the present invention belong to the same concept, and the specific implementation process thereof is described in the full text of the specification, and details are not described herein again.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Abstract

In the application management and control method, device, medium, and electronic device provided by the present application, historical feature information x i is acquired, a first training model is generated by using a BP neural network algorithm, and a second training model is generated by using a nonlinear support vector machine algorithm; when it is detected that an application enters the background, the first training model and the second training model are evaluated on the current feature information s of the application, so as to determine whether the application needs to be closed.

Description

应用程序管控方法、装置、介质及电子设备
本申请要求于2017年10月31日提交中国专利局、申请号为201711047050.5、申请名称为“应用程序管控方法、装置、介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子设备终端领域,具体涉及一种应用程序管控方法、装置、介质及电子设备。
背景技术
终端用户每天会使用大量应用,通常一个应用被推到后台后,如果及时不清理会占用宝贵的系统内存资源,并且会影响系统功耗。因此,有必要提供一种应用程序管控方法、装置、介质及电子设备。
技术问题
本申请实施例提供一种应用程序管控方法、装置、介质及电子设备,以智能关闭应用程序。
技术解决方案
本申请实施例提供一种应用程序管控方法,应用于电子设备,所述应用程序管控方法包括以下步骤:
获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算,生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
本申请实施例还提供一种应用程序管控方法装置,所述装置包括:
获取模块,用于获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
生成模块,用于采用BP神经网络算法对样本向量集进行计算,生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
计算模块,用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
本申请实施例还提供一种介质,所述介质中存储有多条指令,所述指令适于由处理器加载以执行上述的应用程序管控方法。
本申请实施例还提供一种电子设备,所述电子设备包括处理器和存储器,所述电子设备与所述存储器电性连接,所述存储器用于存储指令和数据,所述处理器用于执行以下步骤:
获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
采用BP神经网络算法对样本向量集进行计算,生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
有益效果
本申请实施例提供一种应用程序管控方法、装置、介质及电子设备,以智能关闭应用程序。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的应用程序管控装置的一种系统示意图。
图2为本申请实施例提供的应用程序管控装置的应用场景示意图。
图3为本申请实施例提供的应用程序管控方法的一种流程示意图。
图4为本申请实施例提供的应用程序管控方法的另一种流程示意图。
图5为本申请实施例提供的装置的一种结构示意图。
图6为本申请实施例提供的装置的另一种结构示意图。
图7为本申请实施例提供的电子设备的一种结构示意图。
图8为本申请实施例提供的电子设备的另一种结构示意图。
本发明的实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
一种应用程序管控方法,应用于电子设备,其中,所述应用程序管控方法包括:
获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
在所述应用程序管控方法中,在所述采用BP神经网络算法对样本向量集进行计算,生成第一训练模型中,包括:
定义网络结构;以及
将样本向量集带入网络结构进行计算,得到第一训练模型。
在所述应用程序管控方法中,在所述定义网络结构中,包括:
设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同;
设定隐含层,所述隐含层包括M个节点;
设定分类层,所述分类层采用softmax函数,所述softmax函数为
Figure PCTCN2018110519-appb-000001
其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
Figure PCTCN2018110519-appb-000002
为第j个中间值;
设定输出层,所述输出层包括2个节点;
设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
Figure PCTCN2018110519-appb-000003
其中,所述f(x)的范围为0到1;
设定批量大小,所述批量大小为A;以及
设定学习率,所述学习率为B。
在所述应用程序管控方法中，在所述将样本向量集带入网络结构进行计算，得到第一训练模型中，包括：
在输入层输入所述样本向量集进行计算,得到输入层的输出值;
在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值;
在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T,其中,p 1为预测关闭概率值,p 2为预测保留概率值;
将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T;以及
根据预测结果值y修正所述网络结构,得到第一训练模型。
在所述应用程序管控方法中,在所述采用非线性支持向量机算法生成第二训练模型中,包括:
对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
通过定义高斯核函数,得到第二训练模型。
在所述应用程序管控方法中,当第二关闭概率值小于判定值,则保留所述应用程序。
在所述应用程序管控方法中,还包括:当第一关闭概率值处于犹豫区间之外时,则判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
在所述应用程序管控方法中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
在所述应用程序管控方法中,在所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序中,包括:
采集所述应用程序的当前特征信息s;
将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率;
判断第一关闭概率值是否处于犹豫区间之内;
将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值;
判断第二关闭概率值是否大于判定值;以及
判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
一种应用程序管控装置,所述装置包括:
获取模块,用于获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
生成模块,用于采用BP神经网络算法对样本向量集进行计算,生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;
计算模块,用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
一种介质,其中,所述介质中存储有多条指令,所述指令适于由处理器加载以执行如前所述的应用程序管控方法。
一种电子设备,其中,所述电子设备包括处理器和存储器,所述电子设备与所述存储器电性连接,所述存储器用于存储指令和数据,所述处理器用于执行:
获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
采用BP神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二 训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
在所述电子设备中,在所述采用BP神经网络算法对样本向量集进行计算,生成第一训练模型中,所述处理器还执行:
定义网络结构;以及
将样本向量集带入网络结构进行计算,得到第一训练模型。
在所述电子设备中,在所述定义网络结构中,所述处理器还执行:
设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同;
设定隐含层,所述隐含层包括M个节点;
设定分类层,所述分类层采用softmax函数,所述softmax函数为
Figure PCTCN2018110519-appb-000004
其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
Figure PCTCN2018110519-appb-000005
为第j个中间值;
设定输出层,所述输出层包括2个节点;
设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
Figure PCTCN2018110519-appb-000006
其中,所述f(x)的范围为0到1;
设定批量大小,所述批量大小为A;以及
设定学习率,所述学习率为B。
在所述电子设备中,在所述将样本向量集带入网络结构进行计算,得到第一训练模型中,所述处理器还执行:
在输入层输入所述样本向量集进行计算,得到输入层的输出值;
在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值;
在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T,其中,p 1为预测关闭概率值,p 2为预测保留概率值;
将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T;以及
根据预测结果值y修正所述网络结构,得到第一训练模型。
在所述电子设备中,在所述采用非线性支持向量机算法生成第二训练模型中,所述处理器还执行:
对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
通过定义高斯核函数,得到第二训练模型。
在所述电子设备中,当第二关闭概率值小于判定值,则保留所述应用程序
在所述电子设备中,所述处理器还执行:当第一关闭概率值处于犹豫区间之外时,则判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
在所述电子设备中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
在所述电子设备中,在所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序中,所述处理器还执行:
采集所述应用程序的当前特征信息s;
将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率;
判断第一关闭概率值是否处于犹豫区间之内;
将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值;
判断第二关闭概率值是否大于判定值;以及
判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
本申请提供的应用程序管控方法,主要应用于电子设备,如:手环、智能手机、基于苹果系统或安卓系统的平板电脑、或基于Windows或Linux系统的笔记本电脑等智能移动电子设备。需要说明的是,所述应用程序可以为聊天应用程序、视频应用程序、音乐应用程序、购物应用程序、共享单车应用程序或手机银行应用程序等。
请参阅图1,图1为本申请实施例提供的应用程序管控装置的系统示意图。所述应用程序管控装置主要用于:从数据库中获取应用程序的历史特征信息x i,然后,将历史特征信息x i通过算法进行计算,得到训练模型,其次,将应用程序的当前特征信息s输入训练模型进行计算,通过计算结果判断应用程序是否可关闭,以对预设应用程序进行管控,例如关闭、或者冻结等。
具体的,请参阅图2,图2为本申请实施例提供的应用程序管控方法的应用场景示意图。在一种实施例中,从数据库中获取应用程序的历史特征信息x i,然后,将历史特征信息x i通过算法进行计算,得到训练模型,其次,当应用程序管控装置在检测到应用程序进入电子设备的后台时,将应用程序的当前特征信息s输入训练模型进行计算,通过计算结果判断应用程序是否可关闭。比如,从数据库中获取应用程序a的历史特征信息x i,然后,将历史特征信息x i通过算法进行计算,得到训练模型,其次,当应用程序管控装置在检测到应用程序a进入电子设备的后台时,将应用程序的当前特征信息s输入训练模型进行计算,通过计算结果判断应用程序a可关闭,并将应用程序a关闭,当应用程序管控装置在检测到应用程序b进入电子设备的后台时,将应用程序b的当前特征信息s输入训练模型进行计算,通过计算结果判断应用程序b需要保留,并将应用程序b保留。
本申请实施例提供一种应用程序管控方法,所述应用程序管控方法的执行主体可以是本发明实施例提供的应用程序管控装置,或者成了该应用程序管控装置的电子设备,其中该应用程序管控装置可以采用硬件或者软件的方式实现。
请参阅图3,图3为本申请实施例提供的应用程序管控方法的流程示意图。本申请实施例提供的应用程序管控方法应用于电子设备,具体流程可以如下:
步骤S11,获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
其中,从样本数据库中获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
其中,所述多个维度的特征信息可以参考表1。
Figure PCTCN2018110519-appb-000007
Figure PCTCN2018110519-appb-000008
表1
需要说明的是,以上表1示出的10个维度的特征信息仅为本申请实施例中的一种,但是本申请并不局限于表1示出的10个维度的特征信息,也可以为其中之一、或者其中至少两个,或者全部,亦或者还可以包括其他维度的特征信息,例如,当前是否在充电、当前的电量或者当前是否连接WiFi等。
在一种实施例中,可以选取6个维度的历史特征信息:
A、应用程序在后台驻留的时间;
B、屏幕是否为亮,例如,屏幕亮,记为1,屏幕熄灭,记为0;
C、当周总使用次数统计;
D、当周总使用时间统计;
E、WiFi是否打开,例如,WiFi打开,记为1,WiFi关闭,记为0;以及
F、当前是否在充电,例如,当前正在充电,记为1,当前未在充电,记为0。
步骤S12,采用BP神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型。
请参阅图4,图4为本申请实施例提供的应用程序管控方法的流程示意图。
所述步骤S12包括为步骤S121和步骤S122,其中,步骤S121为采用BP神经网络算法对样本向量集进行计算生成第一训练模型,步骤S122为采用非线性支持向量机算法生成第二训练模型。
需要说明的是,步骤S121和步骤S122的顺序可以调换。
所述步骤S121可以包括:
步骤S1211:定义网络结构;以及
步骤S1212:将样本向量集带入网络结构进行计算,得到第一训练模型。
在步骤S1211中,所述定义网络结构包括:
步骤S1211a,设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同。
其中,所述历史特征信息x i的维数小于10个,所述输入层的节点数小于10个,以简化运算过程。
在一种实施例中,所述历史特征信息x i的维数为6维,所述输入层包括6个节点。
步骤S1211b,设定隐含层,所述隐含层包括M个节点。
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。
步骤S1211c,设定分类层,所述分类层采用softmax函数,所述softmax函数为
Figure PCTCN2018110519-appb-000009
其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
Figure PCTCN2018110519-appb-000010
为第j个中间值。
步骤S1211d,设定输出层,所述输出层包括2个节点。
步骤S1211e,设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
Figure PCTCN2018110519-appb-000011
其中,所述f(x)的范围为0到1。
步骤S1211f,设定批量大小,所述批量大小为A。
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。
在一种实施例中,所述批量大小为128。
步骤S1211g,设定学习率,所述学习率为B。
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。
在一种实施例中,所述学习率为0.9。
需要说明的是,所述步骤S1211a、S1211b、S1211c、S1211d、S1211e、S1211f、S1211g的先后顺序可以灵活调整。
在步骤S1212中,所述将样本向量集带入网络结构进行计算,得到第一训练模型的步骤可以包括:
步骤S1212a,在输入层输入所述样本向量集进行计算,得到输入层的输出值。
步骤S1212b,在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值。
其中,所述输入层的输出值为所述隐含层的输入值。
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。
步骤S1212c,在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T,其中,p 1为预测关闭概率值,p 2为预测保留概率值。
其中,所述隐含层的输出值为所述分类层的输入值。
在一种实施例中,所述隐含层可以包括多个隐含分层。最后一个隐含分层的输出值为所述分类层的输入值。
步骤S1212d,将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T
其中,所述分类层的输出值为所述输出层的输入值。
步骤S1212e,根据预测结果值y修正所述网络结构,得到第一训练模型。
所述步骤S122可以包括:
步骤S1221:对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
步骤S1222:通过定义高斯核函数,得到第二训练模型。
在步骤S1221中,对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i
比如,可以对样本向量集中的样本向量进行标记,在非线性支持向量机算法中输入样本向量,生成每个样本向量的标记结果y i,形成样本向量结果集T={(x 1,y 1),(x 2,y 2),...,(x m,y m)},输入样本向量x i∈R n,y i∈{+1,-1},i=1,2,3,...,n,R n表示样本向量所在的输入空间,n表示输入空间的维数,y i表示输入样本向量对应的标记结果。
在步骤S1222中,通过定义高斯核函数,得到第二训练模型。
在一种实施例中,所述核函数为高斯核函数为
Figure PCTCN2018110519-appb-000012
其中,K(x,x i)为空间中任一点x到某一中心x i之间欧氏距离,σ为高斯核函数的宽度参数。
在一种实施例中,所述通过定义高斯核函数,得到训练模型的步骤可以为通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,得到第二训练模型,所述模型函数为
Figure PCTCN2018110519-appb-000013
所述分类决策函数为
Figure PCTCN2018110519-appb-000014
其中,f(x)为分类决策值,α i是拉格朗日因子,b为偏置系数,当f(x)=1时,代表所述应用程序”需关闭”,当f(x)=-1时,代表所述应用程序“需保留”。
在一种实施例中,所述通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,得到训练模型的步骤可以为通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,通过模型函数和分类决策函数定义目标最优化函数,通过序列最小优化算法得到目标优化函数的最优解,得到第二训练模型,所述目标优化函数为
Figure PCTCN2018110519-appb-000015
其中,所述目标最优化函数为在参数(α 12,…,α i)上求最小值,一个α i对应于一个样本(x i,y i),变量的总数等于训练样本的容量m。
在一种实施例中,所述最优解可以记为
Figure PCTCN2018110519-appb-000016
所述第二训练模型为
Figure PCTCN2018110519-appb-000017
所述g(x)为训练模型输出值,所述输出值为第二关闭概率值。
步骤S13,当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
请参阅图4,在一种实施例中,所述步骤S13可以包括:
步骤S131:采集所述应用程序的当前特征信息s。
其中,采集的所述应用程序的当前特征信息s的维度与采集的所述应用程序的历史特征信息x i的维度相同。
步骤S132:将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率。
其中,将当前特征信息s输入所述第一训练模型进行计算得到分类层的概率值[p 1’ p 2’] T,其中,p 1’为第一关闭概率值,p 2’为第一保留概率值。
步骤S133:判断第一关闭概率值是否处于犹豫区间之内。
其中,所述犹豫区间为0.4-0.6。所述犹豫区间的最小值为0.4。所述犹豫区间的最大值为0.6。
当所述第一关闭概率值处于犹豫区间之内,执行步骤S134和步骤S135,当所述第一关闭概率值处于犹豫区间之外,执行步骤S136。
步骤S134,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值。
其中,将当前特征信息s带入公式计算
Figure PCTCN2018110519-appb-000018
得到第二关闭概率值g(s)。
步骤S135,判断第二关闭概率值是否大于判定值。
需要说明的是,所述判定值可以设置为0。当g(s)>0,则关闭所述应用程序;当g(s)<0,则保 留所述应用程序。
步骤S136,判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
其中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
本申请所提供的应用程序管控方法,通过获取历史特征信息x i,采用BP神经网络算法生成第一训练模型,采用非线性支持向量机算法生成第二训练模型,当检测应用程序进入后台时,从而将应用程序的当前特征信息s带入第一训练模型,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,进而判断所述应用程序是否需要关闭,智能关闭应用程序。
请参阅图5,图5为本申请实施例提供的应用程序管控装置的结构示意图。所述装置30包括获取模块31,生成模块32和计算模块33。
需要说明的是,所述应用程序可以为聊天应用程序、视频应用程序、音乐应用程序、购物应用程序、共享单车应用程序或手机银行应用程序等。
所述获取模块31用于获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
其中,从样本数据库中获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
请参阅图6,图6为本申请实施例提供的应用程序管控装置的结构示意图。所述装置30还包括检测模块34,用于检测所述应用程序进入后台。
所述装置30还可以包括储存模块35。所述储存模块35用于储存应用程序的历史特征信息x i
其中,所述多个维度的特征信息可以参考表2。
Figure PCTCN2018110519-appb-000019
表2
需要说明的是,以上表2示出的10个维度的特征信息仅为本申请实施例中的一种,但是本申请并不局限于表1示出的10个维度的特征信息,也可以为其中之一、或者其中至少两个,或者全部,亦或者还可以包括其他维度的特征信息,例如,当前是否在充电、当前的电量或者当前是否连接WiFi等。
在一种实施例中,可以选取6个维度的历史特征信息:
A、应用程序在后台驻留的时间;
B、屏幕是否为亮,例如,屏幕亮,记为1,屏幕熄灭,记为0;
C、当周总使用次数统计;
D、当周总使用时间统计;
E、WiFi是否打开,例如,WiFi打开,记为1,WiFi关闭,记为0;以及
F、当前是否在充电,例如,当前正在充电,记为1,当前未在充电,记为0。
所述生成模块32用于采用BP神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型。
所述生成模块32包括第一生成模块321和第二生成模块322。所述第一生成模块321用于采用BP神经网络算法对样本向量集进行计算生成第一训练模型。所述第二生成模块322用于采用非线性支持向量机算法生成第二训练模型。
请参阅图6,所述第一生成模块321包括定义模块3211和第一求解模块3212。所述定义模块3211用于定义网络结构。
所述定义模块3211可以包括输入层定义模块3211a、隐含层定义模块3211b、分类层定义模块3211c、输出层定义模块3211d、激活函数定义模块3211e、批量大小定义模块3211f和学习率定义模块3211g。
所述输入层定义模块3211a用于设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同。
其中,所述历史特征信息x i的维数小于10个,所述输入层的节点数小于10个,以简化运算过程。
在一种实施例中,所述历史特征信息x i的维数为6维,所述输入层包括6个节点。
所述隐含层定义模块3211b用于设定隐含层,所述隐含层包括M个节点。
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。
所述分类层定义模块3211c用于设定分类层,所述分类层采用softmax函数,所述softmax函数为
Figure PCTCN2018110519-appb-000020
其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
Figure PCTCN2018110519-appb-000021
为第j个中间值。
所述输出层定义模块3211d用于设定输出层,所述输出层包括2个节点。
所述激活函数定义模块3211e用于设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
Figure PCTCN2018110519-appb-000022
其中,所述f(x)的范围为0到1。
所述批量大小定义模块3211f用于设定批量大小,所述批量大小为A。
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。
在一种实施例中,所述批量大小为128。
所述学习率定义模块3211g用于设定学习率,所述学习率为B。
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。
在一种实施例中,所述学习率为0.9。
需要说明的是,所述输入层定义模块3211a设定输入层、所述隐含层定义模块3211b设定隐含层、所述分类层定义模块3211c设定分类层、所述输出层定义模块3211d设定输出层、所述激活函数定义模块3211e设定激活函数、所述批量大小定义模块3211f设定批量大小和所述学习率定义模块3211g设定学习率的先后顺序可以灵活调整。
所述第一求解模块3212用于将样本向量集带入网络结构进行计算,得到第一训练模型。
所述第一求解模块3212可以包括第一求解分模块3212a、第二求解分模块3212b、第三求解分模块3212c、第四求解分模块3212d和修正模块3212e。
所述第一求解分模块3212a用于在输入层输入所述样本向量集进行计算,得到输入层的输出值。
所述第二求解分模块3212b用于在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值。
其中,所述输入层的输出值为所述隐含层的输入值。
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。
所述第三求解分模块3212c用于在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T
其中,所述隐含层的输出值为所述分类层的输入值。
所述第四求解分模块3212d用于将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T
其中,所述分类层的输出值为所述输出层的输入值。
所述修正模块3212e用于根据预测结果值y修正所述网络结构,得到第一训练模型。
所述第二生成模块322包括训练模块3221和第二求解模块3222。
所述训练模块3221用于对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i
比如,可以对样本向量集中的样本向量进行标记,在非线性支持向量机算法中输入样本向量,生成每个样本向量的标记结果y i,形成样本向量结果集T={(x 1,y 1),(x 2,y 2),...,(x m,y m)},输入样本向量x i∈R n,y i∈{+1,-1},i=1,2,3,...,n,R n表示样本向量所在的输入空间,n表示输入空间的维数,y i表示输入样本向量对应的标记结果。
所述第二求解模块3222用于通过定义高斯核函数,得到第二训练模型。
在一种实施例中,所述核函数为高斯核函数为
Figure PCTCN2018110519-appb-000023
其中,K(x,x i)为空间中任一点x到某一中心x i之间欧氏距离,σ为高斯核函数的宽度参数。
在一种实施例中,所述第二求解模块3222可以用于通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,得到训练模型,所述模型函数为
Figure PCTCN2018110519-appb-000024
所述分类决策函数为
Figure PCTCN2018110519-appb-000025
其中,f(x)为分类决策值,α i是拉格朗日因子,b为偏置系数,当f(x)=1时,代表所述应用程序“需关闭”,当f(x)=-1时,代表所述应用程序“需保留”。
在一种实施例中,所述第二求解模块3222可以用于通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,通过模型函数和分类决策函数定义目标最优化函数,通过序列最小优化算法得到目标优化函数的最优解,得到训练模型,所述目标优化函数为
Figure PCTCN2018110519-appb-000026
其中,所述目标最优化函数为在参数(α 12,…,α i)上求最小值,一个α i对应于一个样本(x i,y i),变量的总数等于训练样本的容量m。
在一种实施例中,所述最优解可以记为
Figure PCTCN2018110519-appb-000027
所述第二训练模型为
Figure PCTCN2018110519-appb-000028
所述g(x)为训练模型输出值,所述输出值为第二关闭概率值。
所述计算模块33用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
请参阅图6,在一种实施例中,所述计算模块33可以包括采集模块330、第一计算模块331和第二计算模块332。
所述采集模块330用于当应用程序进入后台,采集所述应用程序的当前特征信息s;
其中,采集的所述应用程序的当前特征信息s的维度与采集的所述应用程序的历史特征信息x i的维度相同。
所述第一计算模块331用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值。
其中,将当前特征信息s输入所述第一训练模型进行计算得到分类层的概率值[p 1’ p 2’] T,其中,p 1’为第一关闭概率值,p 2’为第一保留概率值。
所述计算模块33还包括第一判断模块333。所述第一判断模块333用于判断第一关闭概率值是否处于犹豫区间。
其中,所述犹豫区间为0.4-0.6。所述犹豫区间的最小值为0.4。所述犹豫区间的最大值为0.6。
所述第二计算模块332用于当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值。
其中,将当前特征信息s带入公式计算
Figure PCTCN2018110519-appb-000029
得到第二关闭概率值g(s)。
所述计算模块33还包括第二判断模块334。所述第二判断模块334用于判断第二关闭概率值是否大于判定值。
需要说明的是,所述判定值可以设置为0。当g(s)>0,则关闭所述应用程序;当g(s)<0,则保留所述应用程序。
所述计算模块33还包括第三判断模块335。所述第三判断模块335用于当第一关闭概率值处于犹豫区间之外时,判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
其中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
在一种实施例中,所述采集模块331还可以用于根据预定采集时间定时采集当前特征信息s,并将当前特征信息s存入储存模块35,所述采集模块331还用于采集检测到应用程序进入后台的时间点对应的当前特征信息s,并将该当前特征信息s输入计算模块33带入训练模型进行计算。
所述装置30还可以包括关闭模块36,用于当判断应用程序需要关闭时,将所述应用程序关闭。
本申请所提供的用于应用程序管控方法的装置,通过获取历史特征信息x i,采用BP神经网络算法生成第一训练模型,采用非线性支持向量机算法生成第二训练模型,当检测应用程序进入后台时,从而将应用程序的当前特征信息s带入第一训练模型,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,进而判断所述应用程序是否需要关闭,智能关闭应用程序。
请参阅图7,图7为本申请实施例提供的电子设备的结构示意图。所述电子设备500包括:处理器501和存储器502。其中,处理器501与存储器502电性连接。
处理器501是电子设备500的控制中心,利用各种接口和线路连接整个电子设备500的各个部分,通过运行或加载存储在存储器502内的应用程序,以及调用存储在存储器502内的数据,执行电子设备的各种功能和处理数据,从而对电子设备500进行整体监控。
在本实施例中,电子设备500中的处理器501会按照如下的步骤,将一个或一个以上的应用程序的进程对应的指令加载到存储器502中,并由处理器501来运行存储在存储器502中的应用程序,从而实现各种功能:
获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
采用BP神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
需要说明的是,所述应用程序可以为聊天应用程序、视频应用程序、音乐应用程序、购物应用程序、共享单车应用程序或手机银行应用程序等。
其中,从样本数据库中获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
其中,所述多个维度的特征信息可以参考表3。
Figure PCTCN2018110519-appb-000030
Figure PCTCN2018110519-appb-000031
表3
需要说明的是,以上表3示出的10个维度的特征信息仅为本申请实施例中的一种,但是本申请并不局限于表1示出的10个维度的特征信息,也可以为其中之一、或者其中至少两个,或者全部,亦或者还可以包括其他维度的特征信息,例如,当前是否在充电、当前的电量或者当前是否连接WiFi等。
在一种实施例中,可以选取6个维度的历史特征信息:
A、应用程序在后台驻留的时间;
B、屏幕是否为亮,例如,屏幕亮,记为1,屏幕熄灭,记为0;
C、当周总使用次数统计;
D、当周总使用时间统计;
E、WiFi是否打开,例如,WiFi打开,记为1,WiFi关闭,记为0;以及
F、当前是否在充电,例如,当前正在充电,记为1,当前未在充电,记为0。
在一种实施例中,所述处理器501采用BP神经网络算法对样本向量集进行计算,生成第一训练模型还包括:
定义网络结构;以及
将样本向量集带入网络结构进行计算,得到第一训练模型。
其中,所述定义网络结构包括:
设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同;
其中,所述历史特征信息x i的维数小于10个,所述输入层的节点数小于10个,以简化运算过程。
在一种实施例中,所述历史特征信息x i的维数为6维,所述输入层包括6个节点。
设定隐含层,所述隐含层包括M个节点。
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。
设定分类层,所述分类层采用softmax函数,所述softmax函数为
Figure PCTCN2018110519-appb-000032
其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
Figure PCTCN2018110519-appb-000033
为第j个中间值。
设定输出层,所述输出层包括2个节点。
设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
Figure PCTCN2018110519-appb-000034
其中,所述f(x)的范围为0到1。
设定批量大小,所述批量大小为A。
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。
在一种实施例中,所述批量大小为128。
设定学习率,所述学习率为B。
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。
在一种实施例中,所述学习率为0.9。
需要说明的是,所述设定输入层、设定隐含层、设定分类层、设定输出层、设定激活函数、设定批量大小、设定学习率的先后顺序可以灵活调整。
所述将样本向量集带入网络结构进行计算,得到第一训练模型的步骤可以包括:
在输入层输入所述样本向量集进行计算,得到输入层的输出值。
在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值。
其中,所述输入层的输出值为所述隐含层的输入值。
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。
在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T
其中,所述隐含层的输出值为所述分类层的输入值。
在一种实施例中,所述隐含层可以包括多个隐含分层。最后一个隐含分层的输出值为所述分类层的输入值。
将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T
其中,所述分类层的输出值为所述输出层的输入值。
根据预测结果值y修正所述网络结构,得到第一训练模型。
在一种实施例中,所述处理器501采用非线性支持向量机算法对样本向量集进行计算,生成第二训练模型还包括:
对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
通过定义高斯核函数,得到第二训练模型。
在一种实施例中,可以对样本向量集中的样本向量进行标记,在非线性支持向量机算法中输入样本向量,生成每个样本向量的标记结果y i,形成样本向量结果集T={(x 1,y 1),(x 2,y 2),...,(x m,y m)},输入样本向量x i∈R n,y i∈{+1,-1},i=1,2,3,...,n,R n表示样本向量所在的输入空间,n表示输入空间的维数,y i表示输入样本向量对应的标记结果。
在一种实施例中,所述核函数为高斯核函数为
Figure PCTCN2018110519-appb-000035
其中,K(x,x i)为空间中任一点x到某一中心x i之间欧氏距离,σ为高斯核函数的宽度参数。
在一种实施例中,所述通过定义高斯核函数,得到第二训练模型的步骤为通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,得到第二训练模型,所述模型函数为
Figure PCTCN2018110519-appb-000036
所述分类决策函数为
Figure PCTCN2018110519-appb-000037
其中,f(x)为分类决策值,α i是拉格朗日因子,b为偏置系数,当f(x)=1时,代表所述应用程序”需关闭”,当f(x)=-1时,代表所述应用程序“需保留”。
在一种实施例中,所述通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,得到训练模型的步骤为通过定义高斯核函数,根据高斯核函数定义模型函数和分类决策函数,通过模型函数和分类决策函数定义目标最优化函数,通过序列最小优化算法得到目标优化函数的最优解,得到第二训练模型,所述目标优化函数为
Figure PCTCN2018110519-appb-000038
其中,所述目标最优化函数为 在参数(α 12,…,α i)上求最小值,一个α i对应于一个样本(x i,y i),变量的总数等于训练样本的容量m。
在一种实施例中,所述最优解可以记为
Figure PCTCN2018110519-appb-000039
所述第二训练模型为
Figure PCTCN2018110519-appb-000040
所述g(x)为训练模型输出值,所述输出值为第二关闭概率值。
所述当应用程序进入后台,所述处理器501将所述应用程序的当前特征信息s输入所述训练模型进行计算的步骤包括:
采集所述应用程序的当前特征信息s。
其中,采集的所述应用程序的当前特征信息s的维度与采集的所述应用程序的历史特征信息x i的维度相同。
将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率值。
其中,将当前特征信息s输入所述训练模型进行计算得到分类层的预测概率值[p 1’ p 2’] T,其中,p 1’为第一关闭概率值,p 2’为第一保留概率值。
判断第一关闭概率值是否处于犹豫区间之内。
其中,所述犹豫区间为0.4-0.6。所述犹豫区间的最小值为0.4。所述犹豫区间的最大值为0.6。
当所述第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值。
其中,将当前特征信息s带入公式计算
Figure PCTCN2018110519-appb-000041
得到第二关闭概率值g(s)。
判断第二关闭概率值是否大于判定值。
需要说明的是,所述判定值可以设置为0。当g(s)>0,则关闭所述应用程序;当g(s)<0,则保留所述应用程序。
判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
其中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
存储器502可用于存储应用程序和数据。存储器502存储的程序中包含有可在处理器中执行的指令。所述程序可以组成各种功能模块。处理器501通过运行存储在存储器502的程序,从而执行各种功能应用以及数据处理。
在一些实施例中,如图8所示,图8为本申请实施例提供的电子设备的结构示意图。所述电子设备500还包括:射频电路503、显示屏504、控制电路505、输入单元506、音频电路507、传感器508以及电源509。其中,处理器501分别与射频电路503、显示屏504、控制电路505、输入单元506、音频电路507、传感器508以及电源509电性连接。
射频电路503用于收发射频信号,以通过无线通信网络与服务器或其他电子设备进行通信。
显示屏504可用于显示由用户输入的信息或提供给用户的信息以及终端的各种图形用户接口,这些图形用户接口可以由图像、文本、图标、视频和其任意组合来构成。
控制电路505与显示屏504电性连接,用于控制显示屏504显示信息。
输入单元506可用于接收输入的数字、字符信息或用户特征信息(例如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
音频电路507可通过扬声器、传声器提供用户与终端之间的音频接口。
传感器508用于采集外部环境信息。传感器508可以包括环境亮度传感器、加速度传感器、陀螺仪等传感器中的一种或多种。
电源509用于给电子设备500的各个部件供电。在一些实施例中,电源509可以通过电源管理系统与处理器501逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图8中未示出,电子设备500还可以包括摄像头、蓝牙模块等,在此不再赘述。
本申请所提供的电子设备,通过获取历史特征信息x i,采用BP神经网络算法生成第一训练模型,采用非线性支持向量机算法生成第二训练模型,当检测应用程序进入后台时,从而将应用程序的当前特征信息s带入第一训练模型,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,进而判断所述应用程序是否需要关闭,智能关闭应用程序。
本发明实施例还提供一种介质,该介质中存储有多条指令,该指令适于由处理器加载以执行上述任一实施例所述的应用程序管控方法。
本发明实施例提供的应用程序管控方法、装置、介质及电子设备属于同一构思,其具体实现过程详见说明书全文,此处不再赘述。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
以上对本申请实施例提供的应用程序管控方法、装置、介质及电子设备进行了详细介绍，本文中应用了具体个例对本申请的原理及实施例进行了阐述，以上实施例的说明只是用于帮助理解本申请。同时，对于本领域的技术人员，依据本申请的思想，在具体实施例及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种应用程序管控方法,应用于电子设备,其中,所述应用程序管控方法包括:
    获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
    采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
    当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
  2. 如权利要求1所述的应用程序管控方法,其中,在所述采用BP神经网络算法对样本向量集进行计算,生成第一训练模型中,包括:
    定义网络结构;以及
    将样本向量集带入网络结构进行计算,得到第一训练模型。
  3. 如权利要求2所述的应用程序管控方法,其中,在所述定义网络结构中,包括:
    设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同;
    设定隐含层,所述隐含层包括M个节点;
    设定分类层,所述分类层采用softmax函数,所述softmax函数为
    Figure PCTCN2018110519-appb-100001
    其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
    Figure PCTCN2018110519-appb-100002
    为第j个中间值;
    设定输出层,所述输出层包括2个节点;
    设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
    Figure PCTCN2018110519-appb-100003
    其中,所述f(x)的范围为0到1;
    设定批量大小,所述批量大小为A;以及
    设定学习率,所述学习率为B。
  4. 如权利要求3所述的应用程序管控方法，其中，在所述将样本向量集带入网络结构进行计算，得到第一训练模型中，包括：
    在输入层输入所述样本向量集进行计算,得到输入层的输出值;
    在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值;
    在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T,其中,p 1为预测关闭概率值,p 2为预测保留概率值;
    将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T;以及
    根据预测结果值y修正所述网络结构,得到第一训练模型。
  5. 如权利要求1所述的应用程序管控方法,其中,在所述采用非线性支持向量机算法生成第二训练模型中,包括:
    对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
    通过定义高斯核函数,得到第二训练模型。
  6. 如权利要求1所述的应用程序管控方法,其中,当第二关闭概率值小于判定值,则保留所述应用程序。
  7. 如权利要求1所述的应用程序管控方法,其中,还包括:当第一关闭概率值处于犹豫区间之外时,则判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
  8. 如权利要求7所述的应用程序管控方法,其中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
  9. 如权利要求1所述的应用程序管控方法,其中,在所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序中,包括:
    采集所述应用程序的当前特征信息s;
    将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率;
    判断第一关闭概率值是否处于犹豫区间之内;
    将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值;
    判断第二关闭概率值是否大于判定值;
    判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
  10. 一种应用程序管控装置,其中,所述装置包括:
    获取模块,用于获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
    生成模块,用于采用BP神经网络算法对样本向量集进行计算,生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;
    计算模块,用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
  11. 一种介质,其中,所述介质中存储有多条指令,所述指令适于由处理器加载以执行如权利要求1至9中任一项所述的应用程序管控方法。
  12. 一种电子设备,其中,所述电子设备包括处理器和存储器,所述电子设备与所述存储器电性连接,所述存储器用于存储指令和数据,所述处理器用于执行:
    获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
    采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算生成第一训练模型,采用非线性支持向量机算法生成第二训练模型;以及
    当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序。
  13. 如权利要求12所述的电子设备,其中,在所述采用BP神经网络算法对样本向量集进行计算,生成第一训练模型中,所述处理器还执行:
    定义网络结构;以及
    将样本向量集带入网络结构进行计算,得到第一训练模型。
  14. 如权利要求13所述的电子设备,其中,在所述定义网络结构中,所述处理器还执行:
    设定输入层,所述输入层包括N个节点,所述输入层的节点数与所述历史特征信息x i的维数相同;
    设定隐含层,所述隐含层包括M个节点;
    设定分类层,所述分类层采用softmax函数,所述softmax函数为
    Figure PCTCN2018110519-appb-100004
    其中,p为预测概率值,Z K为中间值,C为预测结果的类别数,
    Figure PCTCN2018110519-appb-100005
    为第j个中间值;
    设定输出层,所述输出层包括2个节点;
    设定激活函数,所述激活函数采用sigmoid函数,所述sigmoid函数为
    Figure PCTCN2018110519-appb-100006
    其中,所述f(x)的范围为0到1;
    设定批量大小,所述批量大小为A;以及
    设定学习率,所述学习率为B。
  15. 如权利要求14所述的电子设备,其中,在所述将样本向量集带入网络结构进行计算,得到第一训练模型中,所述处理器还执行:
    在输入层输入所述样本向量集进行计算,得到输入层的输出值;
    在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值;
    在所述分类层输入所述隐含层的输出值进行计算,得到所述预测概率值[p 1 p 2] T,其中,p 1为预测关闭概率值,p 2为预测保留概率值;
    将所述预测概率值带入输出层进行计算,得到预测结果值y,当p 1大于p 2时,y=[1 0] T,当p 1小于等于p 2时,y=[0 1] T;以及
    根据预测结果值y修正所述网络结构,得到第一训练模型。
  16. 如权利要求12所述的电子设备,其中,在所述采用非线性支持向量机算法生成第二训练模型中,所述处理器还执行:
    对样本向量集中的样本向量进行标记,生成每个样本向量的标记结果y i;以及
    通过定义高斯核函数,得到第二训练模型。
  17. 如权利要求12所述的电子设备,其中,当第二关闭概率值小于判定值,则保留所述应用程序。
  18. 如权利要求12所述的电子设备,其中,所述处理器还执行:当第一关闭概率值处于犹豫区间之外时,则判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
  19. 如权利要求18所述的电子设备,其中,当第一关闭概率值小于犹豫区间的最小值,则保留所述应用程序;当第一关闭概率值大于犹豫区间的最大值,则关闭所述应用程序。
  20. 如权利要求12所述的电子设备,其中,在所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述第一训练模型进行计算,得到第一关闭概率值,当第一关闭概率值处于犹豫区间之内,将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值,当第二关闭概率值大于判定值,则关闭所述应用程序中,所述处理器还执行:
    采集所述应用程序的当前特征信息s;
    将当前特征信息s带入第一训练模型进行计算,得到第一关闭概率;
    判断第一关闭概率值是否处于犹豫区间之内;
    将所述应用程序的当前特征信息s输入所述第二训练模型进行计算,得到第二关闭概率值;
    判断第二关闭概率值是否大于判定值;以及
    判断第一关闭概率值是小于犹豫区间的最小值还是大于犹豫区间的最大值。
PCT/CN2018/110519 2017-10-31 2018-10-16 应用程序管控方法、装置、介质及电子设备 WO2019085750A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18873772.0A EP3706043A4 (en) 2017-10-31 2018-10-16 METHOD AND DEVICE FOR CONTROLLING AN APPLICATION PROGRAM, MEDIUM AND ELECTRONIC DEVICE
US16/848,270 US20200241483A1 (en) 2017-10-31 2020-04-14 Method and Device for Managing and Controlling Application, Medium, and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711047050.5 2017-10-31
CN201711047050.5A CN107844338B (zh) 2017-10-31 2017-10-31 应用程序管控方法、装置、介质及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/848,270 Continuation US20200241483A1 (en) 2017-10-31 2020-04-14 Method and Device for Managing and Controlling Application, Medium, and Electronic Device

Publications (1)

Publication Number Publication Date
WO2019085750A1 true WO2019085750A1 (zh) 2019-05-09

Family

ID=61681681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110519 WO2019085750A1 (zh) 2017-10-31 2018-10-16 应用程序管控方法、装置、介质及电子设备

Country Status (4)

Country Link
US (1) US20200241483A1 (zh)
EP (1) EP3706043A4 (zh)
CN (1) CN107844338B (zh)
WO (1) WO2019085750A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461897A (zh) * 2020-02-28 2020-07-28 上海商汤智能科技有限公司 一种获取核保结果的方法及相关装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844338B (zh) * 2017-10-31 2019-09-13 Oppo广东移动通信有限公司 应用程序管控方法、装置、介质及电子设备
JP6699702B2 (ja) * 2018-10-17 2020-05-27 トヨタ自動車株式会社 内燃機関の制御装置及びその制御方法、並びに内燃機関を制御するための学習モデル及びその学習方法
CN112286440A (zh) * 2020-11-20 2021-01-29 北京小米移动软件有限公司 触摸操作分类、模型训练方法及装置、终端及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566612A (zh) * 2009-05-27 2009-10-28 复旦大学 一种污水化学需氧量软测量方法
CN104463243A (zh) * 2014-12-01 2015-03-25 中科创达软件股份有限公司 基于平均脸特征的性别检测方法
CN104766097A (zh) * 2015-04-24 2015-07-08 齐鲁工业大学 基于bp神经网络和支持向量机的铝板表面缺陷分类方法
US20170132528A1 (en) * 2015-11-06 2017-05-11 Microsoft Technology Licensing, Llc Joint model training
CN107844338A (zh) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 应用程序管控方法、装置、介质及电子设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6941301B2 (en) * 2002-01-18 2005-09-06 Pavilion Technologies, Inc. Pre-processing input data with outlier values for a support vector machine
US8266083B2 (en) * 2008-02-07 2012-09-11 Nec Laboratories America, Inc. Large scale manifold transduction that predicts class labels with a neural network and uses a mean of the class labels
US8281166B2 (en) * 2008-03-10 2012-10-02 Virdiem Corporation System and method for computer power control
US8335935B2 (en) * 2010-03-29 2012-12-18 Intel Corporation Power management based on automatic workload detection
US9684787B2 (en) * 2014-04-08 2017-06-20 Qualcomm Incorporated Method and system for inferring application states by performing behavioral analysis operations in a mobile device
CN104484223B (zh) * 2014-12-16 2018-02-16 北京奇虎科技有限公司 一种安卓系统应用关闭方法和装置
CN106484077A (zh) * 2016-10-19 2017-03-08 上海青橙实业有限公司 移动终端及其基于应用软件分类的省电方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566612A (zh) * 2009-05-27 2009-10-28 复旦大学 一种污水化学需氧量软测量方法
CN104463243A (zh) * 2014-12-01 2015-03-25 中科创达软件股份有限公司 基于平均脸特征的性别检测方法
CN104766097A (zh) * 2015-04-24 2015-07-08 齐鲁工业大学 基于bp神经网络和支持向量机的铝板表面缺陷分类方法
US20170132528A1 (en) * 2015-11-06 2017-05-11 Microsoft Technology Licensing, Llc Joint model training
CN107844338A (zh) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 应用程序管控方法、装置、介质及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3706043A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461897A (zh) * 2020-02-28 2020-07-28 上海商汤智能科技有限公司 一种获取核保结果的方法及相关装置

Also Published As

Publication number Publication date
US20200241483A1 (en) 2020-07-30
CN107844338A (zh) 2018-03-27
CN107844338B (zh) 2019-09-13
EP3706043A4 (en) 2021-01-06
EP3706043A1 (en) 2020-09-09

Similar Documents

Publication Publication Date Title
WO2019085750A1 (zh) 应用程序管控方法、装置、介质及电子设备
WO2019062358A1 (zh) 应用程序管控方法及终端设备
WO2019085749A1 (zh) 应用程序管控方法、装置、介质及电子设备
WO2019062317A1 (zh) 应用程序管控方法及电子设备
WO2019062413A1 (zh) 应用程序管控方法、装置、存储介质及电子设备
WO2019120019A1 (zh) 用户性别预测方法、装置、存储介质及电子设备
CN105512685B (zh) 物体识别方法和装置
US11249645B2 (en) Application management method, storage medium, and electronic apparatus
CN112069414A (zh) 推荐模型训练方法、装置、计算机设备及存储介质
CN112329740B (zh) 图像处理方法、装置、存储介质和电子设备
WO2019062460A1 (zh) 应用控制方法、装置、存储介质以及电子设备
US20140038674A1 (en) Two-phase power-efficient activity recognition system for mobile devices
WO2019062405A1 (zh) 应用程序的处理方法、装置、存储介质及电子设备
WO2020048392A1 (zh) 应用程序的病毒检测方法、装置、计算机设备及存储介质
CN107659717B (zh) 状态检测方法、装置和存储介质
CN111368525A (zh) 信息搜索方法、装置、设备及存储介质
CN113284142A (zh) 图像检测方法、装置、计算机可读存储介质及计算机设备
US9471873B1 (en) Automating user patterns on a user device
WO2019062462A1 (zh) 应用控制方法、装置、存储介质以及电子设备
CN111339737A (zh) 实体链接方法、装置、设备及存储介质
CN113505256A (zh) 特征提取网络训练方法、图像处理方法及装置
WO2019062404A1 (zh) 应用程序的处理方法、装置、存储介质及电子设备
CN107729144B (zh) 应用控制方法、装置、存储介质及电子设备
CN107861770B (zh) 应用程序管控方法、装置、存储介质及终端设备
CN110738267A (zh) 图像分类方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18873772

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018873772

Country of ref document: EP

Effective date: 20200602