CN109992367A - Application processing method and device, electronic equipment, computer readable storage medium - Google Patents

Application processing method and device, electronic equipment, computer readable storage medium Download PDF

Info

Publication number
CN109992367A
Authority
CN
China
Prior art keywords
sample
feature
application
target
information gain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711484440.9A
Other languages
Chinese (zh)
Inventor
方攀
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711484440.9A priority Critical patent/CN109992367A/en
Priority to PCT/CN2018/117694 priority patent/WO2019128598A1/en
Publication of CN109992367A publication Critical patent/CN109992367A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

This application relates to an application processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: obtaining first feature data of each feature, the first feature data being the feature data of the corresponding feature at a prediction moment; obtaining a decision-tree model for predicting whether a user will use a target application within a preset duration, where the starting moment of the preset duration is the prediction moment; taking the first feature data as input to the decision-tree model and outputting a prediction result; and, when the prediction result is that the target application will not be used within the preset duration, freezing the target application. The application processing method and apparatus, electronic device, and computer-readable storage medium described above can improve the accuracy with which the target application is frozen.

Description

Application processing method and device, electronic equipment, computer readable storage medium
Technical field
This application relates to the field of data processing, and in particular to an application processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background technique
With the development of mobile communication technology, mobile operating systems all provide mechanisms for constraining the resources of background applications. In traditional operating systems, when system resource utilization is too high, some background applications are forcibly closed so that resources can be reclaimed for use by foreground applications.
To decide whether a given background application should be forcibly closed, the conventional approach considers factors such as how long each background application has resided in the background, its usage frequency, and its usage duration, then selects the background applications with longer background residence, lower usage frequency, or shorter usage duration and forcibly closes them. With this approach it can happen that a background application is killed just before the user uses it again a short time later, so the system must reload the reclaimed resources, and the accuracy of the reclamation is therefore low.
Summary of the invention
Embodiments of the present application provide an application processing method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the accuracy of reclaiming the resources of background applications.
An application processing method comprises: obtaining first feature data of each feature, the first feature data being the feature data of the corresponding feature at a prediction moment; obtaining a decision-tree model for predicting whether a user will use a target application within a preset duration, where the starting moment of the preset duration is the prediction moment; taking the first feature data as input to the decision-tree model and outputting a prediction result; and, when the prediction result is that the target application will not be used within the preset duration, freezing the target application.
An application processing apparatus comprises: a feature-data obtaining module, configured to obtain first feature data of each feature, the first feature data being the feature data of the corresponding feature at a prediction moment; a decision-tree-model obtaining module, configured to obtain a decision-tree model for predicting whether a user will use a target application within a preset duration, where the starting moment of the preset duration is the prediction moment; a prediction module, configured to take the first feature data as input to the decision-tree model and output a prediction result; and an application processing module, configured to freeze the target application when the prediction result is that the target application will not be used within the preset duration.
An electronic device comprises a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the application processing method in each embodiment of the present application.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the application processing method in each embodiment of the present application.
With the application processing method and apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present application, a decision-tree model is set up in advance for the target application. The first feature data of each feature and the decision-tree model of the target application are obtained, the first feature data are fed to the decision-tree model as input, and the model outputs a prediction result indicating whether the user will use the target application within the preset duration. When the prediction result is that the target application will not be used within the preset duration, the target application is frozen, thereby limiting the target application's occupancy of resources. This improves the accuracy of freezing the target application and, in turn, the effectiveness of releasing system resources.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment;
Fig. 2 is a schematic diagram of part of the system framework in an electronic device in one embodiment;
Fig. 3 is a diagram of the application environment of the application processing method in one embodiment;
Fig. 4 is a flowchart of the application processing method in one embodiment;
Fig. 5 is a flowchart, in one embodiment, of classifying a sample set according to each feature's information gain ratio for classifying the sample set, to generate a decision-tree model for predicting whether a user will use the target application within a preset duration;
Fig. 6A is a schematic diagram of a decision tree in one embodiment;
Fig. 6B is a schematic diagram of a decision tree in another embodiment;
Fig. 6C is a schematic diagram of a decision tree in yet another embodiment;
Fig. 7 is a flowchart of obtaining a feature's information gain ratio for classifying a target sample set in one embodiment;
Fig. 8 is a flowchart of the application processing method in another embodiment;
Fig. 9 is a structural block diagram of the application processing apparatus in one embodiment;
Fig. 10 is a structural block diagram of the application processing apparatus in another embodiment;
Fig. 11 is a structural block diagram of the application processing apparatus in yet another embodiment;
Fig. 12 is a block diagram of part of the structure of a mobile phone in one embodiment.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the application and are not intended to limit it.
It will be appreciated that the terms "first", "second", and the like used herein may describe various elements, but these elements are not limited by these terms; the terms serve only to distinguish one element from another. For example, without departing from the scope of the present application, first feature data could be called second feature data, and similarly second feature data could be called first feature data. Both the first feature data and the second feature data are feature data, but they are not the same feature data.
In one embodiment, as shown in Fig. 1, a schematic diagram of the internal structure of an electronic device is provided. The electronic device includes a processor, a memory, and a display screen connected through a system bus. The processor provides computing and control capability and supports the operation of the entire electronic device. The memory stores data, programs, and/or instruction code; at least one computer program is stored on the memory, and that program can be executed by the processor to implement the application processing method suitable for the electronic device provided in the embodiments of the present application. The memory may include non-volatile storage media such as magnetic disks, optical discs, and Read-Only Memory (ROM), or Random-Access Memory (RAM). For example, in one embodiment the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a database, and a computer program. The database stores data related to implementing the application processing method provided by the above embodiments, such as the name of each process or application. The computer program can be executed by the processor to implement an application processing method provided by each embodiment of the present application. The internal memory provides a cached running environment for the operating system, the database, and the computer program in the non-volatile storage medium. The display screen may be a touch screen, such as a capacitive or resistive screen, used to display the interface information of the application corresponding to the first process; it may also be used to detect touch operations acting on the display screen and generate corresponding instructions, such as instructions for switching between foreground and background applications.
Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the part of the structure relevant to the solution of the present application and does not limit the electronic device to which the solution is applied; a specific electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently. For example, the electronic device may further include a network interface connected through the system bus; the network interface may be an Ethernet card, a wireless network card, or the like, used for communicating with external electronic devices, for example with a server. As another example, the electronic device may have no display connected through the system bus and may instead connect to an external display device.
In one embodiment, as shown in Fig. 2, a partial architecture diagram of an electronic device is provided. The architecture of the electronic device includes a JAVA space layer 210, a native framework layer 220, and a kernel space layer 230. The JAVA space layer 210 may include a freeze management application 212, through which the electronic device can implement a freezing policy for each application and perform management operations such as freezing and thawing on background applications with relevant power consumption. The native framework layer 220 includes a resource priority and restriction management module 222 and a platform freeze management module 224. Through the resource priority and restriction management module 222, the electronic device can maintain, in real time, applications organized into different priorities and different resource groups, and adjust an application's resource group according to the demands of the upper layer, so as to optimize performance and save power. Through the platform freeze management module 224, the electronic device can assign freezable background tasks to corresponding preset freeze levels according to how long they have been frozen. Optionally, there may be three freeze levels: CPU-limited sleep mode, CPU-frozen sleep mode, and process deep-freeze mode. CPU-limited sleep mode restricts the related process's occupancy of CPU resources so that it occupies fewer of them, tilting the freed CPU resources toward other, non-frozen processes; restricting CPU occupancy correspondingly also limits the process's occupancy of network resources and I/O interface resources. CPU-frozen sleep mode forbids the related process from using the CPU while retaining its occupancy of memory; when CPU resources are forbidden, the corresponding network resources and I/O interface resources are also forbidden. Process deep-freeze mode, in addition to forbidding the use of CPU resources, further reclaims the memory resources occupied by the related process; the reclaimed memory is made available to other processes. The kernel space layer 230 includes a UID management module 231, a Cgroup module 232, a Binder control module 233, a process memory reclamation module 234, and a freeze-timeout exit module 235. The UID management module 231 manages and freezes the resources of third-party applications based on the application's User Identifier (UID). Compared with controlling processes by Process Identifier (PID), UID-based management makes it easier to manage a user's application resources in a unified way. The Cgroup module 232 provides a complete set of resource restriction mechanisms related to the Central Processing Unit (CPU), CPUSET, memory, input/output (I/O), and network (Net). The Binder control module 233 implements priority control of background binder communication. The interface module of the native framework layer 220 comprises a binder interface opened to the upper layer; the framework or application of the upper layer sends resource restriction or freeze instructions through the binder interface to the resource priority and restriction management module 222 and the platform freeze management module 224. The process memory reclamation module 234 implements process deep-freeze mode: when a third-party application remains frozen for a long time, mainly the file area of the process can be released, saving memory and also speeding up the application's next start. The freeze-timeout exit module 235 resolves exceptions that arise when a freeze-timeout scenario occurs. With the above architecture, the application processing method in each embodiment of the present application can be implemented.
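The escalation from CPU-limited sleep to deep freeze described above can be sketched as a simple mapping from frozen duration to freeze level. This is an illustrative sketch only: the enum names and the threshold values are assumptions, since the patent does not specify concrete durations.

```python
from enum import Enum


class FreezeLevel(Enum):
    """The three freeze levels described in the text (names are invented)."""
    CPU_LIMIT_SLEEP = 1    # CPU use restricted; network/IO correspondingly limited
    CPU_FREEZE_SLEEP = 2   # CPU forbidden; memory retained; network/IO forbidden
    DEEP_FREEZE = 3        # CPU forbidden and the process's memory reclaimed


def freeze_level_for(minutes_frozen: float) -> FreezeLevel:
    """Map frozen duration to a level; these thresholds are assumptions."""
    if minutes_frozen < 10:
        return FreezeLevel.CPU_LIMIT_SLEEP
    if minutes_frozen < 60:
        return FreezeLevel.CPU_FREEZE_SLEEP
    return FreezeLevel.DEEP_FREEZE


print(freeze_level_for(90).name)  # DEEP_FREEZE
```

The key design point the text describes is monotonic escalation: the longer a task has been frozen, the more aggressively its resources are reclaimed.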
In one embodiment, as shown in Fig. 3, which is a schematic diagram of the application scenario of the application processing method, the electronic device may collect, over different time periods, the feature data of preset features that embody the user's behavioral habits as samples, forming a sample set; it may then classify the sample set according to each feature's information gain ratio for classifying the samples, so as to construct a decision-tree model that predicts whether the target application will be used within a preset duration. The feature data of the preset features at the prediction moment are then fed to the decision-tree model as input to obtain a prediction result. When the prediction result is that the target application will not be used within the preset duration, the target application is frozen; otherwise, it is not frozen.
In one embodiment, as shown in Fig. 4, an application processing method is provided. This embodiment is illustrated by applying the method to the electronic device shown in Fig. 1. The method comprises:
Step 402: obtain the first feature data of each feature, the first feature data being the feature data of the corresponding feature at the prediction moment.
Feature data are data that embody features of the user's habits in operating the applications of the electronic device. The features may cover one or more dimensions such as the device's software and hardware characteristics, the device's usage state, and the duration of the user's operations on applications. The electronic device may instrument a preset system path and detect the feature data of each preset feature in real time at a sampling frequency. Alternatively, when it detects that an application is being started, it may record the feature data of the relevant features at that moment together with the application identifier of that application, and associate the application identifier with the recorded feature data.
Optionally, the features include one or more of: the current time period when the data are recorded; the current date category; the application identifier of the previous foreground application; the application identifier of the foreground application before that; the current Wireless Fidelity (WiFi) connection status; the WiFi identifier of the connected WiFi; the duration the application has stayed in the background; the screen-off duration while the application stays in the background; the current plug state of the earphones; the current charging state; the current battery level; the manner in which the application was switched; the category of the application; and the number of times the application has been switched to the foreground. The date category may include workdays and days off. The WiFi connection status includes WiFi not connected and WiFi connected. The WiFi identifier may be any information that uniquely identifies the WiFi, such as a Service Set Identifier (SSID) or a Basic Service Set Identifier (BSSID). Application categories include, without limitation, social applications, payment applications, game applications, and tool applications. Application switching refers to an application switching from foreground to background or from background to foreground; the switching manner may include a switch formed by directly opening the application from its launch icon, a switch formed by tapping a notification message of the application, a switch formed by directly exiting the application, and so on. The previous foreground application is the application that was most recently running in the foreground at the moment the feature is recorded, and the foreground application before that is the second most recent.
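The multi-dimensional record described above can be sketched as a single sampled feature dictionary. All field names and values here are hypothetical illustrations of the dimensions listed in the text, not the patent's actual data layout.

```python
from datetime import datetime


def collect_feature_record(now: datetime) -> dict:
    """Assemble one multi-dimensional feature record at a sampling moment."""
    return {
        "time_slot": now.hour,                    # current time period
        "is_weekend": now.weekday() >= 5,         # date category: workday vs day off
        "prev_foreground_app": "com.example.chat",        # hypothetical app id
        "prev_prev_foreground_app": "com.example.browser",
        "wifi_connected": True,                   # current WiFi connection status
        "wifi_ssid": "HomeNetwork",               # identifier of the connected WiFi
        "bg_dwell_minutes": 12.5,                 # time the app has stayed in background
        "screen_off_minutes": 3.0,                # screen-off time during background stay
        "headphones_plugged": False,
        "charging": False,
        "battery_percent": 64,
        "switch_mode": "icon_launch",             # how the app was switched
        "app_category": "social",
        "foreground_switch_count": 7,             # times switched to the foreground
    }


record = collect_feature_record(datetime(2017, 12, 30, 20, 30))
print(record["time_slot"], record["is_weekend"])  # 20 True
```

One such record per sampling moment, associated with the identifier of the application started at that moment, is what later forms a sample in the sample set.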
The first feature data are the feature data of the corresponding features at the prediction moment; where a feature relates to an application, it refers to the target application. For example, the first feature data may include one or more of: the current time period at the prediction moment, the current date category, the name of the previous foreground application, the name of the foreground application before that, the current WiFi connection status, the WiFi identifier of the connected WiFi, the duration the target application has stayed in the background, the screen-off duration while the target application stays in the background, the current plug state of the earphones, the current charging state, the current battery level, the manner in which the target application was switched, the category of the target application, and the number of times the target application has been switched to the foreground. The prediction moment may be the current moment.
In one embodiment, the electronic device may, through the freeze management application 212, initiate a freeze instruction for one or more applications running in the background, and obtain the first feature data of each feature in accordance with that instruction. The freeze instruction may be triggered when the available resources of the electronic device are detected to be below a preset threshold, or may be initiated for an application when that application is detected to have been switched to the background. The application targeted by the freeze instruction is the target application.
Step 404: obtain a decision-tree model for predicting whether the user will use the target application within a preset duration, the starting moment of the preset duration being the prediction moment.
The decision-tree model is a model for predicting whether the target application will be used within the preset duration. Since the starting moment of the preset duration is the prediction moment, the prediction is whether the target application will be used within the preset duration counted from the current moment. The preset duration may be any suitable length of time and may be set empirically, for example 1 minute, 5 minutes, 10 minutes, or 30 minutes. The target application is the application to be tested; it may be one or more of the applications currently running in the background on the electronic device.
In one embodiment, the electronic device has a preset decision-tree model corresponding to the target application, and it obtains that model to predict whether the target application will be used by the user within the preset duration. Different applications may be matched with different decision-tree models; for example, a corresponding decision-tree model may be set for each application, or for each type of application.
Step 406: take the first feature data as input to the decision-tree model and output a prediction result.
The electronic device may use the collected first feature data of each feature at the prediction moment as the input of the decision-tree model, and run the decision-tree model to obtain the output of the decision tree. That output is the prediction result, which indicates either that the target application will not be used within the preset duration or that it will be.
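Running a trained decision tree on the first feature data amounts to walking from the root node to a leaf, testing one feature at each internal node. The sketch below illustrates this with a toy hand-built tree; the node representation, feature names, and thresholds are all invented for illustration and are not the patent's specific model.

```python
class Node:
    """One decision-tree node; a leaf carries a label, an inner node a test."""

    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature = feature      # feature name tested at this node
        self.threshold = threshold  # numeric split threshold
        self.left = left            # branch taken when value <= threshold
        self.right = right          # branch taken when value > threshold
        self.label = label          # leaf prediction: "will_use" / "will_not_use"


def predict(node: Node, features: dict) -> str:
    """Walk the tree from the root until a leaf label is reached."""
    while node.label is None:
        node = node.left if features[node.feature] <= node.threshold else node.right
    return node.label


# Toy tree: if the app has sat in the background over 30 minutes and the
# screen has been off over 5 minutes, predict it will not be used soon.
tree = Node(
    feature="bg_dwell_minutes", threshold=30,
    left=Node(label="will_use"),
    right=Node(
        feature="screen_off_minutes", threshold=5,
        left=Node(label="will_use"),
        right=Node(label="will_not_use"),
    ),
)

print(predict(tree, {"bg_dwell_minutes": 45, "screen_off_minutes": 8}))  # will_not_use
```

A "will_not_use" leaf corresponds to the branch of the method in which the freeze operation is initiated; a "will_use" leaf leaves the application untouched.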
Step 408: when the prediction result is that the target application will not be used within the preset duration, freeze the target application.
When the prediction result is that the target application will not be used within the preset duration, it can be frozen. Optionally, the electronic device may send the prediction result to the platform freeze management module 224 shown in Fig. 2; when the platform freeze management module 224 receives a prediction result indicating that the target application will not be used within the preset duration, it can initiate a freeze operation on the target application to limit the resources available to it, for example freezing the target application in any one of CPU-limited sleep mode, CPU-frozen sleep mode, or process deep-freeze mode.
In the application processing method above, a decision-tree model is set up in advance for the target application; the first feature data of each feature and the decision-tree model of the target application are obtained; the first feature data are fed to the decision-tree model as input to obtain a prediction result indicating whether the user will use the target application within the preset duration; and when the prediction result is that the target application will not be used within the preset duration, the target application is frozen, limiting its occupancy of resources. Because freezing an application itself consumes some system resources, predicting whether the target application will be used within the preset duration and freezing it only when it will not be used reduces the redundant freeze operations that arise when a user uses a just-frozen application again within the preset duration and the application must be thawed. This improves the accuracy of freezing the target application and, in turn, the effectiveness of releasing system resources.
In one embodiment, before step 402 the method further comprises: obtaining the second feature data of each preset feature as samples and generating a sample set, the second feature data being the feature data of the corresponding feature at the start of a reference application before the prediction moment, where the reference applications include the target application; and, when the data volume of the sample set exceeds a preset threshold, classifying the sample set according to each feature's information gain ratio for classifying the sample set, to generate a decision-tree model for predicting whether the user will use the target application within the preset duration.
The second feature data are the feature data of each feature recorded before the prediction moment, and they have an association with the application identifier of the corresponding reference application: before the prediction moment, whenever a reference application is detected to be starting, the feature data of each feature at the start moment (i.e., the second feature data) are recorded, and an association is established between those second feature data and the application identifier of the reference application.
The electronic device may collect the second feature data of each feature at a preset frequency over a historical period, taking each recorded set of second feature data as one sample to form the sample set. When the data volume of the accumulated sample set exceeds the preset threshold, construction of a decision-tree model for each application to be predicted begins from the accumulated sample set. The historical period may be, for example, the past 7 or 10 days; the preset frequency may be, for example, once every 10 minutes or once every half hour. It will be understood that the multi-dimensional feature data of one collection constitute one sample, and the multiple samples from multiple collections constitute the sample set.
After the sample set is formed, each sample in the sample set may be marked with its application identifier to obtain a sample label for each sample; the label of each sample is the associated application identifier. Since prediction is directed at the target application, the sample labels can be classified into those belonging to the target application and those belonging to non-target applications; that is, the sample classes include "target application" and "non-target application". When the target application is started, the sample label of the recorded second feature data is set to "target application"; when a non-target application is started, the sample label of the recorded second feature data is set to "non-target application". Optionally, the numerical value "1" may represent "target application" and "0" may represent "non-target application", or vice versa.
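The labeling step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the application identifiers and feature values are hypothetical, and the 1/0 encoding follows the optional convention mentioned in the text.

```python
# Hypothetical sketch of the labeling step: each recorded sample is tagged
# with 1 ("target application") or 0 ("non-target application") depending on
# which reference application was being started when it was collected.
TARGET_APP = "com.example.target"  # assumed identifier for illustration

def label_sample(features, launched_app_id, target_app_id=TARGET_APP):
    """Attach a class label to one collected feature vector."""
    label = 1 if launched_app_id == target_app_id else 0
    return {"features": features, "label": label}

sample_set = [
    label_sample([1, 0, 3], "com.example.target"),
    label_sample([0, 1, 5], "com.example.other"),
]
```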
The preset threshold may be any appropriate preset value, for example 10000; that is, when the number of recorded samples exceeds 10000, construction of the decision-tree model begins. Optionally, the larger the sample size, the more accurate the constructed decision-tree model tends to be.
In one embodiment, to facilitate sample classification, feature information that is not directly expressed as a numerical value in the feature data may be quantized into numerical values. For example, for the current WiFi connection status feature, the value 1 may indicate that WiFi is connected and the value 0 may indicate that WiFi is not connected (or vice versa).
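A minimal sketch of this quantization, assuming a couple of boolean status features; the feature names other than the WiFi status are assumptions added for illustration.

```python
# Illustrative quantization of non-numeric feature information, as the
# embodiment suggests for the WiFi connection status (1 = connected,
# 0 = not connected). Feature names here are assumptions for the sketch.
def quantize(raw):
    """Map raw feature readings to the numeric encoding used in samples."""
    encodings = {
        "wifi_connected": {True: 1, False: 0},
        "headset_plugged": {True: 1, False: 0},
    }
    return {name: encodings[name][value] for name, value in raw.items()}

vector = quantize({"wifi_connected": True, "headset_plugged": False})
```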
In the embodiment of the present application, the sample set may be classified based on the information gain ratio of each feature for classifying the sample set, so as to construct a decision-tree model for predicting whether the user will use the target application within the preset duration. For example, the decision-tree model may be constructed based on the C4.5 algorithm.
A decision tree is a tree structure built by a sequence of decisions. In machine learning, a decision tree is a prediction model representing a mapping between object attributes and object values: each internal node represents an attribute of an object, each branch represents a possible value of that attribute, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf. A decision tree has only a single output; if multiple outputs are needed, independent decision trees can be established to handle the different outputs.
The C4.5 algorithm is a kind of decision tree; it is an important improved algorithm over ID3, used in a series of classification and data-mining problems in machine learning. Its goal is supervised learning: given a data set in which each tuple is described by a group of attribute values and belongs to exactly one of a set of mutually exclusive classes, C4.5 learns a mapping from attribute values to classes, and this mapping can then be used to classify new entities of unknown class.
Based on "ockham's razor" principle, i.e., ID3 (Iterative Dichotomiser 3,3 generation of iteration binary tree) is With doing more things with less thing as far as possible.In information theory, it is expected that information is smaller, then information gain is bigger, thus Purity is higher.The core concept of ID3 algorithm is exactly to carry out the selection of metric attribute with information gain, information gain after selection division Maximum attribute is divided.The algorithm traverses possible decision space using top-down greedy search.
In the embodiment of the present application, the information gain ratio may be defined as the ratio of a feature's information gain for sample classification to the feature's split information for sample classification. The specific way of obtaining the information gain ratio is described below.
The information gain considers the features one by one: for a feature t, it measures how much information the system carries with and without that feature; the difference between the two is the amount of information the feature contributes to the system, i.e., the information gain.
The split information measures the breadth and uniformity with which a feature divides the data (e.g., the sample set), and can be expressed as the entropy of the feature.
In one embodiment, as shown in Fig. 5, classifying the sample set according to the information gain ratio of each feature for classifying the sample set, to generate the decision-tree model for predicting whether the user will use the target application within the preset duration, includes:
Step 501: take the sample set as the node information of the root node of the decision tree, and determine the node information of the root node as the current target sample set to be classified.
That is, at the start of construction, the sample set is taken as the current target sample set to be classified. The classes of the sample set include "target application" and "non-target application".
Step 502: obtain the information gain ratio of each feature for classifying the target sample set.
Step 503: choose a split feature from the features of the target sample set according to the information gain ratios.
The split feature is the feature chosen from the features according to each feature's information gain ratio for classifying the sample set, and is used to classify the sample set. There are many ways to choose the split feature according to the information gain ratio; for example, to improve the accuracy of sample classification, the feature corresponding to the maximum information gain ratio may be chosen as the split feature.
In one embodiment, to improve the prediction accuracy of the decision-tree model, a gain-ratio threshold may also be set; the feature corresponding to an information gain ratio is chosen as the split feature only when the maximum information gain ratio is greater than the threshold. Specifically, the maximum target information gain ratio may be chosen from the information gain ratios; whether the target information gain ratio is greater than the preset threshold is judged; if so, the feature corresponding to the target information gain ratio is chosen as the current split feature.
When the target information gain ratio is not greater than the preset threshold, the current node may be taken as a leaf node, and the sample class with the largest number of samples is chosen as the output of that leaf node. The sample classes include "target application" and "non-target application": when the output is "target application", it indicates that the target application will be used within the preset duration; when the output is "non-target application", it indicates that the target application will not be used within the preset duration. The preset threshold can be set according to actual needs, for example 0.9 or 0.8. For example, when the information gain ratio 0.9 of feature 1 for sample classification is the maximum information gain ratio and the preset gain-ratio threshold is 0.8, since the maximum information gain ratio is greater than the preset threshold, feature 1 may be taken as the split feature. As another example, when the preset threshold is 1, the maximum information gain ratio is less than the preset threshold, and the current node may be taken as a leaf node.
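The selection rule described above can be sketched as follows, under the assumption that the gain ratios have already been computed; the feature names and gain-ratio values are invented for illustration.

```python
from collections import Counter

# Sketch of this embodiment's rule: pick the feature with the maximum
# information gain ratio only if it exceeds the preset threshold; otherwise
# the current node becomes a leaf whose output is the majority sample class.
def choose_split(gain_ratios, labels, threshold=0.8):
    """Return ('split', feature) or ('leaf', majority_class)."""
    feature, best = max(gain_ratios.items(), key=lambda kv: kv[1])
    if best > threshold:
        return ("split", feature)
    majority = Counter(labels).most_common(1)[0][0]
    return ("leaf", majority)

decision = choose_split({"f1": 0.9, "f2": 0.4}, ["target", "non-target"])
```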
Step 504: divide the target sample set according to the split feature to generate at least one sub-sample set.
There are many ways to divide and classify the samples according to the split feature; for example, the sample set may be divided based on the values of the split feature. The values of the split feature in the target sample set may be obtained, and the target sample set divided according to those values: samples with the same split-feature value are divided into the same sub-sample set. For example, if the values of the split feature are 0, 1, and 2, the samples whose split-feature value is 0 may be grouped into one class, those whose value is 1 into another class, and those whose value is 2 into a third class.
Step 505: remove the split feature from the samples in each sub-sample set, generate the child nodes of the current node, and take each sub-sample set with the split feature removed as the node information of the corresponding child node.
Step 506: judge whether the child node meets a preset classification termination condition; if so, execute step 507; otherwise, update the target sample set to the sub-sample set with the split feature removed, and return to step 502.
Step 507: take the child node as a leaf node, and set the output of the leaf node according to the classes of the sub-sample set with the split feature removed.
When a child node meets the preset classification termination condition, it is taken as a leaf node and classification of that node's sample set can stop; the output of the leaf node can be set based on the classes of the samples in the sub-sample set after removal. There are many ways to set the output of a leaf node based on the sample classes; for example, the class with the largest number of samples in the sub-sample set after removal may be taken as the output of the leaf node.
The preset classification termination condition can be set according to actual needs. When a child node meets the condition, the current node is taken as a leaf node and classification of the corresponding sample set stops; when a child node does not meet the condition, classification of the corresponding sample set continues. For example, the preset classification termination condition may include: whether the number of sample classes in the sub-sample set after removal equals a preset number; if so, the child node is determined to meet the preset classification termination condition.
Through the above method, the decision-tree model of the target application can be constructed, so that whether the target application will be used within the preset duration can be predicted according to the decision-tree model.
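Steps 501 to 507 can be sketched as a compact recursive construction in the C4.5 style. This is an assumption-laden illustration, not the patent's implementation: samples are represented as (feature-dict, label) pairs, the termination condition is simplified to "node is pure or no feature clears the threshold", and ties are broken arbitrarily.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(samples, feat):
    """Information gain of `feat` divided by its split information."""
    labels = [y for _, y in samples]
    groups = Counter(x[feat] for x, _ in samples)
    n = len(samples)
    cond = sum((cnt / n) * entropy([y for x, y in samples if x[feat] == v])
               for v, cnt in groups.items())
    split_info = entropy([x[feat] for x, _ in samples])
    gain = entropy(labels) - cond
    return gain / split_info if split_info > 0 else 0.0

def build(samples, feats, threshold=0.0):
    labels = [y for _, y in samples]
    if len(set(labels)) == 1 or not feats:          # termination condition
        return Counter(labels).most_common(1)[0][0]  # leaf output
    ratios = {f: gain_ratio(samples, f) for f in feats}
    best = max(ratios, key=ratios.get)
    if ratios[best] <= threshold:                    # no useful split
        return Counter(labels).most_common(1)[0][0]
    node = {"feature": best, "children": {}}
    for v in {x[best] for x, _ in samples}:          # one child per value
        subset = [(x, y) for x, y in samples if x[best] == v]
        node["children"][v] = build(subset, [f for f in feats if f != best])
    return node

data = [({"wifi": 1, "hour": 0}, "target"), ({"wifi": 0, "hour": 0}, "non-target"),
        ({"wifi": 1, "hour": 1}, "target"), ({"wifi": 0, "hour": 1}, "non-target")]
tree = build(data, ["wifi", "hour"])
```

In this toy data the "wifi" feature separates the classes perfectly, so it is chosen as the split feature at the root and both children are pure leaves.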
For example, consider the sample set D {sample 1, sample 2, ... sample i, ... sample n}, where each sample includes several features, such as feature A.
First, all samples in the sample set are initialized; then a root node d is generated, and the sample set D is taken as the node information of the root node d, as shown in Fig. 6A.
The information gain ratio of each feature, such as feature A, for classifying the sample set is calculated, e.g., gR(D, A)1, gR(D, A)2, ..., gR(D, A)m; the maximum information gain ratio gR(D, A)max is chosen.
When the maximum information gain ratio gR(D, A)max is less than the preset threshold ε, the current node is taken as a leaf node, and the sample class with the largest number of samples is chosen as the output of the leaf node.
When the maximum information gain ratio gR(D, A)max is greater than the preset threshold ε, the feature corresponding to gR(D, A)max may be chosen as the split feature Ag. The sample set D {sample 1, sample 2, ... sample i, ... sample n} is divided according to Ag: specifically, for each value ai of Ag, D is divided according to Ag = ai into several non-empty subsets Di, which become the child nodes of the current node. For example, the sample set may be divided into two sub-sample sets D1 {sample 1, sample 2, ... sample k} and D2 {sample k+1, ... sample n}.
The split feature Ag is removed from the sub-sample sets D1 and D2, leaving the feature set A − Ag. Referring to Fig. 6A, the child nodes d1 and d2 of the root node d are generated; the sub-sample set D1 is taken as the node information of child node d1, and the sub-sample set D2 as the node information of child node d2.
Then, for each child node, with A − Ag as the feature set and the child node's Di as the data set, the above steps are called recursively to construct the subtree, until the preset classification termination condition is met.
Taking child node d1 as an example: judge whether the child node meets the preset classification termination condition; if so, take the current child node d1 as a leaf node, and set the leaf node's output according to the classes of the samples in the sub-sample set corresponding to child node d1.
When a child node does not meet the preset classification termination condition, classification of the corresponding sub-sample set continues in the above manner based on the information gain ratio. Taking child node d2 as an example: the information gain ratio gR(D, A) of each feature in the sample set D2 with respect to sample classification can be calculated, and the maximum information gain ratio gR(D, A)max chosen. When gR(D, A)max is greater than the preset threshold ε, the feature corresponding to that information gain ratio may be chosen as the split feature Ag (e.g., feature Ai+1). Based on the split feature Ag, D2 is divided into several sub-sample sets, e.g., D21, D22, and D23; then the split feature Ag is removed from D21, D22, and D23, the child nodes d21, d22, and d23 of the current node d2 are generated, and the sample sets D21, D22, and D23 with the split feature removed are taken as the node information of d21, d22, and d23, respectively.
By analogy, through the above classification based on the information gain ratio, a decision tree as shown in Fig. 6B can be constructed. The output of each leaf node of the decision tree is either that the target application will be used within the preset duration or that the target application will not be used within the preset duration. The output results "Yes" and "No" may correspond to "target application" and "non-target application", i.e., "the target application will be used within the preset duration" and "the target application will not be used within the preset duration".
In one embodiment, to improve the speed and efficiency of prediction using the decision tree, the split-feature value corresponding to each branch may also be labeled on the path between nodes. For example, during the above classification based on information gain, the current node's split-feature value may be labeled on the path to each of its child nodes.
For example, when the values of the split feature Ag include 0 and 1, the path between d2 and d may be labeled 1, and the path between d1 and d labeled 0; by analogy, after each split, the corresponding split-feature value (e.g., 0 or 1) may be labeled on the path between the current node and its child nodes, yielding a decision tree as shown in Fig. 6C. Likewise, the output results "Yes" and "No" may correspond to "target application" and "non-target application", i.e., "the target application will be used within the preset duration" and "the target application will not be used within the preset duration".
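Prediction over such an edge-labeled tree can be sketched as a simple walk from the root: at each internal node, follow the branch whose label matches the sample's value for that node's split feature, until a "Yes"/"No" leaf is reached. The tree shape and feature names below are an illustrative stand-in for the one in Fig. 6C, not taken from the patent.

```python
# Sketch of prediction over a tree whose branches are labeled with
# split-feature values: internal nodes are dicts, leaves are strings.
def predict(node, features):
    """Traverse the labeled decision tree and return the leaf output."""
    while isinstance(node, dict):
        value = features[node["feature"]]
        node = node["children"][value]
    return node

tree = {"feature": "wifi",
        "children": {1: {"feature": "charging",
                         "children": {1: "Yes", 0: "No"}},
                     0: "No"}}
result = predict(tree, {"wifi": 1, "charging": 0})
```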
In one embodiment, as shown in Fig. 7, obtaining the information gain ratio of a feature for classifying the target sample set includes:
Step 702: obtain the information gain of the feature for classifying the target sample set.
The information gain indicates the degree to which a feature reduces the uncertainty of the class information ("target application" or "non-target application").
Step 704: obtain the split information of the feature for classifying the target sample set.
The split information measures the breadth and uniformity with which the feature divides the data (e.g., the sample set), and can be expressed as the entropy of the feature.
Step 706: obtain the information gain ratio of the feature for classifying the target sample set according to the information gain and the split information.
The information gain ratio can be expressed as the ratio of the feature's information gain for classifying the sample set to its split information for sample classification. The information gain ratio can be obtained by dividing the obtained information gain by the corresponding split information.
The information gain of a feature for classifying the target sample set is the difference between the empirical entropy and the conditional entropy. The empirical entropy of the target sample set's classification can be obtained; the conditional entropy of the feature with respect to the target sample set's classification result can be obtained; and the information gain of the feature for classifying the target sample set is obtained from the conditional entropy and the empirical entropy. Specifically, the first probability that a positive sample occurs in the sample set and the second probability that a negative sample occurs in the sample set may first be obtained, where a positive sample is a sample whose class is "target application" and a negative sample is a sample whose class is "non-target application"; the empirical entropy of the samples is then obtained according to the first probability and the second probability.
In one embodiment, for example, for the sample set D {sample 1, sample 2, ... sample i, ... sample n}, each sample includes multi-dimensional features, such as feature A. The information gain ratio of feature A for sample classification can be obtained by the following formula:

gR(D, A) = g(D, A) / HA(D)
where gR(D, A) is the information gain ratio of feature A for classifying the sample set D, g(D, A) is the information gain of feature A for sample classification, and HA(D) is the split information of A, i.e., the entropy of feature A.
The information gain g(D, A) can be obtained by the following formula:

g(D, A) = H(D) − H(D | A)

where H(D) is the empirical entropy of the classification of the sample set D, and H(D | A) is the conditional entropy of feature A for classifying the sample set D.
If the number of samples whose class is "target application" is j, the number of "non-target application" samples is n − j. Then the probability of occurrence of a positive sample in the sample set D is p1 = j/n, and the probability of occurrence of a negative sample in the sample set D is p2 = (n − j)/n. The empirical entropy H(D) of the sample classification is then calculated by the following formula for the empirical entropy:
In the decision-tree classification problem, the information gain is the difference between the information of the decision tree before and after attribute selection and splitting. In this implementation, the empirical entropy H(D) of the sample classification is:

H(D) = −p1 log2 p1 − p2 log2 p2
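The binary empirical entropy above can be checked numerically with a small sketch; this is only a verification of the formula, not production code, and the convention that a term with probability 0 contributes 0 is an assumption made explicit here.

```python
import math

# H(D) = -p1*log2(p1) - p2*log2(p2) for j positive ("target application")
# samples out of n total, as in the formula above.
def empirical_entropy(j, n):
    """Empirical entropy of a binary sample classification."""
    h = 0.0
    for p in (j / n, (n - j) / n):
        if p > 0:                      # 0*log2(0) taken as 0
            h -= p * math.log2(p)
    return h

h_balanced = empirical_entropy(5, 10)  # p1 = p2 = 0.5, maximal uncertainty
h_pure = empirical_entropy(10, 10)     # all samples in one class
```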
In one embodiment, the sample set may be divided into several sub-sample sets according to feature A; then the information entropy of the classification of each sub-sample set and the probability with which each value of feature A occurs in the sample set are obtained. From the information entropies and the probabilities, the information entropy after the split, i.e., the conditional entropy of feature A with respect to the sample set's classification result, can be obtained.
For example, for a sample feature A, the conditional entropy of feature A with respect to the classification result of the sample set D can be calculated by the following formula:

H(D | A) = Σi pi H(D | A = Ai), i = 1, 2, ..., n
where n is the number of distinct values of feature A, i.e., the number of value types; pi is the probability that a sample whose A-value is the i-th value occurs in the sample set D; Ai is the i-th value of A; and H(D | A = Ai) is the empirical entropy of the classification of the sub-sample set Di, where the A-value of the samples in Di is the i-th value.
For example, suppose feature A takes 3 values, A1, A2, and A3. Then feature A divides the sample set D {sample 1, sample 2, ... sample i, ... sample n} into three sub-sample sets: D1 {sample 1, sample 2, ... sample d} with value A1, D2 {sample d+1, ... sample e} with value A2, and D3 {sample e+1, ... sample n} with value A3, where d and e are positive integers less than n.
In this case, the conditional entropy of feature A with respect to the classification result of the sample set D is:

H(D | A) = p1 H(D | A = A1) + p2 H(D | A = A2) + p3 H(D | A = A3)

where p1 = |D1|/|D|, p2 = |D2|/|D|, and p3 = |D3|/|D|;
H(D | A = A1) is the information entropy, i.e., the empirical entropy, of the classification of the sub-sample set D1, which can be calculated by the above formula for the empirical entropy.
After the empirical entropy H(D) of the sample classification and the conditional entropy H(D | A) of feature A with respect to the classification result of the sample set D are obtained, the information gain of feature A for classifying the sample set D can be calculated, e.g., by the following formula:

g(D, A) = H(D) − H(D | A)
That is, the information gain of feature A for classifying the sample set D is the difference between the empirical entropy H(D) and the conditional entropy H(D | A) of feature A with respect to the classification result of the sample set D.
The split information of a feature for classifying the sample set is the entropy of the feature, and can be obtained from the distribution probabilities of the feature's values in the target sample set. For example, HA(D) can be obtained by the following formula:

HA(D) = −Σi (|Di|/|D|) log2(|Di|/|D|), i = 1, 2, ..., m

where m is the number of distinct values of feature A, and Di is the subset of the sample set D in which feature A takes the i-th value.
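The formulas above can be walked through numerically on a toy sample set, assuming one feature A with two values; the class counts are invented purely so each quantity (empirical entropy, conditional entropy, gain, split information, gain ratio) is easy to follow.

```python
import math

def h(probs):
    """Entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 8 samples: 4 "target" / 4 "non-target"; A=0 -> 3 target + 1 non-target,
# A=1 -> 1 target + 3 non-target (illustrative values only).
H_D = h([4 / 8, 4 / 8])                              # empirical entropy H(D)
H_D_given_A = (4 / 8) * h([3 / 4, 1 / 4]) \
            + (4 / 8) * h([1 / 4, 3 / 4])            # conditional entropy H(D|A)
gain = H_D - H_D_given_A                             # g(D, A) = H(D) - H(D|A)
H_A = h([4 / 8, 4 / 8])                              # split information HA(D)
gain_ratio = gain / H_A                              # gR(D, A) = g(D, A) / HA(D)
```

Here the split information happens to equal 1.0 bit, so the gain ratio equals the gain; with a feature of many rare values, HA(D) would grow and the ratio would penalize that feature, which is the point of using C4.5's gain ratio instead of ID3's raw gain.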
In one embodiment, before the target application is frozen, the method further includes detecting whether the target application belongs to a whitelist of applications; if so, the target application is exempted from freezing; otherwise, the freezing of the target application is executed.
Detecting whether the target application belongs to the whitelist can be executed at any point in the process before the target application is frozen. A whitelist of applications exempt from freezing is preset in the electronic device; the applications in the whitelist can be set by the user or set by system default. The whitelist records the application information, such as the application identifiers, of the applications exempt from freezing. When the target application is in the exempt list, it is not frozen; only when it is not in the whitelist, and the prediction result is that the target application will not be used within the preset duration, is the target application frozen. By further setting the whitelist, the accuracy of freezing applications is further improved.
In one embodiment, as shown in Fig. 8, another application processing method is provided. The method includes:
Step 801: obtain the second feature data of each preset feature as samples to generate a sample set.
The collected features may be of many kinds, such as the 14-dimensional features collected in the electronic device shown in Table 1 below. In practice, the number of pieces of feature information included in one sample may be more or fewer than the number shown in Table 1, and the specific feature information taken may also differ from that shown in Table 1; no specific limitation is made here.
Table 1
The electronic device may, within a recent time period, collect the feature information of the above multiple features at a preset frequency as samples. The multi-dimensional feature data of one collection constitutes one sample, and multiple samples from multiple collections constitute the sample set. Instrumentation points may be buried under a preset path of the system; when an application is detected to be started at a certain moment, the feature data of the relevant features and the application identifier of the application are recorded, and the application identifier is associated with the recorded feature data.
After the sample set is formed, each sample in the sample set may be marked to obtain a sample label for each sample. The sample label may be the application identifier of the corresponding application, or the application category to which the corresponding application belongs. Relative to the target application to be detected, the sample labels may be divided into "target application" and "non-target application", or may be divided into "same application type as the target application" and "different application type from the target application".
Step 802: when the data volume of the sample set exceeds the preset threshold, take the sample set as the node information of the root node of the decision tree, and determine the node information of the root node as the current target sample set to be classified.
That is, the sample set of the root node is determined as the current target sample set to be classified. For example, referring to Fig. 6A, for the sample set D {sample 1, sample 2, ... sample i, ... sample n}, the root node d of the decision tree may first be generated, and the sample set D taken as the node information of the root node d.
Step 803: obtain the information gain ratio of each feature for classifying the target sample set, and determine the maximum information gain ratio among them.
In one embodiment, the conditional entropy H(D | A) of feature A with respect to the classification result of the sample set D may first be calculated by the formula H(D | A) = p1 H(D | A = A1) + p2 H(D | A = A2) + p3 H(D | A = A3); then the information gain g(D, A) = H(D) − H(D | A) of feature A for classifying the sample set D is calculated; and then the information gain ratio gR(D, A) = g(D, A) / HA(D) of the feature for classifying the target sample set is calculated.
Step 804: detect whether the maximum information gain ratio is greater than the preset threshold; if so, execute step 805; otherwise, execute step 806.
For example, it may be judged whether the maximum information gain ratio gR(D, A)max is greater than the preset threshold ε, where the threshold can be set according to actual needs.
Step 805: choose the feature corresponding to the maximum information gain ratio as the split feature, divide the target sample set according to the split feature, and generate at least one sub-sample set.
Step 806: take the current node as a leaf node, and choose the sample class with the largest number of samples as the output of the leaf node.
For example, when the feature corresponding to the maximum information gain ratio gR(D, A)max is feature Ag, feature Ag may be chosen as the split feature.
Specifically, the sample set can be divided into several sub-sample sets according to the number of distinct values of the split feature; the number of sub-sample sets equals the number of distinct values. For example, samples with the same split-feature value in the sample set may be divided into the same sub-sample set. For example, if the values of the split feature are 0, 1, and 2, the samples whose split-feature value is 0 are grouped into one class, those whose value is 1 into another class, and those whose value is 2 into a third class.
For example, when the split feature has two values, the sample set D may be divided into D1 {sample 1, sample 2, ... sample k} and D2 {sample k+1, ... sample n}. The split feature Ag may then be removed from the sub-sample sets D1 and D2, leaving the feature set A − Ag.
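The partition-and-remove operation of steps 805 to 807 can be sketched as follows; the sample layout (feature-dict, label) and the feature names are assumptions made for the illustration.

```python
from collections import defaultdict

# Sketch: partition the target sample set by the value of the split feature,
# then drop that feature from each sub-sample set (the A - Ag step).
def partition(samples, split_feat):
    """Return {feature_value: sub-sample set without split_feat}."""
    subsets = defaultdict(list)
    for features, label in samples:
        value = features[split_feat]
        remaining = {f: v for f, v in features.items() if f != split_feat}
        subsets[value].append((remaining, label))
    return dict(subsets)

subsets = partition(
    [({"wifi": 1, "hour": 9}, "target"), ({"wifi": 0, "hour": 9}, "non-target")],
    "wifi")
```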
Step 807: remove the split feature from the samples in each sub-sample set, generate the child nodes of the current node, and take each sub-sample set with the split feature removed as the node information of the corresponding child node.
One sub-sample set corresponds to one child node. For example, referring to Fig. 6A, the child nodes d1 and d2 of the root node d are generated; the sub-sample set D1 is taken as the node information of child node d1, and the sub-sample set D2 as the node information of child node d2.
In one embodiment, the split-feature value corresponding to each child node may also be labeled on the path between the child node and the current node, to facilitate the subsequent prediction of whether an application will be used; see Fig. 6C.
Step 808: judge whether the number of sample classes in the sub-sample set corresponding to the child node, after the split feature is removed, equals the preset number; if so, execute step 809; otherwise, update the target sample set to the sub-sample set with the split feature removed, and return to step 803.
When a child node does not meet the preset classification termination condition, classification of the corresponding sub-sample set continues in the above manner based on the information gain ratio. Taking child node d2 as an example: the information gain ratio gR(D, A) of each feature in the sample set D2 with respect to sample classification can be calculated, and the maximum information gain ratio gR(D, A)max chosen. When gR(D, A)max is greater than the preset threshold ε, the feature corresponding to that information gain ratio may be chosen as the split feature Ag (e.g., feature Ai+1). Based on the split feature Ag, D2 is divided into several sub-sample sets, e.g., D21, D22, and D23; then the split feature Ag is removed from D21, D22, and D23, the child nodes d21, d22, and d23 of the current node d2 are generated, and the sample sets D21, D22, and D23 with the split feature removed are taken as the node information of d21, d22, and d23, respectively.
If the child node meets the preset classification termination condition, the class of the samples in the sub-sample set is taken as the output of that leaf node. For example, when the sub-sample set after removal contains only samples of class "target application", "target application" can be taken as the output of that leaf node.
Step 809: take the child node as a leaf node, and set the output of the leaf node according to the classes of the sub-sample set with the split feature removed.
For example, when classifying the sub-sample set D1 at child node d1, if the maximum information gain ratio is less than the preset threshold, the sample class with the largest number of samples in the sub-sample set D1 may be taken as the output of the leaf node. For example, if the samples of class "target application" are the most numerous, "target application" can be taken as the output of leaf node d1.
Step 810: obtain the first feature data of each feature; the first feature data are the feature data of the corresponding features at the prediction time.
After the decision-tree model has been constructed, at the moment when it is necessary to predict whether the target application will be used, the feature data of each feature at that prediction time are collected.
Step 811: obtain the decision-tree model for predicting whether the user will use the target application within the preset duration; the starting time of the preset duration is the prediction time.
Step 812: take the first feature data as the input of the decision-tree model and output the prediction result; when the prediction result is that the target application will not be used within the preset duration, execute step 813; otherwise, execute step 815.
The electronic device can obtain the pre-built decision-tree model, take the first feature data as the input of the model, and obtain the corresponding output result.
Step 813: whether the target application belongs to a whitelist is detected; if so, step 814 is executed; otherwise, step 815 is executed.
Step 814: the target application is exempted from freezing.
Step 815: the target application is frozen.
When the prediction result is that the target application will not be used within the preset duration and the target application is not in the whitelist, the platform freeze management module 224 can initiate a freeze operation on the target application to limit the resources available to it. For example, any one of a CPU-restricted sleep mode, a CPU-frozen sleep mode, or a process deep-freeze mode can be used to freeze the target application.
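Steps 810–815 above amount to a small decision procedure. The following Python sketch is illustrative only; `model.predict`, the `WHITELIST` contents, and the returned action strings are hypothetical names, not the platform's actual freeze-management interfaces:

```python
WHITELIST = {"alarm_clock", "instant_messenger"}  # example whitelist (hypothetical names)

def handle_application(model, first_feature_data, target_app):
    """Steps 812-815: predict, check the whitelist, then freeze or exempt.

    model.predict is a hypothetical interface returning True when the target
    application is predicted to be used within the preset duration.
    """
    will_use = model.predict(first_feature_data)      # step 812
    if not will_use and target_app not in WHITELIST:  # steps 813 and 815
        return "freeze"   # e.g. CPU-restricted sleep or process deep-freeze
    return "exempt"       # step 814, or the app is predicted to be used
```

The whitelist check runs only after a "will not use" prediction, matching the flow of steps 812–815.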
In the above application processing method, a decision tree model for predicting whether the target application will be used within a preset duration is constructed, and the first feature data of each feature and the decision tree model of the target application are obtained. The first feature data are used as the input of the decision tree model to obtain the prediction result, output by the decision tree model, of whether the user will use the target application within the preset duration. When the prediction result is that the target application will not be used within the preset duration, the target application is frozen, limiting its occupation of resources. Since freezing an application itself consumes some system resources, predicting whether the target application will be used within the preset duration and freezing it only when it will not be used avoids the situation where, after the target application is frozen, the user uses the application again within the preset duration and the application must be thawed, causing a redundant freeze operation. This improves the accuracy of freezing the target application and thus also improves the effectiveness of releasing system resources.
It should be understood that although the steps in the flowcharts of Fig. 4, Fig. 5, Fig. 7, and Fig. 8 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 4, Fig. 5, Fig. 7, and Fig. 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 9, an application processing apparatus is provided. The apparatus includes a feature data obtaining module 902, a decision tree model obtaining module 904, a prediction module 906, and an application processing module 908. The feature data obtaining module 902 is configured to obtain the first feature data of each feature, where the first feature data are the feature data of the corresponding feature at the prediction time. The decision tree model obtaining module 904 is configured to obtain a decision tree model for predicting whether the user will use the target application within a preset duration, where the start time of the preset duration is the prediction time. The prediction module 906 is configured to use the first feature data as the input of the decision tree model and output a prediction result. The application processing module 908 is configured to freeze the target application when the prediction result is that the target application will not be used within the preset duration.
In one embodiment, another application processing apparatus is provided as shown in Fig. 10. The apparatus further includes:
a decision tree construction module 910, configured to obtain the second feature data of each preset feature as samples and generate a sample set, where the second feature data are the feature data of the corresponding feature when a reference application is started before the prediction time, and the reference applications include the target application; and, when the data volume of the sample set exceeds a preset threshold, to perform sample classification on the sample set according to the information gain ratio of each feature for the classification of the sample set, generating a decision tree model for predicting whether the user will use the target application within the preset duration.
In one embodiment, the decision tree construction module 910 is further configured to: use the sample set as the node information of the root node of the decision tree; determine the node information of the root node as the current target sample set to be classified; obtain the information gain ratio of each feature for the classification of the target sample set; select a feature from the target sample set as the division feature according to the information gain ratios; divide the target sample set according to the division feature to generate at least one subsample set; remove the division feature from each subsample set; generate child nodes of the current node and use the subsample sets with the division feature removed as the node information of the child nodes; and judge whether each child node satisfies the preset classification termination condition. If not, the target sample set is updated to the subsample set with the division feature removed, and the step of obtaining the information gain ratio of each feature for the classification of the target sample set is executed again. If so, the child node is used as a leaf node, and the output of the leaf node is set according to the class of the subsample set after the division feature is removed.
In one embodiment, the decision tree construction module 910 is further configured to: determine the maximum information gain ratio among the information gain ratios; when the maximum information gain ratio is greater than the preset threshold, select the feature corresponding to the maximum information gain ratio as the division feature; and, when the maximum information gain ratio is not greater than the preset threshold, use the current node as a leaf node and select the sample class with the largest number of samples as the output of the leaf node.
In one embodiment, the decision tree construction module 910 is further configured to judge whether the number of classes of the samples in the subsample set corresponding to the child node, after the division feature is removed, is a preset number; if so, it is determined that the child node satisfies the preset classification termination condition.
In one embodiment, the decision tree construction module 910 is further configured to: obtain the information gain of each feature for the classification of the target sample set; obtain the split information of each feature for the classification of the target sample set; and obtain the information gain ratio of each feature for the classification of the target sample set according to the information gain and the split information.
In one embodiment, the decision tree construction module 910 is further configured to calculate the information gain ratio of a feature for the classification of the target sample set by gR(D, A) = g(D, A)/HA(D); where D denotes the sample set, g(D, A) is the information gain of feature A for the classification of sample set D, HA(D) is the split information of feature A, and gR(D, A) is the information gain ratio of feature A for the classification of sample set D. Here g(D, A) is calculated by g(D, A) = H(D) − H(D|A), and the split information by HA(D) = −Σ(i=1..n) pi·log2(pi); where H(D) is the empirical entropy of the classification of sample set D, H(D|A) is the conditional entropy of feature A for the classification of sample set D, n is the number of kinds of values taken by the samples of feature A, pi is the probability that a sample in which feature A takes the i-th value occurs in the sample set, and n and i are positive integers.
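The information gain ratio described above can be computed directly from label and feature-value counts. The following Python sketch uses the standard C4.5-style definitions (empirical entropy H(D), conditional entropy H(D|A), split information HA(D)); it is an illustration, not code from the patent:

```python
import math
from collections import Counter

def entropy(labels):
    """Empirical entropy H(D) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def gain_ratio(samples, feature):
    """g_R(D, A) = g(D, A) / H_A(D), for samples = [(feature_dict, label), ...]."""
    labels = [label for _, label in samples]
    total = len(samples)
    value_counts = Counter(x[feature] for x, _ in samples)
    # Conditional entropy H(D|A): entropy of each value's subset, weighted by p_i.
    cond = sum((count / total) *
               entropy([label for x, label in samples if x[feature] == value])
               for value, count in value_counts.items())
    gain = entropy(labels) - cond                      # g(D, A) = H(D) - H(D|A)
    split_info = -sum((count / total) * math.log2(count / total)
                      for count in value_counts.values())  # H_A(D)
    return gain / split_info if split_info > 0 else 0.0
```

Dividing the gain by the split information penalizes features with many distinct values, which is why the gain ratio rather than the raw gain is used to pick the division feature.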
In one embodiment, as shown in Fig. 11, another application processing apparatus is provided. The apparatus further includes:
an application detection module 912, configured to detect whether the target application belongs to a whitelist.
The application processing module 908 is further configured to freeze the target application when the prediction result is that the target application will not be used within the preset duration, and to exempt the target application from freezing when the prediction result is that the target application will be used within the preset duration or when the target application belongs to the whitelist.
The above division of the modules in the application processing apparatus is merely illustrative. In other embodiments, the application processing apparatus may be divided into different modules as required to complete all or part of the functions of the application processing apparatus.
For specific limitations on the application processing apparatus, reference may be made to the limitations on the application processing method above, which are not repeated here. Each module in the above application processing apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. Each module may be embedded in or independent of a processor in the electronic device in the form of hardware, or may be stored in a memory in the electronic device in the form of software, so that the processor can call and execute the operations corresponding to each module.
The implementation of each module in the application processing apparatus provided in the embodiments of the present application may take the form of a computer program. The computer program may run on an electronic device such as a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the application processing method described in the embodiments of the present application are implemented.
In one embodiment, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the steps of the application processing method provided in the above embodiments are implemented.
In one embodiment, a computer-readable storage medium is further provided, on which a computer program is stored. When the computer program is executed by a processor, the steps of the application processing method described in the embodiments of the present application are implemented.
In one embodiment, a computer program product containing instructions is provided. When the product runs on a computer, it causes the computer to execute the application processing method described in the embodiments of the present application.
The embodiments of the present application also provide a computer device. As shown in Fig. 12, for ease of description, only the parts relevant to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The computer device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, or a wearable device. The following takes a mobile phone as an example of the computer device.
Fig. 12 is a block diagram of a partial structure of a mobile phone related to the computer device provided in the embodiments of the present application. Referring to Fig. 12, the mobile phone includes components such as a radio frequency (RF) circuit 1210, a memory 1220, an input unit 1230, a display unit 1240, a sensor 1250, an audio circuit 1260, a wireless fidelity (WiFi) module 1270, a processor 1280, and a power supply 1290. Those skilled in the art will understand that the mobile phone structure shown in Fig. 12 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The RF circuit 1210 can be used to receive and transmit signals during information transmission and reception or during a call; it can receive downlink information from a base station and deliver it to the processor 1280 for processing, and can also send uplink data to the base station. In general, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1210 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 1220 can be used to store software programs and modules. The processor 1280 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1220. The memory 1220 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created according to the use of the mobile phone (such as audio data or a contact list). In addition, the memory 1220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 1230 can be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile phone 1200. Specifically, the input unit 1230 may include a touch panel 1231 and other input devices 1232. The touch panel 1231, also referred to as a touch screen, collects touch operations by the user on or near it (such as operations by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 1231) and drives the corresponding connected device according to a preset program. In one embodiment, the touch panel 1231 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1280, and can receive and execute commands sent by the processor 1280. In addition, the touch panel 1231 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1231, the input unit 1230 may also include other input devices 1232. Specifically, the other input devices 1232 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), and the like.
The display unit 1240 can be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 1240 may include a display panel 1241. In one embodiment, the display panel 1241 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. In one embodiment, the touch panel 1231 may cover the display panel 1241. After the touch panel 1231 detects a touch operation on or near it, it transmits the operation to the processor 1280 to determine the type of the touch event, and the processor 1280 then provides a corresponding visual output on the display panel 1241 according to the type of the touch event. Although in Fig. 12 the touch panel 1231 and the display panel 1241 are two independent components that implement the input and output functions of the mobile phone, in some embodiments the touch panel 1231 and the display panel 1241 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone 1200 may also include at least one sensor 1250, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 1241 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1241 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in all directions and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching) and for vibration-recognition-related functions (such as a pedometer or tapping). In addition, the mobile phone can also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 1260, a speaker 1261, and a microphone 1262 can provide an audio interface between the user and the mobile phone. The audio circuit 1260 can transmit the electrical signal converted from received audio data to the speaker 1261, which converts it into a sound signal for output. Conversely, the microphone 1262 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1260 and converted into audio data; after the audio data is processed by the processor 1280, it can be sent to another mobile phone via the RF circuit 1210, or output to the memory 1220 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1270, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 12 shows the WiFi module 1270, it is understood that it is not an essential component of the mobile phone 1200 and can be omitted as required.
The processor 1280 is the control center of the mobile phone. It connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions and data processing of the mobile phone by running or executing the software programs and/or modules stored in the memory 1220 and calling the data stored in the memory 1220, thereby monitoring the mobile phone as a whole. In one embodiment, the processor 1280 may include one or more processing units. In one embodiment, the processor 1280 may integrate an application processor and a modem, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem mainly handles wireless communication. It is understood that the above modem may not be integrated into the processor 1280. For example, the processor 1280 may integrate an application processor and a baseband processor, and the baseband processor together with other peripheral chips may constitute the modem. The mobile phone 1200 also includes a power supply 1290 (such as a battery) that supplies power to all the components. Preferably, the power supply can be logically connected to the processor 1280 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In one embodiment, the mobile phone 1200 may also include a camera, a Bluetooth module, and the like.
In the embodiments of the present application, when the processor included in the mobile phone executes the computer program stored in the memory, the above-described application processing method is implemented.
Any reference to a memory, storage, database, or other medium used in the present application may include a non-volatile and/or volatile memory. Suitable non-volatile memories may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. Volatile memories may include a random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.

Claims (11)

1. An application processing method, characterized by comprising:
obtaining first feature data of each feature, wherein the first feature data are feature data of the corresponding feature at a prediction time;
obtaining a decision tree model for predicting whether a user will use a target application within a preset duration, wherein a start time of the preset duration is the prediction time;
using the first feature data as an input of the decision tree model, and outputting a prediction result; and
freezing the target application when the prediction result is that the target application will not be used within the preset duration.
2. The method according to claim 1, characterized in that, before the obtaining first feature data of each feature, the method further comprises:
obtaining second feature data of each preset feature as samples to generate a sample set, wherein the second feature data are feature data of the corresponding feature when a reference application is started before the prediction time, and the reference application comprises the target application; and
when a data volume of the sample set exceeds a preset threshold, performing sample classification on the sample set according to an information gain ratio of each feature for the classification of the sample set, to generate the decision tree model for predicting whether the user will use the target application within the preset duration.
3. The method according to claim 2, characterized in that the performing sample classification on the sample set according to the information gain ratio of each feature for the classification of the sample set, to generate the decision tree model for predicting whether the user will use the target application within the preset duration, comprises:
using the sample set as node information of a root node of a decision tree;
determining the node information of the root node as a current target sample set to be classified;
obtaining an information gain ratio of each feature for the classification of the target sample set;
selecting a feature from the target sample set as a division feature according to the information gain ratios;
dividing the target sample set according to the division feature to generate at least one subsample set;
removing the division feature from each subsample set;
generating child nodes of a current node, and using the subsample sets with the division feature removed as node information of the child nodes;
judging whether a child node satisfies a preset classification termination condition;
if not, updating the target sample set to the subsample set with the division feature removed, and returning to the step of obtaining the information gain ratio of each feature for the classification of the target sample set; and
if so, using the child node as a leaf node, and setting an output of the leaf node according to a class of the subsample set with the division feature removed.
4. The method according to claim 3, characterized in that the selecting a feature from the target sample set as a division feature according to the information gain ratios comprises:
determining a maximum information gain ratio among the information gain ratios;
when the maximum information gain ratio is greater than a preset threshold, selecting the feature corresponding to the maximum information gain ratio as the division feature; and
when the maximum information gain ratio is not greater than the preset threshold, using the current node as a leaf node, and selecting a sample class with the largest number of samples as an output of the leaf node.
5. The method according to claim 3, characterized in that the judging whether the child node satisfies the preset classification termination condition comprises:
judging whether the number of classes of the samples in the subsample set corresponding to the child node, after the division feature is removed, is a preset number; and
if so, determining that the child node satisfies the preset classification termination condition.
6. The method according to any one of claims 3 to 5, characterized in that the obtaining the information gain ratio of each feature for the classification of the target sample set comprises:
obtaining an information gain of the feature for the classification of the target sample set;
obtaining split information of the feature for the classification of the target sample set; and
obtaining the information gain ratio of the feature for the classification of the target sample set according to the information gain and the split information.
7. The method according to claim 6, characterized in that the obtaining the information gain ratio of the feature for the classification of the target sample set according to the information gain and the split information comprises:
calculating the information gain ratio of the feature for the classification of the target sample set by gR(D, A) = g(D, A)/HA(D);
wherein D denotes the sample set, g(D, A) is the information gain of feature A for the classification of sample set D, HA(D) is the split information of feature A, and gR(D, A) is the information gain ratio of feature A for the classification of sample set D;
the g(D, A) is calculated by g(D, A) = H(D) − H(D|A), and the split information by HA(D) = −Σ(i=1..n) pi·log2(pi);
wherein H(D) is the empirical entropy of the classification of sample set D, H(D|A) is the conditional entropy of feature A for the classification of sample set D, n is the number of kinds of values taken by the samples of feature A, pi is the probability that a sample in which feature A takes the i-th value occurs in the sample set, and n and i are positive integers.
8. The method according to claim 1, characterized in that, before the freezing the target application, the method further comprises:
detecting whether the target application belongs to a whitelist; if so, exempting the target application from freezing; otherwise, executing the freezing the target application.
9. An application processing apparatus, characterized in that the apparatus comprises:
a feature data obtaining module, configured to obtain first feature data of each feature, wherein the first feature data are feature data of the corresponding feature at a prediction time;
a decision tree model obtaining module, configured to obtain a decision tree model for predicting whether a user will use a target application within a preset duration, wherein a start time of the preset duration is the prediction time;
a prediction module, configured to use the first feature data as an input of the decision tree model and output a prediction result; and
an application processing module, configured to freeze the target application when the prediction result is that the target application will not be used within the preset duration.
10. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor executes the steps of the application processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the application processing method according to any one of claims 1 to 8 are implemented.
CN201711484440.9A 2017-12-29 2017-12-29 Application processing method and device, electronic equipment, computer readable storage medium Pending CN109992367A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711484440.9A CN109992367A (en) 2017-12-29 2017-12-29 Application processing method and device, electronic equipment, computer readable storage medium
PCT/CN2018/117694 WO2019128598A1 (en) 2017-12-29 2018-11-27 Application processing method, electronic device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN109992367A true CN109992367A (en) 2019-07-09


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708427A (en) * 2020-05-29 2020-09-25 广州三星通信技术研究有限公司 Method for managing terminal and terminal
CN112130991A (en) * 2020-08-28 2020-12-25 北京思特奇信息技术股份有限公司 Application program control method and system based on machine learning
CN112256354A (en) * 2020-11-25 2021-01-22 Oppo(重庆)智能科技有限公司 Application starting method and device, storage medium and electronic equipment
CN112330069A (en) * 2020-11-27 2021-02-05 上海眼控科技股份有限公司 Early warning removing method and device, electronic equipment and storage medium
CN112390388A (en) * 2020-11-25 2021-02-23 创新奇智(青岛)科技有限公司 Model training method, aeration value estimation method and device and electronic equipment
WO2021047665A1 (en) * 2019-09-12 2021-03-18 华为技术有限公司 Method and device for predicting connection state between terminals, and analysis device
CN113627932A (en) * 2021-08-11 2021-11-09 中国银行股份有限公司 Method and device for controlling waiting time of terminal application account in network-free state
CN114416600A (en) * 2022-03-29 2022-04-29 腾讯科技(深圳)有限公司 Application detection method and device, computer equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20020087499A1 (en) * 2001-01-03 2002-07-04 Stockfisch Thomas P. Methods and systems of classifying multiple properties simultaneously using a decision tree
CN105389193A (en) * 2015-12-25 2016-03-09 北京奇虎科技有限公司 Accelerating processing method, device and system for application, and server
CN106250532A (en) * 2016-08-04 2016-12-21 广州优视网络科技有限公司 Application recommendation method, device and server
CN106793031A (en) * 2016-12-06 2017-05-31 常州大学 Smartphone energy consumption optimization method based on a set-based competitive optimization algorithm
CN107133094A (en) * 2017-06-05 2017-09-05 努比亚技术有限公司 Application management method, mobile terminal and computer-readable recording medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868222A (en) * 2015-09-17 2016-08-17 乐视网信息技术(北京)股份有限公司 Task scheduling method and device
CN106294743A (en) * 2016-08-10 2017-01-04 北京奇虎科技有限公司 The recommendation method and device of application function

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
秦文 (Qin Wen): "Analysis of Decision Tree Algorithms in Classification Technology" (分类技术中的决策树算法分析), 《深圳信息职业技术学院学报》 (Journal of Shenzhen Institute of Information Technology) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021047665A1 (en) * 2019-09-12 2021-03-18 华为技术有限公司 Method and device for predicting connection state between terminals, and analysis device
CN111708427A (en) * 2020-05-29 2020-09-25 广州三星通信技术研究有限公司 Terminal management method and terminal
CN112130991A (en) * 2020-08-28 2020-12-25 北京思特奇信息技术股份有限公司 Application program control method and system based on machine learning
CN112256354A (en) * 2020-11-25 2021-01-22 Oppo(重庆)智能科技有限公司 Application starting method and device, storage medium and electronic equipment
CN112390388A (en) * 2020-11-25 2021-02-23 创新奇智(青岛)科技有限公司 Model training method, aeration value estimation method and device and electronic equipment
CN112390388B (en) * 2020-11-25 2022-06-14 创新奇智(青岛)科技有限公司 Model training method, aeration value estimation method and device and electronic equipment
CN112330069A (en) * 2020-11-27 2021-02-05 上海眼控科技股份有限公司 Early warning removing method and device, electronic equipment and storage medium
CN113627932A (en) * 2021-08-11 2021-11-09 中国银行股份有限公司 Method and device for controlling waiting time of terminal application account in network-free state
CN113627932B (en) * 2021-08-11 2024-02-27 中国银行股份有限公司 Method and device for controlling waiting time of terminal application account in network-free state
CN114416600A (en) * 2022-03-29 2022-04-29 腾讯科技(深圳)有限公司 Application detection method and device, computer equipment and storage medium
CN114416600B (en) * 2022-03-29 2022-06-28 腾讯科技(深圳)有限公司 Application detection method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2019128598A1 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
CN109992367A (en) Application processing method and device, electronic equipment, computer readable storage medium
CN107220076B (en) Memory reclamation method and device
CN107809542B (en) Application program control method and device, storage medium and electronic equipment
CN110008008A (en) Application processing method and device, electronic equipment, computer readable storage medium
CN104679969A (en) Method and device for avoiding user churn
CN108419145A (en) Video summary generation method and device, and computer readable storage medium
CN107272872A (en) Power-saving control method and related product
CN109992398A (en) Resource management method, device, mobile terminal and computer readable storage medium
CN109992400A (en) Resource allocation method, device, mobile terminal and computer readable storage medium
CN110045809A (en) Information processing method, device, computer equipment and computer readable storage medium
CN110018902A (en) Memory processing method and device, electronic equipment, computer readable storage medium
CN110018900A (en) Memory processing method and device, electronic equipment, computer readable storage medium
CN108616653A (en) Information processing method, device, mobile terminal and computer readable storage medium
CN109992364A (en) Application freezing method, device, computer equipment and computer readable storage medium
CN110377527A (en) Memory management method and related device
CN105447583A (en) User churn prediction method and device
CN104978353B (en) Desktop application generation control method, apparatus and system
CN109992402A (en) Memory processing method and device, electronic equipment, computer readable storage medium
CN110032431A (en) Application processing method and device, electronic equipment, computer readable storage medium
CN109992965A (en) Process handling method and device, electronic equipment, computer readable storage medium
CN110032439A (en) Resource management method, device, mobile terminal and computer readable storage medium
CN110018905A (en) Information processing method, device, computer equipment and computer readable storage medium
CN109992380A (en) Application processing method and device, electronic equipment, computer readable storage medium
CN110032397A (en) Application processing method and device, electronic equipment, computer readable storage medium
CN110032430A (en) Application processing method and device, electronic equipment, computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190709