CN109684090A - Resource allocation method and device - Google Patents

Resource allocation method and device

Info

Publication number
CN109684090A
Authority
CN
China
Prior art keywords
application
foreground
resource
priority
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811552871.9A
Other languages
Chinese (zh)
Inventor
苏威
刘春海
唐小凯
孙海
蒋意
董志刚
刘桦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2018-12-19
Filing date: 2018-12-19
Publication date: 2019-04-26
Application filed by Samsung Electronics China R&D Center and Samsung Electronics Co Ltd
Priority to CN201811552871.9A
Publication of CN109684090A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals, the resource being the memory

Abstract

This application provides a resource allocation method and device. The method comprises: periodically detecting the resource usage of the foreground application and the resource usage of background applications; and providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach a preset usage state. The method allows the foreground application to obtain sufficient resources and improves user experience.

Description

Resource allocation method and device
Technical field
The present invention relates to the field of communication technology, and in particular to a resource allocation method and device.
Background technique
Currently, with the popularity of mobile phones, phone systems keep growing and ever more application software is installed on phones. After a user launches multiple application programs, each running application, whether in the foreground (visible to the user or interacting with the user) or in the background (invisible to the user), occupies resources on the phone, such as CPU resources, memory resources, I/O resources, and network resources.
In current techniques, each launched application program corresponds to its own process, and the system determines the CPU resources for each process from the computed system load. However, the system fails to take into account the current foreground scenario of the phone, the background scenario, and the CPU load corresponding to each scenario. As a result, user response times become long, applications stutter, and user experience is poor.
Summary of the invention
In view of this, the application provides a resource allocation method that allows the foreground application to obtain sufficient resources and improves user experience.
To solve the above technical problem, the technical solution of the application is achieved as follows:
A resource allocation method, the method comprising:
periodically detecting the resource usage of the foreground application and the resource usage of background applications;
providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach a preset usage state.
A resource allocation device, the device comprising a detection unit and a processing unit;
the detection unit is configured to periodically detect the resource usage of the foreground application and the resource usage of background applications;
the processing unit is configured to provide, according to the resource usage of the foreground application and the background applications detected by the detection unit, the resources needed for the operation of the foreground application to reach the preset usage state.
As can be seen from the above technical solution, the application analyzes in real time the detected resource usage of foreground and background applications and guarantees that the foreground application is given enough resources for its operation to reach the preset state. This scheme allows the foreground application to obtain sufficient resources and improves user experience.
Detailed description of the invention
Fig. 1 is a schematic flowchart of resource allocation in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a device to which the above technique is applied in an embodiment of the present application.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
An embodiment of the present application provides a resource allocation method that detects the resource usage of foreground and background applications, analyzes it in real time, and guarantees that the foreground application is given enough resources for its operation to reach a preset state. This scheme allows the foreground application to obtain sufficient resources and improves user experience.
The resource allocation process in the embodiment of the present application is described in detail below with reference to the accompanying drawings. The device that performs resource allocation in the embodiment of the present application may be a mobile phone, an iPad, or similar equipment, but is not limited to these; for convenience it is simply referred to as the device below.
Referring to Fig. 1, a schematic flowchart of resource allocation in an embodiment of the present application, the specific steps are as follows:
Step 101: the device periodically detects the resource usage of the foreground application and the resource usage of background applications.
The period at which resource usage is checked in this embodiment can be set longer or shorter according to actual needs; this application does not limit it.
In this embodiment, the foreground application is the application running in the foreground, and a background application is an application running in the background.
If there is no foreground application or no background application, the resource-usage parameters corresponding to that class of application are 0.
The resources in this embodiment can be one or any combination of the following:
CPU resources, memory resources, I/O resources, network bandwidth resources, and the like; when the resource is any combination of the above, each resource is detected separately. A sketch of this periodic detection follows.
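As an illustration of the detection in step 101 and its hand-off to step 102, the following is a minimal Python sketch, assuming a hypothetical sample_usage() helper and a placeholder allocation step; the patent does not prescribe any particular API.

    import time

    RESOURCES = ("cpu", "memory", "io", "network")

    def sample_usage(app_id):
        # Placeholder: a real device would read /proc, cgroup statistics or
        # per-app traffic counters here.
        return {r: 0.0 for r in RESOURCES}

    def detect_once(foreground, background_apps):
        # If there is no foreground (or background) application, its usage stays 0.
        fg = sample_usage(foreground) if foreground else {r: 0.0 for r in RESOURCES}
        bg = {app: sample_usage(app) for app in background_apps}
        return fg, bg

    def monitor(foreground, background_apps, period_s=1.0):
        while True:
            fg_usage, bg_usage = detect_once(foreground, background_apps)
            # step 102 would act on fg_usage and bg_usage here
            time.sleep(period_s)  # the detection period is configurable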
Step 102: the device provides, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach a preset usage state.
In this embodiment, the resources that bring the operation of the foreground application to the preset usage state may, for example, be resources that make foreground animation smoother so that no stutter occurs, or that reduce the delay in sending and receiving messages; the resources required by the preset usage state are set according to the specific application. In other words, the goal is to bring the operation of the foreground application to an ideal state.
The resource-guarantee process for each type of resource is described in detail below.
In this embodiment, priorities can be assigned to applications when they are installed. Specifically:
a background priority is assigned to each application, and the same foreground priority is assigned to all applications; the background priorities assigned to different applications may be the same or different.
That is, each application is assigned one foreground priority and one background priority. When an application is in the background, its background priority applies; when it is in the foreground, the foreground priority applies. The foreground priority is higher than all background priorities.
For CPU resources:
Providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state specifically includes:
allocating CPU resources to each running application according to its priority, where a higher priority is allocated more CPU resources.
When the foreground application first starts, the application running in the foreground is allocated the larger share. In a concrete implementation, a CPU-resource percentage can be predefined for each application; when that application is in use in the foreground it is allocated CPU resources in that proportion, and the remaining CPU resources are allocated in turn according to the priorities of the background applications.
Alternatively, the division can be based on the total number of currently running foreground and background applications. For example, if the total number is 5, the CPU resources can be divided in an arithmetic or geometric ratio: 50% of the CPU resources for the foreground application, 25% for the highest-priority background application, 12.5% for the next-highest background application, 6.25% for the background application after that, and the remaining CPU resources for the lowest-priority background application. A sketch of this geometric split follows.
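A minimal Python sketch of that geometric split is shown below; the function name cpu_shares and the assumption that the foreground always takes the first and largest share are illustrative choices, not requirements of the patent.

    def cpu_shares(background_by_priority):
        # background_by_priority: background app ids sorted from highest to lowest priority.
        # Returns a dict app_id -> fraction of total CPU; the foreground gets 50%.
        shares = {"foreground": 0.5}
        remaining = 0.5
        for i, app in enumerate(background_by_priority):
            if i == len(background_by_priority) - 1:
                shares[app] = remaining      # the lowest priority gets whatever is left
            else:
                remaining /= 2               # each step halves the share: 25%, 12.5%, ...
                shares[app] = remaining
        return shares

    # Five running applications (1 foreground + 4 background) reproduce the example above:
    # {'foreground': 0.5, 'bg1': 0.25, 'bg2': 0.125, 'bg3': 0.0625, 'bg4': 0.0625}
    print(cpu_shares(["bg1", "bg2", "bg3", "bg4"]))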
While the foreground application is running, the device periodically detects the CPU load of the foreground application and the CPU load of the background applications.
When the CPU load of any background application is detected to exceed the CPU resources allocated to that application, a temporary priority is configured for that application and CPU resources are allocated to it according to the temporary priority; the temporary priority is lower than the background priority configured for that application.
This situation indicates that a background application is grabbing CPU resources, so its background priority is forcibly lowered. How much it is lowered can be determined by how far its CPU load exceeds the CPU resources allocated to it: the larger the excess, the lower the temporary priority can be set; the smaller the excess, the higher the temporary priority can be set. Allocating fewer CPU resources to that application frees up part of the CPU, and if the highest-priority foreground application needs it, it can simply preempt those CPU resources. A sketch of this downgrade follows.
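The following minimal Python sketch illustrates such a downgrade, assuming a numeric priority scale where a larger value means higher priority and an arbitrary overuse-to-penalty step; the patent only states that a larger excess should yield a lower temporary priority.

    def temporary_priority(background_priority, allocated_share, measured_load, step=0.05):
        # Lower the configured background priority by one level per `step` of CPU overuse.
        excess = measured_load - allocated_share
        if excess <= 0:
            return background_priority          # within its quota: keep the configured priority
        penalty = 1 + int(excess / step)        # the larger the excess, the larger the penalty
        return max(0, background_priority - penalty)

    # A background app with priority 4, allocated 12.5% CPU but measured at 30%:
    print(temporary_priority(4, 0.125, 0.30))   # -> 0, i.e. heavily demoted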
For example, when the user has opened many CPU-heavy background applications and then starts a game in the foreground, the phone will usually stutter if there is no CPU allocation policy. With the CPU policy of this patent, the foreground process is placed in a group containing more CPU resources and the background processes are placed in a group containing a small amount of CPU resources, so more CPU goes to the foreground process the user is actually using. This guarantees the fluency of games on the phone and improves user experience.
For memory resources:
Providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state specifically includes:
when any application needs to run in the foreground, allocating memory to it according to the memory it used the last time it ran in the foreground.
Since memory usage is relatively stable, enough memory can be allocated for the application's foreground use based on the memory it used during its previous launch.
If the free memory is insufficient, that is, the currently free memory is not enough for the allocation, the applications currently running in the background are closed in order of priority from lowest to highest until enough memory is released for the application running in the foreground.
This implementation frees enough memory for the foreground application by closing background applications with low priority, as sketched below.
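A minimal Python sketch of that reclamation order is shown below, assuming each background application is described by a (name, background priority, resident memory in MB) tuple; actually closing the process is only simulated.

    def reclaim_for_foreground(needed_mb, free_mb, background_apps):
        # Close background apps from lowest to highest priority until enough memory is free.
        # Returns (new_free_mb, names_of_closed_apps).
        closed = []
        for name, priority, resident_mb in sorted(background_apps, key=lambda a: a[1]):
            if free_mb >= needed_mb:
                break
            free_mb += resident_mb          # simulate closing the app and reclaiming its memory
            closed.append(name)
        return free_mb, closed

    # The foreground app used 800 MB last run, but only 300 MB is currently free:
    apps = [("music", 3, 150), ("downloader", 1, 400), ("chat", 2, 250)]
    print(reclaim_for_foreground(800, 300, apps))   # closes 'downloader' then 'chat'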
For example, when the user has opened multiple background applications, they have already consumed a large amount of memory. If a game application (an application requiring a lot of memory) is then opened, the phone will stutter because of memory reclamation or will be unable to run it because of insufficient memory. With the memory allocation policy of this patent, usage is checked periodically and enough memory is reserved for the foreground application, avoiding stutter and out-of-memory failures.
For I/O resources:
Providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state specifically includes:
allocating I/O resources to each application according to the priority assigned to it, where applications with higher priority are served first.
In this implementation, when any application needs to read or write, the device determines whether the read/write queue contains a pending request from an application with higher priority than this one. If not, the application can read or write directly; otherwise it waits in the queue until its priority is the highest among the waiting applications, and then performs the read or write.
Since the foreground application has the highest (foreground) priority, its reads and writes are always completed at the fastest possible speed. A sketch of such a priority-ordered queue follows.
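The following minimal Python sketch illustrates such a priority-ordered read/write queue using the standard heapq module; the submit/can_run_now/pop interface is an assumption for illustration, not the patent's interface.

    import heapq
    import itertools

    class IOQueue:
        def __init__(self):
            self._heap = []
            self._counter = itertools.count()   # keeps FIFO order among equal priorities

        def submit(self, app, priority, request):
            # Higher priority should pop first, so the priority is stored negated.
            heapq.heappush(self._heap, (-priority, next(self._counter), app, request))

        def can_run_now(self, priority):
            # An app may read or write directly if no higher-priority request is waiting.
            return not self._heap or -self._heap[0][0] <= priority

        def pop(self):
            _, _, app, request = heapq.heappop(self._heap)
            return app, request

    q = IOQueue()
    q.submit("background_copy", priority=2, request="write chunk")
    print(q.can_run_now(priority=10))   # True: the foreground (highest priority) never waits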
For example, when the user is copying a large batch of files (or pictures) in the background while downloading or caching a film with video software in the foreground, both sides perform heavy disk operations. If I/O is distributed unreasonably between the foreground and background processes, the foreground stutters. As the phone is used, file-system fragmentation worsens and I/O reads and writes become slower and slower, so the stutter gets worse. With this patent, the foreground process is guaranteed more I/O resources, which solves the foreground stutter caused by I/O contention.
For network bandwidth resources:
Providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state specifically includes the following scenarios.
First scenario:
guaranteeing fast startup of the foreground application, implemented as follows:
when any application is starting in the foreground, the network bandwidth occupied by background applications is restricted to a preset threshold;
when the foreground application has finished starting, the restriction on the network speed of the background applications is cancelled.
In this implementation, the network bandwidth of background applications is restricted relatively heavily; after the foreground application has started quickly, the restriction on background bandwidth is cancelled. A sketch of this start/stop throttling follows.
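A minimal Python sketch of this first scenario is shown below; the set_bandwidth_limit/clear_bandwidth_limit hooks and the 256 kbps threshold are assumptions standing in for whatever traffic-control mechanism the device actually uses.

    BACKGROUND_LIMIT_KBPS = 256     # the "preset threshold" for background traffic

    def set_bandwidth_limit(app, kbps):
        print(f"limit {app} to {kbps} kbps")    # placeholder for real traffic-control rules

    def clear_bandwidth_limit(app):
        print(f"remove limit on {app}")

    def on_foreground_launch_started(background_apps):
        for app in background_apps:
            set_bandwidth_limit(app, BACKGROUND_LIMIT_KBPS)

    def on_foreground_launch_finished(background_apps):
        for app in background_apps:
            clear_bandwidth_limit(app)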
Second scenario:
guaranteeing that the foreground application obtains enough network bandwidth, implemented as follows:
when both a foreground application and background applications are running, the network speed of the background applications is limited by a first preset difference;
if the network speed of the foreground increases and the total network speed does not drop, the network speed of the background applications is limited by a second preset difference, and so on, until the network speed of the foreground no longer increases.
This is a dynamic strategy of updating and adjusting the limits: with the background network speed restricted, if the foreground speed increases and the total speed does not decrease, that is, the foreground shows a gain, the background limit is tightened again. In this way the network speed can be regulated dynamically to the greatest extent: by dynamically tightening the limit on the background, more network resources are given to the foreground, so the foreground application occupies higher bandwidth. If the foreground shows no gain, or the total network speed drops sharply, the restriction on the background network is relaxed. A sketch of this adjustment step follows.
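The adjustment step of the second scenario can be sketched as below; the concrete step sizes standing in for the first and second preset differences, and the exact relax rule when the foreground shows no gain, are illustrative assumptions.

    FIRST_STEP_KBPS = 500       # stands in for the first preset difference
    SECOND_STEP_KBPS = 200      # stands in for the second preset difference

    def adjust_background_limit(limit_kbps, fg_prev, fg_now, total_prev, total_now, first_round):
        step = FIRST_STEP_KBPS if first_round else SECOND_STEP_KBPS
        if fg_now > fg_prev and total_now >= total_prev:
            return max(0, limit_kbps - step)    # foreground gained: tighten the background limit
        return limit_kbps + step                # no gain or the total dropped: relax the limit

    # Foreground went from 2000 to 2600 kbps while the total held steady: tighten again.
    print(adjust_background_limit(3000, 2000, 2600, 8000, 8000, first_round=False))   # 2800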
Third scenario:
when neither the foreground nor the background needs much network bandwidth, network bandwidth is no longer limited, implemented as follows:
when the detected network speed consumed by the background is below a preset background network-speed threshold and the network speed consumed by the foreground is below a preset foreground network-speed threshold, the restrictions on foreground and background network speed are cancelled.
If the network speeds of both foreground and background are below the reserved thresholds, all network restrictions are released, because with such low traffic there is no need for any limitation. The release of network-speed restrictions should also be applied when the smart device's screen is off or WiFi is turned off.
By reasonably and dynamically limiting the network resources of background applications, the foreground can be assigned higher network speed and the user's foreground program obtains higher bandwidth, giving the user a better online experience.
Network resource allocation example: with the spread of the network, much software uses network resources. When the user is downloading a film with Thunder (Xunlei) in the background while watching a video with live-streaming software in the foreground, unreasonable distribution of network resources makes the foreground stream stutter and slow down. With this patent, scenario detection and network-speed identification provide a method of limiting background download speed, ensuring that the foreground live stream gets enough network speed and solving the problem of video playback stutter.
Based on the same inventive concept, an embodiment of the present application also provides a resource allocation device. Referring to Fig. 2, a schematic structural diagram of a device to which the above technique is applied in an embodiment of the present application, the device includes a detection unit 201 and a processing unit 202;
the detection unit 201 is configured to periodically detect the resource usage of the foreground application and the resource usage of background applications;
the processing unit 202 is configured to provide, according to the resource usage of the foreground application and the background applications detected by the detection unit 201, the resources needed for the operation of the foreground application to reach the preset usage state.
Preferably, the device further includes a configuration unit 203;
the configuration unit 203 is configured to assign a background priority to each application and the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
the processing unit 202 is specifically configured, when the resource is CPU resources, to allocate CPU resources to each running application according to its priority when providing the resources needed for the operation of the foreground application to reach the preset usage state, where a higher priority is allocated more CPU resources.
Preferably, the processing unit 202 is further configured, when the CPU load of any background application is detected to exceed the CPU resources allocated to that application, to configure a temporary priority for that application and allocate CPU resources to it according to the temporary priority, the temporary priority being lower than the background priority configured for that application.
Preferably,
the processing unit 202 is specifically configured, when the resource is memory resources, to allocate memory to any application that needs to run in the foreground according to the memory it used the last time it ran in the foreground.
Preferably,
the processing unit 202 is further configured, when the currently free memory is not enough for the allocation, to close the applications currently running in the background in order of priority from lowest to highest until enough memory is released for the application running in the foreground.
Preferably, the device further includes a configuration unit 203;
the configuration unit 203 is configured to assign a background priority to each application and the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
the processing unit 202 is specifically configured, when the resource is I/O resources, to allocate I/O resources to each application according to the priority assigned to it, where applications with higher priority are served first.
Preferably,
the processing unit 202 is further configured, when the resource is network bandwidth resources, to restrict the network bandwidth occupied by background applications to a preset threshold while any application is starting in the foreground, and to cancel the restriction on the network speed of the background applications once the foreground application has finished starting.
Preferably,
the processing unit 202 is further configured, when both a foreground application and background applications are running, to limit the network speed of the background applications by a first preset difference; and, if the network speed of the foreground increases and the total network speed does not drop, to limit the network speed of the background applications by a second preset difference, until the network speed of the foreground no longer increases.
Preferably,
the processing unit 202 is further configured, when the detected network speed consumed by the background is below a preset background network-speed threshold and the network speed consumed by the foreground is below a preset foreground network-speed threshold, to cancel the restrictions on foreground and background network speed.
The units of the above embodiment can be integrated into one unit or deployed separately; they can be merged into a single unit or further split into multiple sub-units.
In conclusion, the application analyzes in real time the detected resource usage of foreground and background applications and guarantees that the foreground application is given enough resources for its operation to reach the preset state, where the resource is any one or any combination of CPU resources, I/O resources, memory resources, and network bandwidth resources. In this way the various resources on the device side can be detected and allocated comprehensively, ensuring that resources are available for foreground use. This scheme allows the foreground application to obtain sufficient resources and improves user experience.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (18)

1. A resource allocation method, characterized in that the method comprises:
periodically detecting the resource usage of the foreground application and the resource usage of background applications;
providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach a preset usage state.
2. The method according to claim 1, characterized in that the method further comprises: assigning a background priority to each application; assigning the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
when the resource is CPU resources, said providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state comprises:
allocating CPU resources to each running application according to its priority, wherein a higher priority is allocated more CPU resources.
3. The method according to claim 2, characterized in that the method further comprises:
when the CPU load of any background application is detected to exceed the CPU resources allocated to that application, configuring a temporary priority for that application and allocating CPU resources to it according to the temporary priority; wherein the temporary priority is lower than the background priority configured for that application.
4. The method according to claim 1, characterized in that, when the resource is memory resources, said providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state comprises:
when any application needs to run in the foreground, allocating memory to it according to the memory it used the last time it ran in the foreground.
5. The method according to claim 4, characterized in that the method further comprises:
when the currently free memory is not enough for the allocation, closing the applications currently running in the background in order of priority from lowest to highest until enough memory is released for the application running in the foreground.
6. The method according to claim 1, characterized in that the method further comprises: assigning a background priority to each application; assigning the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
when the resource is I/O resources, said providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state comprises:
allocating I/O resources to each application according to the priority assigned to it; wherein applications with higher priority are served first.
7. The method according to any one of claims 1-6, characterized in that, when the resource is network bandwidth resources, said providing, according to the resource usage of the foreground application and the background applications, the resources needed for the operation of the foreground application to reach the preset usage state comprises:
when any application is starting in the foreground, restricting the network bandwidth occupied by background applications to a preset threshold;
when the foreground application has finished starting, cancelling the restriction on the network speed of the background applications.
8. The method according to claim 7, characterized in that the method further comprises:
when both a foreground application and background applications are running, limiting the network speed of the background applications by a first preset difference;
if the network speed of the foreground increases and the total network speed does not drop, limiting the network speed of the background applications by a second preset difference, until the network speed of the foreground no longer increases.
9. The method according to claim 8, characterized in that the method further comprises:
when the detected network speed consumed by the background is below a preset background network-speed threshold and the network speed consumed by the foreground is below a preset foreground network-speed threshold, cancelling the restrictions on foreground and background network speed.
10. A resource allocation device, characterized in that the device comprises a detection unit and a processing unit;
the detection unit is configured to periodically detect the resource usage of the foreground application and the resource usage of background applications;
the processing unit is configured to provide, according to the resource usage of the foreground application and the background applications detected by the detection unit, the resources needed for the operation of the foreground application to reach a preset usage state.
11. The device according to claim 10, characterized in that the device further comprises a configuration unit;
the configuration unit is configured to assign a background priority to each application and the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
the processing unit is specifically configured, when the resource is CPU resources, to allocate CPU resources to each running application according to its priority when providing the resources needed for the operation of the foreground application to reach the preset usage state, wherein a higher priority is allocated more CPU resources.
12. The device according to claim 11, characterized in that
the processing unit is further configured, when the CPU load of any background application is detected to exceed the CPU resources allocated to that application, to configure a temporary priority for that application and allocate CPU resources to it according to the temporary priority; wherein the temporary priority is lower than the background priority configured for that application.
13. The device according to claim 10, characterized in that
the processing unit is specifically configured, when the resource is memory resources, to allocate memory to any application that needs to run in the foreground according to the memory it used the last time it ran in the foreground.
14. The device according to claim 13, characterized in that
the processing unit is further configured, when the currently free memory is not enough for the allocation, to close the applications currently running in the background in order of priority from lowest to highest until enough memory is released for the application running in the foreground.
15. The device according to claim 10, characterized in that the device further comprises a configuration unit;
the configuration unit is configured to assign a background priority to each application and the same foreground priority to all applications, the foreground priority being higher than all background priorities; when an application is in the background, its background priority applies; when an application is in the foreground, the foreground priority applies;
the processing unit is specifically configured, when the resource is I/O resources, to allocate I/O resources to each application according to the priority assigned to it; wherein applications with higher priority are served first.
16. The device according to any one of claims 10-15, characterized in that
the processing unit is further configured, when the resource is network bandwidth resources, to restrict the network bandwidth occupied by background applications to a preset threshold while any application is starting in the foreground, and to cancel the restriction on the network speed of the background applications once the foreground application has finished starting.
17. The device according to claim 16, characterized in that
the processing unit is further configured, when both a foreground application and background applications are running, to limit the network speed of the background applications by a first preset difference; and, if the network speed of the foreground increases and the total network speed does not drop, to limit the network speed of the background applications by a second preset difference, until the network speed of the foreground no longer increases.
18. The device according to claim 17, characterized in that
the processing unit is further configured, when the detected network speed consumed by the background is below a preset background network-speed threshold and the network speed consumed by the foreground is below a preset foreground network-speed threshold, to cancel the restrictions on foreground and background network speed.
CN201811552871.9A 2018-12-19 2018-12-19 Resource allocation method and device Pending CN109684090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811552871.9A CN109684090A (en) 2018-12-19 2018-12-19 Resource allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811552871.9A CN109684090A (en) 2018-12-19 2018-12-19 Resource allocation method and device

Publications (1)

Publication Number Publication Date
CN109684090A (en) 2019-04-26

Family

ID=66186327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811552871.9A Pending CN109684090A (en) Resource allocation method and device

Country Status (1)

Country Link
CN (1) CN109684090A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120324481A1 (en) * 2011-06-16 2012-12-20 Samsung Electronics Co. Ltd. Adaptive termination and pre-launching policy for improving application startup time
CN103577267A (en) * 2012-08-03 2014-02-12 上海博泰悦臻电子设备制造有限公司 Resource distribution method and resource distribution device of vehicle-mounted device
CN102791032A (en) * 2012-08-14 2012-11-21 华为终端有限公司 Network bandwidth distribution method and terminal
CN106533988A (en) * 2016-10-26 2017-03-22 维沃移动通信有限公司 Control method for network speed of application and mobile terminal
CN107145389A (en) * 2017-03-09 2017-09-08 深圳市先河系统技术有限公司 A kind of system process monitoring method and computing device
CN107205084A (en) * 2017-05-11 2017-09-26 北京小米移动软件有限公司 Network speed processing method, device and the terminal of application program
CN107463437A (en) * 2017-07-31 2017-12-12 广东欧珀移动通信有限公司 Using management-control method, device, storage medium and electronic equipment
CN107608785A (en) * 2017-08-15 2018-01-19 深圳天珑无线科技有限公司 Process management method, mobile terminal and readable storage medium
CN108834157A (en) * 2018-04-27 2018-11-16 努比亚技术有限公司 Internet wide band distribution, mobile terminal and computer readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110554922A (en) * 2019-09-05 2019-12-10 北京安云世纪科技有限公司 System resource allocation method and device
CN111200753A (en) * 2020-02-20 2020-05-26 四川长虹电器股份有限公司 Method for improving Android television network video playing fluency
CN112416548A (en) * 2020-11-16 2021-02-26 珠海格力电器股份有限公司 Kernel scheduling method, equipment, terminal and storage medium
CN115729684A (en) * 2021-08-25 2023-03-03 荣耀终端有限公司 Input/output request processing method and electronic equipment
CN115729684B (en) * 2021-08-25 2023-09-19 荣耀终端有限公司 Input/output request processing method and electronic equipment
CN117130773A (en) * 2023-04-28 2023-11-28 荣耀终端有限公司 Resource allocation method, device and equipment
CN116954931A (en) * 2023-09-20 2023-10-27 北京小米移动软件有限公司 Bandwidth allocation method and device, storage medium and electronic equipment
CN116954931B (en) * 2023-09-20 2023-12-26 北京小米移动软件有限公司 Bandwidth allocation method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109684090A (en) Resource allocation method and device
CN111240837B (en) Resource allocation method, device, terminal and storage medium
CN108495195B (en) Network live broadcast ranking list generation method, device, equipment and storage medium
US20170192819A1 (en) Method and electronic device for resource allocation
US9104476B2 (en) Opportunistic multitasking of VOIP applications
US11301300B2 (en) Method for resource allocation and terminal device
CN109542614B (en) Resource allocation method, device, terminal and storage medium
CN110333947B (en) Method, device, equipment and medium for loading subcontracting resources of game application
CN112988400B (en) Video memory optimization method and device, electronic equipment and readable storage medium
CN105512251B (en) A kind of page cache method and device
CN108848039A (en) The method and storage medium that server, message are distributed
CN110795056B (en) Method, device, terminal and storage medium for adjusting display parameters
CN109271253A (en) A kind of resource allocation method, apparatus and system
JP7100154B2 (en) Processor core scheduling method, equipment, terminals and storage media
CN111708642B (en) Processor performance optimization method and device in VR system and VR equipment
WO2020220971A1 (en) File loading method and apparatus, electronic device, and storage medium
CN110784422A (en) Cloud mobile phone network data separation method, device, medium and terminal equipment
CN115136564A (en) Preloading of applications and content within applications in user equipment
CN110955499A (en) Processor core configuration method, device, terminal and storage medium
CN111314249B (en) Method and server for avoiding data packet loss of 5G data forwarding plane
CN102143206A (en) Storage pool regulation method, device and system for cluster storage system
CN111338803A (en) Thread processing method and device
CN114880042A (en) Application starting method and device, electronic equipment and computer readable storage medium
CN116069518A (en) Dynamic allocation processing task method and device, electronic equipment and readable storage medium
CN114040378A (en) Application arranging method and device, computer equipment and storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication; application publication date: 2019-04-26