CN112416548A - Kernel scheduling method, equipment, terminal and storage medium - Google Patents

Kernel scheduling method, equipment, terminal and storage medium Download PDF

Info

Publication number
CN112416548A
Authority
CN
China
Prior art keywords
background application
application
total resources
kernel
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011281463.1A
Other languages
Chinese (zh)
Other versions
CN112416548B (en)
Inventor
孙红辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202011281463.1A priority Critical patent/CN112416548B/en
Publication of CN112416548A publication Critical patent/CN112416548A/en
Application granted granted Critical
Publication of CN112416548B publication Critical patent/CN112416548B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Abstract

The invention discloses a kernel scheduling method, a device, a terminal and a storage medium. The method comprises: acquiring the total resources required by a foreground application and the total resources currently occupied by background applications; and, if the sum of the total resources required by the foreground application and the total resources currently occupied by the background applications is greater than the total resources of the kernel, reducing the total resources currently occupied by the background applications until they meet a preset condition, thereby completing the scheduling of the kernel. In this way, the system resources obtained by the background applications are dynamically adjusted according to the foreground application's demand for system resources, and the background applications are prevented from seizing foreground resources. By adopting the technical scheme of the invention, the response speed of the foreground application can be improved.

Description

Kernel scheduling method, equipment, terminal and storage medium
Technical Field
The invention belongs to the technical field of terminals, and particularly relates to a kernel scheduling method, equipment, a terminal and a storage medium.
Background
With the rise of internet applications, more and more applications run on intelligent terminal devices, and application vendors devise various background keep-alive schemes in order to improve the experience of their applications and to monetize them.
In the prior art, each application corresponds to its own process, and the system determines the kernel resources to allocate by calculating the system load. However, if there are too many background applications, each background application is allocated some resources, so the background applications inevitably occupy resources of the foreground application, causing the foreground application to stutter and shortening the system's battery life.
Disclosure of Invention
The main purpose of the invention is to provide a kernel scheduling method, device, terminal and storage medium, so as to solve the problem that, when there are many background applications, the background applications occupy foreground application resources, causing the foreground application to stutter and shortening the system's battery life.
In order to solve the above problem, the present invention provides a kernel scheduling method, including:
acquiring total resources required by foreground application and total resources currently occupied by background application;
if the sum of the total resources required by the foreground application and the total resources currently occupied by the background application is greater than the total resources of the kernel, reducing the total resources currently occupied by the background application until the total resources currently occupied by the background application meet preset conditions, and finishing the scheduling of the kernel.
Further, in the above kernel scheduling method, reducing the total resources currently occupied by the background application comprises:
reducing the time slice allocated to the background application, so as to reduce the total resources currently occupied by the background application.
Further, in the above kernel scheduling method, reducing the time slice allocated to the background application comprises:
reducing the time slice allocated to the background application by increasing the process level of the background application.
Further, in the above kernel scheduling method, there are a plurality of background applications;
correspondingly, increasing the process level of the background application comprises:
selecting at least one target background application in order from low historical usage level to high according to the historical usage level of each background application, and then increasing the process level of the at least one target background application; or
increasing the process level of all background applications.
Further, in the above kernel scheduling method, reducing the time slice allocated to the background application comprises:
initializing the time slice allocated to the background application to 0 on the principle that the foreground application runs preferentially, and acquiring the time required for the foreground application to finish running in one period;
determining the remaining time in the period according to the time required for the foreground application to finish running in the period and the length of the period;
and taking the remaining time in the period as the time allocated to the background application in the period, so as to reduce the time slice allocated to the background application.
Further, in the above kernel scheduling method, the total resources required by the foreground application are calculated as:
W1=K*C1/C;
wherein W1 is the total resources required by the foreground application, C1 is the average number of instructions executed by the foreground application in each cycle, C is the number of instructions executed in each cycle at the maximum operating frequency of the kernel, and K is the maximum computing power in each cycle at the maximum operating frequency of the kernel;
the total resources currently occupied by the background application are calculated as:
W2=K*C2/C;
where W2 is the total resources required by the background application, and C2 is the average number of instructions executed by the background application in each cycle.
Further, in the foregoing method for scheduling a kernel, the preset condition includes:
the sum of the total required resources of the foreground application and the total resources currently occupied by the background application is less than or equal to the total resources of the kernel; or
the total resources currently occupied by the background application are reduced to 0.
The invention also provides a kernel scheduling device, which comprises a memory and a controller;
the memory has stored thereon a computer program which, when executed by the controller, implements the steps of the kernel scheduling method as described in any one of the above.
The invention also provides a terminal which is provided with the kernel scheduling equipment.
The present invention also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the kernel scheduling method as described in any one of the above.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
according to the kernel scheduling method, the kernel scheduling device, the kernel scheduling terminal and the storage medium, the total required resources of the foreground application and the total resources currently occupied by the background application are obtained, and when the sum of the total required resources of the foreground application and the total resources currently occupied by the background application is larger than the total resources of the kernel, the total resources currently occupied by the background application are reduced until the total resources currently occupied by the background application meet the preset conditions, the kernel scheduling is completed, the size of the system resources obtained by the background application is dynamically adjusted according to the requirement condition of the foreground application on the system resources, and the background application is prevented from occupying the foreground resources. By adopting the technical scheme of the invention, the response speed of foreground application can be improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of an embodiment of a kernel scheduling method of the present invention;
FIG. 2 is a schematic structural diagram of a kernel scheduling apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of a kernel scheduling device according to the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be provided with reference to the drawings and examples, so that how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented. It should be noted that, as long as there is no conflict, the embodiments and the features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are within the scope of the present invention.
Example one
In order to solve the above technical problems in the prior art, an embodiment of the present invention provides a kernel scheduling method.
Fig. 1 is a flowchart of an embodiment of a kernel scheduling method of the present invention, and as shown in fig. 1, the kernel scheduling method of the present embodiment may specifically include the following steps:
100. acquiring total resources required by foreground application and total resources currently occupied by background application;
in a specific implementation process, the system can monitor the state information of foreground application and background application in real time, and acquire the total resource required by the foreground application according to the calculation formula (1):
W1=K*C1/C; (1)
wherein W1 is the total resources required by the foreground application, C1 is the average number of instructions executed by the foreground application in each cycle, C is the number of instructions executed in each cycle at the maximum operating frequency of the kernel, and K is the maximum computing power in each cycle at the maximum operating frequency of the kernel. For example, for a 2 GHz kernel, the maximum computing power at the maximum frequency of 2 GHz is 1000, and C is the number of instructions executed at 2 GHz in 1 s.
The total resources currently occupied by the background application are acquired according to formula (2):
W2=K*C2/C; (2)
where W2 is the total resource required by the background application, and C2 is the average number of instructions executed by the background application in each cycle.
It should be noted that, in this embodiment, the total resources required by the foreground application are preferably the total resources corresponding to the kernel running at full speed.
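For illustration only (this sketch is not part of the patent text), formulas (1) and (2) can be evaluated directly once the per-period instruction counts have been sampled; the function and variable names below, and the assumption of one instruction per clock cycle, are choices made for the sketch.

```python
# Minimal sketch of formulas (1) and (2): W = K * C_i / C.
def total_resources(avg_instructions: float, max_instructions: float, max_power: float) -> float:
    """avg_instructions is the application's average instructions per period,
    max_instructions is the number the kernel executes per period at its maximum
    operating frequency, max_power is the maximum computing power per period."""
    return max_power * avg_instructions / max_instructions

# Figures loosely following the description: a 2 GHz kernel with K = 1000 and a
# 1 s period, assuming one instruction per clock cycle (an assumption of this sketch).
K = 1000.0
C = 2_000_000_000
W1 = total_resources(800_000_000, C, K)  # foreground application, formula (1)
W2 = total_resources(500_000_000, C, K)  # background applications, formula (2)
print(W1, W2)  # 400.0 250.0
```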
101. If the sum of the total resources required by the foreground application and the total resources currently occupied by the background application is greater than the total resources of the kernel, reducing the total resources currently occupied by the background application until the total resources currently occupied by the background application meet preset conditions, and finishing the scheduling of the kernel.
In this embodiment, after the total resources required by the foreground application and the total resources currently occupied by the background applications are acquired, their sum may be compared with the total resources of the kernel to obtain a comparison result. If the sum is greater than the total resources of the kernel, the background applications are occupying resources of the foreground application, which may cause the foreground application to stutter and shorten the system's battery life. Therefore, in this embodiment, the total resources currently occupied by the background applications can be reduced until they meet the preset condition, completing the scheduling of the kernel, so that the foreground application is guaranteed sufficient resources. For example, when foreground application A is running and background applications B, C and D are also running, the kernel computing power required by foreground application A is calculated; if the sum of the system resources required by foreground application A and the resources occupied by the background applications (B, C, D) is greater than the maximum system resources, the resources allocated to the background applications are reduced so that the resource requirement of foreground application A is met.
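A minimal sketch of the comparison step just described, assuming W1 and W2 have already been obtained from formulas (1) and (2); the function name and the kernel-total parameter are assumptions of the sketch, not terms from the patent.

```python
def background_must_yield(w_foreground: float, w_background: float, kernel_total: float) -> bool:
    """True when foreground demand plus current background usage exceeds the
    kernel's total resources, i.e. the background applications must give up resources."""
    return (w_foreground + w_background) > kernel_total

# Continuing the earlier figures with an assumed kernel total of 600:
# 400 + 250 = 650 > 600, so the background applications must yield resources.
print(background_must_yield(400.0, 250.0, 600.0))  # True
```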
In a specific implementation process, the total resources currently occupied by the background application can be reduced by reducing the time slice allocated to the background application. For example, the time slice allocated to the background application may be reduced by increasing the process level of the background application, or by adjusting the scheduling policy.
Specifically, there are usually multiple background applications. In this embodiment, when the total resources currently occupied by the background applications are reduced by reducing the time slices allocated to them, at least one target background application may be selected in order from low historical usage level to high according to the historical usage level of each background application, and the process level of the selected target background application is then increased. For example, if a background application frequently used by the user exists in the background, its usage level may be determined to be high; that background application may then be left untouched, while the time slices allocated to other background applications with low usage levels are reduced.
In some embodiments, the process level of all background applications may also be increased.
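The selection by historical usage level can be sketched as below. Raising a "process level" so that an application receives a smaller time slice is analogous to raising a nice value on a Unix-like scheduler; that analogy, the data structure, and the level range are assumptions of this sketch rather than the patent's implementation.

```python
# Illustrative sketch: pick target background applications in order of ascending
# historical usage level and raise their process level (analogous to raising a
# Unix nice value so the scheduler hands them shorter time slices).
from dataclasses import dataclass

@dataclass
class BackgroundApp:
    name: str
    usage_level: int     # historical usage level; lower = less frequently used
    process_level: int   # higher level = lower scheduling priority / smaller slice

def raise_levels(apps: list[BackgroundApp], targets: int, max_level: int = 19) -> None:
    # Least-used applications are demoted first; frequently used ones are left alone.
    for app in sorted(apps, key=lambda a: a.usage_level)[:targets]:
        app.process_level = min(app.process_level + 1, max_level)

apps = [BackgroundApp("B", 5, 0), BackgroundApp("C", 1, 0), BackgroundApp("D", 3, 0)]
raise_levels(apps, targets=2)
print([(a.name, a.process_level) for a in apps])  # C and D demoted, B untouched
```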
In one implementation, when the scheduling policy is adjusted to reduce the time slice allocated to the background application, the time slice allocated to the background application can be initialized to 0 on the principle that the foreground application runs preferentially, so that only the foreground application runs. After the time required for the foreground application to finish running in one period is acquired, the remaining time in the period is determined from that time and the length of the period, and the remaining time is taken as the time allocated to the background application in that period. The time slice allocated to the background application is thereby reduced, the background application starts to run only after the foreground application has finished, and the problem that the background application occupies foreground resources and shortens the battery life is avoided.
It should be noted that, if the background application finishes running within the remaining time of the period, it can actively release the kernel resources, and the scheduler enters an idle state to wait for the next period. If the background application does not finish running within the remaining time of the period, its allocated time slice needs to be initialized to 0 again when the next period arrives, so that the foreground application runs first.
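A sketch of the period-based policy just described; the millisecond units and function name are assumptions of the sketch.

```python
# Sketch: the background time slice starts at 0 each period, the foreground
# application runs first, and whatever remains of the period goes to the background.
def plan_period(period_ms: float, foreground_runtime_ms: float) -> tuple[float, float]:
    """Return (foreground_slice, background_slice) for one scheduling period."""
    background_slice = 0.0                       # initialized to 0 each period
    foreground_slice = min(foreground_runtime_ms, period_ms)
    remaining = period_ms - foreground_slice
    if remaining > 0:
        background_slice = remaining             # background runs only after the foreground finishes
    return foreground_slice, background_slice

# If the foreground needs 70 ms of a 100 ms period, the background gets the last 30 ms.
print(plan_period(100.0, 70.0))   # (70.0, 30.0)
# If the background does not finish in those 30 ms, its slice is reset to 0 next period.
```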
In this embodiment, after the total resources currently occupied by the background applications have been reduced, the reduction is stopped if the sum of the total resources required by the foreground application and the total resources currently occupied by the background applications is less than or equal to the total resources of the kernel, or if the total resources currently occupied by the background applications have been reduced to 0.
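Putting the steps together, a schematic control loop that honors the two stopping conditions might look as follows; the fixed reduction step and function name are assumptions of this sketch, not the patent's implementation (in practice the reduction would come from shrinking background time slices or raising background process levels as described above).

```python
def schedule_kernel(w_foreground: float, w_background: float,
                    kernel_total: float, step: float = 10.0) -> float:
    """Reduce the background total until either (a) foreground demand plus
    background usage fits within the kernel total, or (b) the background
    total has been reduced to 0. Returns the final background total."""
    while (w_foreground + w_background) > kernel_total and w_background > 0.0:
        w_background = max(w_background - step, 0.0)   # e.g. by shrinking background time slices
    return w_background

# With the earlier figures: background usage drops from 250 to 200 so that 400 + 200 <= 600.
print(schedule_kernel(400.0, 250.0, 600.0))  # 200.0
```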
According to the kernel scheduling method of this embodiment, the total resources required by the foreground application and the total resources currently occupied by the background applications are obtained; when their sum is greater than the total resources of the kernel, the total resources currently occupied by the background applications are reduced until they meet the preset condition, and the scheduling of the kernel is completed. The system resources obtained by the background applications are thus dynamically adjusted according to the foreground application's demand for system resources, and the background applications are prevented from occupying foreground resources. By adopting the technical scheme of the invention, the response speed of the foreground application can be improved.
It should be noted that the method of the embodiment of the present invention may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In the case of such a distributed scenario, one device of the multiple devices may only perform one or more steps of the method according to the embodiment of the present invention, and the multiple devices interact with each other to complete the method.
Example two
In order to solve the above technical problems in the prior art, an embodiment of the present invention provides a kernel scheduling apparatus.
Fig. 2 is a schematic structural diagram of an embodiment of a kernel scheduling apparatus according to the present invention. As shown in fig. 2, the kernel scheduling apparatus of this embodiment includes an obtaining module 20 and a scheduling module 21.
The obtaining module 20 is configured to obtain the total resources required by the foreground application and the total resources currently occupied by the background applications.
specifically, the system can monitor the state information of foreground application and background application in real time, and acquire the total resource required by the foreground application according to the calculation formula (1). And (4) acquiring the total resources currently occupied by the background application according to the calculation formula (2).
It should be noted that, in this embodiment, the total resources required by the foreground application are preferably the total resources corresponding to the kernel running at full speed.
The scheduling module 21 is configured to, if the sum of the total resources required by the foreground application and the total resources currently occupied by the background applications is greater than the total resources of the kernel, reduce the total resources currently occupied by the background applications until they satisfy a preset condition, completing the scheduling of the kernel.
In a specific implementation process, the total resources currently occupied by the background application can be reduced by reducing the time slice allocated to the background application. For example, the time slice allocated to the background application may be reduced by increasing the process level of the background application, or by adjusting the scheduling policy.
Specifically, there are usually multiple background applications. In this embodiment, when the total resources currently occupied by the background applications are reduced by reducing the time slices allocated to them, at least one target background application may be selected in order from low historical usage level to high according to the historical usage level of each background application, and the process level of the selected target background application is then increased. For example, if a background application frequently used by the user exists in the background, its usage level may be determined to be high; that background application may then be left untouched, while the time slices allocated to other background applications with low usage levels are reduced.
In some embodiments, the process level of all background applications may also be increased.
In one implementation, when the scheduling policy is adjusted to reduce the time slice allocated to the background application, the time slice allocated to the background application can be initialized to 0 on the principle that the foreground application runs preferentially, so that only the foreground application runs. After the time required for the foreground application to finish running in one period is acquired, the remaining time in the period is determined from that time and the length of the period, and the remaining time is taken as the time allocated to the background application in that period. The time slice allocated to the background application is thereby reduced, the background application starts to run only after the foreground application has finished, and the problem that the background application occupies foreground resources and shortens the battery life is avoided.
The kernel scheduling apparatus of this embodiment obtains the total resources required by the foreground application and the total resources currently occupied by the background applications; when their sum is greater than the total resources of the kernel, it reduces the total resources currently occupied by the background applications until they satisfy the preset condition, completing the scheduling of the kernel. The system resources obtained by the background applications are thus dynamically adjusted according to the foreground application's demand for system resources, and the background applications are prevented from occupying foreground resources. By adopting the technical scheme of the invention, the response speed of the foreground application can be improved.
The apparatus of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and specific implementation schemes thereof may refer to the method described in the foregoing embodiment and relevant descriptions in the method embodiment, and have beneficial effects of the corresponding method embodiment, which are not described herein again.
Example three
In order to solve the technical problems in the prior art, an embodiment of the present invention provides a kernel scheduling device.
Fig. 3 is a schematic structural diagram of an embodiment of a kernel scheduling device according to the present invention, and as shown in fig. 3, the kernel scheduling device of the present embodiment includes a memory 30 and a controller 31;
the memory 30 stores thereon a computer program which, when executed by the controller 31, implements the steps of the kernel scheduling method of the above-described embodiment.
Example four
In order to solve the technical problems in the prior art, embodiments of the present invention provide a terminal. The terminal is provided with the kernel scheduling device of the above embodiment.
Example five
In order to solve the above technical problems in the prior art, embodiments of the present invention provide a storage medium.
The storage medium of this embodiment stores a computer program which, when executed by a processor, implements the steps of the kernel scheduling method of the above embodiment.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module 32, or each unit may exist alone physically, or two or more units are integrated in one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for scheduling a kernel, comprising:
acquiring total resources required by foreground application and total resources currently occupied by background application;
if the sum of the total resources required by the foreground application and the total resources currently occupied by the background application is greater than the total resources of the kernel, reducing the total resources currently occupied by the background application until the total resources currently occupied by the background application meet preset conditions, and finishing the scheduling of the kernel.
2. The kernel scheduling method according to claim 1, wherein reducing the total resources currently occupied by the background application comprises:
reducing the time slice allocated to the background application, so as to reduce the total resources currently occupied by the background application.
3. The kernel scheduling method according to claim 2, wherein reducing the time slice allocated to the background application comprises:
reducing the time slice allocated to the background application by increasing the process level of the background application.
4. The kernel scheduling method according to claim 3, wherein there are a plurality of background applications;
correspondingly, increasing the process level of the background application comprises:
selecting at least one target background application in order from low historical usage level to high according to the historical usage level of each background application, and then increasing the process level of the at least one target background application; or
increasing the process level of all background applications.
5. The kernel scheduling method according to claim 2, wherein reducing the time slice allocated to the background application comprises:
initializing the time slice allocated to the background application to 0 on the principle that the foreground application runs preferentially, and acquiring the time required for the foreground application to finish running in one period;
determining the remaining time in the period according to the time required for the foreground application to finish running in the period and the length of the period;
and taking the remaining time in the period as the time allocated to the background application in the period, so as to reduce the time slice allocated to the background application.
6. The kernel scheduling method according to claim 1, wherein the total resources required by the foreground application are calculated as:
W1=K*C1/C;
wherein W1 is the total resources required by the foreground application, C1 is the average number of instructions executed by the foreground application in each cycle, C is the number of instructions executed in each cycle at the maximum operating frequency of the kernel, and K is the maximum computing power in each cycle at the maximum operating frequency of the kernel;
the total resources currently occupied by the background application are calculated as:
W2=K*C2/C;
where W2 is the total resources required by the background application, and C2 is the average number of instructions executed by the background application in each cycle.
7. The kernel scheduling method according to any one of claims 1 to 6, wherein the preset condition comprises:
the sum of the total resources required by the foreground application and the total resources currently occupied by the background application is less than or equal to the total resources of the kernel; or
the total resources currently occupied by the background application are reduced to 0.
8. A kernel scheduling device, comprising a memory and a controller;
the memory has stored thereon a computer program which, when being executed by the controller, carries out the steps of the kernel scheduling method as claimed in any one of claims 1 to 7.
9. A terminal, characterized in that it is provided with a kernel scheduling device according to claim 8.
10. A storage medium having stored thereon a computer program for implementing the steps of the kernel scheduling method according to any one of claims 1 to 7 when executed by a processor.
CN202011281463.1A 2020-11-16 2020-11-16 Kernel scheduling method, equipment, terminal and storage medium Active CN112416548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011281463.1A CN112416548B (en) 2020-11-16 2020-11-16 Kernel scheduling method, equipment, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011281463.1A CN112416548B (en) 2020-11-16 2020-11-16 Kernel scheduling method, equipment, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112416548A 2021-02-26
CN112416548B (en) 2022-04-22

Family

ID=74832430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011281463.1A Active CN112416548B (en) 2020-11-16 2020-11-16 Kernel scheduling method, equipment, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112416548B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013044795A1 (en) * 2011-09-26 2013-04-04 中国移动通信集团公司 Terminal inter-application network resource allocation method and device thereof
CN106792165A (en) * 2016-12-02 2017-05-31 武汉斗鱼网络科技有限公司 A kind of resource dynamic regulation method and device
CN109684090A (en) * 2018-12-19 2019-04-26 三星电子(中国)研发中心 A kind of resource allocation methods and device
CN110018904A (en) * 2018-01-10 2019-07-16 广东欧珀移动通信有限公司 Information processing method, device, computer equipment and computer readable storage medium
CN110035169A (en) * 2018-01-12 2019-07-19 广东欧珀移动通信有限公司 Process handling method and device, electronic equipment, computer readable storage medium


Also Published As

Publication number Publication date
CN112416548B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
US7117499B2 (en) Virtual computer systems and computer virtualization programs
US20090248922A1 (en) Memory buffer allocation device and computer readable medium having stored thereon memory buffer allocation program
US20110161965A1 (en) Job allocation method and apparatus for a multi-core processor
US11347563B2 (en) Computing system and method for operating computing system
CN109284192B (en) Parameter configuration method and electronic equipment
CN111708642B (en) Processor performance optimization method and device in VR system and VR equipment
WO2016202153A1 (en) Gpu resource allocation method and system
CN110795323A (en) Load statistical method, device, storage medium and electronic equipment
CN112416548B (en) Kernel scheduling method, equipment, terminal and storage medium
CN111314249B (en) Method and server for avoiding data packet loss of 5G data forwarding plane
CN116578416A (en) Signal-level simulation acceleration method based on GPU virtualization
CN114490030A (en) Method and device for realizing self-adaptive dynamic redis connection pool
CN114327862A (en) Memory allocation method and device, electronic equipment and storage medium
CN113254208A (en) Load balancing method and device for server, server and storage medium
CN117349037B (en) Method, device, computer equipment and storage medium for eliminating interference in off-line application
CN114721834B (en) Resource allocation processing method, device, equipment, vehicle and medium
CN114510324B (en) Disk management method and system for KVM virtual machine with ceph volume mounted thereon
CN116468597B (en) Image rendering method and device based on multiple GPUs and readable storage medium
CN117858262B (en) Base station resource scheduling optimization method, device, base station, equipment, medium and product
US20230111051A1 (en) Virtualization method, device, board card and computer readable storage medium
CN111737176B (en) PCIE data-based synchronization device and driving method
CN117788261A (en) GPU computing resource scheduling method, device, equipment and storage medium
US10877552B1 (en) Dynamic power reduction through data transfer request limiting
CN116483565A (en) Efficient distribution method and device for heterogeneous CPU resources and electronic equipment
CN116204314A (en) Resource scheduling method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant