CN117056080A - Distribution method and device of computing resources, computer equipment and storage medium - Google Patents

Distribution method and device of computing resources, computer equipment and storage medium

Info

Publication number
CN117056080A
Authority
CN
China
Prior art keywords
target
scene
determining
corresponding relation
computing resources
Prior art date
Legal status
Pending
Application number
CN202311107512.3A
Other languages
Chinese (zh)
Inventor
徐良伟
岳仁举
田晓明
Current Assignee
Seuic Technologies Co Ltd
Original Assignee
Seuic Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Seuic Technologies Co Ltd
Priority to CN202311107512.3A
Publication of CN117056080A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides a computing resource allocation method, a computing resource allocation device, computer equipment and a storage medium. The method comprises the following steps: when a trigger condition is met, determining a target scene according to the met trigger condition; determining the processes contained in the target scene according to a first corresponding relation; determining the target cores and performance parameters corresponding to each process according to a second corresponding relation; binding each process to its corresponding target core, and adjusting the working state of the target core according to the performance parameters. With this scheme, when the terminal executes a specific task, the task scene the terminal is in can be automatically identified and computing resources can be allocated dynamically and reasonably: each process is efficiently assigned to the most suitable core, and process-core binding together with performance parameter tuning reduces system overhead, prolongs the service life of the central processing unit, and makes reasonable use of computing resources.

Description

Distribution method and device of computing resources, computer equipment and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a method and apparatus for allocating computing resources, a computer device, and a storage medium.
Background
The allocation of computing resources is a central issue in controlling the processor. Many terminal operating systems incorporate resource scheduling algorithms that allocate the limited computing resources so as to improve the performance, efficiency and stability of the terminal. However, the resource scheduling algorithms in the conventional art cannot allocate computing resources reasonably and accurately, which affects task processing efficiency and leads to wasted computing resources and increased power consumption.
Disclosure of Invention
The present application aims to solve at least one of the above technical drawbacks, and in particular, the problem that it is difficult to allocate computing resources reasonably and accurately in the prior art.
In a first aspect, the present application provides a method for allocating computing resources, including:
when the triggering condition is met, determining a target scene according to the met triggering condition;
determining a process contained in the target scene according to the first corresponding relation;
determining target cores and performance parameters corresponding to each process according to the second corresponding relation;
binding each process with a corresponding target core, and adjusting the working state of the target core according to the performance parameters.
In one embodiment, the method for allocating computing resources further includes:
analyzing the system preset file to determine and store more than one candidate task scene and the first corresponding relation and the second corresponding relation of each candidate task scene.
In one embodiment, the method for allocating computing resources further includes:
receiving a scene configuration request through a first interface;
and adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes:
and when the trigger condition is met and the target scene for computing resource allocation is not currently performed, determining the target scene according to the met trigger condition.
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes:
monitoring whether a trigger program in a trigger program list is started or not;
if yes, judging that the triggering condition is met, and determining the started triggering program as a target program;
and determining the task scene to be selected corresponding to the target program as a target scene according to the third corresponding relation.
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes:
monitoring whether the second interface receives an allocation starting request or not; the allocation initiation request includes a target scenario;
if yes, judging that the triggering condition is met, and extracting the target scene from the allocation starting request.
In one embodiment, the method for allocating computing resources further includes:
after the current target scene is finished, the binding of each process and the target core is released, and the working state of the target core is restored.
In a second aspect, the present application provides an allocation apparatus of computing resources, including:
the allocation triggering module is used for determining a target scene according to the satisfied triggering condition when the triggering condition is satisfied;
the first processing module is used for determining a process contained in the target scene according to the first corresponding relation;
the second processing module is used for determining target cores and performance parameters corresponding to each process according to the second corresponding relation;
and the distribution module is used for binding each process with the corresponding target core and adjusting the working state of the target core according to the performance parameters.
In a third aspect, the present application provides a computer device comprising one or more processors and a memory having stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the method for allocating computing resources in any of the embodiments described above.
In a fourth aspect, the present application provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method for allocating computing resources in any of the embodiments described above.
From the above technical solutions, the embodiment of the present application has the following advantages:
based on any of the above embodiments, a target scene is dynamically determined according to a trigger condition, all relevant processes required in the target scene are obtained as allocation objects according to a first corresponding relationship, and then the processes are reasonably allocated to different cores according to a second corresponding relationship to exert performance advantages of the processes, and meanwhile, the operation parameters of each core are adjusted as required. According to the scheme, when the terminal executes a specific task, the task scene where the terminal is located can be automatically identified, computing resources are dynamically and reasonably allocated, the process is efficiently allocated to the most suitable core, the system overhead is reduced through process core binding and performance parameter tuning, the service life of the central processing unit is prolonged, and the computing resources are reasonably utilized. The method can also prevent the built-in scheduling strategy of the central processing unit from reassigning certain threads to unsuitable cores for running, improve multi-core cooperation efficiency and ensure task processing speed and stability.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the application, and that other drawings can be obtained from these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow chart illustrating a method for allocating computing resources according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus for allocating computing resources according to an embodiment of the present application;
fig. 3 is an internal structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The computing resources in the present application are the computing resources of a central processing unit (CPU). The CPU is applied in a terminal whose system may have a layered structure, which, from top to bottom, may comprise an application layer, an application framework layer, a hardware abstraction layer and a kernel.

Conventional resource allocation methods set priorities for the programs running on the terminal, and computing resources are preferentially allocated to high-priority programs. A program takes the form of a process in the computer system, and processes can be distinguished by package name; a corresponding priority can therefore be set for each package name, and when resources are allocated, the priority is looked up according to the identified package name. However, many task scenarios, especially on terminals running the Android system, involve processes located at the hardware abstraction layer and in the kernel. The package name is an application-layer concept, whereas kernel processes and hardware abstraction layer processes are underlying system processes that are not started by any application package and usually have no associated application package, so they cannot be identified by package name. As a result, some processes involved in a task scenario are not preferentially allocated computing resources.

In addition, with the development of multi-core CPU technology, resource allocation generally assigns a high-priority process to a large core, but it cannot be guaranteed that the process always runs on the large core, which affects task processing speed. More importantly, when a large amount of computing resources is needed for the current task, the terminal, either on a user's instruction or by automatic recognition with a built-in algorithm, puts the central processing unit into a performance mode in which all cores work at the highest frequency. Not all task scenarios place such high demands on computing resources, so this wastes computing resources and increases power consumption.

In order to solve the above problems, referring to fig. 1, the present application provides a method for allocating computing resources, which includes steps S102 to S108.
S102, when the trigger condition is met, determining a target scene according to the met trigger condition.
It can be understood that a trigger condition is a condition that must be satisfied before computing resource allocation is triggered; once a trigger condition is satisfied, the system carries out the computing resource allocation flow. There may be multiple trigger conditions, each corresponding to one task scenario of the terminal. A task scenario is a usage scenario the terminal is in while carrying out a certain task. For example, when the terminal is used for code scanning, the task scenario is a code scanning scenario; when the terminal is used to play audio and video, the task scenario is an audio and video playback scenario. Therefore, when a certain trigger condition is satisfied, the task scenario the terminal is currently in is determined according to the satisfied trigger condition and taken as the target scene.
S104, determining the process contained in the target scene according to the first corresponding relation.
It will be appreciated that different task scenarios require different processes to be started so that the terminal can implement different functions. Therefore, before computing resource allocation is started, every task scenario that may be selected as the target scene has been configured with a first correspondence that lists all processes contained in that task scenario. Since a process may contain more than one child thread, in order to allocate computing resources better, the information about each process carried in the first correspondence also includes all child threads of the process. The processes included in a task scenario are not only processes at the application layer or the application framework layer, but also processes at the hardware abstraction layer and in the kernel. Accordingly, in the first correspondence, the first identifier representing each process should be applicable to the application layer, the application framework layer, the hardware abstraction layer and the kernel at the same time. Specifically, the first identifier may include a process ID (PID) and/or a child thread ID (TID).
S106, determining target cores and performance parameters corresponding to the processes according to the second corresponding relation.
It can be understood that, for any task scenario, after the processes it contains have been collected, the most suitable core can be selected as the target core for each process according to the characteristics of each core of the central processing unit. The number of target cores corresponding to a process may be two or more. The performance parameters that the target core needs to reach while the process runs on it are selected according to the computing power required by the process. These configurations are collected into a second correspondence; every task scenario that may be selected as the target scene has been configured with the second correspondence before computing resource allocation is started. In one embodiment, the performance parameter may be the operating frequency of the target core.
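By way of illustration only, the first correspondence and the second correspondence described above may be held in memory as simple data structures. The following C sketch is a hypothetical representation and is not part of the claimed method; all type names, field names and size limits are assumptions.

```c
/* Hypothetical sketch of the first and second correspondences; names, types
 * and limits are assumptions, not part of the claimed method. */
#include <stdint.h>
#include <sys/types.h>   /* pid_t */

#define MAX_TIDS_PER_PROC   16
#define MAX_PROCS_PER_SCENE 16

/* Second-correspondence entry: one process, its child threads, the set of
 * target cores it is bound to, and the performance parameter (frequency). */
struct core_binding {
    pid_t    pid;                      /* first identifier: process ID (PID)        */
    pid_t    tids[MAX_TIDS_PER_PROC];  /* first identifier: child thread IDs (TIDs) */
    int      tid_count;
    uint32_t core_mask;                /* bit i set => core i is a target core      */
    uint32_t target_freq_khz;          /* performance parameter: core frequency     */
};

/* First correspondence: a candidate task scenario and the processes it contains. */
struct task_scene {
    const char          *name;                        /* e.g. "code_scan"        */
    struct core_binding  procs[MAX_PROCS_PER_SCENE];  /* processes in this scene */
    int                  proc_count;
};
```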
S108, binding each process with the corresponding target core, and adjusting the working state of the target core according to the performance parameters.
It can be understood that, for each process in the target scenario, the corresponding target core allocated according to the second correspondence is bound, that is, each process is specified to run only on its corresponding target core and not to be scheduled to other cores. The child threads included in the process also run on the target cores corresponding to the process to which the child threads belong, and cannot be scheduled to other cores. In addition, each process in the target scene is bound with the corresponding target core, and the bound target core is regulated according to the corresponding performance parameter of each process, so that the reasonable distribution of the computing resources of the central processing unit is ensured.
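On a Linux-based kernel such as the one used by the Android system, one possible way to carry out the binding and adjustment of step S108 is the sched_setaffinity system call together with the cpufreq sysfs nodes. The sketch below is illustrative only: the helper names are assumptions, and an actual implementation would need appropriate privileges and error handling.

```c
#define _GNU_SOURCE
#include <sched.h>       /* cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

/* Bind the task identified by id (a PID or a child-thread TID from the first
 * correspondence) to the target cores selected in core_mask. */
static int bind_to_target_cores(pid_t id, uint32_t core_mask)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 32; cpu++)
        if (core_mask & (1u << cpu))
            CPU_SET(cpu, &set);
    return sched_setaffinity(id, sizeof(set), &set);
}

/* Adjust the working state of one target core by raising its minimum scaling
 * frequency through the cpufreq sysfs interface (root privileges required). */
static int set_core_min_freq(int cpu, uint32_t freq_khz)
{
    char path[128];
    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_min_freq", cpu);
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%u", freq_khz);
    fclose(f);
    return 0;
}
```

Note that, on Linux, child threads created after such a call inherit the affinity of the creating thread, whereas threads that already exist would each need their own call with their TID.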
Based on the computing resource allocation method in the embodiment, a target scene is dynamically determined according to the triggering condition, all relevant processes required in the target scene are obtained as allocation objects according to the first corresponding relation, then the processes are reasonably allocated to different cores according to the second corresponding relation to exert performance advantages of the processes, and meanwhile, the operation parameters of each core are adjusted according to requirements. According to the scheme, when the terminal executes a specific task, the task scene where the terminal is located can be automatically identified, computing resources are dynamically and reasonably allocated, the process is efficiently allocated to the most suitable core, the system overhead is reduced through process core binding and performance parameter tuning, the service life of the central processing unit is prolonged, and the computing resources are reasonably utilized. The method can also prevent the built-in scheduling strategy of the central processing unit from reassigning certain threads to unsuitable cores for running, improve multi-core cooperation efficiency and ensure task processing speed and stability.
In one embodiment, the method for allocating computing resources further includes: analyzing the system preset file to determine and store more than one candidate task scene and the first corresponding relation and the second corresponding relation of each candidate task scene.
It can be understood that the system preset file is a file preset in the operating system of the terminal, which specifies in advance the task scenarios in the terminal that require computing resource allocation, that is, the candidate task scenarios. The file also carries the first correspondence and the second correspondence of each candidate task scenario, so that the terminal can complete the setup prior to computing resource allocation by parsing the system preset file. A file parsing module may be added to the operating system; it automatically parses the system preset file after the system starts and stores the parsed content in a storage module, to be loaded and used when the central processing unit allocates computing resources. The candidate task scenarios set in this way mainly involve programs built into the system, that is, programs pre-installed in the operating system of the terminal, whose type and characteristics are relatively fixed and can therefore be preconfigured.
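Purely as an illustration, a system preset file of the kind described above could take a form such as the following. The schema, element names and values are hypothetical, and how the listed entries are mapped to runtime PIDs and TIDs is left to the implementation.

```xml
<!-- Hypothetical preset file; element and attribute names are assumptions. -->
<scene name="code_scan">
    <!-- first correspondence: processes (and child threads) in the scene -->
    <process comm="scanner_hal"/>
    <process comm="decoder_service"/>
    <!-- second correspondence: target cores and performance parameter -->
    <binding comm="scanner_hal"     cores="4-7" freq_khz="2400000"/>
    <binding comm="decoder_service" cores="0-3" freq_khz="1800000"/>
</scene>
```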
In one embodiment, the method for allocating computing resources further includes:
(1) A scene configuration request is received via a first interface.
(2) And adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
It can be appreciated that this embodiment provides a callable first interface at the application layer. The first interface can be used to flexibly configure the candidate task scenarios, as well as the processes involved in each candidate task scenario, the target core to which each process needs to be bound, the performance parameters the cores need to be adjusted to, and so on. A user or developer can send a scene configuration request to the terminal by calling the first interface, thereby realizing dynamic management of the candidate task scenarios. Specifically, a new candidate task scenario together with its first correspondence and second correspondence may be added in full, an existing candidate task scenario together with its first correspondence and second correspondence may be deleted in full, or the original first correspondence and second correspondence of an existing candidate task scenario may be modified. After the first interface is called, the information in the storage module is updated accordingly. The candidate task scenarios set in this way mainly involve external application programs, that is, applications installed on the terminal by the user, and can therefore be configured flexibly.
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes: and when the trigger condition is met and the target scene for computing resource allocation is not currently performed, determining the target scene according to the met trigger condition.
It will be appreciated that when the trigger condition is satisfied, it should also be determined whether the previous target scene has ended, that is, whether all processes in the previous target scene have finished executing. If the previous target scene has not ended, the new target scene is not allocated resources according to the computing resource allocation method of the present application even if its trigger condition is satisfied. This is because allocation policies may conflict if multiple scenes run at the same time. For example, scenario A may require process X to be bound to core 1 with its frequency set to the highest value, while scenario B requires process Y to be bound to core 1 as well with its frequency set to the lowest value; or scenario A requires process X to be bound to core 1 while scenario B requires process X to be bound to core 2. The system cannot satisfy both scenarios. Therefore, to avoid this, only one target scene at a time is allowed to be allocated computing resources according to the above flow. In a specific embodiment, whether a target scene is currently undergoing computing resource allocation may be determined directly, or a flag bit may be set to record whether a target scene is currently in progress. When the trigger condition is satisfied, the value of the flag bit is checked; if no other target scene is running, computing resources are allocated normally according to the flow. If another target scene is in progress, an error message or prompt is returned, and the related processes of the new scene may either be refused startup directly or be run on idle cores, provided the previous target scene is not affected.
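A minimal sketch of the flag-bit check mentioned above is given below, assuming a shared flag that may be tested from more than one place at once; the names are illustrative only and not part of the claimed method.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical flag bit recording whether some target scene is currently
 * being allocated computing resources. */
static atomic_bool g_scene_in_progress = false;

/* Returns true if allocation for a new target scene may begin now; the
 * atomic exchange makes the check-and-set race-free when several trigger
 * conditions are satisfied at nearly the same time. */
static bool try_begin_scene(void)
{
    return !atomic_exchange(&g_scene_in_progress, true);
}

/* Called after the current target scene ends (see the release step below). */
static void end_scene(void)
{
    atomic_store(&g_scene_in_progress, false);
}
```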
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes:
(1) And monitoring whether a trigger program in the trigger program list is started or not.
(2) If yes, judging that the triggering condition is met, and determining the started triggering program as the target program.
(3) And determining the task scene to be selected corresponding to the target program as a target scene according to the third corresponding relation.
It can be understood that a trigger program list can also be set in the system, and all trigger programs that require the allocation method of the present application are collected in the trigger program list. Thus, whenever any trigger program starts running, the trigger condition is considered satisfied. Each trigger program has a corresponding candidate task scenario, so as soon as a trigger program is started, the system determines that the current target scene is the candidate task scenario corresponding to the started trigger program. The third correspondence is the correspondence, stored in the system, between each trigger program in the trigger program list and the candidate task scenarios.
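Illustratively, the third correspondence may be little more than a lookup table from trigger programs to candidate task scenarios. The sketch below is an assumption about one possible form; the program and scene names are purely illustrative.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical third correspondence: trigger program -> candidate task scenario. */
struct trigger_entry {
    const char *trigger_program;  /* entry in the trigger program list     */
    const char *scene_name;       /* corresponding candidate task scenario */
};

static const struct trigger_entry g_trigger_table[] = {
    { "com.example.scanner", "code_scan" },   /* names are illustrative only */
    { "com.example.player",  "av_playback" },
};

/* Determine the target scene for a started trigger program; NULL if absent. */
static const char *scene_for_trigger(const char *program)
{
    for (size_t i = 0; i < sizeof(g_trigger_table) / sizeof(g_trigger_table[0]); i++)
        if (strcmp(g_trigger_table[i].trigger_program, program) == 0)
            return g_trigger_table[i].scene_name;
    return NULL;
}
```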
In one embodiment, when the trigger condition is satisfied, determining the target scene according to the satisfied trigger condition includes:
(1) And monitoring whether the second interface receives the allocation starting request. The allocation initiation request includes a target scenario.
(2) If yes, judging that the triggering condition is met, and extracting the target scene from the allocation starting request.
It can be understood that this embodiment provides a callable second interface at the application layer. Whether a program is built into the system or an external application, if it needs to trigger computing resource allocation while running, it can send an allocation initiation request carrying the target scene through the second interface, so that the system can start computing resource allocation for the relevant scene as instructed by the request. In some embodiments, a third interface may also be provided at the application layer for querying the current operating state of each core of the central processing unit. The third interface makes it convenient for each application program to query the usage state of the central processing unit and thus find a suitable time to send an allocation initiation request.
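As an illustration of the state such a third interface could report, the current operating frequency of a core can be read from the cpufreq sysfs node on a Linux-based system; the helper name below is an assumption, not a defined part of the interface.

```c
#include <stdio.h>

/* Read the current operating frequency (in kHz) of one core from sysfs;
 * returns -1 if the node cannot be read. */
static long query_core_cur_freq_khz(int cpu)
{
    char path[128];
    long freq = -1;
    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &freq) != 1)
            freq = -1;
        fclose(f);
    }
    return freq;
}
```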
In one embodiment, the method for allocating computing resources further includes: after the current target scene is finished, the binding of each process and the target core is released, and the working state of the target core is restored.
It can be understood that when the target scene is finished, the binding relation between each process and the target core in the target scene should be released, the occupied computing resource is released, and the bound target core can now be used by other processes, and the process originally bound to the target core can also run on other cores. Restoring the working state of the target core means that the parameters of the target core are restored to the default values, so as to avoid wasting computing resources. For example, when the performance parameter is frequency, the frequency of the target core may be restored to the default auto-tune range.
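A possible sketch of this release step, complementary to the binding sketch above: the task is allowed to run on every online core again, and the minimum scaling frequency of the target core is copied back from the hardware default so that the governor's normal auto-tuning range applies. The helper names are assumptions.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* Release the binding: let the task run on every online core again. */
static int unbind_from_target_cores(pid_t id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    for (long cpu = 0; cpu < ncpus; cpu++)
        CPU_SET((int)cpu, &set);
    return sched_setaffinity(id, sizeof(set), &set);
}

/* Restore the working state of a core by copying cpuinfo_min_freq back into
 * scaling_min_freq, undoing the adjustment made when the scene started. */
static int restore_core_freq(int cpu)
{
    char value[64], src[128], dst[128];
    int rc = -1;
    snprintf(src, sizeof(src),
             "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_min_freq", cpu);
    snprintf(dst, sizeof(dst),
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_min_freq", cpu);
    FILE *in = fopen(src, "r");
    FILE *out = fopen(dst, "w");
    if (in != NULL && out != NULL && fgets(value, sizeof(value), in) != NULL)
        rc = (fputs(value, out) >= 0) ? 0 : -1;
    if (in != NULL)  fclose(in);
    if (out != NULL) fclose(out);
    return rc;
}
```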
The application provides a computing resource allocation device, referring to fig. 2, comprising an allocation triggering module 210, a first processing module 220, a second processing module 230 and an allocation module 240.
The allocation triggering module 210 is configured to determine the target scenario according to the satisfied triggering condition when the triggering condition is satisfied.
The first processing module 220 is configured to determine a process included in the target scene according to the first correspondence.
The second processing module 230 is configured to determine, according to the second correspondence, a target core and a performance parameter corresponding to each process.
The allocation module 240 is configured to bind each process with a corresponding target core, and adjust a working state of the target core according to the performance parameter.
In one embodiment, the computing resource allocation apparatus further includes a file parsing module. The file analysis module is used for analyzing a system preset file so as to determine and store more than one task scene to be selected and a first corresponding relation and a second corresponding relation of each task scene to be selected.
In one embodiment, the computing resource allocation apparatus further includes a configuration request processing module. The configuration request processing module is used for receiving a scene configuration request through the first interface; and adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
In one embodiment, the allocation triggering module 210 is configured to determine a target scenario according to the trigger condition that is satisfied when the trigger condition is satisfied and the target scenario for which the computing resource allocation is not currently performed.
In one embodiment, the allocation trigger module 210 is configured to monitor whether a trigger in the trigger list is activated; if yes, judging that the triggering condition is met, and determining the started triggering program as a target program; and determining the task scene to be selected corresponding to the target program as a target scene according to the third corresponding relation.
In one embodiment, the allocation triggering module 210 is configured to monitor whether the second interface receives an allocation initiation request; the allocation initiation request includes a target scenario; if yes, judging that the triggering condition is met, and extracting the target scene from the allocation starting request.
In one embodiment, the computing resource allocation apparatus further comprises a restoration module. And the recovery module is used for unbinding each process and the target core after the current target scene is ended, and recovering the working state of the target core.
For specific limitations on the allocation apparatus of the computing resources, reference may be made to the above limitation on the allocation method of the computing resources, which is not repeated here. The respective modules in the above-described computing resource allocation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
The present application provides a computer device comprising one or more processors, and a memory having stored therein computer readable instructions that, when executed by the one or more processors, perform: when the triggering condition is met, determining a target scene according to the met triggering condition; determining a process contained in the target scene according to the first corresponding relation; determining target cores and performance parameters corresponding to each process according to the second corresponding relation; binding each process with a corresponding target core, and adjusting the working state of the target core according to the performance parameters.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: analyzing the system preset file to determine and store more than one candidate task scene and the first corresponding relation and the second corresponding relation of each candidate task scene.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: receiving a scene configuration request through a first interface; and adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: and when the trigger condition is met and the target scene for computing resource allocation is not currently performed, determining the target scene according to the met trigger condition.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: monitoring whether a trigger program in a trigger program list is started or not; if yes, judging that the triggering condition is met, and determining the started triggering program as a target program; according to the third corresponding relation, determining the task scene to be selected corresponding to the target program as a target scene.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: monitoring whether the second interface receives an allocation starting request or not; the allocation initiation request includes a target scenario; if yes, judging that the triggering condition is met, and extracting the target scene from the allocation starting request.
In one embodiment, computer-readable instructions, when executed by one or more processors, perform: after the current target scene is finished, the binding of each process and the target core is released, and the working state of the target core is restored.
As shown in fig. 3, fig. 3 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application. Referring to FIG. 3, a computer device 300 includes a processing component 302, which further includes one or more processors, and memory resources represented by memory 301 for storing instructions, such as application programs, executable by the processing component 302. The application program stored in the memory 301 may include one or more modules, each corresponding to a set of instructions. Furthermore, the processing component 302 is configured to execute the instructions to perform the steps of the computing resource allocation method of any of the embodiments described above.
The computer device 300 may also include a power supply component 303 configured to perform power management of the computer device 300, a wired or wireless network interface 304 configured to connect the computer device 300 to a network, and an input output (I/O) interface 305.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The present application provides a storage medium having stored therein computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform: when the triggering condition is met, determining a target scene according to the met triggering condition; determining a process contained in the target scene according to the first corresponding relation; determining target cores and performance parameters corresponding to each process according to the second corresponding relation; binding each process with a corresponding target core, and adjusting the working state of the target core according to the performance parameters.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: analyzing the system preset file to determine and store more than one candidate task scene and the first corresponding relation and the second corresponding relation of each candidate task scene.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: receiving a scene configuration request through a first interface; and adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: and when the trigger condition is met and the target scene for computing resource allocation is not currently performed, determining the target scene according to the met trigger condition.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: monitoring whether a trigger program in a trigger program list is started or not; if yes, judging that the triggering condition is met, and determining the started triggering program as a target program; according to the third corresponding relation, determining the task scene to be selected corresponding to the target program as a target scene.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: monitoring whether the second interface receives an allocation starting request or not; the allocation initiation request includes a target scenario; if yes, judging that the triggering condition is met, and extracting the target scene from the allocation starting request.
In one embodiment, computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: after the current target scene is finished, the binding of each process and the target core is released, and the working state of the target core is restored.
In the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on the difference from other embodiments, and may be combined according to needs, and the same similar parts may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for allocating computing resources, comprising:
when a trigger condition is met, determining a target scene according to the trigger condition;
determining a process contained in the target scene according to the first corresponding relation;
determining target cores and performance parameters corresponding to the processes according to the second corresponding relation;
binding each process with the corresponding target core, and adjusting the working state of the target core according to the performance parameters.
2. The method of computing resource allocation according to claim 1, further comprising:
analyzing a system preset file to determine and store more than one candidate task scene and the first corresponding relation and the second corresponding relation of each candidate task scene.
3. The method of computing resource allocation according to claim 1, further comprising:
receiving a scene configuration request through a first interface;
and adding or deleting the first corresponding relation and the second corresponding relation of the task scene to be selected and the task scene to be selected according to the indication of the scene configuration request.
4. The method for allocating computing resources according to claim 1, wherein when a trigger condition is satisfied, determining a target scenario according to the trigger condition comprises:
and when the trigger condition is met and the target scene for computing resource allocation is not currently performed, determining the target scene according to the met trigger condition.
5. The method for allocating computing resources according to claim 1, wherein when a trigger condition is satisfied, determining a target scenario according to the trigger condition comprises:
monitoring whether a trigger program in a trigger program list is started or not;
if yes, judging that the triggering condition is met, and determining the started triggering program as a target program;
and determining the candidate task scene corresponding to the target program as the target scene according to the third corresponding relation.
6. The method for allocating computing resources according to claim 1, wherein when a trigger condition is satisfied, determining a target scenario according to the trigger condition comprises:
monitoring whether the second interface receives an allocation starting request or not; the allocation initiation request includes the target scenario;
if yes, judging that the triggering condition is met, and extracting the target scene from the distribution starting request.
7. The method of computing resource allocation according to claim 1, further comprising:
after the current target scene is finished, the binding of each process and the target core is released, and the working state of the target core is restored.
8. An apparatus for allocating computing resources, comprising:
the allocation triggering module is used for determining a target scene according to the triggering condition when the triggering condition is met;
the first processing module is used for determining a process contained in the target scene according to the first corresponding relation;
the second processing module is used for determining target cores and performance parameters corresponding to the processes according to the second corresponding relation;
and the distribution module is used for binding each process with the corresponding target core and adjusting the working state of the target core according to the performance parameters.
9. A computer device comprising one or more processors and a memory having stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the method of allocating computing resources of any of claims 1-7.
10. A storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of allocating computing resources of any of claims 1-7.
CN202311107512.3A 2023-08-30 2023-08-30 Distribution method and device of computing resources, computer equipment and storage medium Pending CN117056080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311107512.3A CN117056080A (en) 2023-08-30 2023-08-30 Distribution method and device of computing resources, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311107512.3A CN117056080A (en) 2023-08-30 2023-08-30 Distribution method and device of computing resources, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117056080A true CN117056080A (en) 2023-11-14

Family

ID=88653412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311107512.3A Pending CN117056080A (en) 2023-08-30 2023-08-30 Distribution method and device of computing resources, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117056080A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311994A (en) * 2023-11-28 2023-12-29 苏州元脑智能科技有限公司 Processing core isolation method and device, electronic equipment and storage medium
CN117311994B (en) * 2023-11-28 2024-02-23 苏州元脑智能科技有限公司 Processing core isolation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10817331B2 (en) Execution of auxiliary functions in an on-demand network code execution system
CN102567072B (en) Resource allocation method, resource allocation device and resource allocation system
US11231955B1 (en) Dynamically reallocating memory in an on-demand code execution system
US20110302587A1 (en) Information processing device and information processing method
CN109564528B (en) System and method for computing resource allocation in distributed computing
CN117056080A (en) Distribution method and device of computing resources, computer equipment and storage medium
CN112698952A (en) Unified management method and device for computing resources, computer equipment and storage medium
WO2024082584A1 (en) Resource allocation method, container management assembly and resource allocation system
CN113157411A (en) Reliable configurable task system and device based on Celery
CN117149414A (en) Task processing method and device, electronic equipment and readable storage medium
CN112650541B (en) Application program starting acceleration method, system, equipment and storage medium
CN111586140A (en) Data interaction method and server
CN112068960A (en) CPU resource allocation method, device, storage medium and equipment
CN114461385A (en) Thread pool scheduling method, device and equipment and readable storage medium
CN111459676A (en) Node resource management method, device and storage medium
CN113268310B (en) Pod resource quota adjustment method and device, electronic equipment and storage medium
CN111143063B (en) Task resource reservation method and device
WO2015058594A1 (en) Process loading method, device and system
CN111382141A (en) Master-slave architecture configuration method, device, equipment and computer readable storage medium
CN116881003A (en) Resource allocation method, device, service equipment and storage medium
CN114579298A (en) Resource management method, resource manager, and computer-readable storage medium
CN112162864B (en) Cloud resource allocation method, device and storage medium
CN115114022A (en) Method, system, device and medium for using GPU resources
CN115220887A (en) Processing method of scheduling information, task processing system, processor and electronic equipment
CN112685174A (en) Container creation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination