WO2022095862A1 - Method for adjusting thread priority, terminal, and computer-readable storage medium - Google Patents

Method for adjusting thread priority, terminal, and computer-readable storage medium

Info

Publication number
WO2022095862A1
WO2022095862A1 PCT/CN2021/128287 CN2021128287W WO2022095862A1 WO 2022095862 A1 WO2022095862 A1 WO 2022095862A1 CN 2021128287 W CN2021128287 W CN 2021128287W WO 2022095862 A1 WO2022095862 A1 WO 2022095862A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
priority
state
adjusting
threads
Prior art date
Application number
PCT/CN2021/128287
Other languages
English (en)
French (fr)
Inventor
刘洪霞
阮美思
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to EP21888559.8A (published as EP4242842A4)
Priority to US18/036,145 (published as US20230409391A1)
Publication of WO2022095862A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4812 - Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F 9/4818 - Priority circuits therefor
    • G06F 9/4831 - Task transfer initiation or dispatching by interrupt, with variable priority
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/4887 - Scheduling strategies for dispatcher involving deadlines, e.g. rate based, periodic
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources to service a request
    • G06F 9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 - Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/524 - Deadlock detection or avoidance
    • G06F 9/526 - Mutual exclusion algorithms

Definitions

  • the present disclosure relates to, but is not limited to, the field of terminals, in particular, but not limited to, a method for adjusting thread priority, a terminal, and a computer-readable storage medium.
  • The method, terminal, and computer-readable storage medium for adjusting thread priority provided by the present disclosure mainly address the technical problem that executing threads strictly according to preset priorities leads to low system performance.
  • an embodiment of the present disclosure provides a method for adjusting thread priority.
  • The method includes: monitoring the state of at least one thread; when it is detected that at least one thread is in a preset blocking state, detecting the running state and associated state of each thread in the same process; and adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
  • An embodiment of the present disclosure further provides a terminal, where the terminal includes a monitoring module, a detection module, and an adjustment module.
  • the monitoring module is configured to monitor the status of at least one thread.
  • the detection module is configured to detect the running state and associated state of each thread in the same process when the monitoring module monitors that at least one thread is in a preset blocking state.
  • the adjustment module is configured to adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
  • An embodiment of the present disclosure further provides a terminal, where the terminal includes a processor, a memory, and a communication bus.
  • the communication bus is configured to implement the connection communication between the processor and the memory.
  • the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the method of adjusting thread priority as described above.
  • Embodiments of the present disclosure further provide a computer storage medium. One or more programs are stored in the computer-readable storage medium, and the one or more programs can be executed by one or more processors to implement the steps of the method for adjusting thread priority described above.
  • FIG. 1 is a basic flowchart of a method for adjusting thread priority according to Embodiment 1 of the present disclosure
  • FIG. 2 is a detailed flowchart of a method for adjusting thread priority according to Embodiment 2 of the present disclosure
  • FIG. 3 is a schematic diagram of the composition of a terminal according to Embodiment 3 of the present disclosure.
  • FIG. 4 is a schematic diagram of the composition of a terminal according to Embodiment 4 of the present disclosure.
  • FIG. 5 is a schematic state diagram of a thread according to Embodiment 4 of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a terminal according to Embodiment 5 of the present disclosure.
  • the present disclosure proposes a method for adjusting thread priority, which will be described below with reference to this embodiment.
  • FIG. 1 is a basic flowchart of a method for adjusting thread priority according to Embodiment 1 of the present disclosure, and the method includes the following steps S101 to S103.
  • The at least one thread may include: a thread that calls a process scheduling function and suspends running because it needs to wait for a system resource; and/or a thread that executes the process scheduling function when the current instruction executed by the CPU involves an interrupted or abnormal process whose priority is higher than the priority of the process of the next instruction to be executed by the CPU.
  • For example, after the CPU executes the current instruction and before it executes the next instruction, it detects that an interrupt or exception has occurred after execution of the current instruction and compares the priority of the interrupted or abnormal process with the priority of the process of the next instruction. When the former is higher, the interrupt service routine is executed, and when the interrupt returns, the thread of the process scheduling function is executed.
  • The process scheduling function called may be schedule().
  • For thread scheduling caused by IO resources, an IO flag can be set before the process scheduling function is called; the flag then indicates whether the thread's state switch was caused by IO resources.
  • When a thread uses network resources, it calls the socket interface. Flags can be added to the socket interface so that, when the thread's state switches, it is known whether the thread is blocked because of network resources.
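  • To make the idea of flagging the cause of a block concrete, the following hedged Java sketch records a hypothetical "block reason" for the current thread before a blocking socket read, so that a monitor inspecting the thread later can tell whether the block was caused by network resources. The BlockReason enum, the reasons map, and the flaggedRead helper are illustrative assumptions for this sketch, not names or code taken from the patent.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch: record why a thread is about to block so that a monitor can
    // later distinguish network blocking from other kinds of blocking.
    public final class BlockReasonTracker {
        public enum BlockReason { NONE, IO, NETWORK }

        private static final Map<Long, BlockReason> reasons = new ConcurrentHashMap<>();

        public static void mark(BlockReason reason) {
            reasons.put(Thread.currentThread().getId(), reason);
        }

        public static void clear() {
            reasons.put(Thread.currentThread().getId(), BlockReason.NONE);
        }

        public static BlockReason reasonOf(Thread t) {
            return reasons.getOrDefault(t.getId(), BlockReason.NONE);
        }

        // Example: a socket read wrapped with a NETWORK flag, loosely analogous to the
        // flag the disclosure suggests adding at the socket interface.
        public static int flaggedRead(Socket socket, byte[] buf) throws IOException {
            mark(BlockReason.NETWORK);
            try {
                InputStream in = socket.getInputStream();
                return in.read(buf);   // may block while waiting for network data
            } finally {
                clear();
            }
        }
    }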
  • Monitoring the state of at least one thread may include: detecting that at least one thread executes a waiting policy and is in the waiting state; or detecting that at least one thread executes a sleep policy and the sleep time exceeds a first preset duration, so that the thread transitions from the waiting state to the ready state; or detecting that at least one thread executes a policy of waiting for the called thread to finish executing before continuing to execute the next thread, and the waiting time exceeds a second preset duration, so that the thread transitions from the waiting state to the ready state; or detecting that at least one thread issues a request to acquire an input or output resource, so that the thread transitions from the waiting state to the ready state.
  • The waiting policy is the wait() method, which suspends execution of the current thread and releases the object lock flag.
  • The sleep policy is the sleep() method, which suspends execution of the current thread for a period of time so that other threads have a chance to run, but it does not release the object lock.
  • The policy of waiting for the called thread to finish before continuing is the join() method: the thread whose join() is called runs to completion first, that is, the current thread continues only after that thread has finished executing.
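  • As a reference point for these three policies, the hedged Java sketch below shows how wait(), sleep(), and join() put a thread into the states that the monitoring step looks for; a monitoring thread can simply read Thread.getState(). It illustrates standard Java thread semantics and is not code from the patent.

    // Minimal sketch of the three policies named above and the states they produce.
    public class WaitSleepJoinDemo {
        private static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                synchronized (lock) {
                    try {
                        lock.wait();          // waiting policy: releases the lock, WAITING state
                    } catch (InterruptedException ignored) { }
                }
            }, "waiter");

            Thread sleeper = new Thread(() -> {
                try {
                    Thread.sleep(5_000);      // sleep policy: keeps any held locks, TIMED_WAITING state
                } catch (InterruptedException ignored) { }
            }, "sleeper");

            waiter.start();
            sleeper.start();
            Thread.sleep(100);                // give both threads time to block

            // A monitoring thread can observe the states directly.
            System.out.println("waiter:  " + waiter.getState());   // WAITING
            System.out.println("sleeper: " + sleeper.getState());  // TIMED_WAITING

            sleeper.join();                   // join policy: wait until 'sleeper' finishes
            synchronized (lock) { lock.notifyAll(); }               // wake the waiter
            waiter.join();
        }
    }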
  • The preset blocking state in this step may include: blocking because a system resource cannot be obtained, or the thread voluntarily giving up the CPU.
  • the system resources include at least one of the following: network resources and I/O resources.
  • the system resources may also include memory, CPU resources, and the like.
  • When at least one thread tries to acquire an object's synchronization lock and the lock is occupied by another thread, it can be directly concluded that the thread is in the preset blocking state.
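  • The case in which a thread is stuck acquiring a lock held by another thread can be observed from outside the thread itself; the hedged Java sketch below does this with Thread.getState(), which reports BLOCKED for exactly this situation. It shows one possible check, not the disclosure's own implementation.

    // Minimal sketch: conclude that a thread is blocked because the monitor it wants
    // is currently held by another thread.
    public class LockBlockDetection {
        private static final Object sharedLock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread holder = new Thread(() -> {
                synchronized (sharedLock) {
                    sleepQuietly(5_000);               // holds the lock for a while
                }
            }, "holder");

            Thread contender = new Thread(() -> {
                synchronized (sharedLock) {            // must wait for 'holder'
                    // critical section
                }
            }, "contender");

            holder.start();
            Thread.sleep(100);                         // let 'holder' take the lock first
            contender.start();
            Thread.sleep(100);

            // BLOCKED means: waiting to enter a synchronized block whose lock is taken.
            if (contender.getState() == Thread.State.BLOCKED) {
                System.out.println("contender is blocked on a lock held by another thread");
            }
            holder.join();
            contender.join();
        }

        private static void sleepQuietly(long millis) {
            try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
        }
    }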
  • The associated state of each thread may include: the priority order of the threads; or the priority order of the threads together with their mutual wake-up information.
  • S103 Perform priority adjustment on one or at least two threads in the same process according to the running state and associated state of each thread.
  • Adjusting the priority of at least one of the threads according to the running state and priority of each thread may further include: when the threads include a third thread in the ready state and a fourth thread in the waiting state, confirming the priorities of the third thread and the fourth thread; and, when the priority of the third thread is lower than that of the fourth thread and the two threads wake each other up, adjusting the priority of the third thread to be ahead of the fourth thread.
  • For example, information about each thread of the same process is collected, and the statistics are used to determine whether a thread in the ready (runnable) state in the current process is blocking a thread with a higher priority than the runnable thread.
  • The statistics collected for each thread of the same process include: the thread's running state, time slice, priority, and the threads' previous mutual wake-up history. If two threads are found to wake each other up frequently, with one in the runnable state and the other in the wait state, the waiting thread is most likely waiting because the runnable thread cannot get executed; the priority of the runnable thread is therefore raised so that it acquires resources first.
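  • One hedged way to express that decision rule in code is sketched below: given per-thread statistics (state, priority, mutual wake-up counts), find a runnable/waiting pair that frequently wake each other where the runnable thread has the lower priority, raise the runnable thread's priority, and remember the original value so it can be restored later. The ThreadStats record, the wake-up threshold, and the use of Thread.setPriority() are assumptions made for this illustration, not details from the patent.

    import java.util.List;

    // Minimal sketch of the statistics-driven boost rule described above.
    public class PriorityBoostPolicy {
        /** Per-thread statistics as a monitor might collect them (illustrative). */
        public record ThreadStats(Thread thread, Thread.State state, int priority,
                                  long timeSliceUsed, int wakeUpsWithPeer, Thread peer) { }

        private static final int FREQUENT_WAKEUPS = 10;   // assumed threshold

        /** Returns the original priority so the caller can restore it after the next time slice. */
        public static Integer maybeBoost(List<ThreadStats> stats) {
            for (ThreadStats s : stats) {
                boolean boostCandidate = s.state() == Thread.State.RUNNABLE
                        && s.peer() != null
                        && s.wakeUpsWithPeer() >= FREQUENT_WAKEUPS
                        && s.peer().getState() == Thread.State.WAITING
                        && s.priority() < s.peer().getPriority();
                if (boostCandidate) {
                    int original = s.thread().getPriority();
                    // Pass the waiting thread's priority to the runnable thread so it can
                    // obtain a time slice and release the contended resource sooner.
                    s.thread().setPriority(s.peer().getPriority());
                    return original;                       // restored later, as described below
                }
            }
            return null;                                   // nothing to adjust
        }
    }

  • After the boosted thread has been scheduled in its time slice, the caller would set the priority back to the returned original value, matching the restore step described in the following paragraphs.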
  • The statistics can also be exported through the operating system's proc or sys file system nodes, which expose thread status, scheduling information, IO information, network information, and so on; external applications can generate intelligent policies from this information and feed them back to the system, making priority transfer more intelligent.
  • After the priority of one or at least two threads in the same process is adjusted according to the running state and associated state of each thread, the threads are processed according to their adjusted priorities while waiting for the system to allocate running time slices, and the adjusted priorities of the threads are then restored.
  • According to the embodiments of the present disclosure, the state of at least one thread is monitored; when it is detected that at least one thread is in a preset blocking state, the running state and associated state of each thread in the same process are detected, and one or at least two threads in the same process have their priority adjusted according to those states. Thread priority is thus adjusted dynamically, which improves system performance.
  • the method for adjusting thread priority of the present disclosure can realize dynamic adjustment of thread priority, thereby improving system performance.
  • the method for adjusting thread priority of the present disclosure will be described below with reference to an application scenario.
  • FIG. 2 is a detailed flowchart of a method for adjusting thread priority according to Embodiment 2 of the present disclosure.
  • the method for adjusting thread priority includes the following steps S201 to S209.
  • S201: Monitor the thread state and detect when it switches from the running state to a non-running state.
  • S202: Determine whether the thread is blocked because IO or network resources cannot be obtained, or because the thread voluntarily gave up the CPU.
  • Synchronization blocking: when a running thread tries to acquire an object's synchronization lock and the lock is occupied by another thread, the virtual machine puts the thread into the "lock pool". This indicates blocking caused by lock contention, and no further judgment is required.
  • S203: Thread control management, maintaining the state of each thread of the same process.
  • After this step, S204 and S206 may be performed simultaneously.
  • S204: Collect statistics on the threads' running states, time slices, and priorities, as well as information on mutual wake-ups between threads.
  • S205: Form a policy based on the collected statistics.
  • S206: Determine whether there is a thread in the ready state in the process; if so, perform S207.
  • S207: The thread is in the ready state but has a relatively low priority; if another thread of the same process has a higher priority and is in the waiting state, execute the policy from the policy library to transfer the priority.
  • S208: Wait for the system to allocate running time slices for thread processing.
  • S209: Restore the adjusted priority of the at least one thread and wait for the next round of resource scheduling.
  • With the method for adjusting thread priority, information such as thread status, running time, priority, IO resource usage, and network resource usage can be collected, so that it is possible to know which threads are related to one another. If one thread is in the wait state because of system resources such as IO or the network, and another thread is in the runnable state (a runnable thread is able to execute but cannot do so because its time slice is used up), this indicates that the runnable thread is very likely unable to obtain a time slice, which leaves the other thread waiting. The priority of the waiting thread can therefore be passed to the runnable thread so that the runnable thread gets a time slice to execute.
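  • Read as a loop, steps S201 to S209 could be sketched roughly as in the following Java outline; every helper method and the Policy class are placeholders invented for this sketch rather than APIs defined by the disclosure.

    // Rough skeleton of the S201-S209 flow; all helpers below are placeholders.
    public class PriorityTransferLoop {
        enum BlockCause { RESOURCE_OR_YIELD, LOCK_CONTENTION }

        void onThreadLeftRunningState(Thread t) {                   // S201
            if (classifyBlock(t) == BlockCause.RESOURCE_OR_YIELD) { // S202
                maintainThreadStates(t);                            // S203
                Object stats = collectStats(t);                     // S204
                Policy policy = formPolicy(stats);                  // S205
                if (hasReadyThread(stats) && policy != null) {      // S206
                    Integer original = policy.transferPriority();   // S207
                    waitForTimeSlice();                             // S208
                    policy.restorePriority(original);               // S209
                }
            } else {
                waitForTimeSlice();                                 // lock contention: no adjustment
            }
        }

        // --- placeholders standing in for the real monitoring/statistics machinery ---
        BlockCause classifyBlock(Thread t) { return BlockCause.RESOURCE_OR_YIELD; }
        void maintainThreadStates(Thread t) { }
        Object collectStats(Thread t) { return new Object(); }
        Policy formPolicy(Object stats) { return new Policy(); }
        boolean hasReadyThread(Object stats) { return true; }
        void waitForTimeSlice() { }

        static class Policy {
            Integer transferPriority() { return Thread.NORM_PRIORITY; }
            void restorePriority(Integer original) { }
        }
    }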
  • the present disclosure proposes a terminal, which will be described below with reference to this embodiment.
  • FIG. 3 is a schematic diagram of the composition of a terminal according to Embodiment 3 of the present disclosure.
  • the terminal includes a monitoring module 301 , a detection module 302 , and an adjustment module 303 .
  • the monitoring module 301 is configured to monitor the status of at least one thread.
  • the detection module 302 is configured to detect the running state and associated state of each thread in the same process when the monitoring module 301 monitors that at least one thread is in a preset blocking state.
  • the adjustment module 303 is configured to adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
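  • One plausible way to express the three modules in code is the hedged Java sketch below, in which each module is an interface and a small coordinator wires them together; the interface names, method names, and the ThreadInfo record are assumptions made for illustration, not names used by the patent.

    import java.util.List;
    import java.util.Map;

    // Illustrative interfaces for the monitoring, detection, and adjustment modules.
    public class ThreadPriorityTerminal {
        interface MonitoringModule {
            /** Returns threads observed to be in the preset blocking state. */
            List<Thread> findBlockedThreads();
        }

        interface DetectionModule {
            /** Running state and associated state (priority order, wake-up links) per thread. */
            Map<Thread, ThreadInfo> detectStates(Iterable<Thread> sameProcessThreads);
        }

        interface AdjustmentModule {
            /** Adjusts the priority of one or more threads based on the detected states. */
            void adjustPriorities(Map<Thread, ThreadInfo> states);
        }

        record ThreadInfo(Thread.State runningState, int priority, List<Thread> wakesUp) { }

        private final MonitoringModule monitor;
        private final DetectionModule detector;
        private final AdjustmentModule adjuster;

        ThreadPriorityTerminal(MonitoringModule m, DetectionModule d, AdjustmentModule a) {
            this.monitor = m;
            this.detector = d;
            this.adjuster = a;
        }

        /** One pass of the monitor -> detect -> adjust pipeline. */
        void runOnce(Iterable<Thread> sameProcessThreads) {
            if (!monitor.findBlockedThreads().isEmpty()) {
                adjuster.adjustPriorities(detector.detectStates(sameProcessThreads));
            }
        }
    }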
  • The at least one thread may include: a thread that calls a process scheduling function and suspends running because it needs to wait for a system resource; and/or a thread that executes the process scheduling function when the current instruction executed by the CPU involves an interrupted or abnormal process whose priority is higher than the priority of the process of the next instruction to be executed by the CPU.
  • For example, after the CPU executes the current instruction and before it executes the next instruction, it detects that an interrupt or exception has occurred after execution of the current instruction and compares the priority of the interrupted or abnormal process with the priority of the process of the next instruction. When the former is higher, the interrupt service routine is executed, and when the interrupt returns, the thread of the process scheduling function is executed.
  • The process scheduling function called may be schedule().
  • For thread scheduling caused by IO resources, an IO flag can be set before the process scheduling function is called; the flag then indicates whether the thread's state switch was caused by IO resources.
  • When a thread uses network resources, it calls the socket interface. Flags can be added to the socket interface so that, when the thread's state switches, it is known whether the thread is blocked because of network resources.
  • Monitoring the state of at least one thread may include: detecting that at least one thread executes a waiting policy; or detecting that at least one thread executes a sleep policy and the sleep time exceeds a first preset duration, so that the thread transitions from the waiting state to the ready state; or detecting that at least one thread executes a policy of waiting for the called thread to finish executing before continuing to execute the next thread, and the waiting time exceeds a second preset duration, so that the thread transitions from the waiting state to the ready state; or detecting that at least one thread issues a request to acquire an input or output resource, so that the thread transitions from the waiting state to the ready state.
  • The waiting policy is the wait() method, which suspends execution of the current thread and releases the object lock flag.
  • The sleep policy is the sleep() method, which suspends execution of the current thread for a period of time so that other threads have a chance to run, but it does not release the object lock.
  • The policy of waiting for the called thread to finish before continuing is the join() method: the thread whose join() is called runs to completion first, that is, the current thread continues only after that thread has finished executing.
  • The preset blocking state in this embodiment may include: blocking because a system resource cannot be obtained, or the thread voluntarily giving up the CPU.
  • the system resources include at least one of the following: network resources and I/O resources.
  • the system resources may also include memory, CPU resources, and the like.
  • Adjusting the priority of at least one of the threads according to the running state and priority of each thread may further include: when the threads include a third thread in the ready state and a fourth thread in the waiting state, confirming the priorities of the third thread and the fourth thread; and, when the priority of the third thread is lower than that of the fourth thread and the two threads wake each other up, adjusting the priority of the third thread to be ahead of the fourth thread.
  • For example, information about each thread of the same process is collected, and the statistics are used to determine whether a thread in the ready (runnable) state in the current process is blocking a thread with a higher priority than the runnable thread.
  • The statistics collected for each thread of the same process include: the thread's running state, time slice, priority, and the threads' previous mutual wake-up history. If two threads are found to wake each other up frequently, with one in the runnable state and the other in the wait state, the waiting thread is most likely waiting because the runnable thread cannot get executed; the priority of the runnable thread is therefore raised so that it acquires resources first.
  • The statistics can also be exported through the operating system's proc or sys file system nodes, which expose thread status, scheduling information, IO information, network information, and so on; external applications can generate intelligent policies from this information and feed them back to the system, making priority transfer more intelligent.
  • The process may further include: processing each thread according to its adjusted priority while waiting for the system to allocate running time slices; and restoring the adjusted priority of each thread.
  • With the terminal provided by the present disclosure, which includes a monitoring module, a detection module, and an adjustment module, when the monitoring module detects that at least one thread is in a preset blocking state, the detection module detects the running state and associated state of each thread in the same process, and the adjustment module then adjusts the priority of one or at least two threads in that process according to the detected states. Thread priority is thus adjusted dynamically, which improves system performance.
  • the method for adjusting thread priority of the present disclosure can realize dynamic adjustment of thread priority and improve system performance.
  • the terminal of the present disclosure will be described below with reference to an application scenario.
  • FIG. 4 is a schematic diagram of the composition of a terminal according to Embodiment 4 of the present disclosure.
  • the terminal includes: a monitoring/processing module 401 , a state management module 402 , an intelligent policy module 403 , an optimization module 404 and a recovery module 405 .
  • the monitoring/processing module 401 monitors the running state of the thread. According to different conditions of the thread in the execution process, at least three different running states can be defined, as shown in FIG. 5 .
  • the thread in the running state will enter the waiting state due to the occurrence of the waiting event.
  • When the waiting event ends, the thread in the waiting state enters the ready state, and the processor's scheduling policy causes switching between the running state and the ready state, which includes the following implementations. One case is an active, direct call to the process scheduling function schedule() in the kernel: when a thread needs to wait for a resource and temporarily stops running, its state is set to suspended, and it actively requests scheduling and gives up the CPU; this thread needs to be monitored.
  • Another case is a passive call: after the CPU executes the current instruction and before it executes the next instruction, the CPU determines whether an interrupt or exception occurred after execution of the current instruction. If an interrupt occurred, the CPU compares the priority of the incoming interrupt with that of the current process; if the priority of the new task is higher, the interrupt service routine is executed, and when the interrupt returns, the thread scheduling function schedule is executed. This also needs to be monitored.
  • For thread scheduling caused by IO resources, an IO flag is set before the schedule function is called, so it can be determined whether the thread's state switch was caused by IO resources.
  • the state management module 402 is responsible for the current states of various threads in the thread pool, namely wait, runnable, running, and block.
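  • A small hedged Java sketch of such a state manager is given below, keeping a per-thread entry for the four states named here; the class and its methods are only an illustration, not part of the disclosure.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of a state manager for the wait/runnable/running/block states.
    public class ThreadStateManager {
        public enum ManagedState { WAIT, RUNNABLE, RUNNING, BLOCK }

        private final Map<Long, ManagedState> states = new ConcurrentHashMap<>();

        public void update(Thread t, ManagedState newState) {
            states.put(t.getId(), newState);
        }

        public ManagedState stateOf(Thread t) {
            return states.getOrDefault(t.getId(), ManagedState.RUNNABLE);
        }
    }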
  • the intelligent strategy module 403 is responsible for counting the running status and time of each thread, and making corresponding plans.
  • the optimization module 404 dynamically adjusts the priority of the thread.
  • the restoration module 405 restores the default running state of the thread and waits for resource scheduling.
  • By collecting thread state information in advance and using dynamic statistics of running state and time, the present disclosure dynamically adjusts thread priorities and schedules and releases resources to the greatest extent. This markedly improves the resource waiting time experienced by applications and addresses one of the factors that most affects performance; the smaller the memory of a phone and the more applications it has installed, the more noticeable the performance improvement. For phones with small memory in particular, it can greatly reduce the occurrence of IO and improve the fluency of the phone.
  • the present disclosure may also be used for future in-vehicle products, computers, tablet computers, and the like.
  • This embodiment also provides a terminal, as shown in FIG. 6 , which includes a processor 601 , a memory 602 and a communication bus 603 .
  • the communication bus 603 is configured to implement connection communication between the processor 601 and the memory 602 .
  • the processor 601 is configured to execute one or more computer programs stored in the memory 602 to implement at least one step in the method for adjusting thread priority in the first embodiment or the second embodiment.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, which includes volatile or nonvolatile, removable or non-removable media implemented in any method or technology for storing information, such as computer-readable instructions, data structures, computer program modules, or other data.
  • Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other memory technology, CD-ROM (Compact Disc Read-Only Memory), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • The computer-readable storage medium in this embodiment may be used to store one or more computer programs, and the stored one or more computer programs may be executed by a processor to implement at least one step of the method for adjusting thread priority in the first or second embodiment above.
  • All or some of the steps of the methods disclosed above, and the functional modules/units in the systems and devices, can be implemented as software (which can be implemented by computer program code executable by a computing device), firmware, hardware, or an appropriate combination thereof.
  • The division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation.
  • Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Communication media typically carry computer-readable instructions, data structures, computer program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium, as is well known to those of ordinary skill in the art. Therefore, the present disclosure is not limited to any particular combination of hardware and software.

Abstract

The present disclosure provides a method for adjusting thread priority, a terminal, and a computer-readable storage medium. The state of at least one thread is monitored; when it is detected that at least one thread is in a preset blocking state, the running state and associated state of each thread in the same process are detected; and the priority of one or at least two threads in the same process is adjusted according to the running state and associated state of each thread.

Description

Method for adjusting thread priority, terminal, and computer-readable storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application CN202011238906.9, filed on 9 November 2020 and entitled "调整线程优先级的方法、终端及计算机可读存储介质" (Method for adjusting thread priority, terminal, and computer-readable storage medium), the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to, but is not limited to, the field of terminals, and in particular to a method for adjusting thread priority, a terminal, and a computer-readable storage medium.
Background
For some terminal products, the framework service contains multiple threads with different functions. Different threads need to coordinate access to shared resources with one another through locks, and priority is also passed between threads through locks. This requires lock contention between threads before priority can be passed. With lock contention, multiple threads must acquire the same lock; when the current thread holds the lock, the other threads can only acquire it in priority order, the priority of a thread cannot be adjusted dynamically, and system performance is low.
Summary
The method for adjusting thread priority, the terminal, and the computer-readable storage medium provided by the present disclosure mainly address the technical problem that executing threads strictly according to preset priorities leads to low system performance.
To solve the above technical problem, an embodiment of the present disclosure provides a method for adjusting thread priority. The method includes: monitoring the state of at least one thread; when it is detected that at least one thread is in a preset blocking state, detecting the running state and associated state of each thread in the same process; and adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
An embodiment of the present disclosure further provides a terminal. The terminal includes a monitoring module, a detection module, and an adjustment module. The monitoring module is configured to monitor the state of at least one thread. The detection module is configured to detect the running state and associated state of each thread in the same process when the monitoring module detects that at least one thread is in a preset blocking state. The adjustment module is configured to adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
An embodiment of the present disclosure further provides a terminal. The terminal includes a processor, a memory, and a communication bus. The communication bus is configured to implement connection and communication between the processor and the memory. The processor is configured to execute one or more computer programs stored in the memory to implement the steps of the method for adjusting thread priority described above.
An embodiment of the present disclosure further provides a computer storage medium. The computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the method for adjusting thread priority described above.
Other features of the present disclosure and their corresponding beneficial effects are set forth in later parts of the specification, and it should be understood that at least some of the beneficial effects become apparent from the description in the specification.
Brief Description of the Drawings
FIG. 1 is a basic flowchart of a method for adjusting thread priority according to Embodiment 1 of the present disclosure;
FIG. 2 is a detailed flowchart of a method for adjusting thread priority according to Embodiment 2 of the present disclosure;
FIG. 3 is a schematic diagram of the composition of a terminal according to Embodiment 3 of the present disclosure;
FIG. 4 is a schematic diagram of the composition of a terminal according to Embodiment 4 of the present disclosure;
FIG. 5 is a schematic state diagram of a thread according to Embodiment 4 of the present disclosure;
FIG. 6 is a schematic structural diagram of a terminal according to Embodiment 5 of the present disclosure.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below through specific implementations in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only intended to explain the present disclosure and are not intended to limit it.
Embodiment 1
In order to solve the technical problems that threads are executed according to their priorities, that thread priority is difficult to adjust, and that system performance is low, the present disclosure proposes a method for adjusting thread priority, which is described below with reference to this embodiment.
Referring to FIG. 1, FIG. 1 is a basic flowchart of the method for adjusting thread priority according to Embodiment 1 of the present disclosure. The method includes the following steps S101 to S103.
S101: Monitor the state of at least one thread.
In some embodiments, the at least one thread may include: a thread that calls the process scheduling function and suspends running because it needs to wait for a system resource; and/or a thread that executes the process scheduling function when the current instruction executed by the CPU involves an interrupted or abnormal process whose priority is higher than the priority of the process of the next instruction to be executed by the CPU. For example, after the CPU executes the current instruction and before it executes the next instruction, it detects that an interrupt or exception has occurred after execution of the current instruction and compares the priority of the interrupted or abnormal process with the priority of the process of the next instruction; when the former is higher, the interrupt service routine is executed, and when the interrupt returns, the thread of the process scheduling function is executed. The process scheduling function called may be schedule().
For thread scheduling caused by IO resources, an IO flag can be set before the process scheduling function schedule is called; it can then be determined whether the thread's state switch was caused by IO resources.
When a thread uses network resources, it calls the socket interface. Flags can be added to the socket interface so that, when the thread's state switches, it is known whether the thread is blocked because of network resources.
In some embodiments, monitoring the state of at least one thread may include: detecting that at least one thread executes a waiting policy, the at least one thread being in the waiting state; or detecting that at least one thread executes a sleep policy and the sleep time exceeds a first preset duration, the at least one thread transitioning from the waiting state to the ready state; or detecting that at least one thread executes a policy of waiting for the called current thread to finish executing before continuing to execute the next thread, and the waiting time exceeds a second preset duration, the at least one thread transitioning from the waiting state to the ready state; or detecting that at least one thread issues a request to acquire an input or output resource, the at least one thread transitioning from the waiting state to the ready state. The waiting policy is the wait() method, which suspends execution of the current thread and releases the object lock flag. The sleep policy is the sleep() method, which suspends execution of the current thread for a period of time so that other threads have a chance to run, but it does not release the object lock. The policy of waiting for the called thread to finish before continuing is the join() method: the thread whose join() is called runs to completion first, that is, execution continues only after that thread has finished.
S102: When it is detected that at least one thread is in a preset blocking state, detect the running state and associated state of each thread in the same process.
The preset blocking state in this step may include: blocking because a system resource cannot be obtained, or the thread voluntarily giving up the CPU. The system resources include at least one of the following: network resources and I/O resources. The system resources may also include memory, CPU resources, and the like.
When at least one thread tries to acquire an object's synchronization lock and the lock is occupied by another thread, it can be directly concluded that the thread is in the preset blocking state.
The associated state of each thread may include: the priority order of the threads; or the priority order of the threads and their mutual wake-up information.
S103: Adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
In some embodiments, adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread may include: confirming the priority of each thread, and adjusting the priority of at least one of the threads according to the running state and priority of each thread. Adjusting the priority of at least one of the threads according to the running state and priority of each thread may include: when the threads include a first thread in the ready state and a second thread in the waiting state, confirming the priorities of the first thread and the second thread; and, when the priority of the first thread is lower than that of the second thread, adjusting the priority of the first thread to be ahead of the second thread.
Adjusting the priority of at least one of the threads according to the running state and priority of each thread may also include: when the threads include a third thread in the ready state and a fourth thread in the waiting state, confirming the priorities of the third thread and the fourth thread; and, when the priority of the third thread is lower than that of the fourth thread and the third and fourth threads wake each other up, adjusting the priority of the third thread to be ahead of the fourth thread.
For example, information about each thread of the same process is collected, and the statistics are used to determine whether a thread in the ready (runnable) state in the current process is blocking a thread with a higher priority than the runnable thread. The collected information about each thread of the same process includes: the thread's running state, time slice, priority, and the threads' previous mutual wake-up history. If two threads are found to wake each other up frequently, with one in the runnable state and the other in the wait state, the waiting thread is most likely waiting because the runnable thread cannot get executed; the priority of the runnable thread is therefore raised so that it acquires resources first. In this way it can be determined from the statistics whether a low-priority thread in the runnable state inside the current process is blocking a higher-priority thread; if so, the low-priority thread needs to be adjusted so that it acquires resources first. The statistics can also be exported through the operating system's proc or sys file system nodes, which expose thread status, scheduling information, IO information, network information, and so on; external applications can generate intelligent policies from this information and feed them to the system, making priority transfer more intelligent.
In some embodiments, after the priority of one or at least two threads in the same process is adjusted according to the running state and associated state of each thread, the threads are processed according to their adjusted priorities while waiting for the system to allocate running time slices, and the adjusted priorities of the threads are then restored.
According to the embodiments of the present disclosure, the state of at least one thread is monitored; when it is detected that at least one thread is in a preset blocking state, the running state and associated state of each thread in the same process are detected, and the priority of one or at least two threads in the same process is adjusted according to the running state and associated state of each thread. Thread priority is thus adjusted dynamically, which improves system performance.
Embodiment 2
The method for adjusting thread priority of the present disclosure can dynamically adjust thread priority and thereby improve system performance. For ease of understanding, the method is described below with reference to an application scenario.
FIG. 2 is a detailed flowchart of the method for adjusting thread priority provided by Embodiment 2 of the present disclosure. The method includes the following steps S201 to S209.
S201: Monitor the thread state, and detect when the thread state switches from the running state to a non-running state.
S202: Determine whether the thread is blocked because IO or network resources cannot be obtained, or because the thread voluntarily gave up the CPU.
There are three kinds of blocking:
1) Waiting blocking: a running thread executes the wait() method, and the virtual machine puts the thread into the "wait pool". After entering this state the thread cannot wake up on its own; it can only be woken up by another thread calling notify() or notifyAll(). This state needs to be judged.
2) Synchronization blocking: when a running thread tries to acquire an object's synchronization lock and the lock is occupied by another thread, the virtual machine puts the thread into the "lock pool". This indicates blocking caused by lock contention, and no judgment is required.
3) Other blocking: when a running thread executes the sleep() or join() method, or issues an I/O request, the virtual machine puts the thread into the blocking state. When sleep() times out, join() ends because the target thread terminates or the wait times out, or the I/O processing completes, the thread returns to the ready state. This state needs to be judged.
If yes, perform S203; if no, perform S208.
S203: Thread control management, maintaining the state of each thread of the same process.
After this step, S204 and S206 may be performed simultaneously.
S204: Collect statistics on the threads' running states, time slices, and priorities, as well as information on mutual wake-ups between threads.
S205: Form a policy based on the collected statistics.
If two threads are found to wake each other up frequently, with one in the runnable (ready) state and the other in the wait state, the waiting thread is most likely waiting because the runnable thread cannot get executed; the priority of the runnable thread is therefore raised so that it acquires resources first. In this way it can be determined from the statistics whether a low-priority thread in the runnable state inside the current process is blocking a higher-priority thread; if so, the low-priority thread needs to be adjusted so that it acquires resources first. After the policy has been formed, perform S207.
S206: Determine whether there is a thread in the ready state in the process. If so, perform S207.
S207: The thread is in the ready state but has a relatively low priority; if another thread of the same process has a higher priority and is in the waiting state, execute the policy from the policy library to transfer the priority.
S208: Wait for the system to allocate running time slices for thread processing.
S209: Restore the adjusted priority of the at least one thread and wait for the next round of resource scheduling.
With the method for adjusting thread priority provided by the embodiments of the present disclosure, information such as thread status, running time, priority, IO resource usage, and network resource usage is collected, so that it is possible to know which threads are related to one another. If one thread is in the wait state because of system resources such as IO or the network, and another thread is in the runnable state (a runnable thread is able to execute but cannot do so because its time slice is used up), it indicates that the runnable thread is very likely unable to obtain a time slice to run, which leaves the other thread waiting. The priority of the waiting thread can therefore be passed to the runnable thread so that the runnable thread gets a time slice to execute. This speeds up resource release and improves system performance, and it avoids the situation in which a resource lock held by a low-priority thread cannot be released because, on a busy system, the low-priority process cannot obtain IO or CPU resources and therefore cannot get execution time, blocking the execution of other critical processes.
Embodiment 3
In order to solve the technical problems that threads are executed according to preset priorities, that thread priority cannot be adjusted dynamically, and that system performance is low, the present disclosure proposes a terminal, which is described below with reference to this embodiment.
Referring to FIG. 3, FIG. 3 is a schematic diagram of the composition of a terminal according to Embodiment 3 of the present disclosure. The terminal includes a monitoring module 301, a detection module 302, and an adjustment module 303.
The monitoring module 301 is configured to monitor the state of at least one thread.
The detection module 302 is configured to detect the running state and associated state of each thread in the same process when the monitoring module 301 detects that at least one thread is in a preset blocking state.
The adjustment module 303 is configured to adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
In some embodiments, the at least one thread may include: a thread that calls the process scheduling function and suspends running because it needs to wait for a system resource; and/or a thread that executes the process scheduling function when the current instruction executed by the CPU involves an interrupted or abnormal process whose priority is higher than the priority of the process of the next instruction to be executed by the CPU. For example, after the CPU executes the current instruction and before it executes the next instruction, it detects that an interrupt or exception has occurred after execution of the current instruction and compares the priority of the interrupted or abnormal process with the priority of the process of the next instruction; when the former is higher, the interrupt service routine is executed, and when the interrupt returns, the thread of the process scheduling function is executed. The process scheduling function called may be schedule().
For thread scheduling caused by IO resources, an IO flag can be set before the process scheduling function schedule is called; it can then be determined whether the thread's state switch was caused by IO resources.
When a thread uses network resources, it calls the socket interface. Flags can be added to the socket interface so that, when the thread's state switches, it is known whether the thread is blocked because of network resources.
In some embodiments, monitoring the state of at least one thread may include: detecting that at least one thread executes a waiting policy; or detecting that at least one thread executes a sleep policy and the sleep time exceeds a first preset duration, the at least one thread transitioning from the waiting state to the ready state; or detecting that at least one thread executes a policy of waiting for the called current thread to finish executing before continuing to execute the next thread, and the waiting time exceeds a second preset duration, the at least one thread transitioning from the waiting state to the ready state; or detecting that at least one thread issues a request to acquire an input or output resource, the at least one thread transitioning from the waiting state to the ready state. The waiting policy is the wait() method, which suspends execution of the current thread and releases the object lock flag. The sleep policy is the sleep() method, which suspends execution of the current thread for a period of time so that other threads have a chance to run, but it does not release the object lock. The policy of waiting for the called thread to finish before continuing is the join() method: the thread whose join() is called runs to completion first, that is, execution continues only after that thread has finished.
The preset blocking state in this embodiment may include: blocking because a system resource cannot be obtained, or the thread voluntarily giving up the CPU. The system resources include at least one of the following: network resources and I/O resources. The system resources may also include memory, CPU resources, and the like.
In some embodiments, adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread may include: confirming the priority of each thread, and adjusting the priority of at least one of the threads according to the running state and priority of each thread. Adjusting the priority of at least one of the threads according to the running state and priority of each thread may include: when the threads include a first thread in the ready state and a second thread in the waiting state, confirming the priorities of the first thread and the second thread; and, when the priority of the first thread is lower than that of the second thread, adjusting the priority of the first thread to be ahead of the second thread.
Adjusting the priority of at least one of the threads according to the running state and priority of each thread may also include: when the threads include a third thread in the ready state and a fourth thread in the waiting state, confirming the priorities of the third thread and the fourth thread; and, when the priority of the third thread is lower than that of the fourth thread and the third and fourth threads wake each other up, adjusting the priority of the third thread to be ahead of the fourth thread.
For example, information about each thread of the same process is collected, and the statistics are used to determine whether a thread in the ready (runnable) state in the current process is blocking a thread with a higher priority than the runnable thread. The collected information about each thread of the same process includes: the thread's running state, time slice, priority, and the threads' previous mutual wake-up history. If two threads are found to wake each other up frequently, with one in the runnable state and the other in the wait state, the waiting thread is most likely waiting because the runnable thread cannot get executed; the priority of the runnable thread is therefore raised so that it acquires resources first. In this way it can be determined from the statistics whether a low-priority thread in the runnable state inside the current process is blocking a higher-priority thread; if so, the low-priority thread needs to be adjusted so that it acquires resources first. The statistics can also be exported through the operating system's proc or sys file system nodes, which expose thread status, scheduling information, IO information, network information, and so on; external applications can generate intelligent policies from this information and feed them to the system, making priority transfer more intelligent.
In some embodiments, after the priority of one or at least two threads in the same process is adjusted according to the running state and associated state of each thread, the process may further include: processing each thread according to its adjusted priority while waiting for the system to allocate running time slices; and restoring the adjusted priorities of the threads.
With the terminal provided by the present disclosure, which includes a monitoring module, a detection module, and an adjustment module, when the monitoring module detects that at least one thread is in a preset blocking state, the detection module detects the running state and associated state of each thread in the same process, and the adjustment module then adjusts the priority of one or at least two threads in that process according to the detected states. Thread priority is thus adjusted dynamically, improving system performance.
Embodiment 4
The method for adjusting thread priority of the present disclosure can dynamically adjust thread priority and thereby improve system performance. For ease of understanding, the terminal of the present disclosure is described below with reference to an application scenario.
FIG. 4 is a schematic diagram of the composition of the terminal provided by Embodiment 4 of the present disclosure. The terminal includes: a monitoring/processing module 401, a state management module 402, an intelligent policy module 403, an optimization module 404, and a recovery module 405.
The monitoring/processing module 401 monitors the running state of threads. According to the different situations of a thread during execution, at least three different running states can be defined, as shown in FIG. 5.
A thread in the running state enters the waiting state when a waiting event occurs. When the waiting event ends, the waiting thread enters the ready state, and the processor's scheduling policy causes switching between the running state and the ready state, which includes the following implementations.
1) The process scheduling function schedule() is actively and directly called in the kernel. When a thread needs to wait for a resource and temporarily stops running, its state is set to suspended, and it actively requests scheduling and gives up the CPU. This thread needs to be monitored.
2) For passive calls, after the CPU executes the current instruction and before it executes the next instruction, the CPU determines whether an interrupt or exception occurred after execution of the current instruction. If one occurred, the CPU compares the priority of the incoming interrupt with the priority of the current process; if the priority of the new task is higher, the interrupt service routine is executed, and when the interrupt returns, the thread scheduling function schedule is executed. This needs to be monitored.
3) For thread scheduling caused by IO resources, an IO flag is set before the schedule function is called; it can then be determined whether the thread's state switch was caused by IO resources.
4) When a thread uses network resources, it calls the socket interface. Flags can be added to the socket interface so that, when the thread's state switches, it is known whether the thread is blocked because of network resources.
The state management module 402 is responsible for the current states of the various threads in the thread pool, namely wait, runnable, running, and block. The intelligent policy module 403 is responsible for collecting statistics on the running state and running time of each thread and for formulating corresponding plans. The optimization module 404 dynamically adjusts thread priorities. The recovery module 405 restores a thread's default running state and waits for resource scheduling.
By collecting thread state information in advance and using dynamic statistics of running state and time, the present disclosure dynamically adjusts thread priorities and schedules and releases resources to the greatest extent. This markedly improves the resource waiting time experienced by applications and addresses one of the factors that most affects performance; the smaller the memory of a phone and the more applications it has installed, the more noticeable the performance improvement. For phones with small memory in particular, it can greatly reduce the occurrence of IO and improve the fluency of the phone. Besides communication terminal products, the present disclosure may also be applied to future in-vehicle products, computers, tablet computers, and the like.
Embodiment 5
This embodiment further provides a terminal, as shown in FIG. 6, which includes a processor 601, a memory 602, and a communication bus 603.
The communication bus 603 is configured to implement connection and communication between the processor 601 and the memory 602.
The processor 601 is configured to execute one or more computer programs stored in the memory 602 to implement at least one step of the method for adjusting thread priority in Embodiment 1 or Embodiment 2 above.
Embodiments of the present disclosure further provide a computer-readable storage medium, which includes volatile or nonvolatile, removable or non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, computer program modules, or other data). Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other memory technology, CD-ROM (Compact Disc Read-Only Memory), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
The computer-readable storage medium in this embodiment may be used to store one or more computer programs, and the stored one or more computer programs may be executed by a processor to implement at least one step of the method for adjusting thread priority in Embodiment 1 or Embodiment 2 above.
It will be understood by those skilled in the art that all or some of the steps of the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software (which may be implemented by computer program code executable by a computing device), firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit.
In addition, as is well known to those of ordinary skill in the art, communication media typically carry computer-readable instructions, data structures, computer program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium. Therefore, the present disclosure is not limited to any particular combination of hardware and software.
The above is a further detailed description of the embodiments of the present disclosure in conjunction with specific implementations, and the specific implementation of the present disclosure should not be considered limited to these descriptions. For those of ordinary skill in the art to which the present disclosure belongs, several simple deductions or substitutions can be made without departing from the concept of the present disclosure, and all of them should be regarded as falling within the protection scope of the present disclosure.

Claims (10)

  1. A method for adjusting thread priority, wherein the method comprises:
    monitoring the state of at least one thread;
    when it is detected that the at least one thread is in a preset blocking state, detecting the running state and associated state of each thread in the same process; and
    adjusting the priority of each of one or at least two threads in the same process according to the running state and associated state of each thread.
  2. The method for adjusting thread priority according to claim 1, wherein the at least one thread comprises:
    a thread that calls a process scheduling function and suspends running because it needs to wait for a system resource;
    and/or,
    a thread that executes the process scheduling function when a current instruction executed by a CPU involves an interrupted or abnormal process and the priority of the interrupted or abnormal process in the current instruction is higher than the priority of the process of the next instruction to be executed by the CPU.
  3. The method for adjusting thread priority according to claim 1, wherein monitoring the state of at least one thread comprises:
    detecting that the at least one thread executes a waiting policy, the at least one thread being in a waiting state;
    or,
    detecting that the at least one thread executes a sleep policy and the sleep time exceeds a first preset duration, the at least one thread transitioning from the waiting state to a ready state;
    or,
    detecting that the at least one thread executes a policy of waiting for a called current thread to finish executing before continuing to execute a next thread, and the waiting time exceeds a second preset duration, the at least one thread transitioning from the waiting state to the ready state;
    or,
    detecting that the at least one thread issues a request to acquire an input or output resource, the at least one thread transitioning from the waiting state to the ready state.
  4. The method for adjusting thread priority according to any one of claims 1 to 3, wherein adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread comprises:
    confirming the priority of each thread; and
    adjusting the priority of at least one of the threads according to the running state and the priority of each thread.
  5. The method for adjusting thread priority according to claim 4, wherein adjusting the priority of at least one of the threads according to the running state and priority of each thread comprises:
    when the threads include a first thread in a ready state and a second thread in a waiting state,
    confirming the priorities of the first thread and the second thread; and
    when the priority of the first thread is lower than that of the second thread, adjusting the priority of the first thread to be ahead of the second thread.
  6. The method for adjusting thread priority according to claim 4, wherein adjusting the priority of at least one of the threads according to the running state and priority of each thread comprises:
    when the threads include a third thread in a ready state and a fourth thread in a waiting state, confirming the priorities of the third thread and the fourth thread; and
    when the priority of the third thread is lower than that of the fourth thread and the third thread and the fourth thread wake each other up, adjusting the priority of the third thread to be ahead of the fourth thread.
  7. The method for adjusting thread priority according to any one of claims 1 to 3, wherein, after adjusting the priority of one or at least two threads in the same process according to the running state and associated state of each thread, the method further comprises:
    processing each thread according to its adjusted priority while waiting for the system to allocate a running time slice; and
    restoring the adjusted priority of each thread.
  8. A terminal, wherein the terminal comprises:
    a monitoring module, configured to monitor the state of at least one thread;
    a detection module, configured to detect the running state and associated state of each thread in the same process when the monitoring module detects that the at least one thread is in a preset blocking state; and
    an adjustment module, configured to adjust the priority of one or at least two threads in the same process according to the running state and associated state of each thread.
  9. A terminal, wherein the terminal comprises:
    a processor;
    a memory; and
    a communication bus, configured to implement connection and communication between the processor and the memory,
    wherein the processor is configured to execute one or more computer programs stored in the memory to implement the steps of the method for adjusting thread priority according to any one of claims 1 to 7.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more computer programs, and the one or more computer programs are executable by one or more processors to implement the steps of the method for adjusting thread priority according to any one of claims 1 to 7.
PCT/CN2021/128287 2020-11-09 2021-11-03 Method for adjusting thread priority, terminal, and computer-readable storage medium WO2022095862A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21888559.8A EP4242842A4 (en) 2020-11-09 2021-11-03 EXECUTION THREAD PRIORITY SETTING METHOD, TERMINAL AND COMPUTER READABLE STORAGE MEDIUM
US18/036,145 US20230409391A1 (en) 2020-11-09 2021-11-03 Thread priority adjusting method, terminal, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011238906.9A CN114461353A (zh) 2020-11-09 2020-11-09 Method for adjusting thread priority, terminal, and computer-readable storage medium
CN202011238906.9 2020-11-09

Publications (1)

Publication Number Publication Date
WO2022095862A1 true WO2022095862A1 (zh) 2022-05-12

Family

ID=81403904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128287 WO2022095862A1 (zh) 2020-11-09 2021-11-03 调整线程优先级的方法、终端及计算机可读存储介质

Country Status (4)

Country Link
US (1) US20230409391A1 (zh)
EP (1) EP4242842A4 (zh)
CN (1) CN114461353A (zh)
WO (1) WO2022095862A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116700817A * 2022-11-10 2023-09-05 荣耀终端有限公司 Method for running an application program and electronic device
CN117112241B * 2023-10-24 2024-02-06 腾讯科技(深圳)有限公司 Scheduling priority adjustment method and apparatus, device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276887A * 1997-10-23 2000-12-13 国际商业机器公司 Thread switching control in a multithreaded processor system
US20070198980A1 (en) * 2006-02-22 2007-08-23 Samsung Electronics Co., Ltd. Apparatus for forcibly terminating thread blocked on input/output operation and method for the same
CN108509260A * 2018-01-31 2018-09-07 深圳市万普拉斯科技有限公司 Thread identification processing method and apparatus, computer device, and storage medium
CN109992436A * 2017-12-29 2019-07-09 华为技术有限公司 Thread blocking detection method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5247675A (en) * 1991-08-09 1993-09-21 International Business Machines Corporation Preemptive and non-preemptive scheduling and execution of program threads in a multitasking operating system
AU731871B2 (en) * 1996-11-04 2001-04-05 Sun Microsystems, Inc. Method and apparatus for thread synchronization in object-based systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276887A * 1997-10-23 2000-12-13 国际商业机器公司 Thread switching control in a multithreaded processor system
US20070198980A1 (en) * 2006-02-22 2007-08-23 Samsung Electronics Co., Ltd. Apparatus for forcibly terminating thread blocked on input/output operation and method for the same
CN109992436A * 2017-12-29 2019-07-09 华为技术有限公司 Thread blocking detection method and device
CN108509260A * 2018-01-31 2018-09-07 深圳市万普拉斯科技有限公司 Thread identification processing method and apparatus, computer device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4242842A4

Also Published As

Publication number Publication date
CN114461353A (zh) 2022-05-10
EP4242842A1 (en) 2023-09-13
EP4242842A4 (en) 2024-04-24
US20230409391A1 (en) 2023-12-21

Similar Documents

Publication Publication Date Title
WO2022095862A1 (zh) Method for adjusting thread priority, terminal, and computer-readable storage medium
US9027027B2 (en) Thread management based on device power state
US8239869B2 (en) Method, system and apparatus for scheduling computer micro-jobs to execute at non-disruptive times and modifying a minimum wait time between the utilization windows for monitoring the resources
CA2849565C (en) Method, apparatus, and system for scheduling processor core in multiprocessor core system
CN107491346B (zh) 一种应用的任务处理方法、装置及系统
US20120210104A1 (en) Suspendable interrupts for processor idle management
CN109918141B (zh) 线程执行方法、装置、终端及存储介质
US8056083B2 (en) Dividing a computer job into micro-jobs for execution
CN111209110B (zh) 一种实现负载均衡的任务调度管理方法、系统和存储介质
US20180329750A1 (en) Resource management method and system, and computer storage medium
CN111897637B (zh) 作业调度方法、装置、主机及存储介质
JPWO2009060530A1 (ja) ネットワーク処理制御装置,プログラムおよび方法
US6820263B1 (en) Methods and system for time management in a shared memory parallel processor computing environment
WO2017156676A1 (zh) 一种针对应用的处理方法、装置及智能终端
CN112817772B (zh) 一种数据通信方法、装置、设备及存储介质
US8132171B2 (en) Method of controlling thread access to a synchronization object
US9128754B2 (en) Resource starvation management in a computer system
CN111538585A (zh) 一种基于node.js的服务器进程调度方法、系统和装置
CN114461365A (zh) 一种进程调度处理方法、装置、设备和存储介质
US9229716B2 (en) Time-based task priority boost management using boost register values
CA2767782A1 (en) Suspendable interrupts for processor idle management
AU2007261611A2 (en) Computer micro-jobs
CN110769046B (zh) 一种报文获取方法、装置、电子设备及机器可读存储介质
JP2008225641A (ja) コンピュータシステム、割り込み制御方法及びプログラム
CN112540886A (zh) Cpu负荷值检测方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21888559

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18036145

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2021888559

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021888559

Country of ref document: EP

Effective date: 20230609