WO2023125359A1 - A task processing method and apparatus - Google Patents

A task processing method and apparatus

Info

Publication number
WO2023125359A1
WO2023125359A1 (application PCT/CN2022/141776; CN2022141776W)
Authority
WO
WIPO (PCT)
Prior art keywords
task
request
state
scheduling
type
Prior art date
Application number
PCT/CN2022/141776
Other languages
English (en)
French (fr)
Inventor
陈鑫
高智慧
高永良
郑立铭
林程
赵鸣越
代雷
李修昶
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023125359A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources to service a request
    • G06F 9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 - Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of computer technology, in particular to a task processing method and device.
  • the computer system includes user mode and kernel mode.
  • When thread A in user mode switches to thread B in user mode, the system must first enter kernel mode and, in kernel mode, switch thread A's user-mode context, kernel-mode context, and all scheduling states over to those of thread B.
  • Only after the user-mode context and all scheduling states of thread B are in place can the switch from thread A to thread B complete, which results in low switching efficiency and affects the performance of the computer system.
  • the embodiment of the present application provides a method for task processing, which is used to improve the switching efficiency of user mode tasks and improve the performance of a computer system.
  • Embodiments of the present application also provide corresponding devices, devices, computer-readable storage media, and computer program products.
  • the first aspect of the present application provides a method for task processing.
  • the method is applied in a computer system.
  • the computer system includes a user state and a kernel state.
  • the user state includes multiple tasks, and the tasks are threads or processes.
  • the method includes: in the kernel state, detecting the type of a first request entering a kernel entry, where the kernel entry is the entry from the user state to the kernel state and the first request is triggered by a first task in the user state; when the type of the first request indicates that the first task is suspended in the user state, switching at least from the user-mode context of the first task to the user-mode context of a second task, and recording a first scheduling state of the first task, where the first scheduling state is that the first task is suspended in the user state together with the running time of the first task from start to suspension; and running the second task in user mode.
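The flow above can be sketched as a small simulation. This is a hypothetical illustration, not the patent's implementation: a request entering the kernel entry is classified, and if it suspends the first task, the first scheduling state (suspended flag plus run time) is recorded and execution passes to the second task. All names (`Task`, `on_kernel_entry`, the request type strings) are invented for illustration.

```python
class Task:
    def __init__(self, name, start_time):
        self.name = name
        self.start_time = start_time   # when the task started running
        self.suspended = False
        self.run_time = 0              # running time from start to suspension
        self.user_context = {"pc": 0}  # stand-in for register contents

def on_kernel_entry(request_type, first, second, now, suspending_types):
    """Handle a request triggered by `first`; return the task now running."""
    if request_type in suspending_types:
        # Record the first scheduling state of the first task.
        first.suspended = True
        first.run_time = now - first.start_time
        # Switch (at least) the user-mode context and run the second task.
        second.start_time = now
        return second
    return first  # the request does not suspend the first task

a = Task("A", start_time=0)
b = Task("B", start_time=0)
running = on_kernel_entry("read_file", a, b, now=5,
                          suspending_types={"read_file"})
print(running.name, a.suspended, a.run_time)  # B True 5
```

Note that only two fields of the first task are recorded on this path; the rest of its scheduling state is deliberately left alone, which is the source of the claimed speedup.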
  • the computer system may be a server, a terminal device or a virtual machine (virtual machine, VM).
  • the kernel mode and the user mode are two modes or two states of the operating system (operating system, OS).
  • the kernel mode is usually also called the privileged state, and the user mode is usually also called the non-privileged state.
  • a process is the smallest unit of resource allocation, and a thread is the smallest unit of operating system scheduling (processor scheduling).
  • a process can contain one or more threads.
  • the kernel entry may be any entry such as a system call entry, an exception entry, or an interrupt entry that can enter the kernel state from the user state.
  • the user mode context refers to a set of data necessary for a task to run in the user mode, such as data in registers of a processor.
  • Switching from the user mode context of the first task to the user mode context of the second task refers to moving the data required for the first task to run in the user mode from the register, and writing the data required for the second task to run in the user mode into the register.
  • the above registers may include any one or more of a general register, a program counter (program counter, PC), a program state register (program state, PS) and the like.
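The register-swap description above can be illustrated with a toy CPU model: save the first task's register data out, write the second task's data in. The register set (general registers, PC, PS) follows the text; the dict-based model is purely illustrative.

```python
# Simulated CPU registers: general registers, program counter, program state.
registers = {"general": [0] * 4, "pc": 0, "ps": 0}

def switch_user_context(saved_ctx_out, saved_ctx_in):
    """Move current register data into saved_ctx_out, then load saved_ctx_in."""
    saved_ctx_out.update({k: registers[k] for k in registers})
    for k in registers:
        registers[k] = saved_ctx_in[k]

task_a_ctx = {}  # will receive task A's registers on switch-out
task_b_ctx = {"general": [7, 7, 7, 7], "pc": 100, "ps": 1}

# Task A is running: its data sits in the registers.
registers.update({"general": [1, 2, 3, 4], "pc": 42, "ps": 0})
switch_user_context(task_a_ctx, task_b_ctx)

print(registers["pc"], task_a_ctx["pc"])  # 100 42
```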
  • the scheduling state of a task may include: whether the task is running or suspended; how long the task has run, that is, the running time from the start of the task to its suspension; whether it has entered or exited a queue; whether it is blocked, interrupted, or in an exception; whether it is called by other threads; and so on.
  • the first scheduling state of the first task includes the first task being suspended in the user mode and the running time of the first task from the start to the suspension, and the second scheduling state of the first task refers to the first task's All scheduling states except the first scheduling state.
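The first/second scheduling-state split above can be made concrete with a hypothetical data layout: the fields on the fast path (suspended flag, run time) live apart from everything else (queue membership, blocked/interrupted flags, caller). The field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class FirstSchedulingState:            # recorded on every fast switch
    suspended_in_user_mode: bool = False
    run_time: int = 0                  # from start of running to suspension

@dataclass
class SecondSchedulingState:           # left untouched on the fast path
    in_queue: bool = False
    blocked: bool = False
    interrupted: bool = False
    called_by: str = ""                # which thread called this task, if any

@dataclass
class SchedState:
    first: FirstSchedulingState = field(default_factory=FirstSchedulingState)
    second: SecondSchedulingState = field(default_factory=SecondSchedulingState)

s = SchedState()
s.first.suspended_in_user_mode = True  # a fast switch touches only `first`
print(s.second.blocked)                # False
```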
  • the type of the first request may be a preconfigured request type or a non-preconfigured request type.
  • the non-preconfigured request type refers to a request type that does not belong to the preconfigured request type.
  • the pre-configured request type is related to the business scenario: within that scenario, pre-configured request types occur more often than non-pre-configured request types.
  • the pre-configured request types can include one or more of: creating, reading, and writing files, directories, soft-link content, and file attributes; controlling and managing file descriptors; monitoring operations; and other types.
  • Different request types can be represented by different identifiers.
  • the request type for creating a file can be represented by 00001
  • the request type for reading a file can be represented by 00002
  • other ways can also be used to represent different request types, as long as the corresponding request type can be determined, the specific expression form of the request type is not limited in this application.
  • the pre-configured request type may be one or more of a request type for receiving data packets, a request type for sending data packets, or a request type for monitoring.
  • the pre-configured request type may be an IO-driven request type.
  • the pre-configured request type may be a request type of an IO operation.
  • the pre-configured request type may be a clock request type.
  • the pre-configured request type may be a request type related to memory requests.
  • the pre-configured request type may be a request type of waiting for a signal.
  • the pre-configured request type can be a remote procedure call (remote procedure call, RPC) request type, a request type for sending a message, or a request type for a synchronization lock operation.
  • the pre-configured request type can be a mount request type and a status acquisition request type.
  • the pre-configured request type may be a request type that converts a synchronous operation into an asynchronous operation.
  • the selection of the pre-configured request type can be determined according to the actual situation, which is not limited in this application.
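The identifier scheme described above (e.g. 00001 for creating a file, 00002 for reading a file, with a scenario-dependent preconfigured subset) can be sketched as a small registry. Only the two codes given in the text come from the source; the other codes and the chosen preconfigured set are invented for illustration.

```python
REQUEST_TYPES = {
    1: "create_file",  # 00001 in the text
    2: "read_file",    # 00002 in the text
    3: "recv_packet",  # codes below are invented examples
    4: "send_packet",
    5: "page_fault",
}

# Hypothetical preconfigured set for a file-heavy business scenario.
PRECONFIGURED = {1, 2}

def is_preconfigured(request_id):
    """Classify a request: preconfigured types take the fast switching path."""
    return request_id in PRECONFIGURED

print(is_preconfigured(2), is_preconfigured(5))  # True False
```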
  • In a possible implementation, the step in the above first aspect of, when the type of the first request indicates that the first task is suspended in user mode, switching at least from the user-mode context of the first task to the user-mode context of the second task, specifically includes: when the type of the first request indicates that the first task is suspended in user mode and the type of the first request is a pre-configured request type, switching only from the user-mode context of the first task to the user-mode context of the second task.
  • When the type of the first request is a pre-configured request type, only the user-mode context is switched and the kernel-mode context is not, which reduces the content to be switched and improves switching efficiency.
  • In a possible implementation, when the type of the first request indicates that the first task is suspended in user mode, switching at least from the user-mode context of the first task to the user-mode context of the second task specifically includes: when the type of the first request indicates that the first task is suspended in user mode and the type of the first request is a non-preconfigured request type, switching from the user-mode context of the first task to the user-mode context of the second task, and switching from the kernel-mode context of the first task to the kernel-mode context of the second task.
  • the kernel-mode context refers to a set of kernel-mode data that supports task running.
  • When the type of the first request is a non-preconfigured request type, for example an interrupt request or an exception request that in some business scenarios does not belong to the preconfigured request types, the kernel-mode context is also switched. Even in this implementation there is no need to process all the scheduling states of the first task, so task switching efficiency is still improved.
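The two switching paths described above differ only in whether the kernel-mode context is included. A minimal sketch, with context switching modeled as a list of switched components (the function name and return shape are invented for illustration):

```python
def switch_cost(request_is_preconfigured):
    """Return which contexts are switched for a given request class."""
    switched = ["user"]            # the user-mode context always switches
    if not request_is_preconfigured:
        switched.append("kernel")  # kernel-mode context only for
                                   # non-preconfigured requests
    # In neither case are all scheduling states processed: only the first
    # scheduling state is recorded, which is the claimed efficiency gain.
    return switched

print(switch_cost(True))   # ['user']
print(switch_cost(False))  # ['user', 'kernel']
```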
  • In a possible implementation, after running the second task in user mode, the method further includes: detecting the type of a second request entering the kernel entry, where the second request is triggered by a target task in user mode, the target task is the second task or the last of at least one task that runs consecutively after the second task, and the second task and the at least one task have all triggered requests of pre-configured request types; when the type of the second request indicates that the target task is suspended in user mode and the second request's type is a pre-configured request type, recording the first scheduling state of the target task and switching only from the user-mode context of the target task to the user-mode context of a third task, where the first scheduling state of the target task includes the target task being suspended in user mode and the running time of the target task from start to suspension; and running the third task in user mode.
  • When the switch from the first task to the second task was performed by switching only the user-mode context and recording the first scheduling state of the first task, and the second task or several consecutive subsequent tasks also initiate requests of pre-configured request types, each task switch again only needs to switch the task's user-mode context without switching its kernel-mode context, which further improves task switching efficiency.
  • In a possible implementation, the method further includes: detecting the type of a second request entering the kernel entry, where the second request is triggered by a target task in user mode, and the target task is the second task or the last of at least one task running consecutively after the second task. When the switch from the first task to the second task was performed by switching only the user-mode context and recording the first scheduling state of the first task, and either the second task initiates a second request of a non-preconfigured request type, or the second task and several consecutive tasks all initiate requests of pre-configured request types and only the target task initiates a second request of a non-preconfigured request type, then the kernel-mode context of the first task is switched directly to the kernel-mode context of the target task. Skipping the intermediate kernel-mode contexts reduces the content to be switched and improves switching efficiency.
  • In a possible implementation, when the target task is not blocked, after switching from the kernel-mode context of the first task to the kernel-mode context of the target task, the method further includes: returning to user mode to continue running the target task.
  • In a possible implementation, when the target task is blocked, the method further includes: scheduling a third task through the native scheduling process and switching from the target task to the third task, where the native scheduling process needs to process all scheduling states of each task from the first task through the at least one task; and running the third task in user mode.
  • the third task needs to be scheduled through a native scheduling process.
  • the native scheduling process refers to a process that, during task switching, not only switches the user-mode context and the kernel-mode context but also processes all the scheduling states of the task being switched out. This application thus implements fast switching while maintaining compatibility with the native scheduling process.
  • In a possible implementation, scheduling the third task through the native scheduling process specifically includes: modifying the second scheduling state of each task, from the first task through the at least one task, from the scheduling state at the time that task started running to the scheduling state each task should have when the native scheduling process is executed for the third task, where the second scheduling state of each task is every scheduling state of that task other than its first scheduling state.
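The fallback above can be sketched as a patch-up pass: each fast-switched task still carries a second scheduling state frozen at "just started running", and before the native scheduler runs for the third task those states are rewritten to what the native process expects. The state strings and function name are invented placeholders, not the patent's terms.

```python
def prepare_native_schedule(fast_switched_tasks):
    """Patch second scheduling states before invoking the native scheduler."""
    for t in fast_switched_tasks:
        # Fast switching left the state as it was at start-of-run...
        assert t["second_state"] == "started"
        # ...rewrite it to the state the native scheduling process requires.
        t["second_state"] = "native_ready"
    return fast_switched_tasks

tasks = [{"name": n, "second_state": "started"} for n in ("T1", "T2", "T3")]
prepare_native_schedule(tasks)
print([t["second_state"] for t in tasks])
# ['native_ready', 'native_ready', 'native_ready']
```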
  • In a possible implementation, the method further includes: temporarily saving the user-mode context of the first task after the first task triggers the first request. When it is determined that a fast switch will be performed for the first task, the temporarily saved user-mode context of the first task is saved as a target context, and the target context is used directly the next time the first task is scheduled, which helps restore the first task to user mode quickly.
  • the first request includes the information of the second task, and the information of the second task is used to schedule the second task.
  • When the first task initiates the first request, it can directly specify, in user mode, the information of the second task to be switched to; the information of the second task may be an identifier of the second task. In this way, in kernel mode, the computer system can directly schedule the second task according to that information, which further improves task switching efficiency.
  • In a possible implementation, the method further includes: recording information associating the first request with the first task; running the second task to obtain a result; returning the result to the first task according to the associated information; and switching from the second task back to the first task to continue running.
  • After the second task produces the result, the result is returned to the first task, and the switch back resumes running the first task.
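The request/result flow just described can be sketched with a dict standing in for the kernel's bookkeeping: associate the request with the waiting first task, run the second task, deliver the result through the recorded association, then resume the first task. All names here are illustrative.

```python
pending = {}  # request id -> the task waiting on that request

def submit(request_id, first_task):
    """Record the association between the first request and the first task."""
    pending[request_id] = first_task

def run_second_task(request_id):
    """The second task runs and produces a result for the request."""
    return f"result-for-{request_id}"

def complete(request_id, result):
    """Deliver the result via the association and resume the first task."""
    waiter = pending.pop(request_id)
    waiter["result"] = result
    return waiter  # switch back to the first task

task_a = {"name": "A", "result": None}
submit(42, task_a)
resumed = complete(42, run_second_task(42))
print(resumed["name"], resumed["result"])  # A result-for-42
```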
  • the second task is located in a first queue; the first queue is a first-in, first-out (FIFO) queue, and the second task is, among all tasks in the first queue, the first task to have entered it.
  • the first queue can also be called a fast queue.
  • the first queue is maintained first-in, first-out: when a task needs to be scheduled from the first queue, the task scheduled is the one that, among the tasks currently in the first queue, entered the queue earliest.
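The fast queue's FIFO discipline maps directly onto a deque: the task scheduled next is the earliest entrant among those currently queued. The task names are illustrative.

```python
from collections import deque

fast_queue = deque()
for name in ("task1", "task2", "task3"):  # task1 entered first
    fast_queue.append(name)

next_task = fast_queue.popleft()  # FIFO: the earliest entrant comes out
print(next_task, list(fast_queue))  # task1 ['task2', 'task3']
```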
  • In a possible implementation, before the native scheduling process is executed for the third task, the method further includes: synchronizing the tasks in the first queue and the scheduling states of those tasks to a second queue, and synchronizing the output information of the third task from the first queue to the second queue, where the second queue is the queue used by the native scheduling process; and synchronizing the position information of the tasks in the second queue back to the first queue, where the position information is used to adjust the positions of those tasks within the first queue.
  • the second queue may be called a slow queue, and the slow queue is used to execute a native scheduling process.
  • the tasks in the fast queue need to be synchronized to the slow queue, which is more conducive to compatibility with the native scheduling process.
  • the tasks in the fast queue may be rearranged in the slow queue according to the actual situation of each task, with each task inserted at an appropriate position in the slow queue; the position information of the at least one task in the slow queue is then synchronized back to the fast queue, so that the fast queue can reorder the at least one task according to that position information and tasks in the fast queue get fairer scheduling opportunities.
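The two-way synchronization above can be sketched as follows: tasks flow from the fast queue into a slow queue ordered by the native policy, and each task's resulting position flows back to reorder the fast queue. The priority function standing in for the native policy is invented for illustration.

```python
def sync_to_slow(fast_queue, priority):
    """Copy fast-queue tasks into a slow queue ordered by the native policy."""
    return sorted(fast_queue, key=priority)

def sync_positions_back(fast_queue, slow_queue):
    """Reorder the fast queue using each task's position in the slow queue."""
    pos = {t: i for i, t in enumerate(slow_queue)}
    fast_queue.sort(key=lambda t: pos[t])
    return fast_queue

fast = ["low", "high", "mid"]
slow = sync_to_slow(fast, priority={"high": 0, "mid": 1, "low": 2}.get)
print(sync_positions_back(fast, slow))  # ['high', 'mid', 'low']
```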
  • A second aspect of the present application provides a task processing apparatus, which includes a user mode and a kernel mode, the user mode including multiple tasks, the tasks being threads or processes. The apparatus includes a detection unit, a first processing unit, and a second processing unit, whose functions are as follows:
  • the detection unit is used to detect the type of the first request entering the kernel entry in the kernel state.
  • the kernel entry is the entry from the user state to the kernel state, and the first request is triggered by the first task in the user state.
  • the first processing unit is configured to, when the type of the first request detected by the detection unit indicates that the first task is suspended in user mode, switch at least from the user-mode context of the first task to the user-mode context of the second task, and record the first scheduling state of the first task, the first scheduling state being that the first task is suspended in user mode together with the running time of the first task from start to suspension.
  • the second processing unit is configured to run the second task switched by the first processing unit in the user mode.
  • the first processing unit is specifically configured to, when the type of the first request indicates that the first task is suspended in the user mode, and the type of the first request is a preconfigured request type, Then only switch from the user mode context of the first task to the user mode context of the second task.
  • the first processing unit is specifically configured to, when the type of the first request indicates that the first task is suspended in the user mode, and the type of the first request is a non-preconfigured request type, Then switch from the user mode context of the first task to the user mode context of the second task, and switch from the kernel mode context of the first task to the kernel mode context of the second task.
  • the preconfigured request type is related to a business scenario, and in the business scenario, the number of occurrences of the preconfigured request type is higher than the number of occurrences of the non-preconfigured request type in the business scenario .
  • the detection unit is further configured to detect the type of the second request entering the kernel entry, the second request is triggered by a target task in the user state, and the target task is the second task or in The last of the at least one task running consecutively after the second task, both the second task and the at least one task trigger a request of a preconfigured request type.
  • the first processing unit is further configured to, when the type of the second request indicates that the target task is suspended in user mode and the second request's type is a pre-configured request type, record the first scheduling state of the target task and switch only from the user-mode context of the target task to the user-mode context of the third task, where the first scheduling state of the target task includes the target task being suspended in user mode and the running time of the target task from start to suspension.
  • the second processing unit is also used to run the third task in the user mode.
  • the detection unit is further configured to detect the type of the second request entering the kernel entry, the second request being triggered by a target task in user mode, the target task being the second task or the last of at least one task running consecutively after the second task; when the target task is the last of the at least one task, the second task and each of the at least one task running before the target task has triggered a request of a pre-configured request type.
  • the first processing unit is further configured to, when the type of the second request indicates that the target task is suspended in user mode and the second request's type is a non-preconfigured request type, record the first scheduling state of the target task and switch from the kernel-mode context of the first task to the kernel-mode context of the target task, where the first scheduling state of the target task includes the target task being suspended in user mode and the running time of the target task from start to suspension.
  • the second processing unit is further configured to return to the user state to continue running the target task when the target task is not blocked.
  • the first processing unit is further configured to, when the target task is blocked, schedule the third task through the native scheduling process and switch from the target task to the third task, where the native scheduling process needs to process all scheduling states of each task from the first task through the at least one task.
  • the second processing unit is also used to run the third task in the user mode.
  • the first processing unit is specifically configured to modify the second scheduling state of each task, from the first task through the at least one task, from the scheduling state at the time that task started running to the scheduling state each task should have when the native scheduling process is executed for the third task, where the second scheduling state of each task is every scheduling state of that task other than its first scheduling state.
  • the first request includes the information of the second task, and the information of the second task is used to schedule the second task.
  • the second processing unit is further configured to: record information associating the first request with the first task; obtain a result by running the second task; return the result to the first task according to the associated information; and switch from the second task back to the first task to continue running.
  • the second task is located in the first queue, the first queue is a first-in-first-out queue, and the second task is a task that first enters the first queue among all tasks in the first queue.
  • the second processing unit is further configured to synchronize the tasks in the first queue and their scheduling states to the second queue, and synchronize the output information of the third task from the first queue to the second queue, the second queue being the queue used by the native scheduling process; and to synchronize the position information of the first queue's tasks in the second queue back to the first queue, the position information being used to adjust the positions of those tasks within the first queue.
  • the apparatus for task processing has the function of realizing the method of the first aspect or any possible implementation manner of the first aspect.
  • This function may be implemented by hardware, or may be implemented by executing corresponding software on the hardware.
  • the hardware or software includes one or more modules corresponding to the above functions, such as the detection unit, first processing unit, and second processing unit described above; these units may be implemented by one processing unit or by multiple processing units.
  • the relevant content of the second aspect or any possible implementation manner of the second aspect may be understood by referring to the first aspect and the relevant content of any possible implementation manner of the first aspect.
  • A third aspect of the present application provides a computer device, the computer device including at least one processor, a memory, an input/output (I/O) interface, and computer-executable instructions stored in the memory and runnable on the processor; when the computer-executable instructions are executed by the processor, the processor executes the method of the first aspect or of any possible implementation of the first aspect.
  • A fourth aspect of the present application provides a computer-readable storage medium storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, the one or more processors execute the method of the first aspect or of any possible implementation of the first aspect.
  • A fifth aspect of the present application provides a computer program product storing one or more computer-executable instructions; when the computer-executable instructions are executed by one or more processors, the one or more processors execute the method of the first aspect or of any possible implementation of the first aspect.
  • A sixth aspect of the present application provides a chip system, the chip system including at least one processor, the at least one processor being used to support the task processing apparatus in realizing the functions of the first aspect or of any possible implementation of the first aspect.
  • the system-on-a-chip may further include a memory for storing necessary program instructions and data of the device for task processing.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of an embodiment of a computer system provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of another embodiment of the computer system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of a task processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another embodiment of the computer system provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another embodiment of the task processing method provided by an embodiment of the present application.
  • Fig. 13 is a schematic diagram of an embodiment of a device for task processing provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the embodiment of the present application provides a method for task processing, which is used to improve the switching efficiency of user mode threads and improve the performance of the computer system.
  • Embodiments of the present application also provide corresponding devices, devices, computer-readable storage media, and computer program products. Each will be described in detail below.
  • the task processing method provided in the embodiment of the present application is applied to a computer system, and the computer system may be a server, a terminal device, or a virtual machine (virtual machine, VM).
  • Terminal equipment, also called user equipment (UE), is a device with a wireless transceiver function. It can be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on water (such as on ships), or in the air (such as on aircraft, balloons, and satellites).
  • The terminal may be a mobile phone, a tablet computer (pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medicine, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
  • FIG. 1 is a schematic diagram of an architecture of a computer system.
  • the architecture of the computer system includes a user state 10 , a kernel state 20 and a hardware layer 30 .
  • the user state 10 and the kernel state 20 are two modes or two states of an operating system (operating system, OS).
  • the kernel state is also usually called a privileged state, and the user state is also usually called a non-privileged state.
  • the user state 10 includes multiple tasks, and the tasks refer to user programs, which may be processes or threads.
  • a process is the smallest unit of resource allocation, and a thread is the smallest unit of operating system scheduling (processor scheduling).
  • a process can contain one or more threads.
  • In the kernel state 20, the OS is responsible for managing key resources and provides OS call entries for user-state processes or threads, and then provides services in the kernel, such as blocking processing, page fault (PF) handling, page table management, and interrupt control.
  • The hardware layer 30 includes the hardware resources on which the operation of the kernel state 20 depends, such as the processor, memory, memory management unit (MMU), input/output (I/O) devices, and disks.
  • the processor may include a register set, and the register set may include multiple types of registers, such as stack frame registers, general-purpose registers, and non-volatile (callee-saved) registers.
  • An MMU is a piece of computer hardware responsible for handling memory access requests from a central processing unit (CPU). Its functions include translation from virtual address to physical address, memory protection, control of CPU cache, etc.
  • an application is bound to a thread.
  • When a thread is running in user mode and blocking, a page fault exception, or an interrupt occurs, the thread triggers a request into kernel mode; thread switching is then performed in the kernel state, after which execution returns to user mode.
  • For example, when thread A is running in user mode and blocking, a page fault exception, or an interrupt occurs, thread A triggers a request into kernel mode; the kernel then performs thread switching, switches from thread A to thread B, and returns to user mode to run thread B.
  • Switching from thread A to thread B usually requires switching both the user-mode and kernel-mode contexts of thread A and thread B, as well as processing the scheduling states of both threads. This large amount of switching work leads to low thread-switching efficiency and affects the performance of the computer system.
  • The kernel state of the computer system includes a fast scheduling & switching module, a function processing module, and a native scheduling & switching module; the native scheduling & switching module may also include a compatibility detection module.
  • the fast scheduling & switching module is equivalent to adding an intermediate layer between the native scheduling & switching module and the user state, and performs fast scheduling & switching for tasks that trigger requests to enter the kernel state.
  • the fast scheduling & switching module is used to realize fast switching of tasks in user mode.
  • The function processing module is used to handle operations such as lock operations, remote procedure calls (RPC), and kernel-mode context switching.
  • the native scheduling & switching module is used to perform scheduling & switching of processes or threads through the native scheduling process of the computer system.
  • scheduling refers to scheduling resources
  • switching refers to switching processes or threads, and scheduling can also be understood as a prerequisite for switching.
  • The compatibility detection module is used to realize compatibility between the fast scheduling & switching module, the function processing module, and the native scheduling & switching module.
  • In the process of executing task processing in the user state, the solution provided by the embodiment of the present application can execute the procedure shown in FIG. 3.
  • An embodiment of the task processing method provided by the embodiment of the present application includes:
  • the computer system is in the kernel state, and detects the type of the first request entering the kernel entry.
  • the kernel entry is the entry from the user state to the kernel state, and the first request is triggered by the first task in the user state.
  • the kernel entry may be any entry such as a system call entry, an exception entry, or an interrupt entry that can enter the kernel state from the user state.
  • When the type of the first request indicates that the first task is suspended in the user state, the computer system at least switches from the user-mode context of the first task to the user-mode context of the second task, and records the first scheduling state of the first task.
  • The first scheduling state of the first task comprises the first task being in a suspended state in user mode and the running time of the first task from start of running to suspension.
  • the first scheduling state of the first task is a part of all scheduling states of the first task.
  • the user mode context refers to a set of data necessary for a task to run in the user mode, such as data in registers of a processor.
  • Switching from the user mode context of the first task to the user mode context of the second task refers to moving the data required for the first task to run in the user mode from the register, and writing the data required for the second task to run in the user mode into the register.
  • the above registers may include any one or more of a general register, a program counter (program counter, PC), a program state register (program state, PS) and the like.
  • The scheduling state of a task may include whether the task is running or suspended, the task's running time (that is, the time from when the task starts running until it is suspended), whether it has entered or exited a queue, whether it is blocked or has encountered an interrupt or exception, whether it is called by other threads, and so on.
  • The first scheduling state of the first task includes the first task being suspended in user mode and its running time from start to suspension; the second scheduling state of the first task refers to all of the first task's scheduling states except the first scheduling state.
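As a rough illustration (not part of the patent text), the split between the first and second scheduling states just described could be modeled as follows; all field names here are invented for the sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchedState:
    # "First scheduling state": the minimum recorded during a fast switch.
    suspended: bool = False
    run_time_ns: int = 0
    # "Second scheduling state": everything else, left untouched until the
    # native scheduling process actually needs it.
    in_queue: bool = False
    blocked: bool = False
    caller: Optional[str] = None

def record_first_state(state: SchedState, run_time_ns: int) -> None:
    """Record only the first scheduling state when a task is suspended."""
    state.suspended = True
    state.run_time_ns += run_time_ns

# A fast switch touches only the first two fields; the rest stay as-is.
s = SchedState(in_queue=True)
record_first_state(s, 1200)
```

The point of the split is that a fast switch writes only the two cheap fields, deferring the rest of the bookkeeping.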
  • the computer system runs the second task in the user state.
  • the type of the first request may be a preconfigured request type or a non-preconfigured request type
  • the non-preconfigured request type refers to a request type that does not belong to the preconfigured request type.
  • the pre-configured request type is related to the business scenario, and in the business scenario, the number of occurrences of the pre-configured request type is higher than that of the non-pre-configured request type in the business scenario.
  • The pre-configured request types may include one or more of: creating, reading, and writing files, directories, soft-link contents, and file attributes; controlling and managing file descriptors; monitoring operations; and other types.
  • Different request types can be represented by different identifiers.
  • the request type for creating a file can be represented by 00001
  • the request type for reading a file can be represented by 00002
  • other ways can also be used to represent different request types, as long as the corresponding request type can be determined, the specific expression form of the request type is not limited in this application.
  • the pre-configured request type may be one or more of a request type for receiving data packets, a request type for sending data packets, or a request type for monitoring.
  • the pre-configured request type may be an IO-driven request type.
  • the pre-configured request type may be a request type of an IO operation.
  • the pre-configured request type may be a clock request type.
  • the pre-configured request type may be a request type related to memory requests.
  • the pre-configured request type may be a request type of waiting for a signal.
  • the pre-configured request type can be a remote procedure call (remote procedure call, RPC) request type, a request type for sending a message, or a request type for a synchronization lock operation.
  • the pre-configured request type can be a mount request type and a status acquisition request type.
  • the pre-configured request type may be a request type that converts a synchronous operation into an asynchronous operation.
  • the selection of the pre-configured request type can be determined according to the actual situation, which is not limited in this application.
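A minimal sketch (not from the patent) of how request-type identifiers and the pre-configured set could be checked at the kernel entry; the numeric values follow the document's "00001"/"00002" examples, and the interrupt identifier is invented for illustration:

```python
# Hypothetical numeric identifiers for request types; the document's own
# examples are 00001 for creating a file and 00002 for reading a file.
CREATE_FILE = 1    # "00001"
READ_FILE = 2      # "00002"
INTERRUPT = 101    # assumed non-pre-configured type in this sketch

# Which types count as pre-configured depends on the business scenario.
PRECONFIGURED_TYPES = {CREATE_FILE, READ_FILE}

def is_preconfigured(request_type: int) -> bool:
    """Classify a request entering the kernel entry by its type identifier."""
    return request_type in PRECONFIGURED_TYPES
```

In a real system the identifiers would come from the actual syscall/exception numbers; only the membership test matters here.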
  • the embodiment of the present application provides two fast switching schemes and a native switching scheme based on fast switching.
  • This scheme of task switching can be understood by referring to the schematic structural diagram shown in FIG. 4 .
  • the user state includes a first task and a second task
  • FIG. 4 shows three switching schemes.
  • Switching scheme 1: when switching from the first task to the second task, path 1 in Figure 4 is executed. Only the user-mode contexts of the first task and the second task need to be switched; the first scheduling state is recorded and the second scheduling state is not processed.
  • Switching scheme 2: when switching from the first task to the second task, path 2 in Figure 4 is executed. In addition to switching the user-mode contexts of the two tasks, the kernel-mode context of the first task is switched to that of the second task; the first scheduling state is recorded and the second scheduling state is not processed.
  • Switching scheme 3: when switching from the first task to the second task, after the user-mode and kernel-mode contexts are switched, path 3 in Figure 4 is executed and the native scheduling process is also performed.
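The choice among the three paths can be sketched as a simple dispatch; this is an illustrative simplification, not the patent's actual decision logic:

```python
def choose_switch_path(request_preconfigured: bool, native_needed: bool) -> int:
    """Pick one of the three switching paths of Figure 4.

    Path 1: pre-configured request -> switch only the user-mode context.
    Path 2: non-pre-configured request -> also switch the kernel-mode context.
    Path 3: native scheduling required (for example, the target task is
            blocked) -> run the full native scheduling process as well.
    """
    if native_needed:
        return 3
    return 1 if request_preconfigured else 2
```

The ordering matters: the need for native scheduling overrides the fast paths, mirroring how scheme 3 subsumes the context switches of schemes 1 and 2.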
  • the type of the first request is a pre-configured request type
  • The first scheduling state of the first task indicates that the first task is suspended in user mode, together with the running time of the first task from start of running to suspension; the second scheduling state comprises all of the first task's scheduling states other than the first scheduling state.
  • the pre-configured request type refers to a pre-defined type that does not need to switch the kernel mode context, such as: the pre-configured request type in each business scenario listed above.
  • the process of switching from task A (equivalent to the first task above) to task B (equivalent to the second task above) may include:
  • Task A running in user mode triggers a first request to kernel mode.
  • This process may be to remove the user mode context of task A from the register and save it in the memory.
  • This process may store the user-mode context of task A according to a target structure; the user-mode context in the target structure is called the target context.
  • This target context is used when task A is scheduled next time.
  • When the first request is of a pre-configured request type and it is determined that the switching solution of path 1 shown in FIG. 4 is executed for this task switch, 504 or 505 may be performed.
  • When task A initiates the first request, the computer system may directly specify, in user mode, the information of task B to be switched to (for example, the identifier of task B). In the kernel state, the computer system can then directly schedule task B according to this information, which further improves task-switching efficiency.
  • Task B is located in the first queue, and the first queue is a First In, First Out (FIFO) queue, and task B is the first task to enter the first queue among all tasks in the first queue.
  • the switching scheme 1 only switches the context of the user mode, but does not switch the context of the kernel mode. Although the user mode is running task B, the kernel mode context of task A is still maintained in the kernel mode.
  • For requests of the pre-configured request types, switching scheme 1 provided by this embodiment only needs to switch the user-mode contexts of the first and second tasks, does not switch the kernel-mode context, records only the first scheduling state, and does not process the second scheduling state, which further improves the switching efficiency from the first task to the second task.
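The path-1 flow, including the FIFO fast queue and the option of a task specified directly in user mode, can be sketched as below; the `Task` fields are invented stand-ins for real register state:

```python
from collections import deque

class Task:
    def __init__(self, name):
        self.name = name
        self.user_ctx_loaded = True    # its registers are currently live
        self.suspended = False         # part of the first scheduling state

fast_queue = deque()                   # the FIFO fast queue of ready tasks

def fast_switch(current, specified=None):
    """Path-1 sketch: save only the user-mode context of `current`, record
    its first scheduling state, and resume either a task specified in user
    mode or the head of the FIFO fast queue. The kernel-mode context of
    `current` is deliberately left in place."""
    current.user_ctx_loaded = False    # registers moved out to memory
    current.suspended = True           # record the first scheduling state
    nxt = specified if specified is not None else fast_queue.popleft()
    nxt.user_ctx_loaded = True         # load its user-mode context
    return nxt

a, b = Task("A"), Task("B")
fast_queue.append(b)
running = fast_switch(a)               # B entered the fast queue first
```

Note that nothing here touches a kernel-mode context or the second scheduling state, which is exactly what makes this path fast.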
  • When the type of the first request is a non-pre-configured request type, that is, when it does not belong to the pre-configured request types, the computer system switches from the user-mode context of the first task to that of the second task, switches from the kernel-mode context of the first task to that of the second task, records the first scheduling state of the first task, and does not process the second scheduling state.
  • the kernel-mode context refers to a set of kernel-mode data that supports task execution.
  • When the type of the first request does not belong to the pre-configured request types, such as an interrupt request or an exception request in some scenarios, the kernel-mode context is also switched together with the user-mode context.
  • the process of switching from task A (equivalent to the first task above) to task B (equivalent to the second task above) may include:
  • Task A running in user mode triggers a first request to kernel mode.
  • This process may be to remove the user mode context of task A from the register and save it in the memory.
  • This process may store the user-mode context of task A according to a target structure; the user-mode context in the target structure is called the target context.
  • This target context is used when task A is scheduled next time.
  • When the type of the first request does not belong to the pre-configured request types (for example, an interrupt request or an exception request) and it is determined that the switching scheme of path 2 shown in FIG. 4 is executed for this task switch, 604 or 605 may be performed.
  • When task A initiates the first request, the computer system may directly specify, in user mode, the information of task B to be switched to (for example, the identifier of task B). In the kernel state, the computer system can then directly schedule task B according to this information, which further improves task-switching efficiency.
  • Task B is located in the first queue, and the first queue is a First In, First Out (FIFO) queue, and task B is the first task to enter the first queue among all tasks in the first queue.
  • For requests that do not belong to the pre-configured request types, switching scheme 2 provided by this embodiment switches the user-mode and kernel-mode contexts of the first and second tasks, records only the first scheduling state, and does not process the second scheduling state, which further improves the switching efficiency from the first task to the second task.
  • The embodiment of the present application also provides a recursive switching scheme. The recursive switching scheme includes: after running the second task in user mode, detecting the type of a second request entering the kernel entry, where the second request is triggered by a target task in user mode; the target task is the second task itself, or the last of at least one task that runs consecutively after the second task, and the second task and each of the at least one task all trigger requests of the pre-configured request types. When the type of the second request indicates that the target task is suspended in user mode and the second request is of a pre-configured request type, the first scheduling state of the target task is recorded and only the user-mode context of the target task is switched to the user-mode context of a third task, where the first scheduling state of the target task includes the target task being suspended in user mode and its running time from start to suspension. The third task is then run in user mode.
  • This solution can be understood as switching from task A to task B through path 1 in the above-mentioned FIG. 4 , and then switching to task C through path 1 .
  • the process can be understood by referring to FIG. 7.
  • the recursive switching process may include:
  • Task B running in the user mode triggers a second request to the kernel mode.
  • This process may be to move the user mode context of task B out of the register and save it in the memory.
  • This process may store the user-mode context of task B according to a target structure; the user-mode context in the target structure is called the target context.
  • This target context is used the next time Task B is scheduled.
  • When the second request is of a pre-configured request type and it is determined that the switching solution of path 1 shown in FIG. 4 is executed for this task switch, 704 or 705 may be performed.
  • When task B initiates the second request, the computer system may directly specify, in user mode, the information of task C to be switched to (for example, the identifier of task C). In the kernel state, the computer system can then directly schedule task C according to this information, which further improves task-switching efficiency.
  • Task C is located in the first queue, the first queue is a FIFO queue, and task C is the task that first enters the first queue among all the tasks in the first queue.
  • For requests of the pre-configured request types, the recursive switching scheme provided by this embodiment only needs to switch the user-mode context of task B, while the kernel can continue to hold the kernel-mode context of task A, which further improves task-switching efficiency.
  • This embodiment only enumerates the case where the target task is the second task. If multiple requests of the pre-configured request types occur consecutively after the second task, only the user-mode context needs to be switched on each switch, which greatly improves switching efficiency.
  • switching scheme B can be executed on the basis of the above switching scheme 1.
  • task B triggers the second request in user mode
  • The processing of this situation includes: detecting the type of the second request entering the kernel entry, where the second request is triggered by the target task in user mode; the target task is the second task, or the last of at least one task that runs consecutively after the second task; when the target task is the last of the at least one task, the second task and each task running before the target task in the at least one task trigger requests of the pre-configured request types. When the type of the second request indicates that the target task is suspended in user mode and the second request is of a non-pre-configured request type, the first scheduling state of the target task is recorded and the kernel-mode context of the first task is switched to the kernel-mode context of the target task, where the first scheduling state of the target task includes the target task being suspended in user mode and its running time.
  • That is, after switching from the first task to the second task by switching only the user-mode context and recording the first scheduling state of the first task, either the second task initiates a second request of a non-pre-configured request type, or the second task and several consecutive tasks all initiate requests of the pre-configured request types and only the target task initiates a second request of a non-pre-configured request type.
  • In this case only the kernel-mode context of the target task needs to be switched in, which reduces the content to be switched and improves switching efficiency.
  • When the target task is not blocked, after switching from the kernel-mode context of the first task to the kernel-mode context of the target task, the method further includes returning to user mode to continue running the target task. That is, if the target task is not blocked, it can return to user mode and continue running, thereby achieving rapid recovery of the target task.
  • the third task needs to be scheduled through a native scheduling process.
  • the native scheduling process refers to the process of not only switching the user mode context and the kernel mode context but also processing all the scheduling states of the task before switching during the task switching process.
  • The process can be understood as follows: when the target task is blocked, the method further includes scheduling the third task through the native scheduling process and switching from the target task to the third task, where the native scheduling process needs to process all the scheduling states of each task from the first task through the at least one task; the third task is then run in user mode.
  • the native scheduling process is the switching scheme 3 of path 3 described in Figure 4.
  • the switching scheme 3 is introduced below.
  • the process of the switching solution 3 may be performed when it is determined that the type of the second request does not belong to the preconfigured request type.
  • The process takes the target task being the second task as an example; the process of determining that the native scheduling process needs to be executed includes:
  • the target task is not the second task
  • The third task is scheduled through the native scheduling process. Specifically, for each task from the first task through the at least one task, the second scheduling state that the task had when it started running is modified, so as to determine the scheduling state corresponding to each task when the native scheduling process is executed for the third task; the second scheduling state of each task is all scheduling states of that task other than its first scheduling state.
  • When executing the native scheduling process, it is necessary to synchronize the latest scheduling state of each of the first task, the second task, and the at least one task, so that the kernel does not perceive the fast switching that previously occurred among these tasks. In this way, the earlier switches do not affect the native scheduling process at all, and the scheme is fully compatible with it.
  • the scheduling status synchronization process described in step 803 above includes queue synchronization.
  • The queue synchronization process includes: synchronizing the at least one task in the first queue and its scheduling state to the second queue, synchronizing the information that the second task has been dequeued from the first queue to the second queue, and then synchronizing the position information of the at least one task in the second queue back to the first queue.
  • the second queue is a queue used for the native scheduling process.
  • the first queue may be called a fast queue
  • the second queue may be called a slow queue
  • the slow queue is used to execute a native scheduling process.
  • the tasks in the fast queue need to be synchronized to the slow queue, which is more conducive to compatibility with the native scheduling process.
  • During synchronization, the tasks from the fast queue may be rearranged in the slow queue according to the actual situation of each task and inserted into appropriate positions; the position information of the at least one task in the slow queue is then synchronized back to the fast queue, so that the fast queue can optimize the order of its tasks according to this information and tasks in the fast queue obtain fairer scheduling opportunities.
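The fast/slow queue synchronization just described can be sketched as below. This is a simplification under assumptions: tasks are plain names, and the slow queue appends rather than computing a fair insertion position:

```python
def sync_fast_and_slow(fast, slow, dequeued):
    """Sketch of the queue synchronization: tasks (and their scheduling
    state, omitted here) go from the fast queue to the slow queue, the
    dequeue of `dequeued` is propagated, and slow-queue positions are fed
    back so the fast queue can reorder its tasks."""
    # 1. Synchronize every fast-queue task into the slow queue.
    for t in fast:
        if t not in slow:
            slow.append(t)     # real code would pick a fair position
    # 2. Propagate the dequeue information to the slow queue.
    if dequeued in slow:
        slow.remove(dequeued)
    # 3. Feed slow-queue positions back to reorder the fast queue.
    position = {t: i for i, t in enumerate(slow)}
    fast.sort(key=lambda t: position.get(t, len(slow)))

fast = ["C", "E", "G"]         # thread names as in Fig. 12
slow = ["B", "D", "F"]
sync_fast_and_slow(fast, slow, "B")
```

After the call the slow queue contains every task the native scheduler must know about, and the fast queue's order matches the slow queue's fairness decision.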
  • the above-described task processing process can be applied to multiple scenarios such as token scheduling, simplified fair scheduling, and RPC scheduling.
  • the solution of combining these scenarios with a computer system can be understood by referring to FIG. 9 .
  • the user state may include multiple processes, and each process may include multiple threads.
  • the fast scheduling & switching module may include the following functional units.
  • Kernel-mode quick call interface: this can be understood as the kernel entry. The first request or the second request in the above embodiments enters the kernel-mode quick call interface, which detects the type of the request and then determines the subsequent step, for example, whether to enter the function processing unit or the context compatibility unit next.
  • Function processing unit: used to process functions that can be handled quickly, generally functions whose state is relatively simple and that are relatively independent of the native process, such as lock operations.
  • the scheduling switching framework includes a new scheduling interface and a fast switching unit.
  • the new scheduling interface is used to schedule tasks in the fast queue
  • the fast switching unit is used to switch the user mode context of the task through the fast path (path 1 above).
  • Scheduler: this part supports different functions to implement different scheduling strategies, such as token-based scheduling, simplified fair scheduling that is more general and does not limit scenarios, and RPC scheduling based on remote calls.
  • The context compatibility unit is used when the type of the first request or the second request does not belong to the pre-configured request types; the flow then enters the context compatibility unit to switch the kernel-mode context of the task.
  • Scheduling compatibility processing interface: used to provide a series of callback functions for the native queue access probes of the native scheduling to call, so as to realize fast switching and data synchronization with the native scheduling process.
  • Management allocation unit: used to manage the range of threads that can be fast-scheduled, including group attribute relationships and relationship management; the scheduler can perform token scheduling, simplified fair scheduling, or RPC scheduling based on remote calls according to the management allocation unit.
  • the native scheduling & switching module includes a kernel context switching unit and a native queue access probe.
  • the kernel context switching unit is used to switch the kernel state context of a task
  • the native queue access probe is used to realize fast switching and data synchronization of the native scheduling process.
  • the above embodiment shown in FIG. 9 includes RPC scheduling and simplified fair scheduling.
  • the following describes the task processing process provided by the embodiment of the present application in combination with these two scheduling scenarios.
  • In an RPC scenario, the system is usually divided into multiple processes or threads using the client & server model.
  • threads can act as servers.
  • the process of the task processing method provided by the embodiment of the present application includes:
  • Thread B is found through the next-thread information specified in user mode (the information of thread B, such as the identifier of thread B).
  • the scheduling status needs to be synchronized, such as the execution time of thread A and thread B, queue status, etc.
  • The synchronization covers not only the currently executing tasks but also all tasks executed since the last synchronization. As a result, it looks to the native scheduling & switching module as if the fast switching never happened.
  • Thread A, as the client, specifies the required thread B as the server. The corresponding thread B can then be woken up and scheduled, while thread A is marked as blocked; the kernel records that the caller of thread B is thread A, and when thread B returns the result, thread A is woken up.
  • In the RPC scheduling process of this application, the thread to be scheduled is directly specified, and when switching the contexts of thread A and thread B the kernel-mode context may or may not be switched; in either case, all the scheduling states of thread A and thread B do not need to be processed, which improves the speed of thread switching during RPC scheduling.
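The caller/server handoff just described can be sketched as below; the `Thread` fields are invented stand-ins for the kernel's real bookkeeping:

```python
class Thread:
    def __init__(self, name):
        self.name = name
        self.blocked = False
        self.caller = None

def rpc_call(client, server):
    """The client directly specifies the server thread to run next: the
    server is woken, the client is marked blocked, and the caller is
    recorded so it can be woken again on return."""
    client.blocked = True
    server.blocked = False
    server.caller = client
    return server                     # the thread now scheduled

def rpc_return(server):
    """The server finishes: wake the recorded caller and clear the link."""
    caller = server.caller
    server.caller = None
    caller.blocked = False
    return caller

a = Thread("A")
b = Thread("B")
b.blocked = True                      # B is idle, waiting for requests
running = rpc_call(a, b)              # A hands off directly to B
resumed = rpc_return(b)               # B returns its result, A resumes
```

Because the next thread is named explicitly, no scheduler search is needed on either hop, which is where the speedup over a generic wakeup comes from.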
  • the RPC scheduling scheme provided by the embodiment of the present application has significantly improved the switching speed.
  • a large number of experiments were done, and the experimental results in Table 1 below were obtained when the number of experiments for each type was 100,000 times.
  • Table 1 (100,000 runs per test case):

        Test case   Communication method   Runs     Max (ns)   Min (ns)   Mean (ns)
        Case1       this application       100000   49280      2240       2385
        Case2       DOMAIN                 100000   1025920    25800      43819
        Case3       MQ+SHM                 100000   1573960    19600      31074
        Case4       PIPE                   100000   1120000    23960      38055
  • the process includes:
  • When thread A is running in user mode and thread A is fast-blocked, a request to kernel mode is triggered.
  • After thread A is fast-blocked, it enters the suspended state.
  • the fast queue is a FIFO queue. Because thread B is the first to enter the fast queue among all the threads in the current queue, thread B is scheduled first.
  • the fast context of thread B may be saved when thread B is fast blocked.
  • thread A can be woken up through the fast queue processing unit of thread C.
  • The scheduling states synchronized here include the execution times of thread A and thread B, queue status, and so on; not only the currently executing tasks are synchronized, but also all tasks executed since the last synchronization. As a result, it looks to the native scheduling & switching module as if the fast switching never happened.
  • the synchronization process of the fast queue and the slow queue will occur during the state synchronization process.
  • the synchronization process of these two queues will be introduced below in conjunction with FIG. 12 .
  • the process includes:
  • Thread C, thread E and thread G in the fast queue shown in FIG. 12 are all synchronized to the slow queue.
  • The running time of the threads that ran between the two synchronizations is also synchronized to the slow queue.
  • The running time of these threads can be determined through load tracking.
  • The information that thread B has already been output (dequeued) also needs to be synchronized to the slow queue.
  • the threads synchronized from the fast queue are reordered in the slow queue.
  • In this step, the threads are sorted according to thread fairness.
  • the task processing device includes a user state and a kernel state.
  • the user state includes multiple tasks, and the tasks are threads or processes.
  • the task processing device 130 provided by the embodiment of the present application includes:
  • the detection unit 1301 is configured to detect the type of the first request entering the kernel entry in the kernel state.
  • the kernel entry is the entry from the user state to the kernel state, and the first request is triggered by the first task in the user state.
  • the detection unit 1301 is used to execute step 401 in the method embodiment.
  • The first processing unit 1302 is configured to: when the type of the first request detected by the detection unit 1301 indicates that the first task is suspended in the user state, switch at least from the user-state context of the first task to the user-state context of the second task, and record the first scheduling state of the first task, where the first scheduling state of the first task is that the first task is in a suspended state in the user state, together with the first task's running time from the start of running until suspension.
  • the first processing unit 1302 is configured to execute step 402 in the method embodiment.
  • the second processing unit 1303 is configured to run the second task switched by the first processing unit 1302 in the user mode.
  • the second processing unit 1303 is configured to execute step 403 in the method embodiment.
  • After the first request of the first task enters the kernel state from the user state, the type of the first request is detected to determine that the first task is suspended in the user state; only the fact that the first task is suspended in the user state and its running time from the start of running until suspension are recorded, and no other scheduling state is processed. In this way, in the process of switching from the first task to the second task, the content to be processed is reduced and the switching efficiency is improved, thereby improving the performance of the computer system.
  • The first processing unit 1302 is specifically configured to: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a preconfigured request type, switch only from the user-state context of the first task to the user-state context of the second task.
  • The first processing unit 1302 is specifically configured to: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a non-preconfigured request type, switch from the user-state context of the first task to the user-state context of the second task, and switch from the kernel-state context of the first task to the kernel-state context of the second task.
  • the preconfigured request type is related to the business scenario, and in the business scenario, the frequency of occurrence of the preconfigured request type is higher than that of the non-preconfigured request type in the business scenario.
  • The detection unit 1301 is further configured to detect the type of the second request entering the kernel entry, where the second request is triggered by the target task in the user state, the target task is the second task or the last of at least one task running consecutively after the second task, and the second task and the at least one task have all triggered requests of preconfigured request types.
  • The first processing unit 1302 is further configured to: when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a preconfigured request type, record the first scheduling state of the target task and switch only from the user-state context of the target task to the user-state context of the third task, where the first scheduling state of the target task includes the target task being suspended in the user state and the target task's running time from the start of running until suspension.
  • the second processing unit 1303 is further configured to run the third task in the user mode.
  • The detection unit 1301 is further configured to detect the type of the second request entering the kernel entry, where the second request is triggered by the target task in the user state, and the target task is the second task or the last of at least one task; when the target task is the last of the at least one task, the second task and each task of the at least one task that runs before the target task have all triggered requests of preconfigured request types.
  • The first processing unit 1302 is further configured to: when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a non-preconfigured request type, record the first scheduling state of the target task and switch from the kernel-state context of the first task to the kernel-state context of the target task, where the first scheduling state of the target task includes the target task being suspended in the user state and the target task's running time from the start of running until suspension.
  • the second processing unit 1303 is further configured to return to the user state to continue running the target task when the target task is not blocked.
  • The second processing unit 1303 is further configured to: when the target task is blocked, schedule the third task through the native scheduling flow and switch from the target task to the third task.
  • The native scheduling flow needs to process all scheduling states of each task from the first task through the at least one task.
  • the second processing unit 1303 is further configured to run the third task in the user mode.
  • The first processing unit 1302 is specifically configured to modify the second scheduling state of each task, from the first task through the at least one task, from the scheduling state the task had when it started running to the scheduling state corresponding to that task at the time it is determined that the native scheduling flow is executed for the third task, where the second scheduling state of each task is all scheduling states of that task other than its first scheduling state.
  • the first request includes the information of the second task, and the information of the second task is used to schedule the second task.
  • The device 130 further includes a saving unit 1304, configured to save the user-state context of the first task; when the type of the first request is a preconfigured request type, the user-state context of the first task is saved as the target context, and the target context is used the next time the first task is scheduled.
  • the first request includes the information of the second task, and the information of the second task is used to schedule the second task.
  • The second processing unit 1303 is further configured to: record information associating the first request with the first task; obtain a return result by running the second task; return the return result to the first task according to the information associated with the first task; and switch from the second task back to the first task to continue running.
  • the second task is located in the first queue, the first queue is a first-in-first-out queue, and the second task is a task that first enters the first queue among all the tasks in the first queue.
  • The second processing unit 1303 is further configured to: synchronize the tasks in the first queue and the scheduling states of the tasks in the first queue to the second queue, and synchronize to the second queue the information that the third task has already been output from the first queue, where the second queue is the queue used for the native scheduling flow; and synchronize the position information, in the second queue, of the tasks in the first queue back to the first queue, where the position information is used to adjust the positions of the tasks in the first queue.
  • FIG. 14 is a schematic diagram of a possible logical structure of a computer device 140 provided by an embodiment of the present application.
  • the computer device 140 includes: a processor 1401 , a communication interface 1402 , a memory 1403 , a disk 1404 and a bus 1405 .
  • the processor 1401, communication interface 1402, memory 1403, and disk 1404 are connected to each other through a bus 1405.
  • the processor 1401 is used to control and manage the actions of the computer device 140 , for example, the processor 1401 is used to execute the steps in the method embodiments in FIGS. 3 to 12 .
  • Communication interface 1402 is used to support computer device 140 in communicating.
  • The memory 1403 is used to store the program code and data of the computer device 140 and to provide memory space for processes or threads. The disk 1404 is used to store physical pages swapped out of memory.
  • the processor 1401 may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic devices, transistor logic devices, hardware components or any combination thereof. It can implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor 1401 may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and the like.
  • The bus 1405 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • A computer-readable storage medium is also provided, in which computer-executable instructions are stored; when the processor of a device executes the computer-executable instructions, the device performs the steps performed by the processor in FIG. 3 to FIG. 12 above.
  • a computer program product includes computer-executable instructions stored in a computer-readable storage medium; when the processor of the device executes the computer-executable instructions , the device executes the steps executed by the processor in FIG. 3 to FIG. 12 above.
  • a system-on-a-chip is further provided, and the system-on-a-chip includes a processor, and the processor is used for task processing to implement the above-mentioned steps performed by the processor in FIG. 3 to FIG. 12 .
  • The system-on-a-chip may further include a memory, which is used to store the program instructions and data necessary for the task processing device.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into units is merely a division by logical function; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the embodiments of this application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
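The fast-switch behavior that the units above implement — record only the "first scheduling state" (suspended flag plus runtime) and swap user-state contexts while leaving the kernel-state context untouched — can be sketched as a small user-space model. This is purely illustrative: the patent provides no code, and all names here (`Task`, `fast_switch`, `cpu_user_ctx`) are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    user_ctx: dict          # user-state context (register snapshot)
    kernel_ctx: dict        # kernel-state context
    state: str = "running"
    runtime_ns: int = 0     # runtime from start of run until suspension

cpu_user_ctx = {}           # stands in for the CPU's user-visible registers

def fast_switch(current, nxt, now_ns, started_ns):
    # Record only the "first scheduling state" of the suspended task:
    # that it is paused, and how long it ran. All other scheduling
    # states (queue membership, blocking flags, ...) are untouched.
    current.state = "paused"
    current.runtime_ns = now_ns - started_ns
    current.user_ctx = dict(cpu_user_ctx)    # save outgoing user context
    cpu_user_ctx.clear()
    cpu_user_ctx.update(nxt.user_ctx)        # load incoming user context
    nxt.state = "running"                    # kernel_ctx is NOT switched

# Example: task A triggers a preconfigured-type request and yields to B.
task_a = Task("A", {"pc": 0x1000}, {"kstack": "A"})
task_b = Task("B", {"pc": 0x2000}, {"kstack": "B"})
cpu_user_ctx.update(task_a.user_ctx)
fast_switch(task_a, task_b, now_ns=500, started_ns=100)
```

Note how task A's kernel-state context is still in place after the switch, which is exactly what lets a later non-preconfigured request switch directly from A's kernel context to the target task's.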

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Stored Programmes (AREA)

Abstract

This application discloses a task processing method applied to a computer system. The computer system includes a user state and a kernel state, the user state includes multiple tasks, and a task is a thread or a process. The method includes: in the kernel state, detecting the type of a first request entering a kernel entry; when the type of the first request indicates that a first task is suspended in the user state, switching at least from the user-state context of the first task to the user-state context of a second task, recording that the first task is in a suspended state and its running time from the start of running until suspension, and not processing the other scheduling states of the first task; and running the second task in the user state. Because the technical solution of this application does not switch the kernel-state context and records only part of the scheduling state of the first task while leaving the other scheduling states unprocessed, the content to be switched is reduced and switching efficiency is improved, thereby improving the performance of the computer system.

Description

Task processing method and apparatus

This application claims priority to Chinese Patent Application No. 202111633084.9, filed with the China National Intellectual Property Administration on December 28, 2021 and entitled "Task processing method and apparatus", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of computer technologies, and in particular, to a task processing method and apparatus.

Background

With the development of hardware technology, the computing resources of computer systems have become increasingly abundant: there may be hundreds or even thousands of processor cores, and each core computes faster and faster. Meanwhile, software systems need to handle more and more tasks such as threads and processes, the number of switches between tasks has greatly increased, and performance requirements keep rising.

A computer system includes a user state and a kernel state. When thread A in the user state switches to thread B in the user state, the system must first enter the kernel state and, in the kernel state, complete the switch from thread A's user-state context, kernel-state context, and all scheduling states to thread B's kernel-state context, user-state context, and all scheduling states before thread A can be switched to thread B. This makes switching inefficient and affects the performance of the computer system.
Summary

Embodiments of this application provide a task processing method for improving the switching efficiency of user-state tasks and the performance of a computer system. Embodiments of this application further provide a corresponding apparatus, a device, a computer-readable storage medium, a computer program product, and the like.

A first aspect of this application provides a task processing method applied to a computer system. The computer system includes a user state and a kernel state, the user state includes multiple tasks, and a task is a thread or a process. The method includes: in the kernel state, detecting the type of a first request entering a kernel entry, where the kernel entry is the entry from the user state into the kernel state, and the first request is triggered by a first task in the user state; when the type of the first request indicates that the first task is suspended in the user state, switching at least from the user-state context of the first task to the user-state context of a second task, and recording a first scheduling state of the first task, where the first scheduling state of the first task is that the first task is in a suspended state in the user state, together with the first task's running time from the start of running until suspension; and running the second task in the user state.

In this application, the computer system may be a server, a terminal device, or a virtual machine (VM). The kernel state and the user state are two modes or states of an operating system (OS); the kernel state is usually also called the privileged state, and the user state the non-privileged state. A process is the minimum unit of resource allocation, and a thread is the minimum unit of operating system scheduling (processor scheduling). One process may include one or more threads.

In this application, the kernel entry may be any entry from the user state into the kernel state, such as a system call entry, an exception entry, or an interrupt entry.

In this application, the user-state context is the set of data indispensable for a task to run in the user state, such as the data in the processor's registers. Switching from the user-state context of the first task to the user-state context of the second task means moving the data the first task needs to run in the user state out of the registers and writing the data the second task needs to run in the user state into the registers. The registers may include any one or more of the general-purpose registers, the program counter (PC), the program state register (PS), and so on.

In this application, the scheduling state of a task may include whether the task is running or suspended; how long the task has been running, that is, its running time from the start of running until suspension; whether it has entered or left a queue; whether blocking, an interrupt, or an exception has occurred; whether it is being called by another thread; and the like. In this application, the first scheduling state of the first task includes the first task being suspended in the user state and the first task's running time from the start of running until suspension; the second scheduling state of the first task is all scheduling states of the first task other than the first scheduling state.

It can be learned from the first aspect that after the first request of the first task enters the kernel state from the user state, the type of the first request is detected to determine that the first task is suspended in the user state; only the fact that the first task is suspended in the user state and its running time from the start of running until suspension are recorded, and no other scheduling state is processed. In this way, in the process of switching from the first task to the second task, the content to be processed is reduced and switching efficiency is improved, thereby improving the performance of the computer system.
In the first aspect, the type of the first request may be a preconfigured request type or a non-preconfigured request type; a non-preconfigured request type is a request type that does not belong to the preconfigured request types.

In this application, the preconfigured request types are related to the business scenario; in a given business scenario, a preconfigured request type occurs more frequently than a non-preconfigured request type.

Some examples of preconfigured request types in different business scenarios are as follows:

In a business scenario focused on the file system, the preconfigured request types may be one or more of: creating, reading, or writing the content or attributes of files, directories, or soft links; control and management operations on file descriptors; file monitoring operations; and the like. Different request types may be denoted by different identifiers; for example, the request type for creating a file may be denoted 00001 and the request type for reading a file 00002. Of course, other representations may be used, as long as the corresponding request type can be determined; this application does not limit the concrete representation of request types.

In a business scenario focused on the network system, the preconfigured request type may be one or more of a request type for receiving packets, a request type for sending packets, and a request type for listening.

In a business scenario focused on hardware-driven input/output (IO), the preconfigured request type may be a driver IO request type.

In a business scenario focused on IO multiplexing, the preconfigured request type may be an IO operation request type.

In a business scenario focused on timer operations, the preconfigured request type may be a timer request type.

In a business scenario focused on memory operations, the preconfigured request type may be a request type related to memory requests.

In a business scenario focused on signal handling, the preconfigured request type may be a request type for waiting for a signal.

In a business scenario focused on inter-process communication, the preconfigured request type may be a remote procedure call (RPC) request type, a message-sending request type, or a synchronization-lock operation request type.

In a business scenario focused on file system management, the preconfigured request type may be a mount request type or a status-query request type.

In a scenario focused on asynchronous operations, the preconfigured request type may be a request type that converts a synchronous operation into an asynchronous operation.

The above are only examples; in different business scenarios, the choice of preconfigured request types may be determined according to the actual situation, which is not limited in this application.
In a possible implementation of the first aspect, the step of switching at least from the user-state context of the first task to the user-state context of the second task when the type of the first request indicates that the first task is suspended in the user state is specifically: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a preconfigured request type, switching only from the user-state context of the first task to the user-state context of the second task.

In this possible implementation, when the type of the first request is a preconfigured request type, only the user-state context needs to be switched from the first task to the second task, and the kernel-state context does not need to be switched, which reduces the content to be switched and improves switching efficiency.

In a possible implementation of the first aspect, the step is specifically: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a non-preconfigured request type, switching from the user-state context of the first task to the user-state context of the second task, and switching from the kernel-state context of the first task to the kernel-state context of the second task.

In this possible implementation, the kernel-state context is the set of kernel-state data that supports the running of a task. When the type of the first request is a non-preconfigured request type — for example, interrupt requests or exception requests in some business scenarios do not belong to the preconfigured request types — the kernel-state context is switched in addition to the user-state context when such a request is detected in the kernel state. Even in this implementation, all scheduling states of the first task still do not need to be processed, so task switching efficiency can still be improved.

In a possible implementation of the first aspect, after running the second task in the user state, the method further includes: detecting the type of a second request entering the kernel entry, where the second request is triggered by a target task in the user state, the target task is the second task or the last of at least one task running consecutively after the second task, and the second task and the at least one task have all triggered requests of preconfigured request types; when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a preconfigured request type, recording the first scheduling state of the target task and switching only from the user-state context of the target task to the user-state context of a third task, where the first scheduling state of the target task includes the target task being suspended in the user state and its running time from the start of running until suspension; and running the third task in the user state.

In this possible implementation, after switching from the first task to the second task by switching only the user-state context and recording the first scheduling state of the first task, if the second task or several consecutive tasks all initiate requests of preconfigured request types, each task switch needs to switch only the task's user-state context and not its kernel-state context, which further improves task switching efficiency.

In a possible implementation of the first aspect, after running the second task in the user state, the method further includes: detecting the type of the second request entering the kernel entry, where the second request is triggered by the target task in the user state, the target task is the second task or the last of at least one task running consecutively after the second task, and when the target task is the last of the at least one task, the second task and each task of the at least one task that runs before the target task have all triggered requests of preconfigured request types; when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a non-preconfigured request type, recording the first scheduling state of the target task and switching from the kernel-state context of the first task to the kernel-state context of the target task, where the first scheduling state of the target task includes the target task being suspended in the user state and its running time from the start of running until suspension.

In this possible implementation, after switching from the first task to the second task by switching only the user-state context and recording the first scheduling state of the first task, if the second task initiates a second request of a non-preconfigured request type, or the second task and several consecutive tasks all initiate requests of preconfigured request types and only the target task initiates a second request of a non-preconfigured request type, the system needs to switch directly from the kernel-state context of the first task to the kernel-state context of the target task, which reduces the content to be switched and improves switching efficiency.
In a possible implementation of the first aspect, when the target task is not blocked, after switching from the kernel-state context of the first task to the kernel-state context of the target task, the method further includes: returning to the user state to continue running the target task.

In this possible implementation, after the kernel-state context is switched from the first task to the target task, if the target task is not blocked, the system can return to the user state and continue running the target task, achieving fast recovery of the target task.

In a possible implementation of the first aspect, when the target task is blocked, the method further includes: scheduling a third task through the native scheduling flow and switching from the target task to the third task, where the native scheduling flow needs to process all scheduling states of each task from the first task through the at least one task; and running the third task in the user state.

In this possible implementation, after the kernel-state context is switched from the first task to the target task, if the target task is blocked, the third task needs to be scheduled through the native scheduling flow. The native scheduling flow is the flow in which a task switch switches not only the user-state context and the kernel-state context but also processes all scheduling states of the task being switched away from. In this application, compatibility with the native scheduling flow is maintained while fast switching is achieved.

In a possible implementation of the first aspect, scheduling the third task through the native scheduling flow is specifically: modifying the second scheduling state of each task, from the first task through the at least one task, from the scheduling state the task had when it started running to the scheduling state corresponding to that task at the time it is determined that the native scheduling flow is executed for the third task, where the second scheduling state of each task is all scheduling states of that task other than its first scheduling state.

In this possible implementation, when the native scheduling flow is executed, the latest scheduling states of the first task, the second task, and each of the at least one task need to be synchronized, so that the kernel does not perceive the fast switches that previously occurred among these tasks. In this way, the switches that previously occurred do not affect the native scheduling flow at all, and compatibility with the native scheduling flow is well maintained.

In a possible implementation of the first aspect, before detecting the type of the first request entering the kernel entry, the method further includes: saving the user-state context of the first task; and, when the type of the first request is a preconfigured request type, saving the user-state context of the first task as a target context, where the target context is used the next time the first task is scheduled.

In this possible implementation, after the first task triggers the first request, the user-state context of the first task is first temporarily saved; once it is determined that a fast switch is executed for the first task, the temporarily saved user-state context of the first task can be saved as the target context. In this way, the next time the first task is scheduled, the target context can be used directly, which helps quickly restore the first task to execution in the user state.
In a possible implementation of the first aspect, in remote procedure call (RPC) scheduling, the first request includes the information of the second task, and the information of the second task is used to schedule the second task.

In this possible implementation, when the first task initiates the first request, the computer system directly specifies, in the user state, the information of the second task to be switched to; the information of the second task may be an identifier of the second task. In this way, in the kernel state, the computer system can directly schedule the second task for switching according to the information of the second task, further improving task switching efficiency.

In a possible implementation of the first aspect, the method further includes: recording information associating the first request with the first task; obtaining a return result by running the second task; returning the return result to the first task according to the information associated with the first task; and switching from the second task back to the first task to continue running.

In this possible implementation, in an RPC scenario, by recording the information associating the first request with the first task, after the return result of the second task is obtained, the return result can be returned to the first task according to that information, and the system switches back to the first task to continue running.

In a possible implementation of the first aspect, the second task is located in a first queue, the first queue is a first-in-first-out (FIFO) queue, and the second task is the task that entered the first queue earliest among all tasks in the first queue.

In this possible implementation, the first queue may also be called the fast queue; the first queue is maintained in first-in-first-out order, and when a task needs to be scheduled from the first queue, only the task that entered the first queue earliest among the tasks currently in it needs to be scheduled.

In a possible implementation of the first aspect, in simplified fair scheduling, before the native scheduling flow for the third task is executed, the method further includes: synchronizing the tasks in the first queue and the scheduling states of the tasks in the first queue to a second queue, and synchronizing to the second queue the information that the third task has already been output from the first queue, where the second queue is the queue used for the native scheduling flow; and synchronizing the position information, in the second queue, of the tasks in the first queue back to the first queue, where the position information is used to adjust the positions of the tasks in the first queue.

In this possible implementation, the second queue may be called the slow queue, which is used for executing the native scheduling flow. Before the native scheduling flow is executed, all tasks in the fast queue need to be synchronized to the slow queue, which is more conducive to compatibility with the native scheduling flow. After the tasks in the fast queue are synchronized to the slow queue, they may be rearranged in the slow queue according to the actual situation of each task and inserted into appropriate positions in the slow queue; the position information of the at least one task in the slow queue is then synchronized back to the fast queue, so that the fast queue optimizes the order of the at least one task according to this position information. This gives the tasks in the fast queue more opportunities to be scheduled fairly.
A second aspect of this application provides a task processing apparatus. The apparatus includes a user state and a kernel state, the user state includes multiple tasks, and a task is a thread or a process. The apparatus includes a detection unit, a first processing unit, and a second processing unit, whose functions are as follows:

The detection unit is configured to detect, in the kernel state, the type of a first request entering a kernel entry, where the kernel entry is the entry from the user state into the kernel state, and the first request is triggered by a first task in the user state.

The first processing unit is configured to: when the type of the first request detected by the detection unit indicates that the first task is suspended in the user state, switch at least from the user-state context of the first task to the user-state context of a second task, and record a first scheduling state of the first task, where the first scheduling state of the first task is that the first task is in a suspended state in the user state, together with its running time from the start of running until suspension.

The second processing unit is configured to run, in the user state, the second task switched to by the first processing unit.

It can be learned from the second aspect that after the first request of the first task enters the kernel state from the user state, the type of the first request is detected to determine that the first task is suspended in the user state; only the fact that the first task is suspended in the user state and its running time from the start of running until suspension are recorded, and no other scheduling state is processed. In this way, in the process of switching from the first task to the second task, the content to be processed is reduced and switching efficiency is improved, thereby improving the performance of the computer system.

In a possible implementation of the second aspect, the first processing unit is specifically configured to: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a preconfigured request type, switch only from the user-state context of the first task to the user-state context of the second task.

In a possible implementation of the second aspect, the first processing unit is specifically configured to: when the type of the first request indicates that the first task is suspended in the user state, and the type of the first request is a non-preconfigured request type, switch from the user-state context of the first task to the user-state context of the second task, and switch from the kernel-state context of the first task to the kernel-state context of the second task.

In a possible implementation of the second aspect, the preconfigured request types are related to the business scenario; in a given business scenario, a preconfigured request type occurs more frequently than a non-preconfigured request type.
In a possible implementation of the second aspect, the detection unit is further configured to detect the type of a second request entering the kernel entry, where the second request is triggered by a target task in the user state, the target task is the second task or the last of at least one task running consecutively after the second task, and the second task and the at least one task have all triggered requests of preconfigured request types.

The first processing unit is further configured to: when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a preconfigured request type, record the first scheduling state of the target task and switch only from the user-state context of the target task to the user-state context of a third task, where the first scheduling state of the target task includes the target task being suspended in the user state and its running time from the start of running until suspension.

The second processing unit is further configured to run the third task in the user state.

In a possible implementation of the second aspect, the detection unit is further configured to detect the type of the second request entering the kernel entry, where the second request is triggered by the target task in the user state, and the target task is the second task or the last of at least one task running consecutively after the second task; when the target task is the last of the at least one task, the second task and each task of the at least one task that runs before the target task have all triggered requests of preconfigured request types.

The first processing unit is further configured to: when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a non-preconfigured request type, record the first scheduling state of the target task and switch from the kernel-state context of the first task to the kernel-state context of the target task, where the first scheduling state of the target task includes the target task being suspended in the user state and its running time from the start of running until suspension.

In a possible implementation of the second aspect, the second processing unit is further configured to: when the target task is not blocked, return to the user state to continue running the target task.

In a possible implementation of the second aspect, the first processing unit is further configured to: when the target task is blocked, schedule the third task through the native scheduling flow and switch from the target task to the third task, where the native scheduling flow needs to process all scheduling states of each task from the first task through the at least one task.

The second processing unit is further configured to run the third task in the user state.

In a possible implementation of the second aspect, the first processing unit is specifically configured to modify the second scheduling state of each task, from the first task through the at least one task, from the scheduling state the task had when it started running to the scheduling state corresponding to that task at the time it is determined that the native scheduling flow is executed for the third task, where the second scheduling state of each task is all scheduling states of that task other than its first scheduling state.

In a possible implementation of the second aspect, in remote procedure call (RPC) scheduling, the first request includes the information of the second task, and the information of the second task is used to schedule the second task.

In a possible implementation of the second aspect, the second processing unit is further configured to: record information associating the first request with the first task; obtain a return result by running the second task; return the return result to the first task according to the information associated with the first task; and switch from the second task back to the first task to continue running.

In a possible implementation of the second aspect, the second task is located in the first queue, the first queue is a first-in-first-out queue, and the second task is the task that entered the first queue earliest among all tasks in the first queue.

In a possible implementation of the second aspect, in a simplified fair scheduling scenario, the second processing unit is further configured to: synchronize the tasks in the first queue and the scheduling states of the tasks in the first queue to the second queue, and synchronize to the second queue the information that the third task has already been output from the first queue, where the second queue is the queue used for the native scheduling flow; and synchronize the position information, in the second queue, of the tasks in the first queue back to the first queue, where the position information is used to adjust the positions of the tasks in the first queue.
The task processing apparatus has the functions of implementing the method of the first aspect or any possible implementation of the first aspect. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, for example the detection unit, the first processing unit, and the second processing unit described above; these units may be implemented by one processing unit or by multiple processing units. The related content of the second aspect or any possible implementation of the second aspect can be understood with reference to the related content of the first aspect and any possible implementation of the first aspect.

A third aspect of this application provides a computer device, including at least one processor, a memory, an input/output (I/O) interface, and computer-executable instructions stored in the memory and runnable on the processor; when the computer-executable instructions are executed by the processor, the processor performs the method of the first aspect or any possible implementation of the first aspect.

A fourth aspect of this application provides a computer-readable storage medium storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, one or more processors perform the method of the first aspect or any possible implementation of the first aspect.

A fifth aspect of this application provides a computer program product storing one or more computer-executable instructions; when the computer-executable instructions are executed by one or more processors, the one or more processors perform the method of the first aspect or any possible implementation of the first aspect.

A sixth aspect of this application provides a chip system, including at least one processor, where the at least one processor is configured to support a task processing apparatus in implementing the functions involved in the first aspect or any possible implementation of the first aspect. In a possible design, the chip system may further include a memory, configured to store the program instructions and data necessary for the task processing apparatus. The chip system may consist of chips, or may include chips and other discrete devices.
Brief Description of Drawings

FIG. 1 is a schematic diagram of an embodiment of a computer system according to an embodiment of this application;

FIG. 2 is a schematic diagram of another embodiment of a computer system according to an embodiment of this application;

FIG. 3 is a schematic diagram of an embodiment of a task processing method according to an embodiment of this application;

FIG. 4 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 5 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 6 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 7 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 8 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 9 is a schematic diagram of another embodiment of a computer system according to an embodiment of this application;

FIG. 10 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 11 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 12 is a schematic diagram of another embodiment of a task processing method according to an embodiment of this application;

FIG. 13 is a schematic diagram of an embodiment of a task processing apparatus according to an embodiment of this application;

FIG. 14 is a schematic structural diagram of a computer device according to an embodiment of this application.
Detailed Description

The following describes the embodiments of this application with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of this application. A person of ordinary skill in the art knows that, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of this application are likewise applicable to similar technical problems.

The terms "first", "second", and the like in the specification, claims, and accompanying drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.

Embodiments of this application provide a task processing method for improving the switching efficiency of user-state threads and the performance of a computer system. Embodiments of this application further provide a corresponding apparatus, a device, a computer-readable storage medium, a computer program product, and the like. Detailed descriptions are given below.

The task processing method provided in the embodiments of this application is applied to a computer system, which may be a server, a terminal device, or a virtual machine (VM).

A terminal device (which may also be called user equipment (UE)) is a device with wireless transceiver functions. It can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; on water (for example, on a ship); or in the air (for example, on an aircraft, a balloon, or a satellite). The terminal may be a mobile phone, a tablet (pad), a computer with wireless transceiver functions, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like.

The architecture of the computer system can be understood with reference to FIG. 1. FIG. 1 is a schematic architectural diagram of a computer system.

As shown in FIG. 1, the architecture of the computer system includes a user state 10, a kernel state 20, and a hardware layer 30. The user state 10 and the kernel state 20 are two modes or states of an operating system (OS); the kernel state is usually also called the privileged state, and the user state the non-privileged state.

The user state 10 includes multiple tasks; a task is a user program and may be a process or a thread. A process is the minimum unit of resource allocation, and a thread is the minimum unit of operating system scheduling (processor scheduling). One process may include one or more threads.

In the kernel state 20, the OS manages key resources and provides OS call entries for user-state processes or threads, thereby providing services in the kernel, such as blocking handling, page fault (PF) handling, page table management, and interrupt control.

The hardware layer 30 includes the hardware resources on which the kernel state 20 depends to run, such as the processor, the memory, the memory management unit (MMU), input/output (I/O) devices, and disks. The processor may include a register set, which may include multiple types of registers, such as stack frame registers, general-purpose registers, and callee-saved registers. The registers store the context of a thread or the context of a coroutine of that thread.

The MMU is computer hardware responsible for handling memory access requests from the central processing unit (CPU). Its functions include translation from virtual addresses to physical addresses, memory protection, and control of the CPU cache.

In a computer system, taking threads as the tasks, one application is usually bound to one thread. When blocking, a page fault, or an interrupt occurs while a thread runs in the user state, the thread triggers a request into the kernel state; a thread switch is then performed in the kernel state, and execution returns to the user state. For example, when blocking, a page fault, or an interrupt occurs while thread A is running in the user state, thread A triggers a request into the kernel state, a switch from thread A to thread B is performed in the kernel state, and execution then returns to the user state to run thread B. Normally, switching from thread A to thread B requires switching the user-state contexts and kernel-state contexts of thread A and thread B and processing the scheduling states of thread A and thread B. So much switching content makes thread switching inefficient and affects the performance of the computer system.
To improve the switching efficiency of user-state processes or threads, as shown in FIG. 2, an embodiment of this application provides another computer system. The kernel state of this computer system includes a fast scheduling & switching module, a function processing module, and a native scheduling & switching module; the native scheduling & switching module may further include a compatibility detection module. The fast scheduling & switching module is equivalent to an intermediate layer added between the native scheduling & switching module and the user state, and performs fast scheduling & switching for tasks whose requests enter the kernel state.

The fast scheduling & switching module is used to implement fast switching of user-state tasks.

The function processing module is used to handle operations such as lock operations, remote procedure calls (RPC), and kernel-state context switches.

The native scheduling & switching module is used to perform process or thread scheduling & switching through the computer system's native scheduling flow.

In this application, "scheduling" refers to scheduling resources, and "switching" refers to switching processes or threads; scheduling can also be understood as a prerequisite step of switching.

The compatibility detection module is used to make the fast scheduling & switching module compatible with the function processing module and the native scheduling & switching module.

Based on the computer system shown in FIG. 2, in the solution provided by the embodiments of this application, the flow shown in FIG. 3 can be executed in the process of handling user-state tasks. As shown in FIG. 3, an embodiment of the task processing method provided by the embodiments of this application includes:
401. In the kernel state, the computer system detects the type of a first request entering a kernel entry, where the kernel entry is the entry from the user state into the kernel state, and the first request is triggered by a first task in the user state.

In this embodiment of this application, the kernel entry may be any entry from the user state into the kernel state, such as a system call entry, an exception entry, or an interrupt entry.

402. When the type of the first request indicates that the first task is suspended in the user state, the computer system switches at least from the user-state context of the first task to the user-state context of a second task and records a first scheduling state of the first task.

The first scheduling state of the first task is that the first task is in a suspended state in the user state, together with the first task's running time from the start of running until suspension.

The first scheduling state of the first task is a part of all the scheduling states of the first task.

In this embodiment, the user-state context is the set of data indispensable for a task to run in the user state, such as the data in the processor's registers. Switching from the user-state context of the first task to the user-state context of the second task means moving the data the first task needs to run in the user state out of the registers and writing the data the second task needs to run in the user state into the registers. The registers may include any one or more of the general-purpose registers, the program counter (PC), the program state register (PS), and so on.

In this embodiment, the scheduling state of a task may include whether the task is running or suspended; the task's running time from the start of running until suspension; whether it has entered or left a queue; whether blocking, an interrupt, or an exception has occurred; whether it is being called by another thread; and the like. In this application, the first scheduling state of the first task includes the first task being suspended in the user state and its running time from the start of running until suspension; the second scheduling state of the first task is all scheduling states of the first task other than the first scheduling state.

403. The computer system runs the second task in the user state.

As can be seen from the above description, in the solution provided by this embodiment, after the first request of the first task enters the kernel state from the user state, the type of the first request is detected to determine that the first task is suspended in the user state; only the fact that the first task is suspended in the user state and its running time from the start of running until suspension are recorded, and no other scheduling state is processed. In this way, in the process of switching from the first task to the second task, the content to be processed is reduced and switching efficiency is improved, thereby improving the performance of the computer system.
In this embodiment of this application, the type of the first request may be a preconfigured request type or a non-preconfigured request type; a non-preconfigured request type is a request type that does not belong to the preconfigured request types.

In this application, the preconfigured request types are related to the business scenario; in a given business scenario, a preconfigured request type occurs more frequently than a non-preconfigured request type.

Some examples of preconfigured request types in different business scenarios are as follows:

In a business scenario focused on the file system, the preconfigured request types may be one or more of: creating, reading, or writing the content or attributes of files, directories, or soft links; control and management operations on file descriptors; file monitoring operations; and the like. Different request types may be denoted by different identifiers; for example, the request type for creating a file may be denoted 00001 and the request type for reading a file 00002. Of course, other representations may be used, as long as the corresponding request type can be determined; this application does not limit the concrete representation of request types.

In a business scenario focused on the network system, the preconfigured request type may be one or more of a request type for receiving packets, a request type for sending packets, and a request type for listening.

In a business scenario focused on hardware-driven input/output (IO), the preconfigured request type may be a driver IO request type.

In a business scenario focused on IO multiplexing, the preconfigured request type may be an IO operation request type.

In a business scenario focused on timer operations, the preconfigured request type may be a timer request type.

In a business scenario focused on memory operations, the preconfigured request type may be a request type related to memory requests.

In a business scenario focused on signal handling, the preconfigured request type may be a request type for waiting for a signal.

In a business scenario focused on inter-process communication, the preconfigured request type may be a remote procedure call (RPC) request type, a message-sending request type, or a synchronization-lock operation request type.

In a business scenario focused on file system management, the preconfigured request type may be a mount request type or a status-query request type.

In a scenario focused on asynchronous operations, the preconfigured request type may be a request type that converts a synchronous operation into an asynchronous operation.

The above are only examples; in different business scenarios, the choice of preconfigured request types may be determined according to the actual situation, which is not limited in this application.
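The check that the kernel entry performs — is this request's type preconfigured for the current business scenario, and may it therefore take the fast path — can be sketched as a simple lookup. This is illustrative only: the patent defines no concrete identifiers or API, and apart from the 00001/00002 file-operation identifiers from the example above, every name here is invented.

```python
# Hypothetical per-scenario tables of preconfigured request types.
# "00001"/"00002" follow the file create/read example above; the other
# identifiers are invented for this sketch.
PRECONFIGURED_TYPES = {
    "file_system": {"00001", "00002"},           # create file, read file
    "ipc": {"rpc", "send_message", "sync_lock"}, # inter-process comms
}

def takes_fast_path(scenario, request_type):
    # A request may skip the kernel-state context switch (path 1) only
    # when its type is preconfigured for the current business scenario;
    # other types (e.g. interrupts, exceptions) fall back to the paths
    # that also switch the kernel-state context.
    return request_type in PRECONFIGURED_TYPES.get(scenario, set())
```

The design point is that the table is chosen per business scenario so that the most frequent request types in that scenario take the cheap path.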
Based on the system structure shown in FIG. 2 and the solution shown in FIG. 3, in the process of handling user-state tasks, the embodiments of this application provide two fast switching schemes, as well as a native switching scheme based on fast switching. These task switching schemes can be understood with reference to the schematic structural diagram in FIG. 4.

As shown in FIG. 4, the user state includes a first task and a second task, and FIG. 4 shows three switching schemes.

Switching scheme 1: when switching from the first task to the second task, path 1 in FIG. 4 is taken; only the user-state contexts of the first task and the second task need to be switched and the first scheduling state recorded, and the second scheduling state is not processed.

Switching scheme 2: when switching from the first task to the second task, path 2 in FIG. 4 is taken; not only the user-state contexts of the first task and the second task but also their kernel-state contexts are switched, the first scheduling state is recorded, and the second scheduling state is not processed.

Switching scheme 3: when switching from the first task to the second task, after the user-state and kernel-state contexts are switched, path 3 in FIG. 4 is taken and the native scheduling flow is additionally executed.

The three switching schemes are introduced below.

Switching scheme 1:

When the type of the first request is a preconfigured request type, the system switches from the user-state context of the first task to the user-state context of the second task without switching from the kernel-state context of the first task to the kernel-state context of the second task, records the first scheduling state of the first task, and does not process the second scheduling state of the first task. The first scheduling state of the first task indicates that the first task is suspended in the user state, together with its running time from the start of running until suspension; the second scheduling state is all scheduling states of the first task other than the first scheduling state.

In this embodiment of this application, a preconfigured request type is a predefined type for which the kernel-state context does not need to be switched, such as the preconfigured request types listed above for the various business scenarios.
This switching scheme can be understood with reference to FIG. 5. As shown in FIG. 5, the process of switching from task A (corresponding to the first task above) to task B (corresponding to the second task above) may include:

501. Task A, running in the user state, triggers the first request into the kernel state.

502. Temporarily store the user-state context of task A, and save the temporarily stored user-state context of task A as the target context.

This process may move the user-state context of task A out of the registers and save it into memory.

The user-state context of task A may be stored according to a target structure; the user-state context in the target structure is therefore called the target context.

The target context is used the next time task A is scheduled.

503. Detect the type of the first request.

If the first request is of a preconfigured request type, it is determined that the switching scheme of path 1 shown in FIG. 4 is executed for this task switch, and then 504 or 505 may be performed.

504. When the first request includes the information of task B, schedule task B according to the information of task B.

When task A initiates the first request, the computer system directly specifies, in the user state, the information of task B to be switched to; the information of task B may be an identifier of task B. In this way, in the kernel state, the computer system can directly schedule task B for switching according to the information of task B, further improving task switching efficiency.

505. When the first request does not include the information of task B, schedule task B from the first queue.

Task B is located in the first queue, the first queue is a first-in-first-out (FIFO) queue, and task B is the task that entered the first queue earliest among all tasks in the first queue.

506. Switch to the user-state context of task B, record only the first scheduling state, and do not process the second scheduling state.

507. Run task B in the user state.

As can be seen from FIG. 5, switching scheme 1 switches only the user-state context and does not switch the kernel-state context. Although task B is running in the user state, the kernel state still holds the kernel-state context of task A.

With switching scheme 1 provided in this embodiment, for a request of a preconfigured request type, only the user-state contexts of the first task and the second task need to be switched, the kernel-state context does not need to be switched, only the first scheduling state is recorded, and the second scheduling state is not processed, further improving the efficiency of switching from the first task to the second task.
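The scheduling decision in steps 504 and 505 — honour a directly specified target task if the request carries one, otherwise take the head of the FIFO fast queue — can be sketched as follows. This is an illustrative model only; the queue contents and the `pick_next` name are invented, not part of the patent's disclosure.

```python
from collections import deque

fast_queue = deque(["B", "C", "D"])  # FIFO fast queue; "B" entered first

def pick_next(request):
    """Pick the task to run next for a preconfigured-type request.
    Step 504: a request carrying the target task's information is
    honoured directly. Step 505: otherwise the task that entered the
    FIFO fast queue earliest is scheduled."""
    target = request.get("target")
    if target is not None and target in fast_queue:
        fast_queue.remove(target)
        return target
    return fast_queue.popleft()

first = pick_next({})                # no target: FIFO head is scheduled
second = pick_next({"target": "D"})  # explicit target: scheduled directly
```

Letting the requester name its target (as in the RPC case) skips the queue lookup entirely, which is part of why the direct-specification path is faster.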
Switching scheme 2:

When the type of the first request is a non-preconfigured request type, that is, when the type of the first request does not belong to the preconfigured request types, the system switches from the user-state context of the first task to the user-state context of the second task, switches from the kernel-state context of the first task to the kernel-state context of the second task, records the first scheduling state of the first task, and does not process the second scheduling state of the first task.

The kernel-state context is the set of kernel-state data that supports the running of a task. When the type of the first request does not belong to the preconfigured request types, such as interrupt requests or exception requests in some scenarios, the kernel-state context is switched in addition to the user-state context.

This switching scheme can be understood with reference to FIG. 6. As shown in FIG. 6, the process of switching from task A (corresponding to the first task above) to task B (corresponding to the second task above) may include:

601. Task A, running in the user state, triggers the first request into the kernel state.

602. Temporarily store the user-state context of task A, and save the temporarily stored user-state context of task A as the target context.

This process may move the user-state context of task A out of the registers and save it into memory.

The user-state context of task A may be stored according to a target structure; the user-state context in the target structure is therefore called the target context.

The target context is used the next time task A is scheduled.

603. Detect the type of the first request.

If the type of the first request does not belong to the preconfigured request types, for example an interrupt request or an exception request, it is determined that the switching scheme of path 2 shown in FIG. 4 is executed for this task switch, and then 604 or 605 may be performed.

604. When the first request includes the information of task B, schedule task B according to the information of task B.

When task A initiates the first request, the computer system directly specifies, in the user state, the information of task B to be switched to; the information of task B may be an identifier of task B. In this way, in the kernel state, the computer system can directly schedule task B for switching according to the information of task B, further improving task switching efficiency.

605. When the first request does not include the information of task B, schedule task B from the first queue.

Task B is located in the first queue, the first queue is a first-in-first-out (FIFO) queue, and task B is the task that entered the first queue earliest among all tasks in the first queue.

606. Switch to the user-state context of task B, record only the first scheduling state, and do not process the second scheduling state.

607. Switch to the kernel-state context of task B.

608. Run task B in the user state.

With switching scheme 2 provided in this embodiment, for a request whose type does not belong to the preconfigured request types, only the user-state and kernel-state contexts of the first task and the second task need to be switched, only the first scheduling state is recorded, and the second scheduling state is not processed, further improving the efficiency of switching from the first task to the second task.
Based on switching scheme 1 and switching scheme 2 above, an embodiment of this application further provides a recursive switching scheme. The recursive switching scheme includes: after running the second task in the user state, detecting the type of a second request entering the kernel entry, where the second request is triggered by a target task in the user state, the target task is the second task or the last of at least one task running consecutively after the second task, and the second task and the at least one task have all triggered requests of preconfigured request types; when the type of the second request indicates that the target task is suspended in the user state, and the type of the second request is a preconfigured request type, recording the first scheduling state of the target task and switching only from the user-state context of the target task to the user-state context of a third task, where the first scheduling state of the target task includes the target task being suspended in the user state and its running time from the start of running until suspension; and running the third task in the user state.

This scheme can be understood as switching from task A to task B through path 1 in FIG. 4 above and then switching to task C again through path 1. Taking the target task being the second task as an example, the process can be understood with reference to FIG. 7. As shown in FIG. 7, the recursive switching process may include:

701. Task B, running in the user state, triggers the second request into the kernel state.

702. Temporarily store the user-state context of task B, and save the temporarily stored user-state context of task B as the target context.

This process may move the user-state context of task B out of the registers and save it into memory.

The user-state context of task B may be stored according to a target structure; the user-state context in the target structure is therefore called the target context.

The target context is used the next time task B is scheduled.

703. Detect the type of the second request.

If the second request is of a preconfigured request type, it is determined that the switching scheme of path 1 shown in FIG. 4 is executed for this task switch, and then 704 or 705 may be performed.

704. When the second request includes the information of task C, schedule task C according to the information of task C.

When task B initiates the second request, the computer system directly specifies, in the user state, the information of task C to be switched to; the information of task C may be an identifier of task C. In this way, in the kernel state, the computer system can directly schedule task C for switching according to the information of task C, further improving task switching efficiency.

705. When the second request does not include the information of task C, schedule task C from the first queue.

Task C is located in the first queue, the first queue is a FIFO queue, and task C is the task that entered the first queue earliest among all tasks in the first queue.

706. Switch to the user-state context of task C, record only the first scheduling state, and do not process the second scheduling state.

707. Run task C in the user state.

As can be seen from FIG. 7, the recursive switching scheme switches only the user-state contexts of the tasks and does not switch the kernel-state context. Although task C is running in the user state, the kernel state still holds the kernel-state context of task A.

With the recursive switching scheme provided in this embodiment, for requests of preconfigured request types, only the user-state context of task B needs to be switched, and the kernel state can continue to hold the kernel-state context of task A, further improving task switching efficiency. Moreover, this embodiment only illustrates the case in which the target task is the second task; if multiple requests of preconfigured request types are triggered consecutively after the second task, each switch can likewise switch only the task's user-state context, which greatly improves switching efficiency.
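The recursive case — a chain of consecutive preconfigured-type requests, each switching only the user-state context while the kernel state keeps holding task A's kernel-state context — can be sketched as a tiny model. All names here are invented for illustration; the patent does not define such an API.

```python
# Model: consecutive fast switches A -> B -> C -> D. Only the owner of
# the user-state context changes; the kernel-state context stays the
# one saved for task A until a non-preconfigured request arrives.
kernel_ctx_owner = "A"      # kernel-state context currently live
user_ctx_owner = "A"        # user-state context currently live
switch_log = []

def fast_user_switch(nxt):
    global user_ctx_owner
    switch_log.append((user_ctx_owner, nxt))
    user_ctx_owner = nxt    # kernel_ctx_owner is deliberately untouched

for nxt in ("B", "C", "D"): # each task triggers a preconfigured request
    fast_user_switch(nxt)
```

When a non-preconfigured request eventually arrives at some target task, the kernel-state context is switched in one step from A's directly to the target's, rather than once per intermediate task, which is the efficiency gain described above.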
同样,在执行了上述切换方案1的基础上可以执行切换方案B,这种情况可以理解为通过图4中的路径1从任务A切换到任务B后,任务B在用户态触发第二请求,这种情况的处理过程包括:检测进入内核入口的第二请求的类型,第二请求是用户态的目标任务触发的,目标任务为第二任务或者在第二任务之后连续运行的至少一个任务中的最后一个,当目标任务为至少一个任务中的最后一个时,第二任务,以及至少一个任务中在目标任务之前运行的每个任务都触发了预先配置的请求类型的请求;当第二请求的类型指示目标任务在用户态暂停,且第二请求类型是非预先配置的请求类型时,则记录目标任务的第一调度状态,并从第一任务的内核态上下文切换到目标任务的内核态上下文,其中,目标任务的第一调度状态包括目标任务在用户态处于暂停状态以及目标任务从开始运行到暂停的运行时间。
本申请实施例中,当通过只切换用户态上下文以及记录第一任务的第一调度状态从第一任务切换到第二任务后,第二任务发起了非预先配置的请求类型的第二请求,或者,第二任务以及连续的几个任务都发起了预先配置的请求类型的请求,到目标任务时才发起了非预先配置的请求类型的第二请求,则需要从第一任务的内核态上下文直接切换到目标任务的内核态上下文,减少了所切换的内容,提高了切换效率。
本申请实施例中,当目标任务没有被阻塞时,在从第一任务的内核态上下文切换到目标任务的内核态上下文之后,该方法还包括:返回用户态继续运行目标任务。也就是说,当从第一任务的内核态上下文切换到目标任务的内核态上下文,如果目标任务没有阻塞,则可以返回用户态继续运行该目标任务,实现了目标任务的快速恢复。
本申请实施例中,当从第一任务的内核态上下文切换到目标任务的内核态上下文之后,如果目标任务被阻塞,则需要走原生调度流程调度第三任务。原生调度流程指的是任务切换的过程中不仅要切换用户态上下文、内核态上下文还要处理切换前的任务的所有调度状态的流程。该过程可以理解为:当目标任务被阻塞时,该方法还包括:通过原生调度流程调度第三任务,并从目标任务切换到第三任务,原生调度流程需要处理从第一任务到至少一个任务中每个任务的所有调度状态;在用户态运行第三任务。
原生调度流程也就是图4所描述的路径3的切换方案3,下面介绍切换方案3。
切换方案3:
该切换方案3的过程可以在确定上述第二请求的类型不属于预先配置的请求类型时执行。
如图8所示,该过程以目标任务是第二任务为例,确定需要执行原生调度流程的过程包括:
801.当确定上述第二请求的类型不属于预先配置的请求类型时,从第一任务的内核态上下文切换到第二任务的内核态上下文。
802.将第一任务的第二调度状态修改为确定需要执行原生调度流程时对应第一任务的调度状态;将第二任务的第二调度状态修改为确定需要执行原生调度流程时对应第二任务的调度状态。
803.将第一任务的第一调度状态和第二调度状态同步到原生调度流程,以及将第二任务的第一调度状态和第二调度状态同步到原生调度流程。
804.调度第三任务,从第二任务切换到第三任务,并在用户态运行第三任务。
以上是以目标任务是第二任务为例进行说明的。当目标任务不是第二任务时,通过原生调度流程调度第三任务,具体为:将从第一任务到至少一个任务中每个任务的第二调度状态,由每个任务开始运行时的调度状态修改为确定对第三任务执行原生调度流程时对应每个任务的调度状态,其中,每个任务的第二调度状态为每个任务的所有调度状态中除每个任务的第一调度状态之外的调度状态。
也就是说,在执行原生调度流程时,需要同步第一任务、第二任务以及至少一个任务中的每个任务的最新调度状态,这样可以使内核不感知第一任务、第二任务以及至少一个任务之前发生的快速切换,这样,这些任务之前发生的切换完全不会影响到原生调度流程,很好的兼容了原生调度流程。
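步骤802至803所描述的状态同步,可以用如下示意代码理解:把快速路径期间未处理的第二调度状态补齐,再连同第一调度状态一并同步给原生调度流程,使内核感知不到此前的快速切换。字段名与数据结构均为假设的简化示意:

```python
def sync_to_native(tasks_run_fast, native_states, now):
    """把快速切换期间运行过的每个任务的调度状态同步到原生调度流程(示意)。
    每个任务的第二调度状态为其所有调度状态中除第一调度状态之外的部分。"""
    for task in tasks_run_fast:
        # 步骤802:将第二调度状态修改为执行原生调度流程时对应的调度状态
        task["second_state"] = {
            "synced_at": now,
            "runnable": not task.get("blocked", False),
        }
        # 步骤803:第一调度状态与第二调度状态一并同步到原生调度流程
        native_states[task["id"]] = {
            "first_state": task["first_state"],
            "second_state": task["second_state"],
        }
    return native_states
```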
以上步骤803所描述的调度状态同步的过程包括队列同步,队列同步的过程会包括:将第一队列中至少一个任务,以及至少一个任务的调度状态同步到第二队列,并将第二任务从第一队列中已输出的信息同步给第二队列,然后再将至少一个任务在第二队列中的位置信息同步给第一队列,第二队列是用于原生调度流程的队列。
本申请实施例中,第一队列可以称为快速队列,第二队列可以称为慢速队列,该慢速队列用于执行原生调度流程。在执行原生调度流程之前,需要将快速队列中的任务都同步给慢速队列,这样,更有利于兼容原生调度流程。快速队列中的任务在同步到慢速队列后,在慢速队列中可能会根据各任务的实际情况做重新排列,将这些任务插入到慢速队列中的合适位置,然后将快速队列中的至少一个任务在慢速队列中的位置信息同步给快速队列,使快速队列根据这些位置信息优化至少一个任务在快速队列中的顺序,这样可以使快速队列中的任务得到更多公平被调度的机会。
本申请实施例中,以上所描述的任务处理的过程可以应用于令牌调度、简化公平调度以及RPC调度等多个场景中,这几种场景与计算机系统结合的方案可以参阅图9进行理解。
如图9所示,该计算机系统的结构中,用户态可以包括多个进程,每个进程中可以包括多个线程。
结合上述图2所示的计算机系统,其中的,快速调度&切换模块可以包括如下功能单元。
内核态快速调用接口:该内核态快速调用接口可以理解为是内核入口,上述实施例中的第一请求或第二请求会进入该内核态快速调用接口,可以在该内核态快速调用接口检测第一请求或第二请求的类型,进而确定后续的步骤,如:接下来进入功能处理单元或者上下文兼容单元。
功能处理单元:用于处理可以快速处理的功能,一般来说,是状态比较简单,与原生流程比较独立的功能,如:处理锁操作。
调度切换框架包括新调度接口和快速切换单元。其中,新调度接口用于调度快速队列中的任务,快速切换单元用于通过快速路径(上文的路径1)切换任务的用户态上下文。
调度器:此部分支持不同的功能实现不同的调度策略,比如可以实现基于令牌的调度,比较通用的、不限制场景的简化公平调度,基于远程调用的RPC调度等。
上下文兼容单元:当确定第一请求或第二请求的类型不属于预先配置的请求类型时,流程进入该上下文兼容单元,由其切换任务的内核态上下文。
调度兼容处理接口:用于提供一系列的回调函数,供原生调度的原生队列访问探针调用,从而实现快速切换与原生调度流程的数据同步。
管理分配单元:用于管理可以快速调度的线程范围。其中,包括组属性关系和关系管理,调度器可以根据该管理分配单元执行令牌的调度,简化公平调度或基于远程调用的RPC调度等。
原生调度&切换模块包括内核上下文切换单元和原生队列访问探针,内核上下文切换单元用于切换任务的内核态上下文,原生队列访问探针用于实现快速切换与原生调度流程的数据同步。
上述图9所示的实施例中包括了RPC调度和简化公平调度。下面结合这两种调度场景介绍本申请实施例提供的任务处理过程。
为了确保灵活性和安全性,系统中通常划分为多个进程或线程,采用客户端(client)&服务端(server)的模型。一个线程作为客户端时,其他线程可以作为服务端。
如图10所示,在RPC调度的场景中,本申请实施例提供的任务处理的方法的过程包括:
1001.运行线程A作为客户端(client)时,通过快速调用发起请求。
1002.通过请求中所包含的线程B的信息调度线程B。
在内核态中,通过在用户态所指定的下一个线程(线程B的信息,如线程B的标识)找到线程B。
1003.记录返回目标后,通过线程B的用户态上下文在用户态运行线程B。
1004.获取线程B运行的返回结果。
1005.根据步骤1003记录的返回目标将返回结果返回给线程A的内核态上下文。
1006.将返回结果从内核态返回给用户态的线程A。
1007.上述任意一个过程在执行过程中,如果发生了需要执行原生调度流程的问题,需要同步调度状态,例如线程A和线程B的执行时间,队列状态等,这里同步的不只是当前正在执行的任务,还包括上次发生同步以来,执行过的所有任务。最终让原生调度&切换模块看起来就跟未发生快速切换一样。
此示例采用快速切换方法实现了RPC调度。作为client的线程A指定所需要的作为server的线程B,此时可以唤醒对应的线程B并调度到该线程B,同时将线程A自身标记为阻塞,并记录线程B的调用者是线程A,直到线程B返回结果时再唤醒线程A。由以上RPC的调度过程可以看出,本申请的RPC调度过程中的处理方案直接指定了需要调度的线程,而且在切换线程A和线程B的上下文时,可以切换内核态上下文,也可以不切换内核态上下文,但不需要处理线程A和线程B的所有调度状态,提高了RPC调度过程中线程切换的速度。
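图10所示的RPC快速调度过程可以用如下示意代码理解。阻塞标记、调用者记录等字段名均为按本文描述给出的假设,并非实际内核实现:

```python
def rpc_call(threads, client_id, server_id, run_server):
    """模拟RPC快速调度(示意):
    client指定server并标记自身阻塞,记录server的调用者;
    server运行返回结果后,按记录的调用者唤醒client。"""
    threads[client_id]["blocked"] = True       # 步骤1001:client标记为阻塞
    threads[server_id]["caller"] = client_id   # 记录返回目标(调用者)
    result = run_server(threads[server_id])    # 步骤1002-1004:调度并运行server
    caller = threads[server_id]["caller"]
    threads[caller]["blocked"] = False         # 步骤1005-1006:返回结果并唤醒调用者
    return caller, result
```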
本申请实施例提供的RPC调度方案,相比于现有技术中的DOMAIN、MQ+SHM以及PIPE方案,在切换速度上有了明显的提高,开发人员对这几种技术在RPC调度的过程中做了大量的实验,每种的实验次数为100000次的情况下,得到如下表1的实验结果。
表1:实验结果
测试用例 通讯方式 次数 最大值(纳秒ns) 最小值(ns) 平均值(ns)
Case1 本申请 100000 49280 2240 2385
Case2 DOMAIN 100000 1025920 25800 43819
Case3 MQ+SHM 100000 1573960 19600 31074
Case4 PIPE 100000 1120000 23960 38055
由上表1可以看出,本申请的方案,在RPC调度的过程相比于其他几种现有技术有明显的数量级上的提升。
接下来介绍本申请实施例提供的任务处理的过程结合在简化公平调度的场景中的处理过程。
如图11所示,该处理过程包括:
1101.在用户态运行线程A时,线程A发生快速阻塞,从而触发请求到内核态。
线程A发生快速阻塞后,进入暂停状态。
1102.将暂存的线程A的用户态上下文保存为线程A的快速上下文。
1103.通过快速队列处理单元从快速队列中调度线程B。
该快速队列为FIFO队列,因为线程B是当前队列的所有线程中最早进入快速队列的,所以优先调度线程B。
1104.通过切换单元切换到线程B的快速上下文,进而在用户态运行线程B。
线程B的快速上下文可以是在线程B发生快速阻塞时保存的。
1105.在内核态可以通过线程C的快速队列处理单元唤醒线程A。
1106.将线程A放入快速队列。
1107.向线程C返回对线程A的执行结果。
1108.当上述任意一个过程在内核态执行时,如果发生了需要执行原生调度流程的问题,则需要同步调度状态。
例如线程A和线程B的执行时间,队列状态等,这里同步的不只是当前正在执行的任务,还包括上次发生同步以来,执行过的所有任务。最终让原生调度&切换模块看起来就跟未发生快速切换一样。
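图11中步骤1101至1106的简化公平调度流程可以用如下示意代码理解。快速阻塞、快速上下文等均为按本文描述给出的简化假设:

```python
from collections import deque

def fast_block_and_switch(fast_queue, contexts, current, current_regs):
    """步骤1101-1104(示意):当前线程快速阻塞,保存其快速上下文,
    从FIFO快速队列调度最早进入的线程并取出其快速上下文。"""
    contexts[current] = dict(current_regs)  # 步骤1102:保存线程A的快速上下文
    nxt = fast_queue.popleft()              # 步骤1103:FIFO,最早入队者优先
    return nxt, contexts.get(nxt, {})       # 步骤1104:切换到下一线程的快速上下文

def wake(fast_queue, thread):
    """步骤1105-1106(示意):唤醒线程并放回快速队列尾部。"""
    fast_queue.append(thread)
```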
另外,本申请实施例中,状态同步的过程会发生快速队列与慢速队列的同步过程,下面结合图12介绍这两个队列的同步过程。
如图12所示,该过程包括:
1201.将快速队列中的任务都同步给慢速队列。
如:将图12中所示的快速队列中的线程C、线程E和线程G都同步给慢速队列。
这里不仅会将线程C、线程E和线程G同步到慢速队列,还会将在两次同步期间所运行的线程的运行时间也同步到慢速队列,这些线程的运行时间可以通过负载跟踪的方式确定。
需要说明的是,如果线程B还在慢速队列中,而线程B已经从快速队列中输出了,则需要将慢速队列中的线程B也输出。
1202.将快速队列同步过来的线程在慢速队列中重新排序。
该步骤会按照线程的公平性进行排序。
1203.将快速队列中的线程在慢速队列中的位置信息同步回快速队列。
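步骤1201至1203的快慢队列同步可以用如下示意代码理解。其中以已运行时间作为公平性排序依据、以下标作为位置信息,均为假设的简化示意:

```python
def sync_queues(fast_queue, slow_queue, runtimes):
    """模拟图12的队列同步(示意)。
    步骤1201:快速队列中的线程同步到慢速队列,已从快速队列输出的线程
    (如线程B)也从慢速队列中移除;
    步骤1202:慢速队列按已运行时间重新排序,运行少者更早被调度;
    步骤1203:把快速队列线程在慢速队列中的位置信息同步回快速队列。"""
    # 1201:合并线程集合,剔除已不在快速队列中的线程
    merged = [t for t in slow_queue if t in fast_queue] + \
             [t for t in fast_queue if t not in slow_queue]
    # 1202:按负载跟踪得到的已运行时间升序排序(公平性示意)
    merged.sort(key=lambda t: runtimes.get(t, 0))
    # 1203:回传位置信息,优化快速队列中任务的顺序
    positions = {t: i for i, t in enumerate(merged)}
    reordered_fast = sorted(fast_queue, key=lambda t: positions[t])
    return merged, reordered_fast
```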
以上介绍了任务处理的方法,下面结合附图介绍本申请实施例提供的任务处理的装置,该任务处理的装置包括用户态和内核态,用户态包括多个任务,任务为线程或进程。
如图13所示,本申请实施例提供的任务处理的装置130包括:
检测单元1301,用于在内核态,检测进入内核入口的第一请求的类型,内核入口为从用户态到内核态的入口,第一请求是用户态的第一任务触发的。该检测单元1301用于执行方法实施例中的步骤401。
第一处理单元1302,用于当检测单元1301检测的第一请求的类型指示第一任务在用户态暂停,则至少从第一任务的用户态上下文切换到第二任务的用户态上下文,并记录第一任务的第一调度状态,第一任务的第一调度状态为第一任务在用户态处于暂停状态以及第一任务从开始运行到暂停的运行时间。该第一处理单元1302用于执行方法实施例中的步骤402。
第二处理单元1303,用于在用户态运行第一处理单元1302切换的第二任务。该第二处理单元1303用于执行方法实施例中的步骤403。
本申请实施例中,当第一任务的第一请求从用户态进入内核态后,通过检测第一请求的类型,确定第一任务在用户态暂停,则只记录该第一任务在用户态处于暂停状态以及第一任务从开始运行到暂停的运行时间,其他调度状态均不做处理,这样,在从第一任务切换到第二任务进行运行的过程中,可以减少处理的内容,提高切换效率,从而提高计算机系统的性能。
可选地,第一处理单元1302,具体用于当第一请求的类型指示第一任务在用户态暂停,且第一请求的类型为预先配置的请求类型时,则只从第一任务的用户态上下文切换到第二任务的用户态上下文。
可选地,第一处理单元1302,具体用于当第一请求的类型指示第一任务在用户态暂停,且第一请求的类型是非预先配置的请求类型时,则从第一任务的用户态上下文切换到第二任务的用户态上下文,且从第一任务的内核态上下文切换到第二任务的内核态上下文。
可选地,预先配置的请求类型与业务场景相关,在业务场景中,预先配置的请求类型出现的次数高于业务场景中非预先配置的请求类型出现的次数。
可选地,检测单元1301,还用于检测进入内核入口的第二请求的类型,第二请求是用户态的目标任务触发的,目标任务为第二任务或者在第二任务之后连续运行的至少一个任务中的最后一个,第二任务和至少一个任务都触发了预先配置的请求类型的请求。
第一处理单元1302,还用于当第二请求的类型指示目标任务在用户态暂停,且第二请求类型为预先配置的请求类型时,记录目标任务的第一调度状态,并只从目标任务的用户态上下文切换到第三任务的用户态上下文,其中,目标任务的第一调度状态包括目标任务在用户态处于暂停状态以及目标任务从开始运行到暂停的运行时间。
第二处理单元1303,还用于在用户态运行第三任务。
可选地,检测单元1301,还用于检测进入内核入口的第二请求的类型,第二请求是用户态的目标任务触发的,目标任务为第二任务或者在第二任务之后连续运行的至少一个任务中的最后一个,当目标任务为至少一个任务中的最后一个时,第二任务,以及至少一个任务中在目标任务之前运行的每个任务都触发了预先配置的请求类型的请求。
第一处理单元1302,还用于当第二请求的类型指示目标任务在用户态暂停,且第二请求类型是非预先配置的请求类型时,则记录目标任务的第一调度状态,并从第一任务的内核态上下文切换到目标任务的内核态上下文,其中,目标任务的第一调度状态包括目标任务在用户态处于暂停状态以及目标任务从开始运行到暂停的运行时间。
可选地,第二处理单元1303,还用于当目标任务没有被阻塞时,返回用户态继续运行目标任务。
可选地,第二处理单元1303,还用于当目标任务被阻塞时,通过原生调度流程调度第三任务,并从目标任务切换到第三任务,原生调度流程需要处理从第一任务到至少一个任务中每个任务的所有调度状态。
第二处理单元1303,还用于在用户态运行第三任务。
可选地,第一处理单元1302,具体用于将从第一任务到至少一个任务中每个任务的第二调度状态,由每个任务开始运行时的调度状态修改为确定对第三任务执行原生调度流程时对应每个任务的调度状态,其中,每个任务的第二调度状态为每个任务的所有调度状态中除每个任务的第一调度状态之外的调度状态。
可选地,在远程过程调用RPC的调度中,第一请求中包括第二任务的信息,第二任务的信息用于调度第二任务。
可选地,该装置130还包括保存单元1304,保存单元1304,用于保存第一任务的用户态上下文;当第一请求的类型为预先配置的请求类型时,则将第一任务的用户态上下文保存为目标上下文,目标上下文用于第一任务下次被调度时使用。
可选地,第二处理单元1303,还用于记录第一请求与第一任务关联的信息;通过运行第二任务,以得到返回结果;根据与第一任务关联的信息,将返回结果返回给第一任务;从第二任务切换回第一任务继续运行。
可选地,第二任务位于第一队列,第一队列为先入先出队列,第二任务为第一队列的所有任务中最先进入第一队列的任务。
可选地,在简化公平调度的场景,第二处理单元1303,还用于将第一队列中的任务,以及第一队列中的任务的调度状态同步到第二队列,并将第三任务从第一队列中已输出的信息同步给第二队列,第二队列是用于原生调度流程的队列;将第一队列中的任务在第二队列中的位置信息同步给第一队列,位置信息用于调整第一队列中的任务在第一队列中的位置。
以上,本申请实施例所提供的任务处理的装置130的相关内容可以参阅前述方法实施例部分的相应内容进行理解,此处不再重复赘述。
如图14所示,为本申请的实施例提供的计算机设备140的一种可能的逻辑结构示意图。计算机设备140包括:处理器1401、通信接口1402、内存1403、磁盘1404以及总线1405。处理器1401、通信接口1402、内存1403以及磁盘1404通过总线1405相互连接。在本申请的实施例中,处理器1401用于对计算机设备140的动作进行控制管理,例如,处理器1401用于执行图3至图12的方法实施例中的步骤。通信接口1402用于支持计算机设备140进行通信。内存1403,用于存储计算机设备140的程序代码和数据,并为进程或线程提供内存空间。磁盘1404用于存储从内存换出的物理页。
其中,处理器1401可以是中央处理器单元,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器1401也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。总线1405可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图14中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在本申请的另一实施例中,还提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机执行指令,当设备的处理器执行该计算机执行指令时,设备执行上述图3至图12中处理器所执行的步骤。
在本申请的另一实施例中,还提供一种计算机程序产品,该计算机程序产品包括计算机执行指令,该计算机执行指令存储在计算机可读存储介质中;当设备的处理器执行该计算机执行指令时,设备执行上述图3至图12中处理器所执行的步骤。
在本申请的另一实施例中,还提供一种芯片系统,该芯片系统包括处理器,该处理器用于支持任务处理的装置实现上述图3至图12中处理器所执行的步骤。在一种可能的设计中,芯片系统还可以包括存储器,该存储器用于保存任务处理的装置必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包含芯片和其他分立器件。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请实施例各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。

Claims (20)

  1. 一种任务处理的方法,其特征在于,所述方法应用于计算机系统中,所述计算机系统包括用户态和内核态,所述用户态包括多个任务,所述任务为线程或进程,所述方法包括:
    在所述内核态,检测进入内核入口的第一请求的类型,所述内核入口为从所述用户态到所述内核态的入口,所述第一请求是所述用户态的第一任务触发的;
    当所述第一请求的类型指示所述第一任务在所述用户态暂停,则至少从第一任务的用户态上下文切换到第二任务的用户态上下文,并记录所述第一任务的第一调度状态,所述第一任务的第一调度状态为所述第一任务在所述用户态处于暂停状态以及所述第一任务从开始运行到暂停的运行时间;
    在所述用户态运行所述第二任务。
  2. 根据权利要求1所述的方法,其特征在于,所述当所述第一请求的类型指示所述第一任务在所述用户态暂停,至少从第一任务的用户态上下文切换到第二任务的用户态上下文,具体为:
    当所述第一请求的类型指示所述第一任务在所述用户态暂停,且所述第一请求的类型为预先配置的请求类型时,则只从所述第一任务的用户态上下文切换到第二任务的用户态上下文。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一请求的类型指示所述第一任务在所述用户态暂停,至少从第一任务的用户态上下文切换到所述第二任务的用户态上下文,具体为:
    当所述第一请求的类型指示所述第一任务在所述用户态暂停,且所述第一请求的类型是非预先配置的请求类型时,则从所述第一任务的用户态上下文切换到第二任务的用户态上下文,且从所述第一任务的内核态上下文切换到所述第二任务的内核态上下文。
  4. 根据权利要求2所述的方法,其特征在于,所述预先配置的请求类型与业务场景相关,在所述业务场景中,所述预先配置的请求类型出现的次数高于所述业务场景中非预先配置的请求类型出现的次数。
  5. 根据权利要求2所述的方法,其特征在于,在所述用户态运行所述第二任务之后,所述方法还包括:
    检测进入所述内核入口的第二请求的类型,所述第二请求是所述用户态的目标任务触发的,所述目标任务为所述第二任务或者在所述第二任务之后连续运行的至少一个任务中的最后一个,所述第二任务和所述至少一个任务都触发了所述预先配置的请求类型的请求;
    当所述第二请求的类型指示所述目标任务在所述用户态暂停,且所述第二请求类型为所述预先配置的请求类型时,记录所述目标任务的第一调度状态,并只从所述目标任务的用户态上下文切换到第三任务的用户态上下文,其中,所述目标任务的第一调度状态包括所述目标任务在所述用户态处于暂停状态以及所述目标任务从开始运行到暂停的运行时间;
    在所述用户态运行所述第三任务。
  6. 根据权利要求2所述的方法,其特征在于,在所述用户态运行所述第二任务之后,所述方法还包括:
    检测进入所述内核入口的第二请求的类型,所述第二请求是所述用户态的目标任务触发的,所述目标任务为所述第二任务或者在所述第二任务之后连续运行的至少一个任务中的最后一个,当所述目标任务为所述至少一个任务中的最后一个时,所述第二任务,以及所述至少一个任务中在所述目标任务之前运行的每个任务都触发了所述预先配置的请求类型的请求;
    当所述第二请求的类型指示所述目标任务在所述用户态暂停,且所述第二请求类型是非预先配置的请求类型时,则记录所述目标任务的第一调度状态,并从所述第一任务的内核态上下文切换到所述目标任务的内核态上下文,其中,所述目标任务的第一调度状态包括所述目标任务在所述用户态处于暂停状态以及所述目标任务从开始运行到暂停的运行时间。
  7. 根据权利要求6所述的方法,其特征在于,当所述目标任务没有被阻塞时,在从所述第一任务的内核态上下文切换到所述目标任务的内核态上下文之后,所述方法还包括:
    返回所述用户态继续运行所述目标任务。
  8. 根据权利要求6所述的方法,其特征在于,当所述目标任务被阻塞时,所述方法还包括:
    通过原生调度流程调度第三任务,并从所述目标任务切换到所述第三任务,所述原生调度流程需要处理从所述第一任务到所述至少一个任务中每个任务的所有调度状态;
    在所述用户态运行所述第三任务。
  9. 根据权利要求8所述的方法,其特征在于,所述通过原生调度流程调度第三任务,具体为:
    将从所述第一任务到所述至少一个任务中每个任务的第二调度状态,由所述每个任务开始运行时的调度状态修改为确定对所述第三任务执行原生调度流程时对应所述每个任务的调度状态,其中,所述每个任务的第二调度状态为所述每个任务的所有调度状态中除所述每个任务的第一调度状态之外的调度状态。
  10. 根据权利要求1-4任一项所述的方法,其特征在于,在远程过程调用RPC的调度中,所述第一请求中包括所述第二任务的信息,所述第二任务的信息用于调度所述第二任务。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    记录所述第一请求与所述第一任务关联的信息;
    通过运行所述第二任务,以得到返回结果;
    根据所述与所述第一任务关联的信息,将所述返回结果返回给所述第一任务;
    从所述第二任务切换回所述第一任务继续运行。
  12. 根据权利要求1-4任一项所述的方法,其特征在于,所述第二任务位于第一队列,所述第一队列为先入先出队列,所述第二任务为所述第一队列的所有任务中最先进入所述第一队列的任务。
  13. 根据权利要求9所述的方法,其特征在于,在简化公平调度的调度中,在执行针对所述第三任务的原生调度流程之前,所述方法还包括:
    将第一队列中的任务,以及所述第一队列中的任务的调度状态同步到第二队列,并将所述第三任务从所述第一队列中已输出的信息同步给所述第二队列,所述第二队列是用于所述原生调度流程的队列;
    将所述第一队列中的任务在所述第二队列中的位置信息同步给所述第一队列,所述位置信息用于调整所述第一队列中的任务在所述第一队列中的位置。
  14. 一种任务处理的装置,其特征在于,所述装置包括用户态和内核态,所述用户态包括多个任务,所述任务为线程或进程,所述装置包括:
    检测单元,用于在所述内核态,检测进入内核入口的第一请求的类型,所述内核入口为从所述用户态到所述内核态的入口,所述第一请求是所述用户态的第一任务触发的;
    第一处理单元,用于当所述检测单元检测的第一请求的类型指示所述第一任务在所述用户态暂停,则至少从第一任务的用户态上下文切换到第二任务的用户态上下文,并记录所述第一任务的第一调度状态,所述第一任务的第一调度状态为所述第一任务在所述用户态处于暂停状态以及所述第一任务从开始运行到暂停的运行时间;
    第二处理单元,用于在所述用户态运行所述第一处理单元切换的第二任务。
  15. 根据权利要求14所述的装置,其特征在于,
    第一处理单元,具体用于当所述第一请求的类型指示所述第一任务在所述用户态暂停,且所述第一请求的类型为预先配置的请求类型时,则只从所述第一任务的用户态上下文切换到第二任务的用户态上下文。
  16. 根据权利要求14或15所述的装置,其特征在于,
    所述第一处理单元,具体用于当所述第一请求的类型指示所述第一任务在所述用户态暂停,且所述第一请求的类型是非预先配置的请求类型时,则从所述第一任务的用户态上下文切换到第二任务的用户态上下文,且从所述第一任务的内核态上下文切换到所述第二任务的内核态上下文。
  17. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被一个或多个处理器执行时实现如权利要求1-13任一项所述的方法。
  18. 一种计算设备,其特征在于,包括一个或多个处理器和存储有计算机程序的计算机可读存储介质;
    所述计算机程序被所述一个或多个处理器执行时实现如权利要求1-13任一项所述的方法。
  19. 一种芯片系统,其特征在于,包括一个或多个处理器,所述一个或多个处理器被调用用于执行如权利要求1-13任一项所述的方法。
  20. 一种计算机程序产品,其特征在于,包括计算机程序,所述计算机程序当被一个或多个处理器执行时用于实现如权利要求1-13任一项所述的方法。
PCT/CN2022/141776 2021-12-28 2022-12-26 一种任务处理的方法及装置 WO2023125359A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111633084.9A CN116360930A (zh) 2021-12-28 2021-12-28 一种任务处理的方法及装置
CN202111633084.9 2021-12-28

Also Published As

Publication number Publication date
CN116360930A (zh) 2023-06-30

