CN110968418A - Signal-slot-based large-scale constrained concurrent task scheduling method and device - Google Patents

Signal-slot-based large-scale constrained concurrent task scheduling method and device

Info

Publication number
CN110968418A
CN110968418A
Authority
CN
China
Prior art keywords
signal
task
present application
unit
task scheduling
Prior art date
Legal status
Pending
Application number
CN201811160925.7A
Other languages
Chinese (zh)
Inventor
侯俊伟
王树珂
路向峰
Current Assignee
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd filed Critical Beijing Memblaze Technology Co Ltd
Priority to CN201811160925.7A
Publication of CN110968418A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Abstract

The application discloses a signal-slot-based method and device for scheduling large-scale constrained concurrent tasks. The disclosed task scheduling method comprises the following steps: acquiring a signal; and in response to the signal being in a "ready" state, scheduling a task to process the signal.

Description

Signal-slot-based large-scale constrained concurrent task scheduling method and device
Technical Field
The present application relates to task scheduling, and in particular, to scheduling of large-scale constrained concurrent tasks based on signal-slots.
Background
In some applications, a processor handles large-scale concurrent tasks. For example, an embedded processor in a network device or a storage device processes multiple network packets or IO commands concurrently.
A desktop CPU or server CPU runs an operating system, and the operating system schedules the processes and/or threads running on the CPU to process tasks, so that the user does not need to intervene much in switching between processes/threads; the operating system selects an appropriate process/thread to schedule so as to make full use of the computing capacity of the CPU. In an embedded CPU, however, resources such as usable memory and CPU processing power are limited. Moreover, some embedded systems have strict performance requirements, especially on task processing latency, to which the operating systems of the prior art are difficult to adapt.
Disclosure of Invention
Taking a storage device that processes IO commands as an example, the storage device processes large numbers of IO commands concurrently. For example, there may be hundreds, thousands, or more IO commands pending or being processed at the same time. The processing of each IO command, in turn, includes multiple stages that may not be processed back to back in time because they wait for software and/or hardware resources.
The storage device needs the capability to process a large number of IO commands simultaneously, while shortening the processing latency of IO commands and preventing the latency of individual IO commands from becoming excessive. It is therefore necessary to efficiently schedule the IO commands and the processing tasks of their stages.
According to a first aspect of the present application, there is provided a first task scheduling method according to the first aspect of the present application, including the steps of: acquiring a signal; and scheduling the task for processing the signal in response to the availability of resources required by the task for processing the signal.
According to the first task scheduling method of the first aspect of the present application, there is provided the second task scheduling method of the first aspect of the present application, wherein if a resource required by the task that processes the signal is unavailable, the signal is added to a signal buffer.
According to the second task scheduling method of the first aspect of the present application, there is provided the third task scheduling method of the first aspect of the present application, wherein the signal is acquired from the signal buffer, and a task that processes the acquired signal is invoked.
According to the task scheduling method of any one of the first to third aspects of the present application, there is provided the fourth task scheduling method of the first aspect of the present application, wherein the types of the signals correspond to the signal buffers one to one.
According to a fourth task scheduling method of the first aspect of the present application, there is provided the fifth task scheduling method of the first aspect of the present application, wherein a class of the signal is identified according to a value of the signal, and each class of the signal has a corresponding task of processing the signal.
According to the task scheduling method of any one of the first to fifth aspects of the present application, there is provided the sixth task scheduling method of the first aspect of the present application, wherein the signal is added to the signal buffer corresponding to the signal type according to the signal type.
According to a sixth task scheduling method of the first aspect of the present application, there is provided the seventh task scheduling method of the first aspect of the present application, wherein the signal is obtained from the signal buffer, and the task corresponding to the signal type is called according to the signal type.
According to a sixth task scheduling method of the first aspect of the present application, there is provided the eighth task scheduling method of the first aspect of the present application, wherein the scheduled task is obtained with the acquired signal as an index.
According to the task scheduling method of any one of the first to eighth aspects of the present application, there is provided the ninth task scheduling method of the first aspect of the present application, wherein the signal buffers are selected in a random, round-robin, weighted round-robin, or other manner.
According to a ninth task scheduling method of the first aspect of the present application, there is provided the tenth task scheduling method of the first aspect of the present application, wherein a priority is set for the signal buffers.
According to the task scheduling method of any one of the first to third aspects of the present application, there is provided the eleventh task scheduling method of the first aspect of the present application, wherein the signal is used for processing an IO command, and the signal buffer corresponds to the IO command one to one.
According to an eleventh task scheduling method of the first aspect of the present application, there is provided the twelfth task scheduling method of the first aspect of the present application, wherein, in response to the resources required by the task corresponding to the signal being available, it is identified whether processing of the IO command indicated by the signal has started.
According to a twelfth task scheduling method of the first aspect of the present application, there is provided the thirteenth task scheduling method of the first aspect of the present application, wherein if it is recognized that processing of the IO command indicated by the signal has not started, the signal is added to the signal buffer corresponding to the IO command indicated by the signal.
According to the twelfth or thirteenth task scheduling method of the first aspect of the present application, there is provided the fourteenth task scheduling method of the first aspect of the present application, wherein if it is recognized that processing of the IO command indicated by the signal has started, the task that processes the acquired signal is scheduled.
According to the task scheduling method of any one of the twelfth to fourteenth aspects of the first aspect of the present application, there is provided the fifteenth task scheduling method of the first aspect of the present application, wherein if a resource required by the task that processes the signal is unavailable, the signal is added to the signal buffer corresponding to the IO command to which the signal corresponds.
According to a fifteenth task scheduling method of the first aspect of the present application, there is provided the sixteenth task scheduling method of the first aspect of the present application, wherein in response to starting to process a new IO command, a new signal buffer is allocated for the new IO command.
According to the task scheduling method of any one of the twelfth to fourteenth aspects of the first aspect of the present application, there is provided the seventeenth task scheduling method of the first aspect of the present application, wherein the signal is acquired from the signal buffer, and a task that processes the acquired signal is invoked.
According to a seventeenth task scheduling method of the first aspect of the present application, there is provided the eighteenth task scheduling method of the first aspect of the present application, wherein the signal buffer is selected according to a priority of the signal buffer.
According to an eighteenth task scheduling method of the first aspect of the present application, there is provided the nineteenth task scheduling method of the first aspect of the present application, wherein the priority of each signal buffer is set according to the time when the IO command corresponding to the signal buffer starts to be processed.
According to an eighteenth task scheduling method of the first aspect of the present application, there is provided the twentieth task scheduling method of the first aspect of the present application, wherein the priority of each signal buffer is set according to the order in which the IO commands start to be processed.
According to an eighteenth or twentieth task scheduling method of the first aspect of the present application, there is provided the twenty-first task scheduling method of the first aspect of the present application, wherein a sequence number recording the order in which IO commands start to be processed is incremented, and the priority of the signal buffer is set with the incremented sequence number.
According to a task scheduling method of any one of the eleventh to twenty-first aspects of the present application, there is provided the twenty-second task scheduling method of the first aspect of the present application, wherein in response to an IO command being processed, a signal buffer accommodating its corresponding signal is recycled or allocated to a new IO command.
According to the task scheduling method of any one of the eleventh to twenty-second aspects of the present application, there is provided the twenty-third task scheduling method of the first aspect of the present application, wherein the IO command includes a plurality of stages, a signal indicates one of the stages of the IO command, and signals corresponding to the plurality of stages belonging to the same IO command are processed in an order of the plurality of stages in the IO command.
According to a twenty-third task scheduling method of the first aspect of the present application, there is provided the twenty-fourth task scheduling method of the first aspect of the present application, wherein processing the IO command includes an S1 stage and an S2 stage in sequence.
According to the twenty-fourth task scheduling method of the first aspect of the present application, there is provided the twenty-fifth task scheduling method of the first aspect of the present application, wherein if the signal indicates the S1 stage of the IO command, processing of the IO command has not yet started; if the signal indicates the S2 stage of the IO command, processing of the IO command has already started.
According to the task scheduling method of any one of the first to twenty-fifth aspects of the present application, there is provided the twenty-sixth task scheduling method of the first aspect of the present application, wherein the signal has a quota, and if the quota of the signal is greater than a threshold, the task that processes the acquired signal is scheduled.
According to a twenty-sixth task scheduling method of the first aspect of the present application, there is provided the twenty-seventh task scheduling method of the first aspect of the present application, wherein if the quota of the signal is not greater than the threshold, the signal is added to the signal buffer.
According to a twenty-sixth task scheduling method of the first aspect of the present application, there is provided the twenty-eighth task scheduling method of the first aspect of the present application, wherein invoking the task that processes the acquired signal reduces the quota by a specified amount.
According to the task scheduling method of any one of the first to twenty-eighth aspects of the present application, there is provided the twenty-ninth task scheduling method of the first aspect of the present application, wherein in response to acquiring the signal, the CPU or thread that runs the task corresponding to the signal is identified, and the signal is added to a signal buffer belonging to that CPU or thread.
According to a second aspect of the present application, there is provided a first task management unit according to the second aspect of the present application, comprising a signal distribution unit, one or more signal buffers, and a task scheduling unit; the signal distribution unit adds a signal to one of the signal buffers; the task scheduling unit selects one of the signal buffers, fetches the signal from the selected signal buffer, and schedules one or more task processing units to process the task indicated by the signal.
According to the first task management unit of the second aspect of the present application, there is provided the second task management unit of the second aspect of the present application, wherein the signal distribution unit schedules the task processing unit to process the task indicated by the signal without adding the signal to the signal buffer, in response to that the resource required by the task corresponding to the signal is available.
According to the first task management unit of the second aspect of the present application, there is provided the third task management unit of the second aspect of the present application, wherein the signal distributing unit adds the signal to the signal buffer in response to that the resource required by the task corresponding to the signal is not available.
According to the task management unit of any one of the first to third aspects of the present application, there is provided the fourth task management unit of the second aspect of the present application, wherein the signal distributing unit adds the signal to one of the signal buffers according to the kind of the signal, and the kinds of the signals correspond to the signal buffers one to one.
According to a fourth task management unit of the second aspect of the present application, there is provided the fifth task management unit of the second aspect of the present application, wherein the signal distributing unit adds the signal to the signal buffer corresponding to the signal type according to the signal type.
According to a fifth task management unit of the second aspect of the present application, there is provided the sixth task management unit of the second aspect of the present application, wherein the task scheduling unit selects one of the signal buffers periodically, in response to an interrupt, or in response to a current task being processed completely.
According to a sixth task management unit of the second aspect of the present application, there is provided the seventh task management unit of the second aspect of the present application, wherein the task scheduling unit schedules the task processing unit to process the task indicated by the signal according to the kind of the signal.
According to the task management unit of any one of the first to seventh aspects of the present application, there is provided the eighth task management unit of the second aspect of the present application, wherein the task scheduling unit selects the signal buffer in a random, round-robin, or weighted round-robin manner.
According to an eighth task management unit of the second aspect of the present application, there is provided the ninth task management unit of the second aspect of the present application, wherein a priority is set for the signal buffers.
According to the task management unit of any one of the first to ninth aspects of the present application, there is provided the tenth task management unit of the second aspect of the present application, wherein the signal distributing unit receives a signal provided by the task processing unit or hardware, the signal indicating the task processing unit that processes the task corresponding to the signal.
According to the task management unit of any one of the first to third aspects of the present application, there is provided the eleventh task management unit of the second aspect of the present application, wherein the signal distributing unit obtains, according to the signal, an IO command to which a task corresponding to the signal belongs.
According to an eleventh task management unit of the second aspect of the present application, there is provided the twelfth task management unit of the second aspect of the present application, wherein the signal distributing unit adds the acquired signal to a signal buffer corresponding to an IO command corresponding to the signal, and the signal buffers correspond to the IO commands being processed one to one.
According to the eleventh or twelfth task management unit of the second aspect of the present application, there is provided the thirteenth task management unit of the second aspect of the present application, wherein the signal distributing unit schedules the task processing unit that processes the task corresponding to the signal in response to the resource required by the task corresponding to the signal being available.
According to a task management unit of any one of the eleventh to thirteenth aspects of the second aspect of the present application, there is provided the fourteenth task management unit of the second aspect of the present application, wherein the signal distributing unit directly schedules the task processing unit to process the task indicated by the signal, without adding the signal to the signal buffer, in response to the IO command corresponding to the signal having started to be processed, or having been under processing for a sufficiently long time, either absolutely or relative to other IO commands.
According to the task management unit of any one of the eleventh to fourteenth aspects of the second aspect of the present application, there is provided the fifteenth task management unit of the second aspect of the present application, wherein the task scheduling unit sets the priority of each signal buffer according to the time when the IO command corresponding to the signal buffer is started to be processed.
According to the task management unit of any one of the eleventh to fifteenth aspects of the present application, there is provided the sixteenth task management unit of the second aspect of the present application, wherein the task scheduling unit sets the priority of the signal buffer according to the order in which the IO commands are started to be processed.
According to a task management unit of any one of the eleventh to sixteenth aspects of the second aspect of the present application, there is provided the seventeenth task management unit of the second aspect of the present application, wherein in response to starting to process a new IO command, the signal distributing unit allocates a new signal buffer to the new IO command to accommodate a signal corresponding to the IO command.
According to a seventeenth task management unit of the second aspect of the present application, there is provided the eighteenth task management unit of the second aspect of the present application, wherein a sequence number recording the order in which IO commands start to be processed is incremented, and the signal distributing unit sets the priority of the newly allocated signal buffer with the incremented sequence number.
According to a task management unit of any one of the eleventh to eighteenth aspects of the second aspect of the present application, there is provided the nineteenth task management unit of the second aspect of the present application, wherein, in response to an IO command being processed, a signal buffer accommodating its corresponding signal is recycled or allocated to a new IO command.
According to the task management unit of any one of the eleventh to nineteenth aspects of the present application, there is provided the twentieth task management unit of the second aspect of the present application, wherein the task processing units run on a plurality of CPU cores or in a plurality of threads, and the task management unit manages the CPU core or thread on which each task processing unit is located.
According to a twentieth task management unit of the second aspect of the present application, there is provided the twenty-first task management unit of the second aspect of the present application, wherein in response to acquiring a signal, the signal distribution unit identifies on which CPU the task processing unit corresponding to the signal is running.
According to a twenty-first task management unit of the second aspect of the present application, there is provided the twenty-second task management unit of the second aspect of the present application, wherein the signal distributing unit may add a signal to a signal buffer that does not belong to the same CPU as the signal distributing unit, whereas the task scheduling unit schedules task processing units on the same CPU as the task scheduling unit itself.
According to a task management unit of any one of the twentieth to twenty-second aspects of the present application, there is provided the twenty-third task management unit of the second aspect of the present application, wherein the one or more task processing units scheduled by the task scheduling unit on CPU0 are executed on CPU0, and the one or more task processing units scheduled by the task scheduling unit on CPU1 are executed on CPU1.
According to a third aspect of the present application, there is provided a first task scheduling method according to the third aspect of the present application, including: acquiring a signal; in response to the signal being in a "ready" state, a task is scheduled to process the signal.
According to the first task scheduling method of the third aspect of the present application, there is provided the second task scheduling method of the third aspect of the present application, further comprising: if the signal is in a "waiting" state, recording the signal in a signal buffer.
According to the first or second task scheduling method of the third aspect of the present application, there is provided the third task scheduling method of the third aspect of the present application, further comprising: the signal in the "ready" state is retrieved from the signal buffer and the task of processing the retrieved signal is invoked.
According to one of the first to third task scheduling methods of the third aspect of the present application, there is provided the fourth task scheduling method of the third aspect of the present application, wherein the types of signals are in one-to-one correspondence with signal buffers.
According to one of the second to fourth task scheduling methods of the third aspect of the present application, there is provided the fifth task scheduling method of the third aspect of the present application, wherein the signal is recorded in a signal buffer corresponding to a signal type according to the signal type.
According to the third or fourth task scheduling method of the third aspect of the present application, there is provided the sixth task scheduling method of the third aspect of the present application, wherein the signal is obtained from the signal buffer, and the task corresponding to the signal type is scheduled according to the signal type.
According to one of the first to sixth task scheduling methods of the third aspect of the present application, there is provided a seventh task scheduling method of the third aspect of the present application, wherein the signal buffer is selected in a random, round-robin or weighted round-robin manner.
According to one of the first to seventh task scheduling methods of the third aspect of the present application, there is provided the eighth task scheduling method of the third aspect of the present application, wherein the signal buffer has a priority.
According to the first task scheduling method of the third aspect of the present application, there is provided the ninth task scheduling method of the third aspect of the present application, wherein the signal is used for processing an IO command, and the signal buffers are in one-to-one correspondence with the IO command.
According to a ninth task scheduling method of the third aspect of the present application, there is provided the tenth task scheduling method of the third aspect of the present application, wherein if the signal is in a "ready" state but the IO command indicated by the signal has not started to be processed, the signal is added to the signal buffer corresponding to the IO command indicated by the signal.
According to a ninth task scheduling method of the third aspect of the present application, there is provided the eleventh task scheduling method of the third aspect of the present application, wherein if the IO command indicated by the signal has started to be processed, the task for processing the acquired signal is scheduled.
According to one of the ninth to eleventh task scheduling methods of the third aspect of the present application, there is provided the twelfth task scheduling method of the third aspect of the present application, wherein if the signal is in a "waiting" state, the signal is added to the signal buffer corresponding to the IO command.
According to a twelfth task scheduling method of the third aspect of the present application, there is provided the thirteenth task scheduling method of the third aspect of the present application, wherein in response to starting to process a new IO command, a new signal buffer is allocated for the new IO command.
According to one of the ninth to thirteenth task scheduling methods of the third aspect of the present application, there is provided the fourteenth task scheduling method according to the third aspect of the present application, wherein the signal buffers are selected according to priorities of the signal buffers.
According to a fourteenth task scheduling method of the third aspect of the present application, there is provided the fifteenth task scheduling method of the third aspect of the present application, wherein the priority of each signal buffer is set according to the time when the IO command corresponding to the signal buffer starts to be processed.
According to a fourteenth task scheduling method of the third aspect of the present application, there is provided the sixteenth task scheduling method of the third aspect of the present application, wherein the priority of each signal buffer is set according to the order in which the IO commands start to be processed.
According to one of the ninth to sixteenth task scheduling methods of the third aspect of the present application, there is provided the seventeenth task scheduling method of the third aspect of the present application, wherein the IO command includes a plurality of stages, a signal indicates one of the stages of the IO command, and signals corresponding to the plurality of stages belonging to the same IO command are processed in an order of the plurality of stages in the IO command.
According to one of the first to seventeenth task scheduling methods of the third aspect of the present application, there is provided an eighteenth task scheduling method according to the third aspect of the present application, further comprising: in response to acquiring the signal, identifying the process that runs the task corresponding to the signal, and adding the signal to the signal buffer of that process.
According to an eighteenth task scheduling method of the third aspect of the present application, there is provided the nineteenth task scheduling method of the third aspect of the present application, further comprising: identifying the execution state of the process that runs the task corresponding to the signal, and if the process is in a sleep state, waking up the process.
According to a nineteenth task scheduling method of the third aspect of the present application, there is provided the twentieth task scheduling method of the third aspect of the present application, wherein if the process is in a running state, there is no need to wake up the process again.
According to a nineteenth or twentieth task scheduling method of the third aspect of the present application, there is provided the twenty-first task scheduling method of the third aspect of the present application, wherein the first processor on which the process that runs the task corresponding to the signal is located is identified, and if the first processor is a remote processor relative to the current processor and the process is in a sleep state, an interrupt is sent to the first processor.
According to a twenty-first task scheduling method of the third aspect of the present application, there is provided a twenty-second task scheduling method of the third aspect of the present application, further comprising: in response to receiving the interrupt, the interrupt processing unit of the first processor acquires a first process and/or a second process that has signals to be processed, and wakes up the first process and/or the second process.
According to one of the eighteenth to twenty-second task scheduling methods of the third aspect of the present application, there is provided the twenty-third task scheduling method according to the third aspect of the present application, further comprising: in response to acquiring the signal, identifying the process that runs the task corresponding to the signal, and marking in a process state table that the process has a signal to be processed.
According to a twenty-third task scheduling method of the third aspect of the present application, there is provided a twenty-fourth task scheduling method of the third aspect of the present application, wherein the interrupt handler of the first processor acquires the first process and/or the second process having a signal to be processed through the process state table.
According to one of the first to twenty-fourth task scheduling methods of the third aspect of the present application, there is provided the twenty-fifth task scheduling method according to the third aspect of the present application, wherein the signal includes a plurality of switches, the switches corresponding to the resources required to process the signal, and a signal all of whose switches are closed is a signal in the "ready" state.
According to a twenty-fifth task scheduling method of the third aspect of the present application, there is provided the twenty-sixth task scheduling method of the third aspect of the present application, wherein a signal any of whose switches is open is a signal in the "waiting" state.
According to a twenty-fifth or twenty-sixth task scheduling method of the third aspect of the present application, there is provided a twenty-seventh task scheduling method of the third aspect of the present application, further comprising: in response to the first resource required to process the first signal being available, a switch of the first signal corresponding to the first resource is closed.
According to one of the twenty-fifth to twenty-seventh task scheduling methods of the third aspect of the present application, there is provided a twenty-eighth task scheduling method of the third aspect of the present application, further comprising: in response to the first resource required to process the first signal being unavailable, the switch of the first signal corresponding to the first resource is opened.
According to a twenty-eighth task scheduling method of the third aspect of the present application, there is provided the twenty-ninth task scheduling method of the third aspect of the present application, further comprising: the first resource becomes unavailable in response to the task processing the first signal or the second signal consuming the first resource.
According to one of the first to twenty-ninth task scheduling methods of the third aspect of the present application, there is provided the thirtieth task scheduling method of the third aspect of the present application, further comprising: all signals in the signal buffer are cleared.
According to one of the first to thirtieth task scheduling methods of the third aspect of the present application, there is provided a thirty-first task scheduling method according to the third aspect of the present application, further comprising: in response to acquiring the signal, determining whether to directly schedule a task to process the signal or to add the signal to a signal buffer, based on the API providing the signal or a parameter associated with the signal.
According to a thirty-first task scheduling method of the third aspect of the present application, there is provided a thirty-second task scheduling method of the third aspect of the present application, further comprising: in response to determining to directly schedule a task to process a signal, the signal is added to a signal buffer if the signal is in a "wait" state.
According to one of the first to thirty-second task scheduling methods of the third aspect of the present application, there is provided a thirty-third task scheduling method according to the third aspect of the present application, further comprising: in response to acquiring the signals, the order in which the signals are processed is determined according to the API that provided the signals or parameters associated with the signals.
According to a thirty-third task scheduling method of the third aspect of the present application, there is provided a thirty-fourth task scheduling method of the third aspect of the present application, further comprising: in response to the signals being required to be processed in sequence, adding the signals to a signal buffer in the form of a queue.
According to a thirty-third task scheduling method of the third aspect of the present application, there is provided the thirty-fifth task scheduling method of the third aspect of the present application, further comprising: in response to the signals being required to be processed in sequence and a task being scheduled directly to process a signal, scheduling the task that processes the signal in response to the signal being in a "ready" state.
According to a thirty-fifth task scheduling method of the third aspect of the present application, there is provided the thirty-sixth task scheduling method of the third aspect of the present application, further comprising: in response to the signals being required to be processed in sequence and a task being scheduled directly to process a signal, adding the signal to a signal buffer in the form of a queue in response to the signal being in a "waiting" state.
According to a fourth aspect of the present application, there is provided an information processing apparatus according to the fourth aspect of the present application, comprising control means for executing one of the task scheduling methods according to the third aspect of the present application, and a memory.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some of the embodiments described in the present application, and that those skilled in the art can derive other drawings from these drawings.
FIG. 1A is a schematic diagram of task scheduling provided according to an embodiment of the present application;
FIG. 1B is a block diagram of a task processing system provided according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a task management unit provided according to an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of signals according to an embodiment of the present application;
FIG. 4A is a schematic diagram of a task management unit according to yet another embodiment of the present application;
FIG. 4B illustrates a flow chart for distributing signals according to an embodiment of the present application;
FIG. 5A is a schematic diagram of a task management unit provided in accordance with yet another embodiment of the present application;
FIG. 5B is a flowchart of a signal distributing unit distributing a signal according to an embodiment of the present application;
FIG. 6A is a schematic diagram of a task management unit provided in accordance with yet another embodiment of the present application;
FIG. 6B is a flowchart of a signal distribution unit distributing a signal provided according to yet another embodiment of the present application;
FIG. 7A is a schematic diagram of a task management unit provided in accordance with yet another embodiment of the present application;
FIG. 7B is a flowchart of a signal distributing unit distributing a signal according to an embodiment of the present application;
FIG. 7C is a flowchart of a signal distribution unit distributing a signal provided according to still another embodiment of the present application;
FIG. 8A is a schematic illustration of tasks provided according to another embodiment of the present application; and
FIG. 8B is a schematic diagram of a task management unit provided according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1A is a schematic diagram of task scheduling according to an embodiment of the present application.
In fig. 1A, the direction from left to right is the direction in which time elapses. Also shown are a plurality of tasks (1-1, 2-1, 3-1, 1-2, 2-2 and 3-2) being processed, wherein in the reference numerals structured "a-b", the preceding symbol a indicates a task and the following symbol b indicates a subtask included in the task. FIG. 1A illustrates that 3 tasks are processed in a time sequence, each task comprising 2 subtasks.
The solid arrows indicate the temporal order in which the tasks are processed, and the dashed arrows indicate the logical order of task processing. For example, for task 1, its subtask 1-1 (task 1-1) must be processed first, and its subtask 1-2 (task 1-2) afterwards. Still by way of example, referring to FIG. 1A, after task 1-1 is processed, task 1-2 cannot be processed immediately (because the required resources are not ready), so task 2-1 and then task 3-1 are scheduled for execution; the resources required by task 1-2 are then identified as ready, and after task 3-1 is processed, task 1-2 is scheduled for execution.
On a processor, tasks are processed by executing code segments. A single CPU (or CPU core) processes only a single task at any one time. Illustratively, as shown in FIG. 1A, for the multiple tasks to be processed, the code segment for processing task 1-1 is executed first, then the code segment for processing task 2-1, then the code segment for processing task 3-1, then the code segment for processing task 1-2, then the code segment for processing task 2-2, and finally the code segment for processing task 3-2. The logical order of task processing is indicated in the code segments of the respective tasks. For example, the logical order requires task 1-2 to be processed after task 1-1. As yet another example, the code segment indicated, within the code segment that processes task 1-1, as the one whose processing logically follows should be the code segment that processes task 1-2.
According to embodiments of the present application, a code segment indicates, by sending a signal, the code segment that should be executed later in the logical order.
For example, the value of the signal is used as an index to the code segment to be executed.
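Purely as an illustration (the names and table layout below are assumptions, not part of the embodiments), a minimal C sketch of using the value of a signal as an index into a table of code segments might look as follows:

#include <stddef.h>

/* Hypothetical sketch: each code segment is a function, and the value of a
 * signal is used as an index selecting the code segment to be executed. */
typedef void (*code_segment_t)(void *arg);

#define MAX_SIGNAL_VALUES 64
static code_segment_t segment_table[MAX_SIGNAL_VALUES];

/* Sending a signal names the code segment that should run later; deferral
 * through a signal buffer is omitted here and the lookup is done directly. */
static void send_signal(unsigned signal_value, void *arg)
{
    if (signal_value < MAX_SIGNAL_VALUES && segment_table[signal_value] != NULL)
        segment_table[signal_value](arg);   /* execute the indexed code segment */
}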
Fig. 1B is a block diagram of a task processing system provided according to an embodiment of the present application.
Referring to FIG. 1B, the task processing system includes two parts, software and hardware. The hardware includes, for example, one or more CPUs that run the software, and other hardware resources (e.g., memory, codecs, interfaces, accelerators, interrupt controllers) associated with processing the related tasks.
A code segment of software running on the CPU is referred to as a task processing unit. The task processing system includes a plurality of task processing units. Each task processing unit processes the same or different tasks. For example, task processing unit 0 processes a first sub-task of a task (e.g., task 1-1, task 2-1, and task 3-1), while task processing unit 1, task processing unit 2, and task processing unit 3 process a second sub-task of a task (e.g., task 1-2, task 2-2, and task 3-2).
The software further comprises a task management unit for scheduling one of the task processing units to run on the hardware.
When executed by the CPU, the task management unit provides an API (Application Programming Interface) to the task processing units. By calling the API, a task processing unit informs the task management unit of the other task processing units to be scheduled subsequently. The task management unit checks whether the resources required by such a task processing unit are ready, and schedules it when the resources are ready.
The resources required by the task processing unit include, for example, storage space, data to be read, an indication of completion of a data write operation, and the like.
By using the task management unit and the API it provides, a task processing unit only needs to specify the subsequent task processing unit according to the logical order of the task, and the task management unit schedules the task processing units efficiently while satisfying that logical order.
In the embodiments of the present application, the task processing units neither need to poll or wait for resource availability nor need to maintain the ordering among themselves, which improves CPU utilization and also reduces the complexity of the task processing units.
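For illustration only, a minimal C sketch of such an API is given below; the type and function names (struct signal, tm_emit_signal, handle_io_stage1) are assumptions used to suggest how a task processing unit might hand follow-on work to the task management unit:

/* Hypothetical API sketch: a task processing unit, instead of polling for
 * resources, emits a signal naming the task processing unit to run next. */
struct signal {
    unsigned kind;     /* identifies the follow-on task processing unit */
    void    *params;   /* information or data passed to that unit       */
};

/* Provided by the task management unit: accept the signal; the follow-on
 * task processing unit is scheduled once its required resources are ready. */
void tm_emit_signal(const struct signal *sig);

/* Example task processing unit for the first stage of an IO command. */
static void handle_io_stage1(void *params)
{
    /* ... process stage 1 of the IO command ... */
    struct signal next = { .kind = 2 /* hypothetical stage-2 kind */, .params = params };
    tm_emit_signal(&next);   /* stage 2 runs when its resources are ready */
}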
Fig. 2 is a schematic diagram of a task management unit provided according to an embodiment of the present application.
The task management unit includes a signal distribution unit 210, one or more signal buffers (220, 222 … … 22n), and a task scheduling unit 240.
The task scheduling unit 240 schedules one or more task processing units (260, 262 … … 26n) to process the tasks indicated by the signals. By way of example, the signals correspond one-to-one to the task processing units.
The signal distribution unit 210 receives a signal provided by a task processing unit or hardware, the signal indicating a task processing unit that subsequently needs to be scheduled to process a task indicated by the signal. For example, the task processing unit provides a signal to the signal distribution unit 210 by calling an API; or hardware provides the signal to the signal distribution unit 210 through an interrupt.
There are various kinds of signals; for example, the kind of a signal is identified according to the value of the signal, and each kind of signal has a task processing unit corresponding to it. The task management unit provides a signal buffer for each kind of signal. For example, signal buffer 220 is used to carry signal 0, signal buffer 222 is used to carry signal 2, and signal buffer 22n is used to carry signal n. A signal buffer includes a plurality of entries, each entry accommodating an instance of a signal. The signal instances in a buffer are ordered by the time they were added to the buffer: signals are added at the tail of the queue and taken from the head of the queue.
In response to receiving a signal, the signal distribution unit 210 adds the signal to the signal buffer corresponding to the kind of the received signal. Optionally, the API through which the signal was provided then returns without blocking the caller of the API.
Periodically, in response to an interrupt, in response to the current task being processed completely, or for other reasons, the task scheduling unit 240 gets an opportunity to schedule a task processing unit based on the signals in the signal buffers. The task scheduling unit 240 fetches a signal from one of the signal buffers and schedules one of the task processing units to process the signal. In one example, the task scheduling unit 240 calls the task processing unit corresponding to the kind of the fetched signal. As yet another example, the task scheduling unit 240 obtains the task processing unit to be scheduled with the fetched signal as an index.
Optionally, the task scheduling unit 240 selects the signal buffer in a random, round-robin, weighted round-robin, or the like manner. Each signal buffer may be prioritized. The priority of each signal buffer may be adjusted.
Still optionally, the task management unit also clears the signal in the signal buffer. For example, in response to receiving a shutdown or power down request, the task management unit clears all signals in the signal buffer to terminate processing of the signals.
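A minimal sketch of one possible per-kind signal buffer, implemented as the FIFO queue described above, is shown below; the structure and function names are assumptions:

#include <stdbool.h>

#define BUF_ENTRIES 32   /* entries per signal buffer; the size is arbitrary */

struct signal_instance { void *params; };

/* One FIFO buffer per kind of signal: add at the tail, take from the head. */
struct signal_buffer {
    struct signal_instance entries[BUF_ENTRIES];
    unsigned head, tail, count;
};

static bool buffer_push(struct signal_buffer *b, struct signal_instance s)
{
    if (b->count == BUF_ENTRIES)
        return false;                      /* buffer full */
    b->entries[b->tail] = s;
    b->tail = (b->tail + 1) % BUF_ENTRIES;
    b->count++;
    return true;
}

static bool buffer_pop(struct signal_buffer *b, struct signal_instance *out)
{
    if (b->count == 0)
        return false;                      /* no pending signal of this kind */
    *out = b->entries[b->head];
    b->head = (b->head + 1) % BUF_ENTRIES;
    b->count--;
    return true;
}

Under these assumptions, the signal distribution unit would call buffer_push on the buffer selected by the kind of the signal, and the task scheduling unit would call buffer_pop on a buffer chosen, for example, in a round-robin manner or according to priority.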
Fig. 3 shows a schematic diagram of a signal according to an embodiment of the application.
Signal 310 optionally includes one or more parameters (320, 322), one or more switches (340, 342, 344, 346), and/or a task slot 350. Still optionally, signal 310 includes a signal identifier (not shown) that uniquely identifies the signal.
The parameters are used to carry information or data that the signal passes to the task processing unit. The signal may not include a parameter.
A switch indicates whether the task processing unit should be scheduled to process the signal. Each switch has two states, "open" and "closed". The task scheduling unit 240 (see also FIG. 2) schedules a task processing unit to process a signal only when all switches of the signal are in the "closed" state. A signal whose switches are all in the "closed" state is referred to as a signal in the "ready" state. When any switch of a signal is in the "open" state, the execution conditions of the signal have not been met, and such a signal is not processed for the time being. A signal whose switches are not all in the "closed" state is referred to as a signal in the "waiting" state.
As an example, a switch represents whether a resource required to process the signal is available. For example, switch S1 represents whether decoder resources are available and switch S2 represents whether cache resources are available. When a resource becomes available, the corresponding switch is set to the "closed" state. Thus, when all resources required for processing the signal are available, the task scheduling unit 240 recognizes that the signal is in the "ready" state and schedules the corresponding task processing unit to process the signal. Alternatively or additionally, a resource is consumed when a task processing unit processes a signal. If the resource is no longer available, the corresponding switches are set to the "open" state, so that the signals to which these switches belong are in the "waiting" state. The task scheduling unit 240 does not schedule task processing units to process signals in the "waiting" state.
The task slot indicates a task processing unit, and the task scheduling unit 240 schedules the task processing unit to process the signal according to the indication of the task slot. Optionally, some signals do not include a task slot; the task processing unit that processes such a signal is obtained according to the identifier of the signal or the signal buffer in which the signal is located.
Optionally, the signal 310 further comprises information indicating a priority.
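One possible in-memory layout of such a signal, sketched in C purely for illustration (the field names are assumptions, and the switches are packed into a bitmask), might be:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of signal 310: parameters, switches, task slot, priority. */
struct signal310 {
    uint32_t id;            /* identifier that uniquely identifies the signal   */
    uint32_t params[2];     /* optional parameters 320, 322                     */
    uint8_t  switch_mask;   /* one bit per switch 340-346: 1 means "closed"     */
    uint8_t  switch_count;  /* number of switches this signal actually uses     */
    void   (*task_slot)(struct signal310 *); /* task processing unit, slot 350  */
    uint8_t  priority;      /* optional priority information                    */
};

/* A signal is "ready" when all of its switches are in the "closed" state. */
static bool signal_is_ready(const struct signal310 *s)
{
    uint8_t all_closed = (uint8_t)((1u << s->switch_count) - 1u);
    return (s->switch_mask & all_closed) == all_closed;
}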
Fig. 4A is a schematic diagram of a task management unit according to yet another embodiment of the present application.
The task management unit includes a signal distribution unit 410, one or more signal buffers (420, 422 … … 42n), a task scheduling unit 440, and a resource management unit 450.
A signal includes a plurality of switches. In the example of FIG. 4A, each entry of a signal buffer represents a signal, and each signal includes 4 switches, as indicated by the squares within the entries of the signal buffer.
The task scheduling unit 440 schedules one or more task processing units (460, 462 … … 46n) to process the tasks indicated by the signals.
The signal distribution unit 410 receives a signal provided by a task processing unit or hardware, the signal indicating a subsequent task processing unit that needs to be scheduled to process the task indicated by the signal. In response to receiving the signal, the signal distribution unit 410 adds the signal to a signal buffer corresponding to the kind of the signal according to the kind of the received signal.
The task scheduling unit 440 schedules the task processing units according to the signals in the signal buffer. The task scheduling unit 440 fetches a signal from one of the signal buffers and schedules one of the task processing units to process the signal.
The resource management unit 450 sets a switch corresponding to an available resource to a "closed" state in response to availability of one or more resources. By way of example, in response to a resource being available, the resource management unit 450 sets all switches corresponding to the resource to a "closed" state.
Still optionally, the task processing units (460, 462 … … 46n) process the indicated tasks and consume resources. In response to one or more resources becoming unavailable due to being consumed, the resource management unit 450 also sets the switches corresponding to the unavailable resources to the "open" state.
The task scheduling unit 440 schedules only the task processing unit for the signal in the "ready" state, and does not schedule the task processing unit for the signal in the "waiting" state.
In one example, the resource management unit 450 sets the switches of a plurality of signals corresponding to a resource to the "closed" state in response to the resource being available, so that two signals (denoted S1 and S2) both become "ready". In this example, the resource management unit does not track the amount of the available resource: even if only one unit of the resource is available, the corresponding switches of both signals are set to the "closed" state. The task scheduling unit 440 selects signal S1 from signals S1 and S2 and schedules a task processing unit to process signal S1, which consumes the resource; in response to the resource becoming unavailable, the resource management unit 450 sets the switches of the signals corresponding to the resource to the "open" state, so that signal S2 changes from the "ready" state back to the "waiting" state. When the task scheduling unit 440 performs scheduling again, signal S2 is in the "waiting" state and cannot be processed.
In yet another example, the resource management unit 450 reserves resources for one or more signals. For example, resource R1 is reserved for signal S1. In response to resource R1 being available, the resource management unit 450 identifies the available amount of resource R1. If signal S1 is in the signal buffer, the resource management unit 450 sets the switch of signal S1 corresponding to resource R1 to the "closed" state according to the availability of resource R1, and then computes the remaining available amount of resource R1 as if signal S1 had already consumed its specified amount of resource R1, even though resource R1 has not actually been used yet and is still available. If the specified amount of resource R1 is regarded as consumed by S1 in this way, the resource management unit 450 recognizes that resource R1 is unavailable and sets the switch of signal S2 corresponding to resource R1 to the "open" state. In this way, the resource management unit 450 reserves the resource for signal S1.
In another example, the resource management unit 450 allocates resources to one or more signals. The resource management unit 450 maintains the available amount of a resource, and decrements it whenever, according to the resource availability, it sets the switch of a signal corresponding to that resource to the "closed" state. Thus, setting the switch to the "closed" state represents allocating the resource to the corresponding signal (so that it will not be allocated again to other signals).
In yet another example, the resource management unit 450 may be shared among task processing units, or may be omitted, with the task processing unit or the hardware itself recognizing that a resource is available and setting the corresponding switch of each signal that requires the resource.
Fig. 4B illustrates a flow chart for distributing signals according to an embodiment of the application.
The signal distribution unit (410) (see also fig. 4A) adds the signal to the corresponding signal buffer (460, 462 … … 46n) in response to acquiring the signal to be distributed.
Optionally, the signal distribution unit 410 further checks the availability of each resource required by the signal, sets the switches of the signal corresponding to available resources to the "closed" state, and sets the switches corresponding to unavailable resources to the "off" state. When submitting a signal to the signal distribution unit 410, the signal issuer optionally also sets the state of one or more switches of the signal according to the availability of resources.
For signals in the signal buffer, the resource management unit 450, a task processing unit, or the hardware providing the resource obtains the availability of one or more resources (490). In response to a resource being available, the switch corresponding to that resource is set to the "closed" state for each signal that requires it (492). Alternatively, in response to a resource being consumed and becoming unavailable, the switch corresponding to that resource is set to the "off" state for each signal that requires it.
The task scheduling unit 440 retrieves a signal in the "ready" state from the signal buffer (494). A signal is in the "ready" state when all of its switches are "closed". Optionally, the task scheduling unit 440 selects among the signals in the "ready" state according to a random, round-robin, or weighted round-robin policy and/or the priority of the signals.
The task scheduling unit 440 schedules a task processing unit corresponding to the acquired signal to process the task indicated by the signal (496).
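For illustration, the dispatch step of fig. 4B might look like the following C sketch, which scans the signal buffers round-robin and calls the task processing unit of the first "ready" signal it finds; schedule_once, sig_buffer_t and the handler field are hypothetical names, and the real selection policy could equally be random, weighted round-robin or priority based, as noted above.

```c
/* Illustrative dispatch pass over ring-buffer signal buffers. */
#include <stdint.h>
#include <stdbool.h>

#define N_BUFFERS 4
#define BUF_DEPTH 16

typedef struct {
    uint32_t required, closed;
    void (*handler)(void *arg);   /* task processing unit for this signal */
    void *arg;
} signal_t;

typedef struct {
    signal_t slots[BUF_DEPTH];
    int head, tail;               /* simple ring buffer: valid in [head, tail) */
} sig_buffer_t;

static bool ready(const signal_t *s)
{
    return (s->closed & s->required) == s->required;
}

/* One scheduling pass: returns true if a signal was dispatched. */
static bool schedule_once(sig_buffer_t bufs[N_BUFFERS], int *rr_cursor)
{
    for (int n = 0; n < N_BUFFERS; n++) {
        sig_buffer_t *b = &bufs[(*rr_cursor + n) % N_BUFFERS];
        for (int i = b->head; i != b->tail; i = (i + 1) % BUF_DEPTH) {
            if (ready(&b->slots[i])) {
                signal_t s = b->slots[i];
                /* remove the entry: move the head entry into slot i, advance head */
                b->slots[i] = b->slots[b->head];
                b->head = (b->head + 1) % BUF_DEPTH;
                *rr_cursor = (*rr_cursor + n + 1) % N_BUFFERS;
                s.handler(s.arg);      /* schedule the task processing unit */
                return true;
            }
        }
    }
    return false;                      /* nothing ready in this pass */
}
```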
Fig. 5A is a schematic diagram of a task management unit provided according to yet another embodiment of the present application.
Some processors run multiple processes. According to an embodiment of the present application, a task management unit is provided for each process. When the process running on the processor is switched, the operating system or the underlying software backs up the running context, including the architectural registers, for the switched-out process and restores the running context for the switched-in process. A process includes a plurality of task processing units. When switching between task processing units belonging to the same process, the operating system or the underlying software does not switch contexts for them. The task management unit manages or schedules the task processing units belonging to the process in which the task management unit is located.
Referring to fig. 5A, a processor CPU0 runs a process 0 and a process 1, each of which includes a task management unit.
The task management unit of the process 0 includes a signal distribution unit 510, one or more signal buffers (520, 522 … … 52n), and a task scheduling unit 540; the task management unit of process 1 includes a signal distribution unit 512, one or more signal buffers (530, 532 … … 53n), and a task scheduling unit 542. One or more task processing units (560, 562 … … 56n) scheduled by the task scheduling unit 540 run in process 0, and one or more task processing units (570, 572 … … 57n) scheduled by the task scheduling unit 542 run in process 1.
By calling the API provided by the task management unit also running in process 0, a task processing unit (560, 562 … 56n) running in process 0 can send a signal to a task processing unit (560, 562 … 56n) belonging to process 0 and can also send a signal to a task processing unit (570, 572 … 57n) belonging to process 1. Conversely, a task processing unit (570, 572 … 57n) running in process 1 can, by calling the API provided by the task management unit also running in process 1, send a signal to a task processing unit (570, 572 … 57n) belonging to process 1 and can also send a signal to a task processing unit (560, 562 … 56n) belonging to process 0.
In response to receiving a signal, the signal distribution unit (510, 512) identifies which process the task processing unit indicated by the signal runs in. Taking the signal distribution unit 510 as an example, if the signal is identified as being sent to a task processing unit (560, 562 … 56n) running in process 0, the signal is added to the corresponding signal buffer (520, 522 … 52n); if the signal is identified as being sent to a task processing unit (570, 572 … 57n) running in process 1, the signal is added to the corresponding signal buffer (530, 532 … 53n). The signal distribution unit 512 behaves in the same way: signals for task processing units of process 0 are added to the signal buffers (520, 522 … 52n), and signals for task processing units of process 1 are added to the signal buffers (530, 532 … 53n).
According to the embodiment of fig. 5A, the signal distribution units (510, 512) are able to add signals to signal buffers that do not belong to their own process, whereas the task scheduling units (540, 542) can only schedule task processing units belonging to their own process.
Fig. 5B is a flowchart of distributing a signal by the signal distribution unit provided according to the embodiment of the present application.
Taking the signal distribution unit (510) (see also fig. 5A) as an example, in response to acquiring the signal to be distributed (580), it is identified whether the task processing unit to receive the signal belongs to its own process or to another process (582). If the signal indicates that the signal is to be processed by a task processing unit belonging to the own process (582), the signal distribution unit 510 adds the signal to the signal buffer of the own process (584). If the signal indicates that the signal is to be processed by a task processing unit belonging to another process (582), the signal distribution unit 510 adds the signal to the signal buffer of the other process (586).
Optionally, to avoid the situation where the other process receiving the signal is in a sleep state and the signal cannot be processed for a long time, the signal distribution unit 510 further wakes up the other process after adding the signal to its signal buffer (588). For example, the signal distribution unit 510 wakes up the process by sending an interrupt signal to the other process. Still alternatively, the signal distribution unit 510 infers whether process 1 is in the sleep state according to whether its signal buffers already contain signals. For example, if none of the signal buffers of process 1 contains a signal to be processed, the signal distribution unit 510 considers process 1 to be in a sleep state; if any signal buffer of process 1 contains a task to be processed, the signal distribution unit 510 considers process 1 to be running and does not wake it up again.
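One possible shape of the cross-process distribution of fig. 5B, under the assumption that an empty set of signal buffers is taken as the hint that the target process may be asleep; process_t, enqueue and wake_process are illustrative stand-ins for the process bookkeeping, the signal buffer, and the interrupt-based wake-up respectively.

```c
/* Illustrative cross-process signal distribution with lazy wake-up. */
#include <stdbool.h>

typedef struct { int id; /* ... signal payload ... */ } signal_t;

typedef struct process {
    int       id;
    int       pending;            /* total signals queued across its buffers */
    signal_t  queue[64];
    int       head, tail;         /* head is consumed by the target process  */
} process_t;

static bool buffers_empty(const process_t *p) { return p->pending == 0; }

static void enqueue(process_t *p, signal_t s)
{
    p->queue[p->tail] = s;
    p->tail = (p->tail + 1) % 64;
    p->pending++;
}

/* Stand-in for an OS- or interrupt-based wake-up of a possibly sleeping process. */
static void wake_process(process_t *p) { (void)p; /* e.g. send an interrupt */ }

static void distribute(process_t *self, process_t *target, signal_t s)
{
    if (target == self) {               /* step 584: own process             */
        enqueue(self, s);
        return;
    }
    bool was_empty = buffers_empty(target);
    enqueue(target, s);                 /* step 586: other process's buffer  */
    if (was_empty)                      /* step 588: presumed asleep -> wake */
        wake_process(target);
}
```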
Fig. 6A is a schematic diagram of a task management unit provided according to yet another embodiment of the present application.
Some processors include multiple CPU cores. According to an embodiment of the present application, a task management unit is provided for each process running on each CPU core. The task management unit manages the task processing units belonging to the process on the CPU core where the task management unit is located. For the sake of brevity, a processor including a plurality of CPU cores is described hereinafter as an example. Those skilled in the art will appreciate that a task management unit applied to a processor comprising multiple CPU cores is also applicable to a multi-threaded system, since threads provide virtual CPU cores for the programs they run.
Referring to fig. 6A, the processor includes CPU0 and CPU 1. CPU0 and CPU1 each include two processes. CPU0 runs process 612 and process 618 and CPU1 runs process 622 and process 628. Each process includes a task management unit. The task management unit of the process 612 of the CPU0 includes a signal distribution unit 610, one or more signal buffers, and a task scheduling unit 640; the task management unit of process 618 of CPU0 includes a signal distribution unit 615, one or more signal buffers, and a task scheduling unit 642. One or more task processing units scheduled by the task scheduling unit 640 run on the process 612 of CPU0, and one or more task processing units scheduled by the task scheduling unit 642 run on the process 618 of CPU 0.
The task management unit of the process 622 of CPU1 includes a signal distribution unit 620, one or more signal buffers, and a task scheduling unit 650; the task management unit of the process 628 of the CPU1 includes a signal distribution unit 625, one or more signal buffers, and a task scheduling unit 652. One or more task processing units scheduled by the task scheduling unit 650 run in the process 622 of CPU1, and one or more task processing units scheduled by the task scheduling unit 652 run in the process 628 of CPU 1.
By calling the API provided by the task management unit also running in process 612, a task processing unit of process 612 may send a signal to a task processing unit of the same process 612 or to a task processing unit of another process (including process 618 of CPU0, and processes 622 and 628 of CPU1). Likewise, a task processing unit belonging to process 622 running on CPU1 may, by calling the API provided by the task management unit of process 622 also running on CPU1, send a signal to a task processing unit belonging to the same process 622 or to a task processing unit belonging to another process.
In response to receiving a signal, the signal distribution unit (610, 615, 620, 625) identifies which process the task processing unit indicated by the signal runs in. Taking the signal distribution unit 610 as an example, if the signal is identified as being sent to a task processing unit of process 612 running on CPU0, the signal is added to the corresponding signal buffer of process 612; if the signal is identified as being sent to a task processing unit of process 628 running on CPU1, the signal is added to the corresponding signal buffer (330, 332 … 33n) of process 628. Optionally, the signal buffers are provided by a shared memory accessible by each CPU and each process. Still alternatively, the signal buffers are provided by private memory maintained by each CPU, with access to private memory belonging to other CPUs provided by the operating system, underlying software or hardware.
According to the embodiment of fig. 6A, the signal distribution units (610, 615, 620, 625) are able to add signals to signal buffers that do not belong to their own CPU, whereas the task scheduling units (640, 642, 650, 652) can only schedule task processing units belonging to their own CPU.
CPU0 and CPU1 each include a process wakeup unit (619, 629). Optionally, each process comprises a process wakeup unit.
The process awakening unit is used for awakening other processes to schedule the task processing unit to process the signal.
According to an embodiment of the application, a process state table is also maintained. The process state table is provided by a shared memory accessible to both CPU0 and CPU 1. The process state table includes a plurality of entries, each entry indicating an execution state of one of the processes. The execution states of the processes include, for example, a "sleep" state and a "run" state.
In response to the signal distribution unit (e.g., signal distribution unit 615) sending a signal to another process (e.g., process 622), the process wakeup unit 619 queries the state of process 622 in the process state table. If process 622 is in the "sleep" state, the remote wakeup unit modifies the entry of process 622 in the process state table to record the "working" state and also sends an interrupt to CPU1, where process 622 is located. The interrupt processing unit of CPU1 accesses the process state table, obtains the execution state of each process running on CPU1, and wakes up the processes in the "working" state.
If remote wakeup unit 619 discovers that process 622 is in a "working" state via the process state table, no further processing is required.
When a process enters the sleep state, the "sleep" state is recorded in the entry of the process state table corresponding to that process.
Optionally, the remote wakeup unit may infer the execution state of the target process from whether the signal buffer of the target process receiving the signal is empty. The target process is woken up if it is in the "sleep" state; no further processing is performed for a target process in the "working" state.
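The process state table and the remote wake-up could be sketched as below, assuming the table lives in memory shared by CPU0 and CPU1 and that send_interrupt stands in for a platform-specific inter-processor interrupt; all names are illustrative. The compare-and-swap keeps the "mark working, then interrupt" step from being repeated when the target is already working.

```c
/* Illustrative shared process state table and remote wake-up. */
#include <stdatomic.h>

enum proc_state { PROC_SLEEPING = 0, PROC_WORKING = 1 };

#define MAX_PROCS 8

typedef struct {
    _Atomic int state[MAX_PROCS];  /* one entry per process, in shared memory */
} proc_state_table_t;

/* Stand-in for sending an inter-processor interrupt to the remote CPU. */
static void send_interrupt(int cpu) { (void)cpu; }

/* Called by the process wakeup unit after a signal was queued for process
 * `proc`, which runs on CPU `cpu`. */
static void remote_wakeup(proc_state_table_t *t, int proc, int cpu)
{
    int expected = PROC_SLEEPING;
    /* Only act if the target was recorded as sleeping: mark it "working" and
     * interrupt its CPU so the interrupt handler can actually wake it up.   */
    if (atomic_compare_exchange_strong(&t->state[proc], &expected, PROC_WORKING))
        send_interrupt(cpu);
    /* If it was already "working", nothing more needs to be done. */
}

/* Called by a process right before it goes to sleep. */
static void mark_sleeping(proc_state_table_t *t, int proc)
{
    atomic_store(&t->state[proc], PROC_SLEEPING);
}
```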
Fig. 6B is a flowchart of signal distribution by a signal distribution unit according to another embodiment of the present application.
Taking the signal distribution unit (610) (see also fig. 6A) as an example, in response to acquiring a signal to be distributed (680) that is destined for a target process on the remote CPU1, the signal is added to the signal buffer of the target process (682). The signal distribution unit (610) (or the remote wakeup unit 619) then identifies whether the remote CPU1, or the target process on the remote CPU1, is in the "running" state (684).
If the remote CPU1 or the target process is not in the "running" state (684), the signal distribution unit (610) (or the remote wakeup unit 619) sets the operating state of the target process to the "running" state in the process state table, and also sends an interrupt to the remote CPU1 (686). The interrupt handling function of the remote CPU1 accesses the process state table and wakes up the target process according to the process state table (688).
If the remote CPU1 or the target process is in the "running" state (684), the signal distribution unit (610) (or the remote wakeup unit 619) performs no further processing (690).
In response to being woken up, the task scheduling unit of the target process starts working.
Fig. 7A is a schematic diagram of a task management unit provided according to yet another embodiment of the present application.
The task management unit includes a signal distribution unit 710, one or more signal buffers (720, 722 … 72n), and a task scheduling unit 750.
In response to receiving a signal, the signal distribution unit 710 identifies whether there are sufficient resources to process the task: for example, whether there is sufficient memory space, whether the hardware resources needed to process the task are free, and whether the data needed by the task is ready. There is a corresponding switch for each resource required to process a signal. A signal whose switches are all "closed" is in the "ready" state, which indicates that there are sufficient resources to process it.
If the received signal is already in the "ready" state, the signal distribution unit 710 calls the task processing unit (752, 754 … or 75n) corresponding to the signal to process the task indicated by the signal, without adding the signal to a signal buffer. If the received signal is in the "waiting" state, the signal distribution unit 710 adds the signal to the corresponding signal buffer.
In this way, tasks indicated by some signals can be processed earlier without going through the processes of being added to and taken out of the signal buffer, and the time for processing the tasks indicated by the signals is shortened.
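A minimal sketch of this fast path, assuming the signal carries a handler pointer for its task processing unit; distribute_fast_path and the other names are illustrative. A signal that is already "ready" is invoked directly, and only "waiting" signals pay the cost of entering and later leaving the signal buffer.

```c
/* Illustrative fast-path distribution: dispatch ready signals immediately. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t required, closed;
    void (*handler)(void *arg);   /* task processing unit for this signal */
    void *arg;
} signal_t;

typedef struct { signal_t slots[32]; int count; } sig_buffer_t;

static bool ready(const signal_t *s)
{
    return (s->closed & s->required) == s->required;
}

static void distribute_fast_path(sig_buffer_t *buf, signal_t s)
{
    if (ready(&s)) {
        s.handler(s.arg);             /* call the task processing unit directly */
    } else if (buf->count < 32) {
        buf->slots[buf->count++] = s; /* fall back to the signal buffer         */
    }
    /* Handling of a full buffer is omitted in this sketch. */
}
```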
The resource management unit 740, a task processing unit, or the hardware providing the resource acquires the availability of one or more resources. In response to a resource being available, the switch corresponding to that resource is set to the "closed" state for each signal that requires it. Alternatively, in response to a resource being consumed and becoming unavailable, the switch corresponding to that resource is set to the "off" state for each signal that requires it.
The task scheduling unit 750 acquires a signal in the "ready" state from the signal buffer. Optionally, the task scheduling unit 750 selects among the signals in the "ready" state according to a random, round-robin, or weighted round-robin policy and/or the priority of the signals. The task scheduling unit 750 schedules the task processing unit corresponding to the acquired signal to process the task indicated by the signal.
Still alternatively, the API for sending signals that the task management unit provides to the task processing units may come in several variants, or may accept several parameters. According to the type or parameters of the API used by the task processing unit, the signal distribution unit 710 decides whether to directly call the task processing unit corresponding to a signal that is in the "ready" state, or to add the signal to the signal buffer. It will be understood that even if the API used by the task processing unit indicates that the task processing unit corresponding to the signal should be called directly, the signal distribution unit 710 still adds the signal to the signal buffer if the signal is in the "waiting" state.
Still alternatively, the API for sending signals that the task management unit provides to the task processing units further specifies whether the signals are to be processed in order. The order is, for example, the order in which the task processing units call the API for sending signals. If the API indicates that the signals are to be processed in order, the signal distribution unit 710 optionally still directly calls the task processing unit corresponding to a signal in the "ready" state, adds a signal in the "waiting" state to the signal buffer, and marks the order of the signals in the signal buffer. For example, with a signal buffer of a queue structure, a signal is added to the tail of the queue, and the task scheduling unit 750 acquires a signal in the "ready" state only from the head of the queue.
In one example, the signal buffer includes both signals that are to be processed sequentially and signals that are not to be processed sequentially. The signal distribution unit 710 adds signals to be sequentially processed to the signal buffers of the queue structure, and adds signals that do not need to be sequentially processed to other signal buffers.
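The in-order case can be illustrated with a plain FIFO in which only the head entry may be dispatched, even if a later entry is already "ready"; ordered_buffer_t, push_ordered and peek_dispatchable are hypothetical names used only for this sketch.

```c
/* Illustrative in-order signal buffer: only the head may be dispatched. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { uint32_t required, closed; } signal_t;

typedef struct {
    signal_t q[32];
    int head, tail;          /* FIFO of signals that must keep their order */
} ordered_buffer_t;

static bool ready(const signal_t *s)
{
    return (s->closed & s->required) == s->required;
}

static void push_ordered(ordered_buffer_t *b, signal_t s)
{
    b->q[b->tail] = s;                 /* always append at the tail */
    b->tail = (b->tail + 1) % 32;
}

/* Returns the head signal only if it is ready; later ready signals must wait
 * until everything queued before them has been dispatched. */
static signal_t *peek_dispatchable(ordered_buffer_t *b)
{
    if (b->head == b->tail)
        return NULL;                   /* buffer is empty */
    signal_t *h = &b->q[b->head];
    return ready(h) ? h : NULL;
}

static void pop_head(ordered_buffer_t *b) { b->head = (b->head + 1) % 32; }
```

Signals that do not need ordering would be kept in separate buffers, as described above, so that a stalled head entry does not block unrelated work.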
Optionally, the priority of a signal buffer is adjusted according to the signals it holds. The priority of a signal buffer is increased in response to the number of signals in the buffer, or its increment, being greater than a threshold; the priority of a signal buffer is decreased in response to the number of signals in the buffer, or its increment, being less than a threshold.
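As an illustration of this occupancy-driven adjustment, the following sketch promotes a signal buffer whose backlog (or its growth since the last check) exceeds a threshold and demotes one that is draining; the watermark values and all names are assumptions made for this sketch, not values given in this application.

```c
/* Illustrative occupancy-driven priority adjustment for a signal buffer. */
typedef struct {
    int count;          /* signals currently in this buffer       */
    int last_count;     /* count observed at the previous check   */
    int priority;       /* larger value = scheduled more eagerly  */
} sig_buffer_stats_t;

#define HIGH_WATERMARK    24
#define LOW_WATERMARK      4
#define GROWTH_THRESHOLD   8

static void adjust_priority(sig_buffer_stats_t *b)
{
    int delta = b->count - b->last_count;

    if (b->count > HIGH_WATERMARK || delta > GROWTH_THRESHOLD) {
        b->priority++;                  /* backlog building up: promote */
    } else if (b->count < LOW_WATERMARK || delta < -GROWTH_THRESHOLD) {
        if (b->priority > 0)
            b->priority--;              /* draining: demote             */
    }
    b->last_count = b->count;           /* remember for the next check  */
}
```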
Fig. 7B is a flowchart of distributing a signal by the signal distribution unit provided according to the embodiment of the present application.
In response to acquiring a signal to be distributed (760), the signal distribution unit (710) (see also fig. 7A) identifies whether all the resources needed to process the task corresponding to the signal are ready, i.e. whether the signal is in the "ready" state (762). If all the resources required by the task corresponding to the signal are ready, the task processing unit that processes the task corresponding to the signal is called (764). If, at step 762, it is identified that the resources required to process the task corresponding to the signal are not all ready, the signal is added to the corresponding signal buffer (768).
For signals in the signal buffer, the resource management unit, a task processing unit, or the hardware providing the resource acquires the availability of one or more resources (770). In response to a resource being available, the switch corresponding to that resource is set for each signal waiting for it (772). Alternatively, in response to a resource being consumed and becoming unavailable, the switch corresponding to that resource is set to the "off" state for each signal that requires it.
The task scheduling unit 750 retrieves a signal in the "ready" state from the signal buffer (774). A signal is in the "ready" state when all of its switches are "closed". The task scheduling unit 750 schedules the task processing unit corresponding to the acquired signal to process the task indicated by the signal (776).
Fig. 7C is a flowchart of distributing a signal by a signal distribution unit according to still another embodiment of the present application.
Some tasks processed by the electronic device include multiple phases. For a task that has already been started to process, it is desirable to complete its processing as soon as possible to shorten the time that the resources for processing the task are occupied by the task. Taking the storage device processing the IO command as an example, the processing of the IO command includes two stages, i.e., operating the storage medium (denoted as S1) and providing the processing result (denoted as S2) to the host. Each stage is processed by a task processing unit (752, 754 … … 75n) (see also fig. 7A). After the task processing unit has completed stage S1, it issues a signal indicating that stage S2 is to be processed next.
Referring to fig. 7C, in response to acquiring a signal to be distributed (780), the signal distribution unit (710) (see also fig. 7A) identifies whether the signal is in the "ready" state (782). If the signal is in the "ready" state, it further identifies whether processing of the IO command indicated by the signal to be distributed has already started (784). For example, if the signal to be distributed indicates that the IO command is to be processed in stage S1, the IO command has not yet started to be processed; if the signal to be distributed indicates that the IO command is to be processed in stage S2, processing of the IO command has already started.
In response to recognizing that processing of the IO command indicated by the signal to be distributed has started, the task processing unit for the S2 stage of the task corresponding to the signal is called (786). If, at step 784, it is recognized that processing of the IO command indicated by the signal to be distributed has not started, the signal is added to the corresponding signal buffer (788). Optionally, at step 784, it is determined according to other conditions whether to preferentially schedule the task processing unit corresponding to the signal. For example, the signal is given a designated high priority. As yet another example, the signal has a quota which has not yet been exhausted; based on the quota, at step 784 the task processing unit corresponding to the signal is called, and the quota is reduced by a specified amount. By setting a quota, some signals are processed preferentially under a specified load, but if such signals occur frequently and the quota is used up, they are added to the signal buffer for uniform scheduling. Still alternatively, at step 784, it is determined according to the type of API used for sending the signal whether to directly schedule the task processing unit corresponding to the signal without adding the signal to the signal buffer.
If, at step 782, it is recognized that the signal is in the "waiting" state, the signal is added to the corresponding signal buffer (788).
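The quota check of step 784 could, for illustration, look like the sketch below: a "ready" signal whose IO command is already in flight (stage S2 or S3) is dispatched directly while its quota lasts, and falls back to the signal buffer once the quota is exhausted; dispatch_if_privileged, io_stage_t and the quota size are hypothetical names and values, not part of this application.

```c
/* Illustrative quota-gated direct dispatch for already-started IO commands. */
#include <stdbool.h>

typedef enum { STAGE_S1, STAGE_S2, STAGE_S3 } io_stage_t;

typedef struct {
    io_stage_t stage;          /* which phase of the IO command this signal drives */
    int        quota;          /* remaining direct-dispatch budget for this signal  */
    void     (*handler)(void *arg);
    void      *arg;
} signal_t;

/* Returns true if the signal was dispatched immediately; false means the
 * caller should add it to the signal buffer for uniform scheduling instead. */
static bool dispatch_if_privileged(signal_t *s)
{
    bool command_started = (s->stage != STAGE_S1);   /* S1 means not started yet */
    if (command_started && s->quota > 0) {
        s->quota--;                                  /* spend one unit of quota  */
        s->handler(s->arg);
        return true;
    }
    return false;
}
```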
FIG. 8A is a schematic illustration of tasks provided according to another embodiment of the present application.
IO commands are used here as an example of tasks. Storage devices typically process multiple IO commands simultaneously. In fig. 8A, each IO command (800, 802, 804) includes a plurality of stages (S1, S2 and S3). The three stages belonging to the same IO command must be processed in the order S1, S2, S3. Between stages S1, S2 and S3, it may be necessary to wait some time for the resources required to process the IO command to become ready. Multiple IO commands may be processed in parallel. After the task processing unit has processed stage S1 of the IO command 800, it indicates by a signal that stage S2 should be processed next, and the signal also indicates that this stage S2 belongs to the IO command 800.
According to the embodiments of the present application, it is desirable both to process as many IO commands simultaneously as possible with limited resources and to keep the processing time of each IO command as short as possible. It should therefore be avoided that a signal indicating a subsequent stage (S2 or S3) of an IO command that has already started processing waits too long to be scheduled; rather, such signals are expected to be processed as early as possible.
The above object is achieved in various ways, for example: a higher priority is set for signals indicating a subsequent stage (S2 or S3) of an IO command that has started processing; a higher priority is set for signals of IO commands that have started processing; or priorities are set for the signals of IO commands in the order in which processing of the IO commands started, so that signals of IO commands that started processing earlier have higher priority.
Fig. 8B is a schematic diagram of a task management unit provided according to another embodiment of the present application.
The task management unit includes a signal distribution unit 810, one or more signal buffers (820, 822 … … 82n), a resource management unit 830, and a task scheduling unit 840.
By way of example, the signals correspond one-to-one to the task processing units.
The signal buffers (820, 822 … … 82n) are in one-to-one correspondence with the IO commands being processed. For example, the signal buffer 820 is dedicated to accommodate signals indicative of processing of the IO command 800 (or stages thereof); the signal buffer 822 is dedicated to accommodate signals that indicate processing of the IO command 802 (or stages thereof); the signal buffer 82n is dedicated to accommodate signals indicative of processing of the IO command 80n (or stages thereof). The number of signal buffers is independent of the type of signal.
The task management unit according to the embodiment of fig. 8B schedules a plurality of task processing units (850, 852 … 85m). The number of task processing units and the number of signal buffers need not be the same. Optionally, the number of task processing units depends on the kinds of signals, with at least one corresponding task processing unit provided for each kind of signal. Still alternatively, a plurality of task processing units are provided for each kind of signal.
The signal distribution unit 810 receives a signal provided by a task processing unit or hardware, the signal indicating a task processing unit that subsequently needs to be scheduled to process the task indicated by the signal. The IO command to which the task indicated by the signal belongs can be obtained according to the signal. The signal distributing unit 810 adds the acquired signal to a signal buffer corresponding to an IO command corresponding to the signal.
Optionally, the signal distribution unit 810 also identifies whether the signal is already in a "ready" state. In the case where the signal is already in the "ready" state, the task processing unit (850, 852 … … 85m) that processes the task to which the signal corresponds is called.
Still alternatively, when the signal is in the "ready" state, the signal distribution unit 810 further determines, according to other conditions, whether to directly schedule the task processing unit (850, 852 … 85m) that processes the task corresponding to the signal instead of adding the signal to the signal buffer. For example, if processing of the IO command corresponding to the signal has already started (the stage of the IO command corresponding to the signal is not stage S1 but stage S2 or S3), or if the IO command corresponding to the signal has been under processing for long enough, or long enough relative to other IO commands, the task processing unit (850, 852 … 85m) that processes the task corresponding to the signal is directly scheduled.
The resource management unit 830, a task processing unit, or the hardware providing the resource acquires the availability of one or more resources. In response to a resource being available, the switch corresponding to that resource is set to the "closed" state for each signal that requires it. Alternatively, in response to a resource being consumed and becoming unavailable, the switch corresponding to that resource is set to the "off" state for each signal that requires it.
The task scheduling unit 840 acquires a signal in the "ready" state from the signal buffer. Optionally, the task scheduling unit 840 selects among the signals in the "ready" state according to a random, round-robin, or weighted round-robin policy and/or the priority of the signals. The task scheduling unit 840 schedules the task processing unit corresponding to the acquired signal to process the task indicated by the signal.
Optionally, the task scheduling unit 840 selects among the signal buffers in a random, round-robin, weighted round-robin, or similar manner. Optionally, the priority of each signal buffer is set according to the time at which the IO command corresponding to the signal buffer started to be processed: signal buffers corresponding to IO commands that started processing earlier are given a higher priority, so that the task scheduling unit 840 selects signal buffers according to priority and serves earlier-started IO commands with higher priority.
Optionally, the priority of a signal buffer is set according to the order in which IO commands start to be processed. In response to a new IO command starting to be processed, a new signal buffer is allocated for it to accommodate its signals; the currently recorded sequence number is incremented, and the priority of the newly allocated signal buffer is set using the incremented sequence number. In response to the IO command completing, the signal buffer accommodating its signals is reclaimed or allocated to a new IO command.
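Putting the per-IO-command buffers, the start-order priority and the reclamation together, one possible sketch is the following; cmd_buffer_pool_t, alloc_cmd_buffer and MAX_INFLIGHT are illustrative names, and a smaller start sequence number is treated here as higher priority.

```c
/* Illustrative pool of per-IO-command signal buffers with start-order priority. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_INFLIGHT 16

typedef struct {
    bool      in_use;
    uint64_t  start_seq;       /* sequence number assigned when the command started */
    /* ... the signals of this IO command would be stored here ...                  */
} cmd_buffer_t;

typedef struct {
    cmd_buffer_t buf[MAX_INFLIGHT];
    uint64_t     next_seq;     /* monotonically increasing start counter */
} cmd_buffer_pool_t;

/* Called when processing of a new IO command starts. */
static cmd_buffer_t *alloc_cmd_buffer(cmd_buffer_pool_t *p)
{
    for (int i = 0; i < MAX_INFLIGHT; i++) {
        if (!p->buf[i].in_use) {
            p->buf[i].in_use = true;
            p->buf[i].start_seq = p->next_seq++;
            return &p->buf[i];
        }
    }
    return NULL;                       /* all buffers busy: the command must wait */
}

/* Earlier start (smaller sequence number) means higher scheduling priority. */
static cmd_buffer_t *highest_priority(cmd_buffer_pool_t *p)
{
    cmd_buffer_t *best = NULL;
    for (int i = 0; i < MAX_INFLIGHT; i++)
        if (p->buf[i].in_use && (!best || p->buf[i].start_seq < best->start_seq))
            best = &p->buf[i];
    return best;
}

/* Called when the IO command finishes; the buffer can then serve a new command. */
static void reclaim_cmd_buffer(cmd_buffer_t *b) { b->in_use = false; }
```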
In addition to being applied to storage devices, the embodiments according to the present application are also applicable to task scheduling in computers, servers, network devices and other electronic devices.
Embodiments of the present application also provide a program comprising program code, which, when loaded into and executed on an electronic device, causes the electronic device to perform the method described above.
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including program instructions. These program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of operations for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
At least a portion of the various blocks, operations, and techniques described above may be performed using hardware, by controlling a device to execute firmware instructions, by controlling a device to execute software instructions, or any combination thereof. When implemented using a control device executing firmware and software instructions, the software or firmware instructions may be stored on any computer-readable storage medium, such as a magnetic disk, optical disk or other storage medium, in RAM or ROM or flash memory, a control device, hard disk, optical disk, magnetic disk, or the like. Likewise, the software and firmware instructions may be transmitted to a user or system via any known or desired transmission means. The software or firmware instructions may include machine-readable instructions that, when executed by the control device, cause the control device to perform various actions.
When implemented in hardware, the hardware may include one or more discrete components, integrated circuits, Application Specific Integrated Circuits (ASICs), and the like.
It should be understood that the present application may be implemented in software, hardware, firmware, or a combination thereof. The hardware may be, for example, a control device, an application specific integrated circuit, a large scale integrated circuit, or the like.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A task scheduling method comprises the following steps:
acquiring a signal;
in response to the signal being in a "ready" state, a task is scheduled to process the signal.
2. The method of claim 1, further comprising:
if the signal is in a "waiting" state, recording the signal in a signal buffer.
3. The method of claim 1, wherein the signals are for processing IO commands and the signal buffers are in one-to-one correspondence with IO commands.
4. The method of claim 3, wherein if the signal is in a "ready" state but the IO command indicated by the signal has not started to be processed, the signal is added to the signal buffer corresponding to the IO command indicated by the signal.
5. The method of one of claims 1-4, further comprising:
in response to acquiring the signal, identifying the process in which the task corresponding to the signal runs, and adding the signal to a signal buffer of the process in which the task corresponding to the signal runs.
6. The method of claim 5, further comprising:
identifying the execution state of the process in which the task corresponding to the signal runs, and if the process is in a sleep state, waking up the process.
7. The method of claim 5 or 6, wherein
identifying a first processor on which the process of the task corresponding to the signal runs, and if the first processor is remote relative to the current processor and the process is in a sleep state, sending an interrupt to the first processor.
8. The method according to one of claims 1 to 7, wherein
the signal comprises a plurality of switches, wherein the switches correspond to the resources required to process the signal, and a signal whose switches are all closed is a signal in the "ready" state.
9. The method of claim 7 or 8, further comprising:
in response to the first resource required to process the first signal being available, a switch of the first signal corresponding to the first resource is closed.
10. A storage device comprising control means and a non-volatile memory, the control means being adapted to perform the method according to one of claims 1-9.
CN201811160925.7A 2018-09-30 2018-09-30 Signal-slot-based large-scale constrained concurrent task scheduling method and device Pending CN110968418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811160925.7A CN110968418A (en) 2018-09-30 2018-09-30 Signal-slot-based large-scale constrained concurrent task scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811160925.7A CN110968418A (en) 2018-09-30 2018-09-30 Signal-slot-based large-scale constrained concurrent task scheduling method and device

Publications (1)

Publication Number Publication Date
CN110968418A true CN110968418A (en) 2020-04-07

Family

ID=70029087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811160925.7A Pending CN110968418A (en) 2018-09-30 2018-09-30 Signal-slot-based large-scale constrained concurrent task scheduling method and device

Country Status (1)

Country Link
CN (1) CN110968418A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430593B1 (en) * 1998-03-10 2002-08-06 Motorola Inc. Method, device and article of manufacture for efficient task scheduling in a multi-tasking preemptive priority-based real-time operating system
CN1320240A (en) * 1998-09-28 2001-10-31 西门子公司 Process management method
US20050223382A1 (en) * 2004-03-31 2005-10-06 Lippett Mark D Resource management in a multicore architecture
US20060271712A1 (en) * 2005-05-04 2006-11-30 Arm Limited Use of a data engine within a data processing apparatus
US7865894B1 (en) * 2005-12-19 2011-01-04 Nvidia Corporation Distributing processing tasks within a processor
JP2007200112A (en) * 2006-01-27 2007-08-09 Kyocera Corp Task processing management method, operating system and computer program
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
CN103501498A (en) * 2013-08-29 2014-01-08 中国科学院声学研究所 Baseband processing resource allocation method and device thereof
US20150066157A1 (en) * 2013-08-30 2015-03-05 Regents Of The University Of Minnesota Parallel Processing with Cooperative Multitasking
US20170046202A1 (en) * 2014-04-30 2017-02-16 Huawei Technologies Co.,Ltd. Computer, control device, and data processing method
EP3010207A1 (en) * 2014-10-14 2016-04-20 F5 Networks, Inc Systems and methods for idle driven scheduling
WO2017070900A1 (en) * 2015-10-29 2017-05-04 华为技术有限公司 Method and apparatus for processing task in a multi-core digital signal processing system
CN108351783A (en) * 2015-10-29 2018-07-31 华为技术有限公司 The method and apparatus that task is handled in multinuclear digital information processing system
CN105760216A (en) * 2016-02-29 2016-07-13 惠州市德赛西威汽车电子股份有限公司 Multi-process synchronization control method
CN107797760A (en) * 2016-09-05 2018-03-13 北京忆恒创源科技有限公司 Method, apparatus and driver based on the processing of cache optimization write order

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860401A (en) * 2021-02-10 2021-05-28 北京百度网讯科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN112860401B (en) * 2021-02-10 2023-07-25 北京百度网讯科技有限公司 Task scheduling method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US8397236B2 (en) Credit based performance managment of computer systems
US8959515B2 (en) Task scheduling policy for limited memory systems
US7318128B1 (en) Methods and apparatus for selecting processes for execution
US8612986B2 (en) Computer program product for scheduling ready threads in a multiprocessor computer based on an interrupt mask flag value associated with a thread and a current processor priority register value
US9858115B2 (en) Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core processor system and related non-transitory computer readable medium
KR101686010B1 (en) Apparatus for fair scheduling of synchronization in realtime multi-core systems and method of the same
US10552213B2 (en) Thread pool and task queuing method and system
CN109564528B (en) System and method for computing resource allocation in distributed computing
US20150121387A1 (en) Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core system and related non-transitory computer readable medium
CN111767134A (en) Multitask dynamic resource scheduling method
US8892819B2 (en) Multi-core system and external input/output bus control method
JP5347451B2 (en) Multiprocessor system, conflict avoidance program, and conflict avoidance method
EP2613257B1 (en) Systems and methods for use in performing one or more tasks
JP2010044784A (en) Scheduling request in system
CN102193828B (en) Decoupling the number of logical threads from the number of simultaneous physical threads in a processor
CN104598311A (en) Method and device for real-time operation fair scheduling for Hadoop
CN111597044A (en) Task scheduling method and device, storage medium and electronic equipment
CN114816777A (en) Command processing device, method, electronic device and computer readable storage medium
CN110968418A (en) Signal-slot-based large-scale constrained concurrent task scheduling method and device
CN116795503A (en) Task scheduling method, task scheduling device, graphic processor and electronic equipment
JP2005092780A (en) Real time processor system and control method
US9618988B2 (en) Method and apparatus for managing a thermal budget of at least a part of a processing system
US7603673B2 (en) Method and system for reducing context switch times
CN101661406A (en) Processing unit dispatching device and method
JP2008225641A (en) Computer system, interrupt control method and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room A302, B-2 Building, North Territory of Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing, 100192

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: Room A302, B-2 Building, North Territory of Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing, 100192

Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.