CN113626213A - Event processing method, apparatus, device, and computer-readable storage medium


Info

Publication number: CN113626213A
Application number: CN202110805102.0A
Authority: CN (China)
Prior art keywords: coroutine, target, event, queue, scheduling
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李丰军, 周剑光, 王腾达
Current and original assignee: China Automotive Innovation Corp
Application filed by China Automotive Innovation Corp

Classifications

    • G06F 9/542: Event management; Broadcasting; Multicasting; Notifications (under G06F 9/54 Interprogram communication, G06F 9/46 Multiprogramming arrangements, G06F Electric digital data processing, G06 Computing, G Physics)
    • G06F 9/546: Message passing systems or structures, e.g. queues (under G06F 9/54 Interprogram communication)


Abstract

The application discloses an event processing method, apparatus, device, and computer-readable storage medium. The method comprises the following steps: a first coroutine generates a target event corresponding to a target service, wherein the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process; the first coroutine sends the target event to the second process through a target network communication link between the first process and the second process; and a scheduling thread in the second process schedules the second coroutine to execute the target event according to the target coroutine identification information. This technical scheme can at least improve the efficiency of concurrent cross-process processing of multiple events.

Description

Event processing method, apparatus, device, and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an event processing method, an event processing apparatus, an event processing device, and a computer-readable storage medium.
Background
To achieve concurrent cross-process processing of multiple events, the technical scheme currently adopted is as follows: when a target process receives multiple events sent by other processes, the thread scheduler of the target process allocates time slices to multiple threads in the target process; when the time slice corresponding to a thread arrives, Central Processing Unit (CPU) resources are switched to that thread, and the thread is invoked to process its event. Clearly, this scheme increases the switching overhead of CPU resources and reduces the efficiency of concurrent cross-process processing of multiple events.
Disclosure of Invention
The application provides an event processing method, an event processing apparatus, an event processing device, and a computer storage medium, which can at least improve the efficiency of concurrent cross-process processing of multiple events.
The application provides an event processing method, which comprises the following steps:
the first coroutine generates a target event corresponding to the target service; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process;
the first coroutine sends the target event to the second process through a target network communication link between the first process and the second process;
and a scheduling thread in the second process schedules the second coroutine to execute the target event according to the target coroutine identification information.
The present application also provides an event processing apparatus, comprising:
the generating module is used for generating a target event corresponding to the target service by the first coroutine; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process;
a sending module, configured to send, by the first coroutine, the target event to the second process through a target network communication link between the first process and the second process;
and the first scheduling module is used for scheduling the second coroutine to execute the target event according to the target coroutine identification information by the scheduling thread in the second process.
In some optional embodiments, the first scheduling module includes:
the first storage unit is used for storing the target event in an event queue corresponding to the second coroutine by the scheduling thread according to the target coroutine identification information; the event queue corresponding to the second coroutine is used for storing the to-be-processed event corresponding to the second coroutine;
the second storage unit is used for storing the second coroutine in a coroutine queue by the scheduling thread; the coroutine queue is used for storing coroutines to be scheduled;
the first scheduling unit is used for scheduling the second coroutine to execute the target event based on the first position information and the second position information by the scheduling thread; the first position information represents the position of the second coroutine in the coroutine queue, and the second position information represents the position of the target event in the event queue corresponding to the second coroutine.
In some optional embodiments, the first scheduling unit includes:
the first scheduling subunit is used for the scheduling thread to sequentially schedule a target coroutine to execute an event to be executed; the target coroutine is the coroutine located at the head end of the coroutine queue, and the event to be executed is the event located at the head end of the event queue corresponding to the target coroutine;
a first deleting subunit, configured to, when the target coroutine finishes executing the to-be-executed event, delete, by the scheduling thread, the to-be-executed event from an event queue corresponding to the target coroutine, and determine whether the event queue corresponding to the target coroutine is empty;
a transfer subunit, configured to, if the event queue corresponding to the target coroutine is determined to be non-empty, transfer the target coroutine to the tail end of the coroutine queue, transfer the adjacent coroutine of the target coroutine to the head end of the coroutine queue, and transfer the adjacent event of the event to be executed to the head end of the event queue corresponding to the target coroutine;
and a second scheduling subunit, configured to schedule, by the scheduling thread, the second coroutine to execute the target event when the first location information indicates that the second coroutine is located at a head end of the coroutine queue, and the second location information indicates that the target event is located at a head end of an event queue corresponding to the second coroutine.
In some optional embodiments, the first scheduling unit further includes:
the second deleting subunit is used for deleting the target coroutine from the coroutine queue and transferring the adjacent coroutine to the head end of the coroutine queue under the condition that the event queue corresponding to the target coroutine is judged to be empty;
and the second scheduling subunit is further configured to schedule, by the scheduling thread, the second coroutine to execute the target event when the first location information indicates that the second coroutine is located at a head end of the coroutine queue, and the second location information indicates that the target event is located at a head end of an event queue corresponding to the second coroutine.
In some optional embodiments, the apparatus further comprises:
the judging module is used for judging whether the second coroutine exists in the coroutine queue or not by the scheduling thread;
correspondingly, the second storage unit is further configured to, when the second coroutine is determined to be absent, store the second coroutine in the coroutine queue by the scheduling thread.
In some optional embodiments, the apparatus further comprises:
the monitoring module is used for monitoring link state information of a plurality of network communication links by the scheduling thread; wherein the plurality of network communication links includes the target network communication link;
the first scheduling module includes:
and the second scheduling unit is used for scheduling the second coroutine to execute the target event according to the target coroutine identification information under the condition that the scheduling thread monitors that the target link state information of the target network communication link indicates that there is an event to be received.
In some optional embodiments, the apparatus further comprises:
a first receiving module, configured to receive, by the scheduling thread, a plurality of events simultaneously when it is monitored that the link state information of the plurality of network communication links indicates that there are events to be received; wherein the plurality of events include the target event, and an execution priority exists among the plurality of events;
a determining module, configured to determine coroutines corresponding to the multiple events according to coroutine identification information in the multiple events by the scheduling thread;
the first storage module is used for the scheduling thread to sequentially store the plurality of events in the event queues of the corresponding coroutines according to the execution priority;
and the second storage module is used for storing coroutines corresponding to the events in the coroutine queue by the scheduling thread according to the execution priority.
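As an illustration of the priority-ordered storage performed by the first and second storage modules, the sketch below sorts a batch of simultaneously received events by an assumed numeric priority field (a lower value means a higher execution priority) before enqueuing them; the field names and the duplicate check mirroring the judging module are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

def store_by_priority(events, coroutine_queue, event_queues):
    # Store each event in its coroutine's event queue, and each coroutine
    # in the coroutine queue, in execution-priority order.
    for ev in sorted(events, key=lambda e: e["priority"]):
        cid = ev["coroutine_id"]  # determined from the identification info
        event_queues.setdefault(cid, deque()).append(ev["payload"])
        if cid not in coroutine_queue:  # judging module: store only if absent
            coroutine_queue.append(cid)

cq, eq = deque(), {}
store_by_priority(
    [{"coroutine_id": 2, "priority": 1, "payload": "late"},
     {"coroutine_id": 1, "priority": 0, "payload": "urgent"}],
    cq, eq)
# The higher-priority event's coroutine is now at the head of the queue.
```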
The application also provides an event processing device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to realize the event processing method.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes or set of instructions, which is loaded and executed by a processor to implement an event handling method as described above.
The event processing method, apparatus, device, and computer-readable storage medium of the present application have the following technical effects:
in the present application, because a coroutine voluntarily yields the CPU resource after it finishes executing an event, when the target event is processed across processes, the scheduling thread schedules the second coroutine to execute the target event according to the target coroutine identification information in the target event. This greatly improves the processing efficiency of the target event and greatly reduces the switching overhead of CPU resources.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To illustrate the technical solutions and advantages of the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an event processing method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a specific process of scheduling a second coroutine to execute a target event according to an embodiment of the present application;
fig. 3 is a flowchart illustrating another specific process for scheduling a second coroutine to execute a target event according to an embodiment of the present application;
FIG. 4 is a diagram illustrating a process of switching a thread stack to a coroutine stack according to an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a process of switching a coroutine stack to a thread stack according to an embodiment of the present application;
FIG. 6 is a diagram illustrating an event processing process provided by an embodiment of the present application;
fig. 7 is a system framework diagram of an AP (Adaptive Platform) according to an embodiment of the present disclosure;
FIG. 8 is a diagram illustrating a structural relationship among processes, threads, and coroutines according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application;
fig. 10 is a hardware block diagram of a server of an event processing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or device.
The event processing method of the present application is described below. This specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only order of execution. In practice, a system or server product may execute the steps sequentially or in parallel (e.g., in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures. Specifically, as shown in fig. 1, the method includes:
s101: the first coroutine generates a target event corresponding to the target service; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to the first process, and the second coroutine is a coroutine under a thread corresponding to the second process.
In this embodiment, the first process and the second process may be different processes in the same operating system. Specifically, the operating system may include, but is not limited to, a Linux system and a QNX (quick UNIX) system.
In this embodiment, the first process may include at least one first thread, and the first coroutine may be a coroutine under any one of the first threads. The second process may include at least one second thread, and the second coroutine may be a coroutine under any one of the second threads.
In the embodiment of the present application, the target service may include any service that needs cross-process processing. The first coroutine can generate a target event which needs to be executed by the second coroutine according to the service requirement of the target service.
In this embodiment of the application, the target coroutine identification information may include a coroutine ID (identity) of the second coroutine, and may further include a thread ID of a second thread to which the second coroutine belongs. It can be understood that the target coroutine identification information is used for distinguishing the second coroutine from other coroutines except the second coroutine, and has uniqueness.
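As an illustration of the identification information described above, the target event can be modeled as a small record carrying the coroutine ID and thread ID alongside the service payload. This is a hypothetical sketch; the field names (coroutine_id, thread_id, payload) are illustrative and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetEvent:
    coroutine_id: int  # unique coroutine ID of the second coroutine
    thread_id: int     # thread ID of the second thread it belongs to
    payload: bytes     # service-specific data for the target service

# The first coroutine builds the event for the target service.
event = TargetEvent(coroutine_id=42, thread_id=7, payload=b"do-work")
```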
S103: and the first coroutine sends the target event to the second process through a target network communication link between the first process and the second process.
In the embodiment of the application, the target network communication link can be realized through an inter-process communication (IPC) mechanism. Specifically, the IPC mechanism may include, but is not limited to, shared memory and Unix domain sockets.
Taking Unix domain sockets as an example, a target network communication link implemented with a Unix socket comprises a sending interface, and the sending interface defines a sender address and a destination address. Specifically, the sender address may be process identification information of the first process, and the destination address may be process identification information of the second process. The first coroutine sends the target event to the second process through the target network communication link by calling the sending interface. Specifically, the process identification information of the first process may be the process ID of the first process, and the process identification information of the second process may be the process ID of the second process. Each process ID is unique: it distinguishes its process from all other processes in the operating system.
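The sending interface described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: socketpair() stands in for the target network communication link between the two processes, a length-prefixed JSON message stands in for the event encoding, and the function names are assumptions.

```python
import json
import socket

# socketpair() models the target network communication link; a real link
# would bind a Unix-domain address derived from the process IDs.
first_end, second_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def send_event(link, coroutine_id, payload):
    # The first coroutine calls the link's sending interface; the event
    # carries the target coroutine identification information.
    msg = json.dumps({"coroutine_id": coroutine_id,
                      "payload": payload}).encode()
    link.sendall(len(msg).to_bytes(4, "big") + msg)

def recv_event(link):
    # The second process reads one length-prefixed event from the link.
    n = int.from_bytes(link.recv(4), "big")
    return json.loads(link.recv(n).decode())

send_event(first_end, coroutine_id=42, payload="target-service")
event = recv_event(second_end)
# event["coroutine_id"] tells the scheduling thread which coroutine to run.
```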
S105: and scheduling the second coroutine to execute the target event by the scheduling thread in the second process according to the target coroutine identification information.
In this embodiment, the second process may further include a scheduling thread, and the scheduling thread may be configured to schedule the coroutine in the at least one second process. Correspondingly, after the second process receives the target event, the scheduling thread may determine the second coroutine according to the target coroutine identification information in the target event, and then schedule the second coroutine to execute the target event.
Specifically, the scheduling thread may allocate the CPU resource of the second thread corresponding to the second coroutine, and then schedule the second coroutine to execute the target event. After the second coroutine finishes executing the target event, it voluntarily yields the CPU resource, so that the scheduling thread can allocate the CPU resource to other coroutines under the second thread to execute their corresponding events. The other coroutines are the coroutines, other than the second coroutine, under the second thread corresponding to the second coroutine.
Because a coroutine voluntarily yields the CPU resource after it finishes executing an event, CPU resources do not need to be preemptively switched between coroutines. When the target event is processed across processes, the scheduling thread schedules the second coroutine to execute the target event according to the target coroutine identification information in the target event, which greatly reduces the switching overhead of CPU resources and greatly improves the processing efficiency of the target event.
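The yield-after-each-event property can be illustrated with Python generators, which suspend themselves after handling one item, much as the coroutines described above voluntarily yield the CPU back to the scheduling thread. The names (worker, dispatch) and the dictionary keyed by coroutine ID are illustrative assumptions, not the patent's design.

```python
def worker(name, log):
    # Runs one event per resumption, then yields control voluntarily,
    # so no preemptive CPU switching between coroutines is needed.
    while True:
        event = yield              # suspended until the scheduler sends work
        log.append((name, event))  # execute the event, then yield again

log = []
coroutines = {}  # coroutine identification information -> coroutine
for cid in (1, 2):
    c = worker(f"coroutine-{cid}", log)
    next(c)                        # prime the generator to its first yield
    coroutines[cid] = c

def dispatch(target_event):
    # The scheduling thread selects the second coroutine by its ID and
    # schedules it to execute the target event.
    coroutines[target_event["coroutine_id"]].send(target_event["payload"])

dispatch({"coroutine_id": 2, "payload": "target-event"})
```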
In a specific embodiment, as shown in fig. 2, a flowchart of a specific process for scheduling a second coroutine to execute a target event is provided in the embodiment of the present application. Referring to fig. 2, the scheduling, by the scheduling thread, according to the target coroutine identification information, the scheduling the second coroutine to execute the target event includes:
s201: the scheduling thread stores the target event in an event queue corresponding to the second coroutine according to the target coroutine identification information; and the event queue corresponding to the second coroutine is used for storing the to-be-processed event corresponding to the second coroutine.
In a specific embodiment, the scheduling thread stores the target event at the end of the event queue corresponding to the second coroutine according to the target coroutine identification information.
In an optional embodiment, the scheduling thread may allocate a memory address to the to-be-processed event corresponding to the second coroutine, and store the to-be-processed event corresponding to the second coroutine in a memory pool of the second process according to the memory address. The event queue corresponding to the second coroutine may be used to store a memory address of the to-be-processed event corresponding to the second coroutine.
Correspondingly, the scheduling thread may allocate a memory address to the target event, and store the memory address of the target event at the end of the event queue corresponding to the second coroutine. Therefore, under the condition that the second coroutine executes the to-be-processed event in the corresponding event queue, the second coroutine can read the to-be-processed event corresponding to the second coroutine in the memory pool of the second process in a memory pointer mode according to the memory address of the to-be-processed event in the corresponding event queue.
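The address-based indirection described above can be sketched as follows, with a plain dictionary standing in for the second process's memory pool and integer keys standing in for memory addresses; all names here are illustrative assumptions.

```python
from collections import deque
import itertools

_next_addr = itertools.count(0x1000)  # toy address allocator
memory_pool = {}       # address -> pending event (the second process's pool)
event_queue = deque()  # per-coroutine queue holding only addresses (FIFO)

def enqueue(event):
    # The scheduling thread allocates an address, stores the event in the
    # pool, and appends the address at the tail of the event queue.
    addr = next(_next_addr)
    memory_pool[addr] = event
    event_queue.append(addr)
    return addr

def read_head():
    # The second coroutine dereferences the head address ("memory pointer")
    # to fetch its next pending event from the pool.
    return memory_pool[event_queue[0]]

enqueue({"payload": "first"})
enqueue({"payload": "second"})
```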
It can be understood that when the events to be processed corresponding to the second coroutine are stored in the event queue corresponding to the second coroutine, the scheduling thread stores each of them at the tail of the queue. The first-in-first-out principle of the event queue thus determines the event priority among the events to be processed, which facilitates the second coroutine processing its events in order.
S203: the scheduling thread stores the second coroutine in a coroutine queue; the coroutine queue is used for storing coroutines to be scheduled.
In a particular embodiment, the scheduling thread may store the second coroutine at the end of the coroutine queue.
It can be understood that when storing coroutines in the coroutine queue, the scheduling thread stores each coroutine to be scheduled at the tail of the queue. The first-in-first-out principle of the coroutine queue thus determines the scheduling priority among the coroutines to be scheduled, which facilitates the scheduling thread scheduling them in order.
S205: the scheduling thread schedules the second coroutine to execute the target event based on the first position information and the second position information; the first position information represents the position of the second coroutine in the coroutine queue, and the second position information represents the position of the target event in the event queue corresponding to the second coroutine.
In the embodiment of the application, the scheduling thread may schedule the second coroutine to execute the target event based on the position of the second coroutine in the coroutine queue and the position of the target event in the event queue corresponding to the second coroutine by using a first-in first-out principle of the coroutine queue and a first-in first-out principle of the event queue corresponding to the second coroutine.
In a specific embodiment, as shown in fig. 3, a flowchart of another specific process for scheduling a second coroutine to execute a target event is provided in this embodiment of the present application. Referring to fig. 3, the scheduling, by the scheduling thread, the scheduling, by the second coroutine, the target event based on the first location information and the second location information includes:
s301: the scheduling thread sequentially schedules a target coroutine to execute the events to be executed; the target coroutine is a coroutine located at the head end of the coroutine queue, and the event to be executed is an event located at the head end of the event queue corresponding to the target coroutine.
Specifically, the scheduling thread allocates the CPU resource of the second thread corresponding to the second coroutine to the target coroutine, and then schedules the target coroutine to execute the event to be executed.
S303: and under the condition that the target coroutine finishes executing the event to be executed, deleting the event to be executed from the event queue corresponding to the target coroutine by the scheduling thread, and judging whether the event queue corresponding to the target coroutine is empty or not.
Specifically, when the target coroutine finishes executing the to-be-executed event, the to-be-executed event is no longer the to-be-processed event in the event queue corresponding to the target coroutine, and the scheduling thread deletes the to-be-executed event from the event queue corresponding to the target coroutine.
Specifically, when the target coroutine finishes executing the event to be executed, it voluntarily yields the CPU resource. To keep the blocking time of the coroutines to be scheduled in the coroutine queue short, the target coroutine yields the CPU resource regardless of whether its event queue is empty or non-empty, so that the scheduling thread can subsequently schedule the other coroutines to be scheduled and reduce their blocking time.
It will be appreciated that when the target coroutine has finished executing the event to be executed, the scheduling thread needs to determine whether the target coroutine still has events to be processed. If it does, the target coroutine remains a coroutine to be scheduled, and the scheduling thread does not delete it from the coroutine queue but transfers its position. If it has no events to be processed, the target coroutine is no longer a coroutine to be scheduled, and the scheduling thread deletes it from the coroutine queue. Specifically, the scheduling thread decides how to handle the position of the target coroutine in the coroutine queue by judging whether the event queue corresponding to the target coroutine is empty or non-empty.
S305: and under the condition that the event queue is not empty, transferring the target coroutine to the tail end of the coroutine queue, transferring the adjacent coroutine of the target coroutine to the head end of the coroutine queue, and transferring the adjacent event of the event to be executed to the head end of the event queue corresponding to the target coroutine.
In the embodiment of the application, when the event queue corresponding to the target coroutine is judged to be non-empty, the target coroutine can still serve as a coroutine to be scheduled. Following the first-in-first-out principle of the coroutine queue, the scheduling thread transfers the target coroutine from the head end of the coroutine queue to the tail end and transfers the adjacent coroutine to the head end. It can be understood that the adjacent coroutine then becomes the updated target coroutine, and after the adjacent event is transferred to the head end of the event queue corresponding to the original target coroutine, it becomes the updated event to be executed. At this point, the scheduling thread can allocate the CPU resource voluntarily yielded by the original target coroutine to the updated target coroutine, and schedule the updated target coroutine to execute the updated event to be executed.
S307: and under the condition that the first position information represents that the second coroutine is positioned at the head end of the coroutine queue, and the second position information represents that the target event is positioned at the head end of the event queue corresponding to the second coroutine, the scheduling thread schedules the second coroutine to execute the target event.
According to the scheme, as the scheduling thread sequentially schedules the target coroutines to execute the target events, the first position information and the second position information are sequentially updated. And under the condition that the scheduling thread transfers the second coroutine to the head end of the coroutine queue and transfers the target event to the head end of the event queue corresponding to the second coroutine, the scheduling thread allocates the CPU resource of the second thread corresponding to the second coroutine and schedules the second coroutine to execute the target event.
It can be understood that the application utilizes the first-in first-out principle of the coroutine queue to schedule the coroutines to be scheduled in the coroutine queue in order, and utilizes the first-in first-out principle of the event queue corresponding to each coroutine to be scheduled to schedule that coroutine to execute the events to be processed in its event queue in order. Because each coroutine to be scheduled executes one event to be processed from its event queue at a time and then actively yields the CPU resource, the blocking time of the coroutines to be scheduled in the coroutine queue, and the blocking time of the events to be processed in their corresponding event queues, can be greatly reduced. Since the coroutines to be scheduled do not need to preempt CPU resources, the switching overhead of CPU resources can be greatly reduced, and the efficiency of cross-process concurrent processing of multiple events can be greatly improved.
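The FIFO rotation described above can be sketched as a small model. This is only an illustration: the names (run_scheduler, coroutine_queue, event_queues) are invented for the sketch, and Python deques stand in for the coroutine queue and the per-coroutine event queues of the patent:

```python
from collections import deque

# Toy model of the scheduling described above: each scheduled coroutine
# executes exactly one pending event, then yields; if its event queue is
# still non-empty it is rotated to the tail of the coroutine queue (S305),
# otherwise it is dropped from the coroutine queue.
def run_scheduler(coroutine_queue, event_queues, execute):
    while coroutine_queue:
        target = coroutine_queue.popleft()       # coroutine at the head end
        event = event_queues[target].popleft()   # event at the head end
        execute(target, event)                   # run one event, then yield
        if event_queues[target]:                 # still has pending events
            coroutine_queue.append(target)       # rotate to the tail end
        # else: event queue empty -> coroutine removed from the queue

coq = deque(["A", "B"])
evq = {"A": deque(["a1", "a2"]), "B": deque(["b1"])}
log = []
run_scheduler(coq, evq, lambda c, e: log.append(e))
print(log)  # one event per turn: ['a1', 'b1', 'a2']
```

Because every coroutine yields after a single event, no coroutine can monopolize the scheduling thread, which is the blocking-time argument made above.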
In a specific embodiment, the step of the scheduling thread scheduling the second coroutine to execute the target event based on the first position information and the second position information further includes:
under the condition that the event queue corresponding to the target coroutine is judged to be empty, deleting the target coroutine from the coroutine queue, and transferring the adjacent coroutine to the head end of the coroutine queue;
and under the condition that the first position information represents that the second coroutine is positioned at the head end of the coroutine queue, and the second position information represents that the target event is positioned at the head end of the event queue corresponding to the second coroutine, the scheduling thread schedules the second coroutine to execute the target event.
In the embodiment of the application, if the event queue corresponding to the target coroutine is empty, it indicates that the target coroutine has no event to be processed that needs to be executed, and the target coroutine is no longer a coroutine to be scheduled. The scheduling thread may therefore delete the target coroutine from the coroutine queue.
As the scheduling thread sequentially schedules target coroutines to execute events to be executed, the first position information and the second position information are updated in turn. Once the scheduling thread has transferred the second coroutine to the head end of the coroutine queue and transferred the target event to the head end of the event queue corresponding to the second coroutine, the scheduling thread allocates the CPU resource of the thread corresponding to the second coroutine and schedules the second coroutine to execute the target event.
In a specific embodiment, before the scheduling thread stores the second coroutine in a coroutine queue, the method further includes:
the scheduling thread judges whether the second coroutine exists in the coroutine queue or not;
correspondingly, the step of storing the second coroutine in a coroutine queue by the scheduling thread comprises:
and if the second coroutine is judged to be absent, the scheduling thread stores the second coroutine in the coroutine queue.
In the embodiment of the application, if the second coroutine is already stored in the coroutine queue, then after the scheduling thread stores the target event in the event queue corresponding to the second coroutine according to the target coroutine identification information, the scheduling thread does not need to store the second coroutine in the coroutine queue again.
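The existence check described above can be sketched as follows. The helper name store_event and the queue variables are hypothetical, introduced only for this illustration:

```python
from collections import deque

def store_event(coroutine_queue, event_queues, target_coroutine, event):
    """Store the event at the tail end of the coroutine's event queue, and
    enqueue the coroutine only if it is not already in the coroutine queue
    (the existence judgment described in the embodiment above)."""
    event_queues.setdefault(target_coroutine, deque()).append(event)
    if target_coroutine not in coroutine_queue:   # judge whether it exists
        coroutine_queue.append(target_coroutine)

coq, evq = deque(), {}
store_event(coq, evq, "second", "evt1")
store_event(coq, evq, "second", "evt2")  # already queued: not added again
print(list(coq), list(evq["second"]))    # ['second'] ['evt1', 'evt2']
```

The check keeps each coroutine in the coroutine queue at most once while its event queue accumulates all of its pending events.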
In a specific embodiment, the method further includes:
the scheduling thread monitors link state information of a plurality of network communication links; wherein the plurality of network communication links includes the target network communication link;
the step of the scheduling thread in the second process scheduling the second coroutine to execute the target event according to the target coroutine identification information includes:
and under the condition that the scheduling thread monitors that the target link state information of the target network communication link indicates that an event to be received exists, scheduling the second coroutine to execute the target event according to the target coroutine identification information.
In the embodiment of the application, the second process performs inter-process communication with a plurality of processes including the first process. Accordingly, the plurality of network communication links may be the network communication links between the second process and the plurality of processes.
In this embodiment of the present application, the link state information of the plurality of network communication links may characterize whether there is an event to be received on each of the plurality of network communication links. Specifically, the link state information may be link state identifiers of the network communication links: in the case that a network communication link has an event to be received, its link state identifier is 1; in the case that it has no event to be received, its link state identifier is 0.
Specifically, if the scheduling thread monitors that the link state identifier of the target network communication link is 1, the scheduling thread receives the target event and, according to the target coroutine identification information carried in the target event, schedules the second coroutine to execute the target event.
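The monitoring step can be modeled in a few lines. This is a toy model, not the patent's implementation: the names (poll_links, pending_events) and the event layout are invented, and a dictionary of 0/1 flags stands in for the link state identifiers:

```python
# Link state 1 means an event is waiting on that link, 0 means idle.
def poll_links(link_states, pending_events, dispatch):
    for link, state in link_states.items():
        if state == 1:                       # an event to be received exists
            event = pending_events[link]
            dispatch(event["coroutine_id"], event)  # route by coroutine ID
            link_states[link] = 0            # event received, mark link idle

states = {"link1": 1, "link2": 0}
pending = {"link1": {"coroutine_id": "second", "payload": "target-event"}}
received = []
poll_links(states, pending,
           lambda cid, ev: received.append((cid, ev["payload"])))
print(received)  # [('second', 'target-event')]
```

In the patent's setting the polling would be done by the scheduling thread over real sockets rather than in-memory flags.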
In a specific embodiment, the method further includes:
under the condition that it is monitored that the link state information of the plurality of network communication links indicates that events to be received exist, the scheduling thread receives a plurality of events simultaneously; wherein the plurality of events comprise the target event, and an execution priority exists among the plurality of events;
the scheduling thread determines coroutines corresponding to the events according to coroutine identification information in the events;
the scheduling thread sequentially stores the events in the event queues of the corresponding coroutines according to the execution priority;
and the scheduling thread stores coroutines corresponding to the events in the coroutine queue according to the execution priority.
In the embodiment of the application, the scheduling thread monitors the plurality of network communication links through a select interface. Under the condition that multiple network communication links have events to be received, the scheduling thread can receive multiple events simultaneously through the select interface, store the events in the event queues of the corresponding coroutines in order of execution priority, and store the coroutines corresponding to the events in the coroutine queue. In this way, when the scheduling thread schedules the target coroutine to execute the event to be executed, highly concurrent cross-process processing of multiple events can be realized according to their execution priorities.
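The priority-ordered storage step can be sketched as follows. The function name store_by_priority and the event fields are illustrative assumptions (the patent does not specify an event layout), with a lower number meaning higher priority:

```python
from collections import deque

# Several events arrive at once; store them into their coroutines' event
# queues, and the coroutines into the coroutine queue, in priority order.
def store_by_priority(events, coroutine_queue, event_queues):
    for ev in sorted(events, key=lambda e: e["priority"]):
        cid = ev["coroutine_id"]
        event_queues.setdefault(cid, deque()).append(ev["name"])
        if cid not in coroutine_queue:       # avoid duplicate coroutines
            coroutine_queue.append(cid)

events = [
    {"name": "low",  "priority": 2, "coroutine_id": "c2"},
    {"name": "high", "priority": 0, "coroutine_id": "c1"},
    {"name": "mid",  "priority": 1, "coroutine_id": "c2"},
]
coq, evq = deque(), {}
store_by_priority(events, coq, evq)
print(list(coq), list(evq["c2"]))  # ['c1', 'c2'] ['mid', 'low']
```

Because both queues are FIFO, storing in priority order is enough to make the later FIFO scheduling honor the execution priorities.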
Fig. 4 is a schematic diagram of the process of switching from a thread stack to a coroutine stack according to an embodiment of the present disclosure. With reference to fig. 4, the following describes how the scheduling thread allocates the CPU resource of the thread corresponding to the second coroutine and schedules the second coroutine to execute the target event.
Fig. 5 is a schematic diagram of the process of switching from a coroutine stack back to a thread stack according to an embodiment of the present application. With reference to fig. 5, the following describes how the second coroutine actively yields the CPU resource of its corresponding thread after executing the target event.
Specifically, taking the CPU as a PPC-series processor as an example, the process of allocating the CPU resource to the second coroutine by the scheduling thread is as follows:
1) The scheduling thread controls the stack pointer (SP) to point to the initial position A1 of the thread stack of the thread corresponding to the second coroutine.
2) The scheduling thread shifts the SP upward to an end position A2 according to the size of the data that the target event needs to store, and saves the context data between A1 and A2; the context data between A1 and A2 includes the initial position of the SP, the data of general-purpose register R3, the data of the link register LR, the data of the condition register CR, the data of the count register CTR, and the data of general-purpose registers R14-R31.
3) The scheduling thread finds, according to the data of register R3, the pProcSp field of the coroutine management structure of the second coroutine (the stack address of the coroutine stack of the second coroutine), and controls the SP to point to that stack address.
4) According to the stack address of the coroutine stack of the second coroutine, the scheduling thread completes the stack switch from the thread stack of the corresponding thread to the coroutine stack of the second coroutine, and pushes the context data between A1 and A2 onto the coroutine stack of the second coroutine; the stack switch may be implemented in assembly language to improve switching efficiency.
5) The scheduling thread schedules a second coroutine to execute the target event.
Specifically, taking a PPC-series processor as the CPU as an example, after the second coroutine executes the target event, the process of actively yielding the CPU resource of the thread corresponding to the second coroutine is as follows:
6) After the second coroutine finishes executing the target event, it obtains, from the pTaskSp field of its coroutine management structure, the stack address that the thread stack of its corresponding thread had before the stack switch, pops the context data from the coroutine stack, controls the SP to point to that stack address of the thread stack, and pushes the context data back onto the thread stack of the corresponding thread.
Through the above scheme, the scheduling thread can allocate the CPU resource of the thread corresponding to the second coroutine and schedule the second coroutine to execute the target event; after the second coroutine finishes executing the target event, it can actively yield the CPU resource of the corresponding thread.
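Python cannot switch machine stacks, so the following only traces the bookkeeping of steps 1)-6): saving the context, moving it from the thread stack to the coroutine stack, and restoring it afterwards. The register names come from the passage above; the function names, list-based stacks, and register values are invented for the sketch:

```python
def switch_to_coroutine(thread_stack, coroutine_stack, registers):
    """Model of steps 1)-4): save the A1-A2 context on the thread stack,
    then move it onto the coroutine stack at the pProcSp address."""
    context = dict(registers)                    # snapshot of SP, R3, LR, ...
    thread_stack.append(context)                 # save on the thread stack
    coroutine_stack.append(thread_stack.pop())   # push onto coroutine stack
    return context

def switch_back(thread_stack, coroutine_stack):
    """Model of step 6): restore the context to the thread stack found
    via the pTaskSp address, yielding the CPU back to the thread."""
    thread_stack.append(coroutine_stack.pop())
    return thread_stack[-1]

regs = {"SP": 0x1000, "R3": 0x2000, "LR": 0xBEEF, "CR": 0, "CTR": 0}
t_stack, c_stack = [], []
switch_to_coroutine(t_stack, c_stack, regs)
# ... the second coroutine would execute the target event here ...
restored = switch_back(t_stack, c_stack)
print(restored["LR"] == 0xBEEF)  # True: the context survives the round trip
```

On the real PPC target this round trip is what the assembly-language stack switch performs, with the nonvolatile registers R14-R31 preserved across it.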
In an optional embodiment, according to the service requirements of the target service, the event processing method provided in the embodiment of the present application may further support synchronous event processing between different coroutines.
Specifically, taking a CPU as a PPC-series processor as an example, the process of stack switching for sending a synchronization event from the second coroutine to the third coroutine is as follows:
7) The second coroutine generates a synchronization event while executing the target event, sends the synchronization event to the scheduling thread, and actively yields the CPU resource of its corresponding thread; the specific yielding process is similar to that in step 6).
8) The scheduling thread allocates the CPU resource of the thread corresponding to the second coroutine to the third coroutine; the specific process is similar to the allocation process in steps 1) to 4).
9) The scheduling thread schedules the third coroutine to execute the synchronization event.
10) After the third coroutine finishes executing the synchronization event, it actively yields the CPU resource of the thread corresponding to the second coroutine; the specific process is similar to that in step 6).
11) The scheduling thread reallocates the CPU resource of the thread corresponding to the second coroutine and schedules the second coroutine to continue executing the target event.
The scheme can realize the processing of synchronous events among coroutines.
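Steps 7)-11) can be sketched with Python generators, which share the coroutines' property of suspending and resuming at a fixed point. This is an analogy, not the patent's stack-based mechanism; all names are invented for the sketch:

```python
def second_coroutine(log):
    log.append("second: start target event")
    yield ("sync", "third")          # 7) emit the sync event, yield the CPU
    log.append("second: resume target event")

def third_coroutine(log):
    log.append("third: handle sync event")
    yield                            # 10) yield the CPU when done

def run(log):
    second = second_coroutine(log)
    request = next(second)           # run the second coroutine to its yield
    assert request == ("sync", "third")
    for _ in third_coroutine(log):   # 8)-10) schedule the third coroutine
        pass
    for _ in second:                 # 11) resume the second coroutine
        pass

log = []
run(log)
print(log)
```

The execution order recorded in log mirrors the patent's sequence: the second coroutine suspends mid-event, the third coroutine handles the synchronization event, then the second coroutine continues.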
In order to explain a specific implementation process of the event processing method provided in the embodiment of the present application, fig. 6 shows a schematic diagram of an event processing process provided in the embodiment of the present application. Specifically, the process includes a storage stage and a scheduling stage.
A. Storage stage: the scheduling thread monitors network communication links 1-N through the select interface, and receives the target event when it monitors that the target network communication link has an event to be received. According to the target coroutine identification information carried by the target event, it stores the target event at the tail end of the event queue corresponding to the second coroutine, and stores the second coroutine at the tail end of the coroutine queue.
B. Scheduling stage: according to the first-in first-out principle of the coroutine queue and the event queues, the coroutine at the head end of the coroutine queue is the target coroutine, and the event at the head end of the event queue corresponding to the target coroutine is the event to be executed; the scheduling thread schedules the target coroutine to execute the event to be executed.
To explain a specific application scenario of the event processing method provided in the embodiment of the present application, fig. 7 shows a system framework diagram of an AP (Adaptive Platform) provided in the embodiment of the present application. Specifically, the AP includes a plurality of adaptive application modules, a plurality of application program interface modules, an operating system abstraction layer, and basic function clusters of the application program interfaces. It can be understood that the event processing method provided by the embodiment of the present application is suitable for highly concurrent cross-process event processing among the plurality of adaptive application modules, the plurality of application program interface modules, the operating system abstraction layer, and the basic function clusters of the application program interfaces.
To illustrate the structural relationship among processes, threads, and coroutines, fig. 8 is a diagram of the structural relationship among processes, threads, and coroutines according to an embodiment of the present disclosure. Referring to fig. 8, process 1, process 2, and other processes share the CPU resources of the operating system. Process 1 includes thread 1, thread 2, thread 3, and other threads. Thread 3, which contains only one coroutine, is a first-level scheduling thread; thread 1, which contains multiple coroutines, is a second-level scheduling thread. It can be understood that different processes may serve different modules in fig. 7.
As shown in fig. 9, a schematic structural diagram of an event processing apparatus 900 according to an embodiment of the present application is provided, the apparatus including:
a generating module 901, configured to generate, by the first coroutine, a target event corresponding to the target service; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process;
a sending module 903, configured to send the target event to the second process by the first coroutine through a target network communication link between the first process and the second process;
a first scheduling module 905, configured to schedule, by a scheduling thread in the second process, the second coroutine to execute the target event according to the target coroutine identification information.
In some optional embodiments, the first scheduling module 905 includes:
the first storage unit is used for storing the target event in an event queue corresponding to the second coroutine by the scheduling thread according to the target coroutine identification information; the event queue corresponding to the second coroutine is used for storing the to-be-processed event corresponding to the second coroutine;
the second storage unit is used for storing the second coroutine in a coroutine queue by the scheduling thread; the coroutine queue is used for storing coroutines to be scheduled;
the first scheduling unit is used for scheduling the second coroutine to execute the target event based on the first position information and the second position information by the scheduling thread; the first position information represents the position of the second coroutine in the coroutine queue, and the second position information represents the position of the target event in the event queue corresponding to the second coroutine.
In some optional embodiments, the first scheduling unit includes:
the first scheduling subunit is used for the scheduling thread to sequentially schedule the target coroutines to execute the events to be executed; the target coroutine is a coroutine positioned at the head end of the coroutine queue, and the event to be executed is an event positioned at the head end of the event queue corresponding to the target coroutine;
a first deleting subunit, configured to, when the target coroutine finishes executing the to-be-executed event, delete, by the scheduling thread, the to-be-executed event from an event queue corresponding to the target coroutine, and determine whether the event queue corresponding to the target coroutine is empty;
a transfer subunit, configured to, in the case that the event queue corresponding to the target coroutine is determined to be non-empty, transfer the target coroutine to the tail end of the coroutine queue, transfer the adjacent coroutine of the target coroutine to the head end of the coroutine queue, and transfer the adjacent event of the event to be executed to the head end of the event queue corresponding to the target coroutine;
and a second scheduling subunit, configured to schedule, by the scheduling thread, the second coroutine to execute the target event when the first location information indicates that the second coroutine is located at a head end of the coroutine queue, and the second location information indicates that the target event is located at a head end of an event queue corresponding to the second coroutine.
In some optional embodiments, the first scheduling unit further includes:
the second deleting subunit is configured to, in the case that the event queue corresponding to the target coroutine is determined to be empty, delete the target coroutine from the coroutine queue and transfer the adjacent coroutine to the head end of the coroutine queue;
and the second scheduling subunit is further configured to schedule, by the scheduling thread, the second coroutine to execute the target event when the first location information indicates that the second coroutine is located at a head end of the coroutine queue, and the second location information indicates that the target event is located at a head end of an event queue corresponding to the second coroutine.
In some optional embodiments, the apparatus further comprises:
the judging module is used for judging whether the second coroutine exists in the coroutine queue or not by the scheduling thread;
correspondingly, the second storage unit is further configured to, when the second coroutine is determined to be absent, store the second coroutine in the coroutine queue by the scheduling thread.
In some optional embodiments, the apparatus further comprises:
the monitoring module is used for monitoring link state information of a plurality of network communication links by the scheduling thread; wherein the plurality of network communication links includes the target network communication link;
the first scheduling module includes:
and the second scheduling unit is used for scheduling the second coroutine to execute the target event according to the target coroutine identification information under the condition that the scheduling thread monitors that the target link state information of the target network communication link indicates that an event to be received exists.
In some optional embodiments, the apparatus further comprises:
a first receiving module, configured to receive, by the scheduling thread, a plurality of events simultaneously when monitoring that the link state information of the plurality of network communication links indicates that the event to be received exists; wherein the plurality of events comprise the target event, and an execution priority exists among the plurality of events;
a determining module, configured to determine coroutines corresponding to the multiple events according to coroutine identification information in the multiple events by the scheduling thread;
the first storage module is used for the scheduling thread to sequentially store the plurality of events in the event queues of the corresponding coroutines according to the execution priority;
and the second storage module is used for storing coroutines corresponding to the events in the coroutine queue by the scheduling thread according to the execution priority.
In some optional embodiments, the apparatus further comprises:
an obtaining module, configured to obtain link priorities among the multiple network communication links when the scheduling thread monitors that the link state information of the multiple network communication links indicates that an event to be received exists;
the second receiving module is used for the dispatching thread to sequentially receive a plurality of events according to the link priority;
and the second scheduling module is used for scheduling the coroutine corresponding to each event to execute each event according to the coroutine identification information in each event under the condition that the scheduling thread receives each event.
The device in the described device embodiment and the corresponding method embodiment are based on the same inventive concept.
The application also provides an event processing device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to realize the event processing method.
The device in the described device embodiment and the corresponding method embodiment are based on the same inventive concept.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes or set of instructions, which is loaded and executed by a processor to implement an event handling method as described above.
The computer-readable storage medium in the described computer-readable storage medium embodiments and the corresponding method embodiments are based on the same inventive concept.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the event processing method provided in the above-mentioned various alternative implementations.
An embodiment of the present application provides an event processing server, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the event processing method provided in the foregoing method embodiment.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and event processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking the event processing method running on a server as an example, fig. 10 is a block diagram of the hardware structure of the server according to the event processing method provided in the embodiment of the present application. As shown in fig. 10, the server 1000 may vary considerably by configuration or performance, and may include one or more central processing units (CPUs) 1010 (the processor 1010 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1030 for storing data, and one or more storage media 1020 (e.g., one or more mass storage devices) for storing applications 1023 or data 1022. The memory 1030 and the storage medium 1020 may be transient or persistent storage. The program stored in the storage medium 1020 may include one or more modules, each of which may include a series of instruction operations for the server. Further, the central processing unit 1010 may be configured to communicate with the storage medium 1020 and execute the series of instruction operations in the storage medium 1020 on the server 1000. The server 1000 may also include one or more power supplies 1060, one or more wired or wireless network interfaces 1050, one or more input/output interfaces 1040, and/or one or more operating systems 1021, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The input/output interface 1040 may be used to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by a communication provider of the server 1000. In one example, the input/output interface 1040 includes a network interface controller (NIC), which may be connected to other network devices via a base station so as to communicate with the internet. In one example, the input/output interface 1040 may be a radio frequency (RF) module, which is used to communicate with the internet in a wireless manner.
It will be understood by those skilled in the art that the structure shown in fig. 10 is merely illustrative and is not intended to limit the structure of the electronic device. For example, server 1000 may also include more or fewer components than shown in FIG. 10, or have a different configuration than shown in FIG. 10.
Embodiments of the present application further provide a storage medium, which may be disposed in a server to store at least one instruction, at least one program, a set of codes, or a set of instructions related to implementing an event processing method in the method embodiments, where the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the event processing method provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the above embodiments of the event processing method, apparatus, server, or storage medium provided by the present application, in the present application, since the coroutine has a characteristic of actively giving up CPU resources after the event is executed, in the case of cross-process processing of a target event, a second coroutine is scheduled to execute the target event according to target coroutine identification information in the target event by a scheduling thread, so that the processing efficiency of the target event can be greatly improved, and the switching overhead of CPU resources can also be greatly reduced.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An event processing method, characterized in that the method comprises:
the first coroutine generates a target event corresponding to the target service; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process;
the first coroutine sends the target event to the second process through a target network communication link between the first process and the second process;
scheduling, by a scheduling thread in the second process, the second coroutine to execute the target event according to the target coroutine identification information.
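As an illustrative aside (not part of the claims), the flow of claim 1 can be sketched in Python. An in-memory queue stands in for the target network communication link between the two processes, and all names here (`first_coroutine`, `co2`, the handler table) are assumptions made for illustration, not the claimed implementation:

```python
import json
import queue

# An in-memory queue stands in for the target network communication link.
link = queue.Queue()

def first_coroutine(target_coroutine_id, payload):
    # The target event carries the identification information of the
    # second coroutine that must execute it.
    event = {"coroutine_id": target_coroutine_id, "payload": payload}
    link.put(json.dumps(event))  # "send" the event over the link

# Second-process side: the scheduling thread resolves the handler by id.
handlers = {"co2": lambda payload: f"co2 handled {payload}"}

def scheduling_thread_step():
    event = json.loads(link.get())
    # Dispatch to the coroutine named by the identification information.
    return handlers[event["coroutine_id"]](event["payload"])

first_coroutine("co2", "order-created")
result = scheduling_thread_step()
print(result)  # co2 handled order-created
```

The serialized event is the only thing that crosses the link; the receiving side needs nothing beyond the coroutine identification to route it.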
2. The method of claim 1, wherein the scheduling, by the scheduling thread, of the second coroutine to execute the target event according to the target coroutine identification information comprises:
the scheduling thread stores the target event in an event queue corresponding to the second coroutine according to the target coroutine identification information; the event queue corresponding to the second coroutine is used for storing the to-be-processed event corresponding to the second coroutine;
the scheduling thread stores the second coroutine in a coroutine queue; the coroutine queue is used for storing coroutines to be scheduled;
the scheduling thread schedules the second coroutine to execute the target event based on the first position information and the second position information; the first position information represents the position of the second coroutine in the coroutine queue, and the second position information represents the position of the target event in the event queue corresponding to the second coroutine.
3. The method of claim 2, wherein the scheduling, by the scheduling thread, of the second coroutine to execute the target event based on the first position information and the second position information comprises:
the scheduling thread sequentially schedules a target coroutine to execute an event to be executed; the target coroutine is the coroutine located at the head end of the coroutine queue, and the event to be executed is the event located at the head end of the event queue corresponding to the target coroutine;
when the target coroutine finishes executing the event to be executed, the scheduling thread deletes the event to be executed from the event queue corresponding to the target coroutine and determines whether the event queue corresponding to the target coroutine is empty;
in the case that the event queue corresponding to the target coroutine is not empty, transferring the target coroutine to the tail end of the coroutine queue, transferring the coroutine adjacent to the target coroutine to the head end of the coroutine queue, and transferring the event adjacent to the event to be executed to the head end of the event queue corresponding to the target coroutine;
and in the case that the first position information indicates that the second coroutine is located at the head end of the coroutine queue and the second position information indicates that the target event is located at the head end of the event queue corresponding to the second coroutine, the scheduling thread schedules the second coroutine to execute the target event.
4. The method of claim 3, wherein the scheduling, by the scheduling thread, of the second coroutine to execute the target event based on the first position information and the second position information further comprises:
in the case that the event queue corresponding to the target coroutine is empty, deleting the target coroutine from the coroutine queue, and transferring the adjacent coroutine to the head end of the coroutine queue;
and in the case that the first position information indicates that the second coroutine is located at the head end of the coroutine queue and the second position information indicates that the target event is located at the head end of the event queue corresponding to the second coroutine, the scheduling thread schedules the second coroutine to execute the target event.
5. The method of claim 2, wherein before the scheduling thread stores the second coroutine in a coroutine queue, the method further comprises:
the scheduling thread determines whether the second coroutine exists in the coroutine queue;
correspondingly, the storing, by the scheduling thread, of the second coroutine in the coroutine queue comprises:
if it is determined that the second coroutine does not exist in the coroutine queue, the scheduling thread stores the second coroutine in the coroutine queue.
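The queue discipline of claims 2 to 5 can be sketched as follows. This is a minimal single-threaded model with assumed names (`submit`, `schedule_once`), not the claimed implementation: each coroutine has its own event queue, a coroutine is added to the coroutine queue only if it is absent, and after each execution the coroutine rotates to the tail if it still has pending events or leaves the queue if its event queue is empty:

```python
from collections import deque

event_queues = {}          # coroutine id -> deque of pending events
coroutine_queue = deque()  # coroutines awaiting scheduling; head runs next

def submit(coroutine_id, event):
    # Store the event in the coroutine's event queue, and store the
    # coroutine in the coroutine queue only if it is not already there.
    event_queues.setdefault(coroutine_id, deque()).append(event)
    if coroutine_id not in coroutine_queue:
        coroutine_queue.append(coroutine_id)

def schedule_once(execute):
    # Run the head coroutine on the head event of its event queue,
    # then delete the executed event.
    target = coroutine_queue.popleft()
    events = event_queues[target]
    execute(target, events.popleft())
    if events:
        coroutine_queue.append(target)  # non-empty: rotate to the tail
    # empty: the coroutine simply stays out of the coroutine queue

log = []
submit("co_a", "e1")
submit("co_b", "e2")
submit("co_a", "e3")  # co_a is already queued, only its event is added
while coroutine_queue:
    schedule_once(lambda c, e: log.append((c, e)))
print(log)  # [('co_a', 'e1'), ('co_b', 'e2'), ('co_a', 'e3')]
```

Note how the rotation interleaves the two coroutines fairly instead of draining co_a's event queue first.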
6. The method of claim 1, further comprising:
the scheduling thread monitors link state information of a plurality of network communication links; wherein the plurality of network communication links includes the target network communication link;
the scheduling, by the scheduling thread in the second process, of the second coroutine to execute the target event according to the target coroutine identification information comprises:
in the case that the scheduling thread monitors that target link state information of the target network communication link indicates that an event to be received exists, scheduling the second coroutine to execute the target event according to the target coroutine identification information.
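The link-state monitoring of claim 6 resembles ordinary readiness-based I/O multiplexing. A hedged sketch using Python's `selectors` module, where socket pairs stand in for the plurality of network communication links and the `link-N` labels are assumptions:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
links = [socket.socketpair() for _ in range(2)]  # two "network links"
for i, (_, recv_end) in enumerate(links):
    # Register the receiving end of each link so the scheduling thread
    # can monitor the link state information of all links at once.
    recv_end.setblocking(False)
    sel.register(recv_end, selectors.EVENT_READ, data=f"link-{i}")

links[1][0].send(b"target-event")  # an event arrives on the second link

received = []
for key, _ in sel.select(timeout=0.5):
    # Only links whose state indicates an event to be received are read.
    received.append((key.data, key.fileobj.recv(1024).decode()))
print(received)  # [('link-1', 'target-event')]
```

Only the link whose state indicates a pending event is touched; the idle link never blocks the scheduling thread.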
7. The method of claim 6, further comprising:
when it is monitored that the link state information of the plurality of network communication links indicates that events to be received exist, the scheduling thread receives a plurality of events simultaneously; wherein the plurality of events comprise the target event, and an execution priority exists among the plurality of events;
the scheduling thread determines coroutines corresponding to the events according to coroutine identification information in the events;
the scheduling thread sequentially stores the events in the event queues of the corresponding coroutines according to the execution priority;
and the scheduling thread stores coroutines corresponding to the events in the coroutine queue according to the execution priority.
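The priority handling of claim 7 can be sketched by sorting a batch of simultaneously received events by execution priority before enqueueing them; both the field names and the lower-number-is-higher-priority convention are assumptions for illustration:

```python
# A batch of events received simultaneously from several links.
events = [
    {"coroutine_id": "co_b", "payload": "low", "priority": 2},
    {"coroutine_id": "co_a", "payload": "high", "priority": 0},
    {"coroutine_id": "co_c", "payload": "mid", "priority": 1},
]

# Order the batch by execution priority before any enqueueing.
ordered = sorted(events, key=lambda e: e["priority"])

event_queues = {}
coroutine_queue = []
for e in ordered:
    # Store each event in its coroutine's event queue in priority order...
    event_queues.setdefault(e["coroutine_id"], []).append(e["payload"])
    # ...and the corresponding coroutine in the coroutine queue in the
    # same order, skipping coroutines that are already queued.
    if e["coroutine_id"] not in coroutine_queue:
        coroutine_queue.append(e["coroutine_id"])

print(coroutine_queue)  # ['co_a', 'co_c', 'co_b']
```

Because both queues are filled from the sorted batch, the highest-priority event's coroutine reaches the head of the coroutine queue and is scheduled first.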
8. An event processing apparatus, characterized in that the apparatus comprises:
the generating module is used for generating a target event corresponding to the target service by the first coroutine; the target event comprises target coroutine identification information of a second coroutine, the first coroutine is a coroutine under a thread corresponding to a first process, and the second coroutine is a coroutine under a thread corresponding to a second process;
a sending module, configured to send, by the first coroutine, the target event to the second process through a target network communication link between the first process and the second process;
and the scheduling module is used for scheduling the second coroutine to execute the target event according to the target coroutine identification information by the scheduling thread in the second process.
9. An event processing device, characterized in that the device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the event processing method according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the event processing method of any of claims 1 to 7.
CN202110805102.0A 2021-03-31 2021-07-16 Event processing method, device and equipment and computer readable storage medium Pending CN113626213A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110350140 2021-03-31
CN2021103501401 2021-03-31

Publications (1)

Publication Number Publication Date
CN113626213A true CN113626213A (en) 2021-11-09

Family

ID=78379899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110805102.0A Pending CN113626213A (en) 2021-03-31 2021-07-16 Event processing method, device and equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113626213A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111837104A (en) * 2019-02-21 2020-10-27 华为技术有限公司 Method and device for scheduling software tasks among multiple processors
CN111078436A (en) * 2019-12-18 2020-04-28 上海金仕达软件科技有限公司 Data processing method, device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114398179A (en) * 2022-01-14 2022-04-26 北京思明启创科技有限公司 Method and device for acquiring tracking identifier, server and storage medium
CN114398179B (en) * 2022-01-14 2023-03-14 北京思明启创科技有限公司 Method and device for acquiring tracking identifier, server and storage medium
CN115687599A (en) * 2022-09-29 2023-02-03 恒生电子股份有限公司 Service data processing method and device, electronic equipment and storage medium
CN115687599B (en) * 2022-09-29 2023-10-31 恒生电子股份有限公司 Service data processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3669494B1 (en) Dynamic allocation of edge computing resources in edge computing centers
CN108762896B (en) Hadoop cluster-based task scheduling method and computer equipment
US9577961B2 (en) Input/output management in a distributed strict queue
US10200295B1 (en) Client selection in a distributed strict queue
CN109564528B (en) System and method for computing resource allocation in distributed computing
CN113626213A (en) Event processing method, device and equipment and computer readable storage medium
US9571414B2 (en) Multi-tiered processing using a distributed strict queue
US9584593B2 (en) Failure management in a distributed strict queue
CN108829512B (en) Cloud center hardware accelerated computing power distribution method and system and cloud center
US9591101B2 (en) Message batching in a distributed strict queue
CN110333939B (en) Task mixed scheduling method and device, scheduling server and resource server
CN115167996A (en) Scheduling method and device, chip, electronic equipment and storage medium
US9577878B2 (en) Geographic awareness in a distributed strict queue
CN114691321A (en) Task scheduling method, device, equipment and storage medium
CN115362434A (en) Task scheduling for distributed data processing
US9990240B2 (en) Event handling in a cloud data center
CN112698929A (en) Information acquisition method and device
CN114896050B (en) Task scheduling method and system based on cluster resources
CN110780869A (en) Distributed batch scheduling
CN113364888B (en) Service scheduling method, system, electronic device and computer readable storage medium
EP2413240A1 (en) Computer micro-jobs
CN113347430A (en) Distributed scheduling device of hardware transcoding acceleration equipment and use method thereof
CN111309467A (en) Task distribution method and device, electronic equipment and storage medium
CN113204434B (en) Planned task execution method and device based on k8s and computer equipment
CN115904673B (en) Cloud computing resource concurrent scheduling method, device, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination