CN109558235A - A kind of dispatching method of processor, device and computer equipment - Google Patents

Info

Publication number
CN109558235A
CN109558235A (application CN201811457620.2A, granted as CN109558235B)
Authority
CN
China
Prior art keywords
processor
scheduling request
queue
scheduling
operation interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811457620.2A
Other languages
Chinese (zh)
Other versions
CN109558235B (en)
Inventor
符志清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd filed Critical Hangzhou DPTech Technologies Co Ltd
Priority to CN201811457620.2A priority Critical patent/CN109558235B/en
Publication of CN109558235A publication Critical patent/CN109558235A/en
Application granted granted Critical
Publication of CN109558235B publication Critical patent/CN109558235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

This specification provides a scheduling method and apparatus for a processor, and a computer device. The method comprises: receiving processor operation data from an application; writing the processor operation data into a preset queue as a processor scheduling request; extracting a processor scheduling request from the queue; determining the processor operation interface corresponding to the processor scheduling request; and calling the processor to process the processor scheduling request according to the processor operation interface. By adding enqueue/dequeue operations, applications running in various environments, such as the process context and the interrupt context, are interfaced with a processor operation interface that runs in the spin-lock-free process context. The processor operation interface provided by the manufacturer can therefore be reused without redeveloping it, greatly reducing cost.

Description

Scheduling method and device of processor and computer equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for scheduling a processor, and a computer device.
Background
In a multitasking operating system such as Linux, many tasks generally run at once. Macroscopically, these tasks run simultaneously; microscopically, the operating system divides the usage right of a processor (such as a Central Processing Unit, CPU) into tiny time periods called time slices (on the order of 10 milliseconds). Only one task runs in each time slice, and the operating system decides which, so that all tasks are guaranteed to execute over a longer time range.
Since "tasks" and "processes" are somewhat equivalent concepts, the execution environment of a processor is often referred to as a "process context" during the execution of a task.
There is another execution environment for a processor, called an interrupt context. An interrupt is an event with higher priority than a task, and is generally used to make a system quickly respond to some emergency event, for example, a hardware event (such as a user operating a keyboard, a mouse, etc.), a system exception event, a timer event, and so on.
When an interrupt event occurs, the processor immediately switches to the interrupt context, executes corresponding processing, and switches back to the process context to execute the task after the corresponding processing is finished.
An ordinary task always runs in the process context. Each task has a basic task-information data structure, so the processor can schedule tasks as basic units, and every task gets a chance to run on the processor.
The interrupt context, by contrast, is triggered by some urgent event. An interrupt has no independent task-information data structure, so it cannot be scheduled: when an interrupt occurs, the processor must finish handling the current interrupt event before executing other tasks. Otherwise, if the processor switched to other tasks while handling the interrupt, it could not return to the original interrupt handler, and the remainder of the interrupt handling would never execute.
In addition, even in the process context, if a spin lock (including a read-write lock) is held, scheduling must not occur between locking and unlocking; otherwise another task that needs the same lock may be scheduled to run while the current task still holds it, causing deadlock.
At present, manufacturers deliver processors together with corresponding processor instruction manuals and processor operation interfaces, namely SDKs (Software Development Kits).
Because some processor operation interfaces make heavy use of semaphores and mutexes (when a semaphore or mutex is used, the current task may sleep while the processor schedules other tasks), those interfaces support only the process context.
In an actual operating environment, there is a large need to manage table entries in the interrupt context. For example, the operating system receives a packet in the packet-reception interrupt, parses it in the interrupt context, and issues the corresponding table entry according to the packet content, e.g., updating an ARP entry in the processor according to a received ARP (Address Resolution Protocol) packet. If the function that updates the ARP entry may be scheduled while it runs, a crash may result.
To operate the processor in these varied and complex environments, and to ease later maintenance, developers generally abandon the processor operation interface provided by the manufacturer and rewrite it from the functions described in the processor usage manual, avoiding any operation that may cause scheduling (such as process sleep, actively yielding the processor, or semaphore/mutex calls). The processor can then be operated from both the process context and the interrupt context, regardless of whether a spin lock is held.
However, rewriting the processor operation interface requires maintaining a relatively large development team and a long development period, so the up-front development cost is high; if the processor is deployed in only a small amount of equipment, the average cost per device becomes higher still.
Disclosure of Invention
In order to overcome the problems in the related art, the specification provides a scheduling method and device of a processor and computer equipment.
According to a first aspect of embodiments herein, there is provided a scheduling method for a processor, including:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
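The five steps of the first aspect can be sketched as a minimal userspace analogue. All names, the fixed-size queue, and the stand-in SDK call are illustrative assumptions, not the patent's implementation:

```c
/* Hypothetical uniform-format scheduling request: opcode + parameters. */
struct sched_req {
    int  opcode;          /* identifies the type of chip operation */
    char params[32];      /* serialized operation parameters       */
};

#define QUEUE_CAP 8
#define NUM_OPS   1
static struct sched_req queue[QUEUE_CAP];
static int q_head, q_count;

/* Step 2: write the request into the preset queue (at the tail). */
int enqueue_req(const struct sched_req *r) {
    if (q_count == QUEUE_CAP) return -1;            /* queue full */
    queue[(q_head + q_count) % QUEUE_CAP] = *r;
    q_count++;
    return 0;
}

/* Step 3: extract a request from the queue (at the head). */
int dequeue_req(struct sched_req *out) {
    if (q_count == 0) return -1;                    /* queue empty */
    *out = queue[q_head];
    q_head = (q_head + 1) % QUEUE_CAP;
    q_count--;
    return 0;
}

/* Steps 4-5: map the opcode to an operation interface and invoke it. */
typedef int (*op_iface)(const char *params);
static int op_write_acl(const char *p) { (void)p; return 0; }  /* stand-in SDK call */
static op_iface iface_table[NUM_OPS] = { op_write_acl };

int process_one(void) {
    struct sched_req r;
    if (dequeue_req(&r) != 0) return -1;
    if (r.opcode < 0 || r.opcode >= NUM_OPS) return -1;  /* unknown opcode */
    return iface_table[r.opcode](r.params);
}
```

In the full scheme, `enqueue_req` runs in the application's context (process or interrupt) while `dequeue_req`/`process_one` run in a process context without spin locks.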
According to a second aspect of embodiments herein, there is provided a scheduling apparatus of a processor, including:
the operation data receiving module is used for receiving the processor operation data of the application;
the queue writing module is used for writing the processor operation data serving as a processor scheduling request into a preset queue;
the queue extracting module is used for extracting a processor scheduling request from the queue;
an operation interface determining module, configured to determine a processor operation interface corresponding to the processor scheduling request;
and the operation interface scheduling module is used for calling a processor to process the processor scheduling request according to the processor operation interface.
According to a third aspect of embodiments herein, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following method when executing the program:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
The technical scheme provided by the embodiment of the specification can have the following beneficial effects:
in the embodiments of the specification, processor operation data of an application is received; the processor operation data is written into a preset queue as a processor scheduling request; a processor scheduling request is extracted from the queue; the processor operation interface corresponding to the processor scheduling request is determined; and the processor is called to process the processor scheduling request according to that interface. By adding enqueue/dequeue operations, applications running in various environments, such as the process context and the interrupt context, are interfaced with a processor operation interface that runs in the spin-lock-free process context. The processor operation interface provided by the manufacturer can therefore be reused without redevelopment, greatly reducing cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
FIG. 1 is a flow chart illustrating a method of scheduling a processor according to an exemplary embodiment of the present description.
FIG. 2 is a flow chart illustrating a method of enqueuing according to an exemplary embodiment of the present description.
FIG. 3 is a flow chart illustrating another method of scheduling a processor according to another exemplary embodiment of the present description.
Fig. 4 is a hardware structure diagram of a computer device in which a scheduling apparatus of a processor according to an embodiment of the present disclosure is located.
FIG. 5 is a block diagram of a scheduling apparatus of a processor shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The following provides a detailed description of examples of the present specification.
As shown in fig. 1, fig. 1 is a flowchart illustrating a scheduling method of a processor according to an exemplary embodiment, including the following steps:
step 101, receiving processor operation data of an application.
In a specific implementation, the embodiment may be applied to a device configured with a processor, including a device configured with a switch chip (a processor), such as a switch, a router, and the like.
In one aspect, the processor is configured with a processor operational interface, which may be an SDK provided by a vendor that developed the processor.
On the other hand, the operating system of the device may be, for example, Linux, and various applications may be installed; an application may be a system application or a third-party application, which this embodiment does not limit.
The driver layer sits between the application and the processor operation interface and is responsible for interfacing the two; that is, the driver layer exposes one uniform interface (called a driver interface) through which upper-layer applications manage the same function on various processors.
In practical application, the application can call the driving interface to send the processor operation data to the processor.
For example, applications may issue tables to the processor based on user configuration or on traffic characteristics, such as a MAC (Media Access Control) address table, an ARP table, a routing table, or an ACL (Access Control List) table.
When issuing is driven by user configuration, the caller is generally in the process context, and in some scenarios a spin lock may be held; when issuing is driven by traffic characteristics, the caller is generally in the interrupt context (in Linux, packet reception runs in the interrupt context).
And 102, writing the processor operation data serving as a processor scheduling request into a preset queue.
In the device, a queue is established in memory in advance. When an application calls a processor operation interface that can sleep, that is, one in which scheduling may occur, the processor operation data is abstracted and written into the queue as a processor scheduling request in a uniform format.
In one embodiment, as shown in FIG. 2, step 102 includes the sub-steps of:
sub-step S11, checks the number of processor scheduling requests stored in the preset queue.
And a substep S12 of waiting a predetermined time in a non-scheduled manner if the number has reached a predetermined upper limit, and returning to the substep S11.
And a substep S13, if the number does not reach a preset upper limit value, converting the processor operation data into a processor scheduling request and writing the processor scheduling request into the tail of the queue.
In this embodiment, the processor does not sleep while executing step 102. If the number of processor scheduling requests in the queue has reached the upper limit, i.e., the queue nodes are full, it waits in a non-scheduling manner for the nodes in the queue to decrease.
The non-scheduling mode refers to a mode that does not trigger scheduling, for example, a specific number of idle cycles are executed to consume a specified processor time, whereas the scheduling mode refers to a mode that calls a function that can be put to sleep or actively scheduled.
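A minimal sketch of such a non-scheduling wait: burn a fixed number of idle cycles instead of calling anything that could sleep or trigger scheduling. The cycle budget and function names are illustrative assumptions:

```c
/* Volatile sink so the idle loop is not optimized away; the store consumes
 * processor time without invoking any sleep/scheduling primitive
 * (no usleep, no sem_wait, no mutex lock). */
static volatile unsigned long spin_sink;

void nonsched_wait(unsigned long cycles) {
    for (unsigned long i = 0; i < cycles; i++)
        spin_sink = i;
}

/* Enqueue with retry: check the count (sub-step S11), busy-wait while the
 * queue is full (sub-step S12), then enqueue (sub-step S13). */
int enqueue_with_wait(int *queue_count, int upper_limit) {
    while (*queue_count >= upper_limit)
        nonsched_wait(10000);       /* arbitrary idle-cycle budget */
    (*queue_count)++;               /* enqueue proper would go here */
    return 0;
}
```

In the real system another thread drains the queue concurrently, so the busy-wait eventually observes `queue_count` drop below the limit.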
In addition, the queue is implemented as a first-in-first-out linked list, ensuring that dequeue order matches enqueue order; that is, processor scheduling requests are written to the tail of the queue at enqueue time.
Further, the present embodiment designs a corresponding data structure for the nodes of the queue, including the operation code and the operation parameter.
The operation code may be used to identify a type of chip operation, and the operation parameter is a parameter required for executing the type of chip operation.
Thus, in sub-step S13, the operation code and operation parameters may be read from the processor operation data as a processor scheduling request.
And converting the operation parameters into memory data, and writing the operation codes and the memory data into the tail part of the queue as a new node.
Step 103, extracting a processor scheduling request from the queue.
In a specific implementation, processor scheduling requests can be extracted from the queue by polling, which ensures that the driver interface never sleeps when called.
If the queue is implemented as a first-in-first-out linked list, the processor scheduling request may be extracted from the head of the queue.
Further, in the present embodiment, a corresponding data structure is designed for the nodes of the queue, including the operation code and the operation parameter.
At this time, the operation code and the memory data of the processor scheduling request may be extracted from the head of the queue, and the memory data may be converted into the operation parameter according to the specification corresponding to the operation code.
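The parameter/memory conversions of sub-step S13 and step 103 can be sketched as a round trip through a flat buffer. The ARP-style parameter layout is an invented example; the real layout would follow the specification associated with each operation code:

```c
#include <string.h>
#include <stdint.h>

/* Hypothetical operation parameters for an "add ARP entry" operation code. */
struct arp_params {
    uint32_t ip;        /* IPv4 address       */
    uint8_t  mac[6];    /* hardware address   */
};

/* Enqueue side: convert operation parameters into flat memory data. */
size_t params_to_mem(const struct arp_params *p, uint8_t *buf) {
    memcpy(buf, &p->ip, 4);         /* field-by-field copy avoids padding */
    memcpy(buf + 4, p->mac, 6);
    return 10;                      /* bytes written */
}

/* Dequeue side: convert memory data back into operation parameters,
 * using the layout implied by the operation code. */
void mem_to_params(const uint8_t *buf, struct arp_params *p) {
    memcpy(&p->ip, buf, 4);
    memcpy(p->mac, buf + 4, 6);
}
```

The flat buffer is what gets stored in the queue node alongside the operation code.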
And step 104, determining a processor operation interface corresponding to the processor scheduling request.
For a processor scheduling request extracted from a queue, a processor operational interface associated with a processor for processing the processor scheduling request may be determined.
In one embodiment, a mapping relationship between an operation code and a processor operation interface may be established in advance, for a current processor scheduling request, an operation code in the processor scheduling request may be determined, and a processor operation interface corresponding to the operation code may be searched in the mapping relationship.
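The pre-established mapping can be sketched as a lookup table from operation code to operation interface. The opcodes and SDK-style function names are invented stand-ins for the vendor-provided interfaces:

```c
#include <stddef.h>

typedef int (*proc_op_iface)(void *params);

static int sdk_add_mac(void *p) { (void)p; return 0; }   /* stand-ins for   */
static int sdk_add_arp(void *p) { (void)p; return 0; }   /* vendor SDK calls */

/* Pre-established operation code -> operation interface mapping. */
struct op_map { int opcode; proc_op_iface iface; };
static const struct op_map op_table[] = {
    { 1, sdk_add_mac },
    { 2, sdk_add_arp },
};

/* Step 104: search the mapping for the interface matching the opcode. */
proc_op_iface lookup_iface(int opcode) {
    for (size_t i = 0; i < sizeof(op_table) / sizeof(op_table[0]); i++)
        if (op_table[i].opcode == opcode)
            return op_table[i].iface;
    return NULL;    /* unknown operation code */
}
```

Step 105 then simply calls the returned function pointer with the decoded operation parameters.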
And 105, calling a processor to process the processor scheduling request according to the processor operation interface.
In a specific implementation, the processor operation interface may be invoked to flush the processor scheduling request to the processor for corresponding processing.
In addition, in this embodiment the way chip operation data is handled changes only behind the driver interface, so the application does not perceive the change. Conditions such as entry-table checks are still performed at the driver interface, so the driver interface can return the same values to the application as before the change. The processor operation interface generally succeeds in execution; a failure caused by the processor itself (e.g., a hardware fault) can be reported to the application through other means, such as output log information.
In one embodiment, the driver interfaces for application calls include a first driver interface for issuing processor operation data to a processor, such as an ACL table, to which most driver interfaces in the device belong.
In this embodiment, a processor operation interface is called to send a processor scheduling request to a processor for processing.
In another embodiment, the driver interfaces invoked by the application include a second driver interface for issuing processor operation data (e.g., an ARP table) to the processor and reading certain data (e.g., index values) from the processor, to which few driver interfaces in the device belong.
In this embodiment, in one aspect, a specified entry, which is typically a free entry, may be looked up from the table.
The position of the table entry in the table is set as an index value, and the index value is sent to an application, which manages, such as modifying, deleting, etc., based on the index value.
And on the other hand, calling the processor operation interface and sending the processor scheduling request to the processor for processing.
In one embodiment, two threads, an interface thread for performing at least one of steps 101-102 and a dispatch thread for performing at least one of steps 103-105, may be generated at the device.
When the interface thread calls the driver interface, a node is added to the queue, and the scheduling thread takes nodes out of the queue. If the running speeds of the two threads are unbalanced, the number of nodes in the queue can reach the upper limit; the next time the interface thread calls the driver interface, it waits in a non-scheduling manner for the nodes in the queue to decrease before adding a new node.
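The interface/scheduling thread pair can be sketched with POSIX threads. This is a userspace analogue under stated assumptions: a mutex stands in where kernel code would use lock-free or spin primitives, and each "request" is reduced to a counter tick:

```c
#include <pthread.h>

#define UPPER_LIMIT 4
#define TOTAL       100
static int pending;                 /* nodes currently in the queue   */
static int produced, consumed;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Interface thread: enqueue, busy-retrying while the queue is full. */
static void *interface_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < TOTAL; i++) {
        for (;;) {
            pthread_mutex_lock(&m);
            if (pending < UPPER_LIMIT) {
                pending++; produced++;
                pthread_mutex_unlock(&m);
                break;
            }
            pthread_mutex_unlock(&m);   /* full: retry without sleeping */
        }
    }
    return NULL;
}

/* Scheduling thread: drain nodes and hand them to the operation interface. */
static void *scheduling_thread(void *arg) {
    (void)arg;
    while (consumed < TOTAL) {          /* sole writer of 'consumed' */
        pthread_mutex_lock(&m);
        if (pending > 0) { pending--; consumed++; }
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int run_pair(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, interface_thread, NULL);
    pthread_create(&b, NULL, scheduling_thread, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return produced == TOTAL && consumed == TOTAL;
}
```

When the producer outpaces the consumer, `pending` hits `UPPER_LIMIT` and the inner retry loop models the non-scheduling wait described above.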
To avoid this problem, if the device has two or more processors (cores), the interface thread is in one processor (core) and the dispatch thread is in another processor (core), i.e., the dispatch thread is in an independent processor (core).
If the device has only one processor (core) and the interface thread and the scheduling thread share it, the scheduling thread cannot be scheduled to drain the queue while the interface thread waits in a non-scheduling manner, which may cause the device to hang or crash.
To avoid this problem, if the interface thread is in the same processor (core) as the dispatch thread, at least one of the following processes may be performed:
1. Make the priority of the scheduling thread higher than that of the interface thread.
2. Set the upper limit of the queue based on a target memory allocation.
The target memory is the remaining memory excluding a reserved portion: the reserved memory guarantees normal operation of the device, while allocating the rest as target memory lets the number of nodes in the queue be as large as possible.
3. In sub-step S12, if the number of nodes in the queue has reached the preset upper limit, discard the processor operation data and generate error information.
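The third alternative, dropping instead of spinning so the scheduling thread is never starved, can be sketched as follows. The error code and counter are illustrative assumptions:

```c
#include <errno.h>

#define UPPER_LIMIT 4
static int node_count;
static int dropped;

/* When interface and scheduling threads share one core, busy-waiting on a
 * full queue could hang the device; instead, drop the operation data and
 * report an error (e.g., emit a log line and bump a counter). */
int enqueue_or_drop(void) {
    if (node_count >= UPPER_LIMIT) {
        dropped++;              /* "generate error information" */
        return -ENOSPC;         /* illustrative error code */
    }
    node_count++;               /* enqueue proper would go here */
    return 0;
}
```

The application then sees the error return and can retry later from a safe context.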
In the embodiment of the specification, processor operation data of an application is received, the processor operation data is written into a preset queue as a processor scheduling request, a processor scheduling request is extracted from the queue, a processor operation interface corresponding to the processor scheduling request is determined, the processor scheduling request is processed by calling the processor according to the processor operation interface, and by adding enqueue/dequeue operations, the application running in various environments such as a process context and an interrupt context is butted with the processor operation interface running in the process context without spin lock, the processor operation interface provided by a manufacturer can be reused without redeveloping the processor operation interface, and the cost is greatly reduced.
Further, in the embodiments of this specification, the changes required in applications are small: adaptive changes are needed only where a few driver-layer interfaces that read data from the processor are called. The changes are the same for all types of processor operation interfaces and need not be redone for each new type of processor.
FIG. 3 is a flow chart of another method for scheduling a processor, as shown in FIG. 3, according to another exemplary embodiment, including the steps of:
step 301, receiving processor operation data of an application.
Step 302, determining a processor operation interface corresponding to the processor operation data.
And 303, calling the processor operation interface, and sending the processor operation data to a processor for processing.
In an embodiment, when the application calls a processor operation interface that does not sleep, that is, one in which no scheduling occurs, the driver layer may determine the type of the current processor, decide accordingly which processor operation interface to call, and flush the processor operation data down to the processor for processing.
Therefore, when a new type of processor is added, only the driver layer's determination of the processor type needs adjusting; the new processor type can be supported without changing the application.
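The driver-layer dispatch on processor (switch-chip) type can be sketched as a single switch behind one driver interface. The chip names and SDK-style function names are invented for illustration:

```c
/* Applications call one driver interface (drv_set_mac); only this switch
 * grows when support for a new chip type is added. */
enum chip_type { CHIP_VENDOR_A, CHIP_VENDOR_B };

static int vendor_a_set_mac(void) { return 0; }   /* stand-ins for the two */
static int vendor_b_set_mac(void) { return 0; }   /* vendors' SDK calls    */

int drv_set_mac(enum chip_type t) {
    switch (t) {
    case CHIP_VENDOR_A: return vendor_a_set_mac();
    case CHIP_VENDOR_B: return vendor_b_set_mac();
    default:            return -1;   /* unsupported chip type */
    }
}
```

Upper-layer code is written once against `drv_set_mac`; the processor-type judgment stays confined to the driver layer.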
The driving interface called by the application comprises at least one of the following components:
1. first drive interface
The first driver interface is used to issue processor operation data to the processor, such as an ACL table, to which most driver interfaces in the device belong.
2. Second drive interface
This second drive interface is used to send processor operation data (e.g., ARP tables) down to the processor and read certain data (e.g., index values) from the processor, to which very few drive interfaces in the device belong.
3. Third drive interface
This third driver interface is used to fetch data from the processor (e.g., fetch ACL entry match statistics), to which the very few driver interfaces in the device belong.
Because the queue cannot let an application reliably wait, without scheduling, for the processor operation interface to return data, for both the non-scheduling and the possibly-scheduling processor operation interfaces this driver interface is not modified at the driver layer; instead, the application's calling environment is modified.
Further, in step 301, if the application is in the non-scheduling context, the non-scheduling context is set as the scheduling context, and the processor operation data of the application is received based on the scheduling context;
wherein the non-scheduling context includes at least one of:
interrupt context, process context with spin lock;
the scheduling context includes at least one of:
process context without interrupt, process context without spin lock;
Further, the application may check the calling context of the third driver interface. If it is called in the interrupt context or in a process context holding a spin lock, the call is moved to a process context with no spin lock and no interrupt: if the third driver interface is called in the packet-reception interrupt context or a timer interrupt context, the call is moved into a kernel thread or a work queue (a type of kernel thread); if it is called while a lock is held, the call is moved outside the lock.
In the embodiment of the present description, processor operation data of an application is received, a processor operation interface corresponding to the processor operation data is determined, the processor operation interface is called, and the processor operation data is sent to a processor for processing.
Corresponding to the embodiments of the method, the present specification also provides embodiments of the apparatus and the terminal applied thereto.
The embodiments of the scheduling apparatus of the processor in this specification can be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of the two. Taking software implementation as an example, as a logical apparatus, it is formed by the processor of the computer device reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, as shown in fig. 4, which is a hardware structure diagram of a computer device in which the scheduling apparatus is located in an embodiment of this specification, in addition to the processor 410, the memory 430, the network interface 420, and the nonvolatile memory 440 shown in fig. 4, the server or electronic device in which the apparatus 431 is located may further include other hardware according to the actual function of the computer device, which is not described again.
As shown in fig. 5, fig. 5 is a block diagram of a scheduling apparatus of a processor shown in the present specification according to an exemplary embodiment, the apparatus includes:
an operation data receiving module 501, configured to receive processor operation data of an application;
a queue writing module 502, configured to write the processor operation data into a preset queue as a processor scheduling request;
a queue extracting module 503, configured to extract a processor scheduling request from the queue;
an operation interface determining module 504, configured to determine a processor operation interface corresponding to the processor scheduling request;
and an operation interface scheduling module 505, configured to invoke a processor to process the processor scheduling request according to the processor operation interface.
In one embodiment, the queue write module 502 includes:
the quantity checking submodule is used for checking the quantity of the processor scheduling requests stored in a preset queue;
the cyclic waiting submodule is used for waiting for preset time in a non-scheduling mode and returning to call the quantity checking submodule if the quantity reaches a preset upper limit value;
or,
the operation data discarding submodule is used for discarding the operation data of the processor to generate error information if the number reaches a preset upper limit value;
and the tail writing submodule is used for converting the processor operation data into a processor scheduling request and writing the processor scheduling request into the tail of the queue if the number does not reach a preset upper limit value.
Accordingly, the queue extraction module 503 includes:
a head extraction submodule, configured to extract a processor scheduling request from the head of the queue.
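The bounded queue with its two overflow policies (wait-and-retry, or discard-with-error) and FIFO tail-write/head-extract discipline can be sketched as follows; the limit, wait time, and function names are illustrative assumptions:

```python
import time
from collections import deque

QUEUE_UPPER_LIMIT = 4   # preset upper limit (illustrative value)
WAIT_SECONDS = 0.01     # preset waiting time (illustrative value)

queue = deque()

def enqueue_wait(request, max_retries=100):
    """Wait-and-retry variant: re-check the stored count until there is room."""
    for _ in range(max_retries):
        if len(queue) < QUEUE_UPPER_LIMIT:
            queue.append(request)   # write to the tail of the queue
            return True
        time.sleep(WAIT_SECONDS)    # wait in a non-scheduling way, then re-check
    return False

def enqueue_discard(request):
    """Discard variant: drop the request and report an error when full."""
    if len(queue) >= QUEUE_UPPER_LIMIT:
        return "error: queue full, request discarded"
    queue.append(request)
    return "ok"

def dequeue():
    # extraction always happens at the head, preserving FIFO order
    return queue.popleft()
```

The waiting variant favors completeness (every request eventually lands), while the discarding variant favors bounded latency for the caller.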
In one embodiment, the tail write submodule includes:
a scheduling request conversion unit, configured to read an operation code and operation parameters from the processor operation data as a processor scheduling request;
a memory data conversion unit, configured to convert the operation parameters into memory data;
and a node writing unit, configured to write the operation code and the memory data into the tail of the queue.
Accordingly, the head extraction submodule includes:
a node extraction unit, configured to extract the operation code and the memory data of a processor scheduling request from the head of the queue;
and an operation parameter conversion unit, configured to convert the memory data back into operation parameters according to the operation code.
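A request thus travels through the queue as an (operation code, memory data) pair, with the operation code determining how the flat memory data is decoded back into parameters. A small sketch of that round trip, assuming for illustration that the parameters are 32-bit integers packed with `struct` (the real memory layout is not specified by the patent):

```python
import struct

def to_memory_data(params):
    """Pack operation parameters into flat memory data (bytes).
    Assumes the parameters are a list of 32-bit unsigned integers."""
    return struct.pack(f"<{len(params)}I", *params)

def from_memory_data(opcode, memory_data):
    """Recover the operation parameters from memory data; in this sketch
    every opcode uses the same integer layout, but in general the opcode
    tells the reader how the bytes should be interpreted."""
    count = len(memory_data) // 4
    return list(struct.unpack(f"<{count}I", memory_data))

blob = to_memory_data([10, 20, 30])   # written to the tail with its opcode
params = from_memory_data(0x01, blob) # recovered at the head
```

Serializing to plain memory data keeps queue nodes fixed in format, so the writing and extracting sides need not share in-memory object types.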
In one embodiment, the operation interface determining module 504 includes:
an operation code determining submodule, configured to determine an operation code in the processor scheduling request;
and the operation interface searching submodule is used for searching the processor operation interface corresponding to the operation code.
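Determining the interface is then a lookup keyed by operation code. A hypothetical registration table (the interface functions and codes below are invented for illustration; a real driver would register whatever interfaces the underlying processor exposes):

```python
# Hypothetical operation interfaces for this sketch.
def add_table_entry(params):
    return ("add", params)

def delete_table_entry(params):
    return ("delete", params)

# operation code -> processor operation interface
OPERATION_INTERFACES = {
    0x01: add_table_entry,
    0x02: delete_table_entry,
}

def find_operation_interface(request):
    opcode = request["opcode"]            # determine the opcode in the request
    return OPERATION_INTERFACES[opcode]   # look up the matching interface
```

Keeping the mapping in one table means new operations only require a new entry, not changes to the scheduling loop.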
In one embodiment, the operation parameters in the operation request comprise a table;
the operation interface scheduling module 505 includes:
a first flush submodule, configured to invoke the processor operation interface and send the processor scheduling request to the processor for processing;
or,
a table entry searching submodule, configured to search for the specified table entry in the table;
an index value setting submodule, configured to set the position of the table entry in the table as an index value;
an index value sending submodule, configured to send the index value to the application;
and a second flush submodule, configured to invoke the processor operation interface and send the processor scheduling request to the processor for processing.
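In the table-entry branch, the application learns the entry's index before the request is flushed down to the processor. A sketch of that order of operations, with all names and the list-based table being illustrative assumptions:

```python
def handle_table_request(table, request, send_to_app, flush_to_processor):
    """Sketch of the table-entry branch: locate the specified entry,
    report its position (index value) back to the application, then
    flush the scheduling request down to the processor."""
    index = table.index(request["entry"])   # position of the entry in the table
    send_to_app(index)                      # send the index value to the application
    flush_to_processor(request)             # invoke the processor operation interface

table = ["flow_a", "flow_b", "flow_c"]
events = []
handle_table_request(
    table,
    {"entry": "flow_b"},
    send_to_app=events.append,
    flush_to_processor=lambda r: events.append(("flushed", r["entry"])),
)
```

Returning the index early lets the application reference the entry by position without waiting for the processor to finish the flush.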
In one embodiment, the interface thread is located in one processor and the scheduling thread is located in another processor;
or,
if the interface thread and the scheduling thread are in the same processor, the priority of the scheduling thread is higher than that of the interface thread, and/or the upper limit of the queue is allocated based on target memory, where the target memory is the memory remaining after the reserved memory is excluded;
wherein the interface thread is configured to perform at least one of:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
the scheduling thread is used for executing at least one of the following steps:
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
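The split into an interface thread (receive and enqueue) and a scheduling thread (dequeue and dispatch) is a classic producer/consumer arrangement. A minimal two-thread sketch using Python's thread-safe `queue.Queue` (the sentinel-based shutdown and all names are illustrative; thread-to-processor pinning and priorities are OS-level details not shown here):

```python
import queue
import threading

request_queue = queue.Queue(maxsize=4)   # preset queue with an upper limit
results = []

def interface_thread(operation_data_list):
    # receives processor operation data and writes it into the queue
    for data in operation_data_list:
        request_queue.put(data)          # blocks when the upper limit is reached
    request_queue.put(None)              # sentinel: no more requests

def scheduling_thread():
    # extracts scheduling requests and hands them to the operation interface
    while True:
        request = request_queue.get()
        if request is None:
            break
        results.append(("processed", request))

producer = threading.Thread(target=interface_thread, args=([1, 2, 3],))
consumer = threading.Thread(target=scheduling_thread)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

With the two threads on different processors, enqueueing never contends with dispatching; on one processor, giving the consumer higher priority keeps the bounded queue from filling up.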
In one embodiment, the apparatus further comprises:
an operation interface corresponding module, configured to determine a processor operation interface corresponding to the processor operation data;
and the operation interface processing module is used for calling the processor operation interface and sending the processor operation data to the processor for processing.
In one embodiment, the operation data receiving module 501 includes:
a scheduling context setting submodule, configured to, if currently in a non-scheduling context, switch the non-scheduling context to a scheduling context;
a scheduling context receiving submodule for receiving processor operation data of an application based on the scheduling context;
wherein the non-scheduling context includes at least one of:
interrupt context, process context with spin lock;
the scheduling context includes at least one of:
an interrupt-free process context, a spin lock-free process context.
Accordingly, the present specification also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following method when executing the program:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method for scheduling a processor, comprising:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
2. The method of claim 1, wherein writing the processor operation data as a processor scheduling request to a predetermined queue comprises:
checking the number of processor scheduling requests stored in a preset queue;
if the number reaches a preset upper limit value, waiting for a preset time in a non-scheduling mode and returning to the step of checking the number of processor scheduling requests stored in the preset queue, or discarding the processor operation data and generating error information;
if the number does not reach a preset upper limit value, converting the processor operation data into a processor scheduling request and writing the processor scheduling request into the tail of the queue;
the extracting a processor scheduling request from the queue includes:
a processor scheduling request is extracted from the head of the queue.
3. The method of claim 2, wherein converting the processor operation data into a processor scheduling request and writing the processor scheduling request to the tail of the queue comprises:
reading an operation code and an operation parameter from the processor operation data as a processor scheduling request;
converting the operating parameters into memory data;
writing the operation code and the memory data into the tail part of the queue;
the extracting the processor scheduling request from the head of the queue includes:
extracting an operation code and memory data of a processor scheduling request from the head of the queue;
and converting the memory data into operation parameters according to the operation codes.
4. The method of claim 1, wherein determining the processor operation interface to which the processor scheduling request corresponds comprises:
determining an operation code in the processor scheduling request;
and searching a processor operation interface corresponding to the operation code.
5. The method of claim 1, wherein the operation parameters in the operation request comprise a table;
the calling a processor to process the processor scheduling request according to the processor operation interface comprises the following steps:
calling the processor operation interface, and sending the processor scheduling request to the processor for processing;
or,
searching a specified table item from the table;
setting the position of the table entry in the table as an index value;
sending the index value to the application;
and calling the processor operation interface, and sending the processor scheduling request to the processor for processing.
6. The method according to any one of claims 1 to 5,
the interface thread is located in one processor, and the scheduling thread is located in another processor;
or,
if the interface thread and the scheduling thread are in the same processor, the priority of the scheduling thread is higher than that of the interface thread, and/or the upper limit of the queue is allocated based on target memory, where the target memory is the memory remaining after the reserved memory is excluded;
wherein the interface thread is configured to perform at least one of:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
the scheduling thread is used for executing at least one of the following steps:
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
7. The method of claim 1, after the receiving processor operational data for an application, further comprising:
determining a processor operation interface corresponding to the processor operation data;
and calling the processor operation interface and sending the processor operation data to a processor for processing.
8. The method of claim 7, wherein receiving processor operational data for an application comprises:
if currently in a non-scheduling context, setting the non-scheduling context as a scheduling context;
receiving processor operation data of an application based on the scheduling context;
wherein the non-scheduling context includes at least one of:
interrupt context, process context with spin lock;
the scheduling context includes at least one of:
an interrupt-free process context, a spin lock-free process context.
9. A scheduler for a processor, comprising:
the operation data receiving module is used for receiving the processor operation data of the application;
the queue writing module is used for writing the processor operation data serving as a processor scheduling request into a preset queue;
the queue extracting module is used for extracting a processor scheduling request from the queue;
an operation interface determining module, configured to determine a processor operation interface corresponding to the processor scheduling request;
and the operation interface scheduling module is used for calling a processor to process the processor scheduling request according to the processor operation interface.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following method when executing the program:
receiving processor operation data of an application;
writing the processor operation data serving as a processor scheduling request into a preset queue;
extracting a processor scheduling request from the queue;
determining a processor operation interface corresponding to the processor scheduling request;
and calling a processor to process the processor scheduling request according to the processor operation interface.
CN201811457620.2A 2018-11-30 2018-11-30 Scheduling method and device of processor and computer equipment Active CN109558235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811457620.2A CN109558235B (en) 2018-11-30 2018-11-30 Scheduling method and device of processor and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811457620.2A CN109558235B (en) 2018-11-30 2018-11-30 Scheduling method and device of processor and computer equipment

Publications (2)

Publication Number Publication Date
CN109558235A true CN109558235A (en) 2019-04-02
CN109558235B CN109558235B (en) 2020-11-06

Family

ID=65868317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811457620.2A Active CN109558235B (en) 2018-11-30 2018-11-30 Scheduling method and device of processor and computer equipment

Country Status (1)

Country Link
CN (1) CN109558235B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111506426A (en) * 2020-04-17 2020-08-07 翱捷科技(深圳)有限公司 Memory management method and device and electronic equipment
CN112134859A (en) * 2020-09-09 2020-12-25 上海沈德医疗器械科技有限公司 Control method of focused ultrasound treatment equipment based on ARM architecture
CN117093355A (en) * 2023-10-19 2023-11-21 井芯微电子技术(天津)有限公司 Method for scheduling pseudo threads in process

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7152122B2 (en) * 2001-04-11 2006-12-19 Mellanox Technologies Ltd. Queue pair context cache
US20080225872A1 (en) * 2007-03-12 2008-09-18 Collins Kevin T Dynamically defining queues and agents in a contact center
WO2005046304A3 (en) * 2003-09-22 2009-04-30 Codito Technologies Method and system for allocation of special purpose computing resources in a multiprocessor system
CN102859492A (en) * 2010-04-28 2013-01-02 瑞典爱立信有限公司 Technique for GPU command scheduling
CN106919442A (en) * 2015-12-24 2017-07-04 中国电信股份有限公司 Many GPU dispatching devices and distributed computing system and many GPU dispatching methods
CN106959891A (en) * 2017-03-30 2017-07-18 山东超越数控电子有限公司 A kind of cluster management method and system for realizing GPU scheduling
CN108475198A (en) * 2016-02-24 2018-08-31 英特尔公司 The system and method for context vector for instruction at runtime

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7152122B2 (en) * 2001-04-11 2006-12-19 Mellanox Technologies Ltd. Queue pair context cache
WO2005046304A3 (en) * 2003-09-22 2009-04-30 Codito Technologies Method and system for allocation of special purpose computing resources in a multiprocessor system
US20080225872A1 (en) * 2007-03-12 2008-09-18 Collins Kevin T Dynamically defining queues and agents in a contact center
CN102859492A (en) * 2010-04-28 2013-01-02 瑞典爱立信有限公司 Technique for GPU command scheduling
CN106919442A (en) * 2015-12-24 2017-07-04 中国电信股份有限公司 Many GPU dispatching devices and distributed computing system and many GPU dispatching methods
CN108475198A (en) * 2016-02-24 2018-08-31 英特尔公司 The system and method for context vector for instruction at runtime
CN106959891A (en) * 2017-03-30 2017-07-18 山东超越数控电子有限公司 A kind of cluster management method and system for realizing GPU scheduling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Yu: "Research on Network Printing Security System and Embedded Software Platform Design", Wanfang Dissertation Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111506426A (en) * 2020-04-17 2020-08-07 翱捷科技(深圳)有限公司 Memory management method and device and electronic equipment
CN112134859A (en) * 2020-09-09 2020-12-25 上海沈德医疗器械科技有限公司 Control method of focused ultrasound treatment equipment based on ARM architecture
CN117093355A (en) * 2023-10-19 2023-11-21 井芯微电子技术(天津)有限公司 Method for scheduling pseudo threads in process
CN117093355B (en) * 2023-10-19 2024-02-23 井芯微电子技术(天津)有限公司 Method for scheduling pseudo threads in process

Also Published As

Publication number Publication date
CN109558235B (en) 2020-11-06

Similar Documents

Publication Publication Date Title
US8161453B2 (en) Method and apparatus for implementing task management of computer operations
US8032899B2 (en) Providing policy-based operating system services in a hypervisor on a computing system
Selic Turning clockwise: using UML in the real-time domain
KR101658035B1 (en) Virtual machine monitor and scheduling method of virtual machine monitor
Schmidt et al. Software architectures for reducing priority inversion and non-determinism in real-time object request brokers
US8635612B2 (en) Systems and methods for hypervisor discovery and utilization
CN109558235B (en) Scheduling method and device of processor and computer equipment
US9009701B2 (en) Method for controlling a virtual machine and a virtual machine system
US8713582B2 (en) Providing policy-based operating system services in an operating system on a computing system
US7971205B2 (en) Handling of user mode thread using no context switch attribute to designate near interrupt disabled priority status
JP2005505833A (en) System for application server messaging using multiple shipping pools
JP2008306714A (en) Communicating method and apparatus in network application, and program for them
US20050132121A1 (en) Partitioned operating system tool
CN111274019A (en) Data processing method and device and computer readable storage medium
Zuepke et al. AUTOBEST: a united AUTOSAR-OS and ARINC 653 kernel
US7703103B2 (en) Serving concurrent TCP/IP connections of multiple virtual internet users with a single thread
Pöhnl et al. A middleware journey from microcontrollers to microprocessors
US20080141213A1 (en) Flexible interconnection system
Rammig et al. Basic concepts of real time operating systems
CN111310638A (en) Data processing method and device and computer readable storage medium
CN114816668A (en) Virtual machine kernel monitoring method, device, equipment and storage medium
CN114462388A (en) Handle management or communication method, electronic device, storage medium, and program product
CN113439260A (en) I/O completion polling for low latency storage devices
CN117931483B (en) Operating system, generating method, electronic device, storage medium, and program product
US7039772B1 (en) System, method, and computer program product for processing reflective state machines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant