CN116594754A - Task scheduling method and device and electronic equipment - Google Patents

Info

Publication number
CN116594754A
CN116594754A · Application CN202310803262.0A
Authority
CN
China
Prior art keywords
host
task
shared memory
processing
processing task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310803262.0A
Other languages
Chinese (zh)
Inventor
曹超倚
刘亮
王建修
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Ecarx Hubei Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecarx Hubei Tech Co Ltd filed Critical Ecarx Hubei Tech Co Ltd
Priority to CN202310803262.0A priority Critical patent/CN116594754A/en
Publication of CN116594754A publication Critical patent/CN116594754A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application provides a task scheduling method, a task scheduling device and electronic equipment. The task scheduling method includes the following steps: after the host distributes the current processing task to the processor for processing, the host schedules the next processing task from the shared memory; if the host schedules the next processing task from the shared memory, the host sends the next processing task to the processor for processing; the next processing task is then determined to be the current processing task, and the step in which the host schedules the next processing task from the shared memory is executed again.

Description

Task scheduling method and device and electronic equipment
Technical Field
The present application relates to the field of virtual machines, and in particular, to a task scheduling method and apparatus, and an electronic device.
Background
Virtualization technology allows multiple virtual machines to run simultaneously on the same hardware platform. For processors such as graphics processing units (Graphics Processing Unit, GPU), virtualization enables multiple virtual machines to share the GPU in a time-division manner and thus make full use of GPU resources. In such an architecture, the HOST, which belongs to the privileged domain, accesses the GPU hardware directly, while the GUEST system communicates with the HOST through a shared memory and a notification mechanism established by the hypervisor.
At present, when a GUEST using virtualization technology performs complex tasks such as graphics rendering and display, for example when game or video images are transmitted, the notifications between the HOST and the GUEST become too frequent and are prone to blocking. As a result, the HOST cannot schedule the tasks sent by the GUEST in time, and the processor's task processing is delayed.
Disclosure of Invention
The application provides a task scheduling method, a task scheduling device and electronic equipment, which are used for solving the problem that existing task scheduling is not timely.
In a first aspect, an embodiment of the present application provides a task scheduling method, applied to a terminal device based on a virtualization technology, where the terminal device includes a host, a shared memory, and a processor, and the task scheduling method includes: after the host distributes the current processing task to the processor for processing, the host schedules the next processing task from the shared memory; if the host schedules the next processing task from the shared memory, the host sends the next processing task to the processor for processing; the next processing task is determined to be the current processing task, and the step of the host scheduling the next processing task from the shared memory is executed again.
In one embodiment of the present application, the task scheduling method further includes: if the host does not schedule a next processing task from the shared memory, the host stops scheduling processing tasks from the shared memory.
In one embodiment of the present application, the terminal device further includes a guest and a virtual machine manager, and after the host stops scheduling processing tasks from the shared memory, the method further includes: if the host receives a processing notification sent by the guest through the virtual machine manager, the step of the host scheduling the next processing task from the shared memory is executed again.
In one embodiment of the application, a processor includes: a graphics processor.
In one embodiment of the present application, before the host schedules the next processing task from the shared memory, the method further includes: the host runs a main function of the device model; the host configures a first hardware resource of the graphics processor for the guest, where the first hardware resource includes the shared memory; the host configures a second hardware resource for the device model, where the second hardware resource includes the shared memory; and the host configures a processing function for the device model, where the processing function is used by the host to schedule the next processing task from the shared memory and runs on the graphics processor.
In one embodiment of the present application, the method further includes: the host creates a thread that handles the processing tasks of the graphics processor, and the thread is used to execute the processing function.
In one embodiment of the present application, the host scheduling the next processing task from the shared memory includes: the host schedules the processing task from the shared memory based on the processing function of the device model.
In a second aspect, the present application provides a task scheduling device, applied to a terminal device based on a virtualization technology, where the terminal device includes a host, a shared memory, and a processor, and the task scheduling device includes:
the scheduling module is used for scheduling the next processing task from the shared memory after the host distributes the current processing task to the processor for processing;
the sending module is used for sending the next processing task to the processor for processing if the host dispatches the next processing task from the shared memory;
and the execution module is used for determining the next processing task as the current processing task and executing the step of scheduling the next processing task from the shared memory by the host.
In another embodiment of the present application, the task scheduling device further includes: and the stopping module is used for stopping scheduling the processing task from the shared memory if the host is not scheduled to the next processing task from the shared memory.
In another embodiment of the present application, the terminal device further includes a guest and a virtual machine manager, and the task scheduling device further includes: an execution module, configured to execute the step of the host scheduling the next processing task from the shared memory if, after the host stops scheduling processing tasks from the shared memory, the host receives a processing notification sent by the guest through the virtual machine manager.
In another embodiment of the present application, a processor includes: a graphics processor.
In another embodiment of the present application, the task scheduling device further includes an execution module, configured to run a main function of the device model on the host before the host schedules the next processing task from the shared memory; and a configuration module, configured for the host to configure a first hardware resource of the graphics processor for the guest, where the first hardware resource includes the shared memory; for the host to configure a second hardware resource for the device model, where the second hardware resource includes the shared memory; and for the host to configure a processing function for the device model, where the processing function is used by the host to schedule the next processing task from the shared memory and runs on the graphics processor.
In another embodiment of the application, the creation module is further configured to: the host creates a thread that handles the processing tasks of the graphics processor, the thread being used to execute the processing functions.
In another embodiment of the present application, the scheduling module is specifically configured to schedule processing tasks from the shared memory by the host based on the processing functions of the device model.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the electronic device to perform the task scheduling method of any one of the above first aspects.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when the instructions are executed by a processor, the task scheduling method according to any one of the first aspects described above is implemented.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the task scheduling method of any one of the above first aspects.
The embodiment of the application provides a task scheduling method, a task scheduling device and electronic equipment. The task scheduling method includes the following steps: after the host distributes the current processing task to the processor for processing, the host schedules the next processing task from the shared memory; if the host schedules the next processing task from the shared memory, the host sends the next processing task to the processor for processing; the next processing task is then determined to be the current processing task, and the step in which the host schedules the next processing task from the shared memory is executed again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic view of a task scheduling method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps of a first task scheduling method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps of a second task scheduling method according to an embodiment of the present application;
fig. 4 is a block diagram of a task scheduling device according to an embodiment of the present application;
fig. 5 is a schematic hardware structure of an electronic device according to an embodiment of the application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second, third and the like in the description and in the claims and in the above drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The communication between the HOST and the virtual device of the GUEST system mainly consists of data transmission and processing notifications; data transmission is essentially realized through the shared memory. A processing notification between the HOST and the GUEST is sent by the hypervisor to the kernel state of the HOST and finally forwarded to the device model. Every time a processing task is processed, a pair of processing notifications between the HOST and the GUEST is required (the GUEST notifies the HOST that a processing task starts, and the HOST notifies the GUEST that the processing task has ended). If the GUEST runs very complex graphics tasks, such as a large game, the processing notifications between the HOST and the GUEST become abnormally frequent, which easily causes the processing notifications to be blocked; the HOST then cannot schedule the processing tasks sent by the GUEST to the processor in time, and the GUEST in turn stalls, for example its graphical interface freezes.
Based on the above problems, the present application provides a task scheduling method in which, after the processor finishes each processing task, the host actively schedules the next processing task from the shared memory without waiting for the guest's next processing notification; the host's scheduling thread actively checks for and handles processing tasks, which effectively improves the host's ability to schedule processing tasks.
Fig. 1 is a schematic view of a scenario of a task scheduling method according to an embodiment of the present application. Fig. 1 shows a virtualization architecture deployed in a terminal device, which includes: a host, a guest, a shared memory, hardware, and a virtual machine manager, where the hardware includes a GPU. The host includes a user state and a kernel state; the user state of the host includes a device model, and the device model includes a back end for scheduling processing tasks. The kernel state of the host includes a GPU driver and a Vmm driver (a software program that provides computing resources and storage virtualization functionality). The guest likewise includes a user state and a kernel state; the user state of the guest includes an application, and the kernel state of the guest includes a front end for sending processing tasks. The virtual machine manager includes a notification module, which implements processing notifications between the front end and the back end together with the Vmm driver.
Further, the terminal device is, for example, a vehicle-mounted terminal in a smart cockpit in the automotive field. A GPU (Graphics Processing Unit), also called a display core, vision processor, or display chip, is a microprocessor specially used for image and graphics related operations on personal computers, workstations, game machines, and some mobile devices (such as tablet computers and smart phones). GPUs play an irreplaceable role in UI (User Interface) display and game entertainment. At present, GPUs are being integrated into smart cockpits in the automotive field, and the GPU's graphics processing capability is fully utilized to strengthen the graphics display capability of the smart cockpit.
In the smart cockpit, the display interface of the terminal device provides drivers and passengers with the most intuitive and timely form of visual feedback. Smart cockpits are developing in the direction of intelligence and digitalization, and as the technology matures, current smart cockpits already have sufficiently rich components, mainly including instruments, infotainment, games, sound systems, and the like.
The technical scheme of the application is described in detail through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flowchart of a task scheduling method according to an embodiment of the present application, where the task scheduling method is applied to a terminal device based on a virtualization technology, and the terminal device includes a host, a shared memory, and a processor, and the task scheduling method specifically includes the following steps:
s201, after the host distributes the current processing task to the processor for processing, the host schedules the next processing task from the shared memory.
The processing task in the shared memory is sent to the shared memory by the front end.
Referring to fig. 1, the front end sends the processing task to the shared memory, the device model in the host actively schedules the processing task from the shared memory by using the back end, and then the back end distributes the processing task to the GPU in the hardware through the GPU driver in kernel mode.
In the embodiment of the application, the front end sends the processing tasks to the shared memory and simultaneously sends the processing notifications corresponding to the processing tasks to the notification module of the virtual machine manager, and then the notification module of the virtual machine manager forwards the processing notifications to the rear end, wherein the processing notifications and the processing tasks are in one-to-one correspondence.
In the prior art, the back end schedules the corresponding processing task from the shared memory only after receiving a notification. When the front end sends a large number of processing tasks to the shared memory in a short time, it also sends a large number of processing notifications to the notification module; if the processing notifications are blocked, the back end cannot receive them in time and therefore cannot schedule the processing tasks in the shared memory, so the processing tasks are delayed.
In the embodiment of the application, after the back end sends the current processing task to the GPU for processing, it actively schedules the next processing task from the shared memory without first waiting for a processing notification, which avoids the problem of untimely scheduling caused by blocked processing notifications.
S202, if the host schedules the next processing task from the shared memory, the host sends the next processing task to the processor for processing.
In the embodiment of the application, the back end in the host schedules the next processing task from the shared memory: if a processing task is stored in the shared memory, the host can schedule the next processing task from it; if there is no processing task in the shared memory, the host cannot schedule a next processing task.
In the embodiment of the application, after a back end in the host sends a processing task to the GPU, the next processing task is scheduled from the shared memory, and the next processing task is sent to the GPU for processing after the next processing task is scheduled.
S203, determining the next processing task as the current processing task.
And taking the next processing task sent to the GPU as the current processing task, and continuing to execute S201.
In the embodiment of the present application, the back end executes S201 to S203 in a loop. The back end sends processing task A to the GPU and then schedules a processing task from the shared memory; if processing task B is scheduled, it sends processing task B to the GPU for processing and again schedules a processing task from the shared memory; if processing task C is scheduled, it sends processing task C to the GPU for processing, and so on, until no schedulable processing task remains in the shared memory.
The application schedules the next processing task as soon as one processing task has been dispatched, without waiting to receive a processing notification before scheduling; therefore, even if the processing notifications are blocked, the scheduling of processing tasks is not affected, and processing tasks are scheduled in time.
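The S201–S203 loop described above can be sketched as a short simulation. This is a minimal illustrative sketch, not the patent's implementation: a `deque` stands in for the shared memory, and the `dispatch` callback stands in for handing a task to the GPU; both names are assumptions introduced here.

```python
from collections import deque

def drain_shared_memory(shared_memory, dispatch):
    """Actively schedule tasks until the shared memory holds none.

    `shared_memory` is a stand-in for the host/guest shared task queue;
    `dispatch` is a stand-in for sending a task to the processor.
    """
    dispatched = []
    while shared_memory:                  # S201: schedule the next task
        task = shared_memory.popleft()
        dispatch(task)                    # S202: send it to the processor
        dispatched.append(task)           # S203: it becomes the current task
    return dispatched                     # loop ends when nothing is schedulable

# Usage: tasks A, B, C are drained in order without any per-task notification.
shared = deque(["A", "B", "C"])
done = drain_shared_memory(shared, dispatch=lambda t: None)
```

The point of the sketch is the control flow: the loop condition replaces the per-task notification of the prior art, so a blocked notification channel cannot stall scheduling.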
Referring to fig. 3, a flowchart of steps of another task scheduling method according to another embodiment of the present application specifically includes the following steps:
s301, the host runs a main function of the device model.
In an embodiment of the application, the device model is part of a virtualization manager running directly on top of the hardware, or of a virtualization manager running on top of an operating system.
The main function is the starting point of the device model's execution: execution of the device model begins from the main function; if other functions exist, control returns to the main function after the calls to those functions complete; and the main function finally ends the whole program. When the program is executed, the main function is called by the system, after initialization of non-local objects with static storage duration has completed at program launch. In other words, the device model can only start to operate once its main function is running.
S302, the host configures a first hardware resource of the graphics processor for the guest.
The first hardware resource includes the shared memory, and may further include communication resources such as CPU and hard disk.
The application configures a first hardware resource of the graphics processor (a virtualized GPU) for the guest during initialization; the configured first hardware resource is used by the guest to communicate with the graphics processor.
The establishment of the shared memory depends on the virtual machine manager. Each guest has its own corresponding shared memory, and the shared memories of different guests are independent and do not affect each other. After the shared memory is configured for a guest, it is used for communication between the host and the guest until the program ends, and is not released before then.
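The per-guest isolation described above can be illustrated with a small sketch. This is an assumption-laden simulation, not the virtual machine manager's real API: `SharedMemoryRegistry`, `configure`, and `queue` are illustrative names introduced here, and a Python `deque` stands in for an actual shared memory region.

```python
from collections import deque

class SharedMemoryRegistry:
    """Illustrative stand-in: one independent task queue per guest,
    mirroring 'each guest has its own shared memory' in the text."""

    def __init__(self):
        self._queues = {}

    def configure(self, guest_id):
        # Configured once at guest setup; kept until the program ends.
        self._queues[guest_id] = deque()
        return self._queues[guest_id]

    def queue(self, guest_id):
        return self._queues[guest_id]
```

Because each guest gets its own queue object, tasks enqueued by one guest are invisible to every other guest, which is the independence property the paragraph states.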
S303, the host configures a second hardware resource for the device model.
The second hardware resource includes the shared memory. As above, after the second hardware resource is configured for the device model, the shared memory channel between the device model and the guest can be used for communication. The second hardware resource may further include communication resources such as CPU and hard disk.
S304, the host configures a processing function for the device model.
The processing function is used by the host to schedule the next processing task from the shared memory. In the embodiment of the application, the processing function runs on the virtualized GPU. Once the processing function is running, it can actively schedule processing tasks from the shared memory and distribute the scheduled processing tasks to the GPU.
S305, the host creates a thread that processes the processing task of the graphics processor.
Wherein the thread is used to execute the processing function.
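Step S305 can be sketched as follows. This is a minimal sketch under stated assumptions: the helper name `create_processing_thread` is invented for illustration, and Python's `threading` module stands in for whatever threading facility the host actually uses.

```python
import threading

def create_processing_thread(processing_function, *args):
    """S305 sketch: the host creates a dedicated thread whose body is
    the device model's processing function (names are illustrative)."""
    thread = threading.Thread(target=processing_function, args=args, daemon=True)
    thread.start()
    return thread
```

The design point is that the processing function gets its own thread, so scheduling work in the shared memory proceeds independently of the thread that handles notifications.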
S306, the host receives a processing notification sent by the guest through the virtual machine manager.
In the embodiment of the present application, after S301 to S305 are completed, the host waits for a processing notification from the front end in the guest. If a processing notification sent by the guest through the virtual machine manager is received, the subsequent steps are performed.
S307, scheduling the processing task from the shared memory, and sending the scheduled processing task to the processor as the current processing task for processing.
After the processing notification from the guest is received, the processing function is activated; the processing function actively schedules a processing task from the shared memory and sends the scheduled processing task, as the current processing task, to the processor for processing. After the GPU receives the processing task, it processes it.
Further, after the GPU finishes processing the current processing task, the processing function may also be activated, and then the processing function continues to schedule the processing task from the shared memory, and if the processing task is scheduled to the next processing task from the shared memory, the next processing task is sent to the processor for processing.
The specific implementation process of this step refers to S201, and will not be described herein.
S308, scheduling the next processing task from the shared memory.
The specific implementation process of this step refers to S201, and will not be described herein.
S309, if the host computer dispatches to the next processing task from the shared memory, the host computer sends the next processing task to the processor for processing.
The specific implementation process of this step refers to S202, and will not be described herein.
S310, determining the next processing task as the current processing task.
Then, S308 is performed. The specific implementation process of this step refers to S203, and will not be described here again.
S311, if the host does not schedule a next processing task from the shared memory, the host stops scheduling processing tasks from the shared memory.
If no processing task exists in the shared memory, the processing function returns immediately to release the thread resources, so that thread resources are not occupied unnecessarily.
S312, if the host computer receives the processing notification sent by the guest through the virtual machine manager, S308 is executed.
After the back end releases the thread resources, it stops scheduling processing tasks from the shared memory. If a processing notification from the front end is received again while no processing task is being scheduled, the processing function is activated again, and S308 to S312 are executed in a loop.
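The whole S306–S312 cycle — sleep until a guest notification arrives, then actively drain the shared memory without waiting for further notifications — can be sketched as a small state machine. This is an illustrative simulation under assumptions: the class and method names (`BackEnd`, `on_guest_notification`, `run_once`) are invented here, a `deque` stands in for the shared memory, and a `threading.Event` stands in for the virtual machine manager's notification channel.

```python
import threading
from collections import deque

class BackEnd:
    """Sketch of the S306-S312 cycle described in the text."""

    def __init__(self, dispatch):
        self.shared_memory = deque()      # stand-in for the shared memory
        self.notify = threading.Event()   # stand-in for the notification channel
        self.dispatch = dispatch          # stand-in for sending a task to the GPU

    def on_guest_notification(self):
        # Called when the virtual machine manager forwards a guest notification.
        self.notify.set()

    def run_once(self):
        self.notify.wait()                # S306: wait for a processing notification
        self.notify.clear()
        while self.shared_memory:         # S307-S310: actively schedule and dispatch
            self.dispatch(self.shared_memory.popleft())
        # S311: shared memory is empty -> return and release the thread resources
```

One notification is enough to drain any number of queued tasks, which is why the scheme reduces notification traffic compared with one notification per task.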
In summary, the application first of all improves the three-dimensional rendering and display performance of the guest and the smoothness of the picture displayed on the interface of the terminal device. This approach does not require the guest to be any particular virtual machine system; it only requires that the guest can access the virtualization architecture shown in fig. 1.
Secondly, the application avoids delays in GPU processing tasks caused by untimely task scheduling by the device model, and reduces untimely task scheduling caused by blocking between the host and the guest due to frequent processing notifications; the more complex the graphics to be displayed by the guest, the more obvious the improvement in processing performance.
The application does not change the execution logic of the virtual machine manager, and therefore does not occupy additional virtual machine manager resources; it only occupies resources of the device model and has little influence on other virtualized devices (such as network devices and hard disks).
Fig. 4 is a block diagram of a task scheduling device 40 according to an embodiment of the present application. The task scheduling device is applied to a terminal device based on a virtualization technology, the terminal device includes a host, a shared memory and a processor, and the task scheduling device 40 provided in the embodiment of the present application includes:
the scheduling module 41 is configured to schedule a next processing task from the shared memory after the host distributes the current processing task to the processor for processing;
the sending module 42 is configured to send the next processing task to the processor for processing if the host dispatches to the next processing task from the shared memory;
the execution module 43 is configured to determine that the next processing task is the current processing task, and execute the step of scheduling the next processing task from the shared memory by the host.
In another embodiment of the present application, the task scheduling device further includes: a stopping module (not shown) is configured to stop scheduling the processing task from the shared memory if the host does not schedule the next processing task from the shared memory.
In another embodiment of the present application, the terminal device further includes a guest and a virtual machine manager, and the task scheduling device further includes: an execution module (not shown), configured to, after the host stops scheduling processing tasks from the shared memory, execute the step of the host scheduling the next processing task from the shared memory if the host receives a processing notification sent by the guest through the virtual machine manager.
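The stop-and-resume behavior of these two embodiments can be sketched as an event-driven loop. This is only an illustrative model under stated assumptions: a `queue.Queue` stands in for the shared memory, a `threading.Event` stands in for the processing notification delivered through the virtual machine manager, and the class and method names (`HostScheduler`, `on_guest_notification`) are invented.

```python
import queue
import threading

class HostScheduler:
    """Host-side loop: drain tasks from shared memory; when no next task
    is found, stop scheduling and wait for a guest notification."""

    def __init__(self, dispatch):
        self.shared_memory = queue.Queue()  # stand-in for the shared memory
        self.notify = threading.Event()     # stand-in for the processing notification
        self.dispatch = dispatch            # sends a task to the processor
        self.running = True

    def on_guest_notification(self):
        # Invoked when the guest sends a processing notification
        # through the virtual machine manager.
        self.notify.set()

    def run(self):
        while self.running:
            try:
                task = self.shared_memory.get_nowait()
            except queue.Empty:
                # No next task: stop scheduling until the guest notifies us.
                self.notify.wait()
                self.notify.clear()
                continue
            self.dispatch(task)  # the next task becomes the current task
```

Because the host keeps draining the queue after each dispatch, the guest only needs one notification per burst of enqueued tasks rather than one per task, which is where the reduction in notification traffic comes from.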
In another embodiment of the present application, a processor includes: a graphics processor.
In another embodiment of the present application, the task scheduling device further includes an execution module (not shown), configured for the host to run the main function of the device model before the host schedules the next processing task from the shared memory; and a configuration module, configured for the host to configure a first hardware resource of the graphics processor for the guest, the first hardware resource including the shared memory; for the host to configure a second hardware resource for the device model, the second hardware resource including the shared memory; and for the host to configure a processing function for the device model, the processing function being used by the host to schedule the next processing task from the shared memory and running on the graphics processor.
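The initialization order just described can be illustrated as follows. All names (`init_device_model`, the dictionary layout) are hypothetical; the point of the sketch is that the guest's resource slice and the device model's resource slice both reference the same shared memory, and the processing function is registered with the device model.

```python
def init_device_model(shared_memory, processing_fn):
    """Sketch of the setup order: hand the guest a GPU hardware slice that
    includes the shared memory, hand the device model a slice that includes
    the same shared memory, and register the processing function the host
    uses to schedule the next task from that memory."""
    # First hardware resource, configured for the guest.
    guest_resources = {"shared_memory": shared_memory}
    # Second hardware resource, configured for the device model; it refers
    # to the *same* shared memory, so both sides see one task queue.
    device_model = {
        "resources": {"shared_memory": shared_memory},
        "processing_fn": processing_fn,  # run by the host to schedule tasks
    }
    return guest_resources, device_model
```

Because both resource descriptors hold a reference to the same object, a task the guest appends is immediately visible to the device model's processing function.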
In another embodiment of the application, the task scheduling device further includes a creation module (not shown), configured for the host to create a thread that handles the processing tasks of the graphics processor, the thread being used to execute the processing function.
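The dedicated thread can be sketched with Python's `threading` module; the function name and thread name below are assumptions, not from the patent.

```python
import threading

def start_gpu_task_thread(processing_fn):
    """The host creates a thread whose only job is to run the processing
    function that handles the graphics processor's tasks."""
    thread = threading.Thread(
        target=processing_fn,
        name="gpu-task-scheduler",  # hypothetical thread name
        daemon=True,
    )
    thread.start()
    return thread
```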
In another embodiment of the present application, the scheduling module 41 is specifically configured for the host to schedule processing tasks from the shared memory based on the processing function of the device model.
The task scheduling device provided by the embodiment of the present application is configured to execute the technical scheme in the corresponding method embodiment, and its implementation principle and technical effect are similar, and are not described herein again.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 50 of the embodiment of the present application may include: at least one processor 51 (only one processor is shown in fig. 5); and a memory 52 communicatively coupled to the at least one processor. The memory 52 stores instructions executable by the at least one processor 51, and the instructions are executed by the at least one processor 51 to enable the electronic device 50 to execute the technical solution in any of the foregoing method embodiments.
Alternatively, the memory 52 may be separate or integrated with the processor 51.
When the memory 52 is a device separate from the processor 51, the electronic device 50 further includes: a bus 53 for connecting the memory 52 and the processor 51.
The electronic device provided by the embodiment of the application can execute the technical scheme of any of the method embodiments, and the implementation principle and the technical effect are similar, and are not repeated here.
The embodiment of the application also provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the technical solution in any of the foregoing method embodiments is implemented.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the technical solution in any of the foregoing method embodiments.
The embodiment of the application also provides a chip, which includes a processing module and a communication interface, where the processing module can execute the technical solution in the foregoing method embodiments.
Further, the chip also includes a storage module (e.g., a memory), where the storage module is configured to store instructions, and the processing module is configured to execute the instructions stored in the storage module; execution of the instructions stored in the storage module causes the processing module to execute the technical solution in the foregoing method embodiments.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present application may be embodied as being executed directly by a hardware processor, or executed by a combination of hardware and software modules in a processor.
The memory may include a high-speed RAM, and may further include a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the storage medium may reside as discrete components in an electronic device.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (10)

1. A task scheduling method, characterized in that the method is applied to a terminal device based on virtualization technology, the terminal device comprises a host, a shared memory and a processor, and the task scheduling method comprises:
after the host distributes the current processing task to the processor for processing, the host dispatches the next processing task from the shared memory;
if the host schedules the next processing task from the shared memory, the host sends the next processing task to the processor for processing;
and determining the next processing task as the current processing task, and executing the step of scheduling the next processing task from the shared memory by the host.
2. The task scheduling method according to claim 1, characterized in that the task scheduling method further comprises:
and if the host computer does not dispatch the next processing task from the shared memory, stopping dispatching the processing task from the shared memory by the host computer.
3. The task scheduling method according to claim 2, wherein the terminal device further comprises: a guest and a virtual machine manager; after the host stops scheduling processing tasks from the shared memory, the task scheduling method further comprises:
and if the host receives the processing notification sent by the guest through the virtual machine manager, executing the step of scheduling the next processing task from the shared memory by the host.
4. A task scheduling method according to any one of claims 1 to 3, wherein the processor comprises: a graphics processor.
5. The task scheduling method according to claim 4, wherein before the host schedules a next processing task from the shared memory, further comprising:
the host runs a main function of the device model;
the host configures a first hardware resource of the graphics processor for the guest, the first hardware resource comprising the shared memory;
the host configures a second hardware resource for the device model, wherein the second hardware resource comprises the shared memory;
the host configures a processing function for the device model, the processing function is used for the host to schedule a next processing task from the shared memory, and the processing function runs on the graphics processor.
6. The task scheduling method of claim 5, further comprising:
the host creates a thread that processes processing tasks of a graphics processor, the thread for executing the processing functions.
7. The task scheduling method according to claim 6, wherein the host schedules a next processing task from the shared memory, comprising:
the host schedules a next processing task from the shared memory based on the processing function of the device model.
8. A task scheduling device, characterized in that the device is applied to a terminal device based on virtualization technology, the terminal device comprises a host, a shared memory and a processor, and the task scheduling device comprises:
the scheduling module is used for scheduling a next processing task from the shared memory after the host distributes the current processing task to the processor for processing;
the sending module is used for sending the next processing task to the processor for processing if the host schedules the next processing task from the shared memory;
and the execution module is used for determining the next processing task as the current processing task and executing the step of scheduling the next processing task from the shared memory by the host.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the electronic device to perform the task scheduling method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the task scheduling method according to any one of claims 1 to 7.
CN202310803262.0A 2023-06-30 2023-06-30 Task scheduling method and device and electronic equipment Pending CN116594754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310803262.0A CN116594754A (en) 2023-06-30 2023-06-30 Task scheduling method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN116594754A true CN116594754A (en) 2023-08-15

Family

ID=87606522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310803262.0A Pending CN116594754A (en) 2023-06-30 2023-06-30 Task scheduling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116594754A (en)

Similar Documents

Publication Publication Date Title
US9176765B2 (en) Virtual machine system and a method for sharing a graphics card amongst virtual machines
US8978051B2 (en) Method and apparatus for displaying application image
US20160328272A1 (en) Vehicle with multiple user interface operating domains
WO2018119951A1 (en) Gpu virtualization method, device, system, and electronic apparatus, and computer program product
US8789064B2 (en) Mobile device and application switching method
US20170221173A1 (en) Adaptive context switching
CN110362186B (en) Layer processing method and device, electronic equipment and computer readable medium
CN109324903B (en) Display resource scheduling method and device for embedded system
US11301952B2 (en) Full screen processing in multi-application environments
CN109213607B (en) Multithreading rendering method and device
CN107122176B (en) Graph drawing method and device
CN114168271B (en) Task scheduling method, electronic device and storage medium
KR20180059892A (en) Preceding graphics processing unit with pixel tile level granularity
WO2024041328A1 (en) Resource allocation method, apparatus, and carrier
CN114461287A (en) Operating system starting method and device, electronic equipment and storage medium
US20180052700A1 (en) Facilitation of guest application display from host operating system
CN115639977B (en) Android graph synthesis method and device, electronic equipment and storage medium
CN116594754A (en) Task scheduling method and device and electronic equipment
CN115756730A (en) Virtual machine scheduling method and device, GPU and electronic equipment
CN111785229B (en) Display method, device and system
Joe et al. Remote graphical processing for dual display of RTOS and GPOS on an embedded hypervisor
CN116176461B (en) Display method, system, electronic equipment and storage medium of vehicle-mounted instrument interface
CN116028162A (en) Mixed virtualization model based on hardware isolation, electronic equipment and vehicle cabin
CN118093083A (en) Page processing method, page processing device, computer equipment and computer readable storage medium
CN115904295A (en) Multi-screen display control method, device, medium, system, chip and panel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination