Detailed Description
As described in the background art, the existing scheduling method of physical graphics card resources needs to be further optimized.
Taking graphics drawing as an example: an operating system is installed on each virtual machine to run graphics design programs or applications such as games, and these applications generate command packets for graphics drawing through a graphics card driver. Because the drawing contents differ and the drawing areas differ in size, the time required by the command packets to run on the physical engine also differs. In order to enable the physical engine to divide time periods as required and provide GPU services for the multiple virtual machines, the physical graphics card is configured with a scheduler that controls the physical engine to switch among the virtual machines.
In the prior art, the scheduler of a physical display card typically switches at command-packet granularity. Specifically, only after one command packet from one virtual machine has finished executing does the scheduler switch to a command packet from another virtual machine. That is, physical graphics card resources are allocated to different virtual machines with the time required by a whole command packet as the granularity.
However, since the content of each command packet differs, so does its execution time; the time allotted to the virtual machines is therefore non-uniform, and the switching time between virtual machines cannot be controlled. If a certain command packet takes a long time to execute, the command packets of all other virtual machines must wait, with the result that some GPU functions of some virtual machines are blocked at times and user experience suffers.
In view of this, in the scheme of the present invention, a single time unit includes a plurality of time slices, the single time unit is determined according to the frame rate of the physical display card, and the time slices are in one-to-one correspondence with a plurality of virtual machines. Within a single time unit, if it is detected that the time slice corresponding to a first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, execution information of the first command packet is saved and a second command packet from a second virtual machine is loaded; if the second command packet carries an unfinished identifier, execution of the second command packet continues according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used up.
In the above scheme, a single time unit is divided into a plurality of time slices, and the time slices are allocated to the virtual machines, so that in each time slice a command packet from the corresponding virtual machine is executed. In other words, the above scheme divides a single time unit at a finer, per-virtual-machine granularity, so that a command packet of every virtual machine is executed in every time unit and the GPU function requirements of all virtual machines are met evenly.
Because scheduling is performed at this finer per-virtual-machine granularity, virtual machines are switched inside a command packet. Therefore, in the scheme of the embodiment of the invention, when execution of a command packet ends because its time slice is used up even though the packet has not finished, the execution information of the command packet is saved, so that at the next execution the previous context environment can be restored and the previous execution position located, ensuring the execution integrity of the command packet.
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a resource scheduling method of a physical display card in an embodiment of the present invention. The following describes, in a non-limiting manner, an application scenario and a specific scheme of an embodiment of the present invention with reference to fig. 1.
In the application scenario of the embodiment of the invention, a Host machine (Host) of the server may be configured with a physical graphics card and a plurality of virtual machines, where the physical graphics card provides services of GPU functions for the plurality of virtual machines. The virtual machine in the embodiment of the invention refers to a virtual machine with GPU function requirements. For example, a plurality of virtual machines with GPU functionality requirements may be considered a virtual machine group, which includes a plurality of virtual machines.
Further, the physical graphics card provides services of GPU functions for each virtual machine in the virtual machine group. Each virtual machine can generate a command packet through a display card driver, and a physical engine of a physical display card executes the command packet to realize the GPU function of the virtual machine.
In an implementation, a queue may be set for each virtual machine, where the queues are in a one-to-one correspondence with the virtual machines, and each queue may be used to store a to-be-processed command packet from the virtual machine corresponding to the queue, where the queue may also be referred to as a command queue, and the command queue may be a first-in-first-out queue.
Further, for command packets from multiple virtual machines, the scheduler of the physical display card schedules the physical engine to control the physical engine to execute the command packets from different virtual machines at different times, thereby realizing time-division multiplexing of the physical display card resources.
In the scheme of the embodiment of the invention, a single time unit is allocated to the plurality of virtual machines in the virtual machine group. Specifically, a single time unit includes a plurality of time slices, and the time slices are in one-to-one correspondence with the virtual machines. A time slice is a part of a single time unit, and the length of each time slice is smaller than that of the single time unit. That is, time slices are of finer granularity than time units.
As shown in fig. 1, the virtual machine group includes 3 virtual machines, that is, virtual machine 1, virtual machine 2, and virtual machine 3, respectively, and the scheduler allocates each time unit to virtual machine 1, virtual machine 2, and virtual machine 3, respectively. Thus, command packets from 3 virtual machines can be executed in each time unit.
It should be noted that, fig. 1 describes an example in which the number of virtual machines is 3, but the solution provided by the embodiment of the present invention does not limit the number of virtual machines.
In the scheme of the embodiment of the invention, the length of a single time unit may be determined according to the frame rate of the physical display card. Denoting the length of a single time unit as T and the frame rate of the physical display card as X, where X is measured in frames per second (FPS), T = 1/X.
In practical applications, the length of a single time unit is typically on the order of milliseconds, for example about 16 milliseconds at a frame rate of 60 FPS. Different time units may have the same length, while the time slices corresponding to the same virtual machine in different time units may have the same or different lengths.
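As a rough illustration of the relation T = 1/X above (the helper name below is hypothetical, not part of the invention):

```python
def time_unit_ms(frame_rate_fps: float) -> float:
    """Length of a single time unit in milliseconds, given the frame rate X (T = 1/X)."""
    if frame_rate_fps <= 0:
        raise ValueError("frame rate must be positive")
    return 1000.0 / frame_rate_fps

# At 60 FPS a single time unit is about 16.7 ms; at 100 FPS it is 10 ms.
```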
Further, the scheduler may allocate a single time unit to multiple virtual machines.
As one possible implementation, a single time unit is allocated equally to multiple virtual machines. That is, the length of the time slice corresponding to each virtual machine in a single time unit may be T/N, where N is the number of virtual machines.
As another possible implementation, the length of the time slice corresponding to each virtual machine may depend on the weight value of the virtual machine: the greater the weight value, the longer the time slice, that is, the longer the virtual machine occupies the physical engine in a single time unit. The weight value may be a normalized weight value; for example, the weight values of the virtual machines in a virtual machine group may sum to 1. In this case, the length of the time slice corresponding to a virtual machine in a single time unit may be A×T, where A is the weight value of that virtual machine.
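The two allocation policies above (equal division T/N, and weighted division A×T) can be sketched as follows; the function names are hypothetical and the weight values are assumed already normalized:

```python
def equal_slices(T: float, n: int) -> list[float]:
    """Equal allocation: each of the n virtual machines gets a slice of length T/n."""
    return [T / n] * n

def weighted_slices(T: float, weights: list[float]) -> list[float]:
    """Weighted allocation: a VM with normalized weight A gets a slice of length A*T."""
    return [w * T for w in weights]
```

For example, with T = 16 ms and four equally weighted virtual machines, each receives a 4 ms slice.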
In a specific implementation, the physical graphics card may be configured with a weight register, which may be used to store weight values for each virtual machine.
For example, the weight value of a virtual machine may be a fixed weight value, calculated from a preset reference weight value. For instance, a server manufacturer may set the reference weight value of a virtual machine according to the level of its user: the higher the user level, the larger the reference weight value, and hence the larger the fixed weight value. Here, a user refers to a user of the virtual machine.
Further, the reference weight value of each virtual machine in the virtual machine group can be normalized, so that the fixed weight value of each virtual machine is obtained.
In practical applications, when the virtual machines included in the virtual machine group change, the fixed weight values of the virtual machines in the group may be updated according to the reference weight values of the virtual machines in the changed group. Such a change may be, but is not limited to, one or more virtual machines being deleted because their command queues have been emptied, or a new virtual machine being added to the group.
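A minimal sketch of the normalization and update just described, with a hypothetical helper name and reference weight values kept in a plain dict:

```python
def fixed_weights(reference: dict[str, float]) -> dict[str, float]:
    """Normalize preset reference weight values so the group's fixed weights sum to 1."""
    total = sum(reference.values())
    return {vm: w / total for vm, w in reference.items()}

# When a virtual machine is deleted from (or added to) the group, the fixed
# weight values are simply recomputed from the remaining reference values:
reference = {"vm1": 2.0, "vm2": 1.0, "vm3": 1.0}
weights = fixed_weights(reference)   # vm1 -> 0.5
del reference["vm3"]                 # vm3's command queue emptied, vm3 deleted
weights = fixed_weights(reference)   # vm1 -> 2/3
```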
In the scheme, the server manufacturer can flexibly allocate the time length in a single time unit for different users, namely, different time slices can be divided for different virtual machines according to actual requirements, and the flexibility is better.
As another example, the weight value of a virtual machine may be calculated from its fixed weight value and its dynamic weight value, where the dynamic weight value may depend on the length of the command queue corresponding to the virtual machine. For example, the weight value may be a weighted sum of the fixed weight value and the dynamic weight value, where the weight coefficient of the fixed weight value may be greater than that of the dynamic weight value.
The length of the command queue may refer to the number of command packets to be executed in the command queue. Since the dynamic weight value depends on the length of the command queue to which the virtual machine corresponds, the dynamic weight value may be dynamically changed. Specifically, the longer the length of the command queue, the greater the dynamic weight value of the virtual machine.
By adopting this scheme, on the basis of allocating time slices according to the fixed weight values of the virtual machines, the weight values are fine-tuned in combination with the dynamic weight values, so that the time slices of virtual machines with more command packets to execute are lengthened appropriately, improving the overall execution efficiency of the command packets of all virtual machines.
In a specific implementation, before determining the weight values of the current virtual machines, it may first be determined whether they all belong to the same user. If so, the weight values may be calculated from the fixed weight values and the dynamic weight values of the virtual machines; if not, the weight values may be determined from the fixed weight values alone. This avoids the situation where an overly long command queue of one virtual machine inflates its weight value and thus its time slice, degrading the experience of the users of the other virtual machines.
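The combined weighting with the same-user check might be sketched as follows; the coefficient `alpha` and all function names are assumptions for illustration, not values fixed by the invention:

```python
def dynamic_weights(queue_lengths: dict) -> dict:
    """Longer command queue -> larger (normalized) dynamic weight value."""
    total = sum(queue_lengths.values()) or 1
    return {vm: n / total for vm, n in queue_lengths.items()}

def vm_weights(fixed: dict, dynamic: dict, same_user: bool, alpha: float = 0.7) -> dict:
    """Weighted sum of fixed and dynamic weight values.

    alpha is the coefficient of the fixed weight value and is assumed greater
    than the coefficient (1 - alpha) of the dynamic weight value. If the
    virtual machines belong to different users, only the fixed weights are used.
    """
    if not same_user:
        return dict(fixed)
    return {vm: alpha * fixed[vm] + (1 - alpha) * dynamic[vm] for vm in fixed}
```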
In addition, in the solution of the embodiment of the present invention, the order among the plurality of virtual machines may be fixed. That is, the order of the plurality of virtual machines in the virtual machine group may be fixed, and the scheduler may sequentially execute command packets from the respective virtual machines in the single time unit in the order. For example, the order may depend on an Identification (ID) of the virtual machine, which is used to uniquely determine the virtual machine. For example, the smaller the ID, the earlier the command packet for the virtual machine is executed.
Alternatively, the order between the plurality of virtual machines may also depend on the weight values of the virtual machines. For example, the larger the weight value, the earlier the command packet of the virtual machine is executed.
The solution provided by the embodiment of the present invention is further described below with reference to fig. 2 to 5.
Referring to fig. 2, fig. 2 is a flow chart of a resource scheduling method of a physical display card according to an embodiment of the invention. The method may be performed by the server described above, and details regarding the server may be described with reference to fig. 1. Fig. 2 illustrates an example of switching between a first virtual machine and a second virtual machine, and describes and illustrates a resource scheduling method provided by an embodiment of the present invention in a non-limiting manner.
The resource scheduling method of the physical display card shown in fig. 2 may include the following steps:
step S21: in a single time unit, if it is detected that the time slice corresponding to a first virtual machine is used up and a first command packet from the first virtual machine has not finished executing, storing execution information of the first command packet and loading a second command packet from a second virtual machine, wherein the execution information includes: the execution position and the context environment at the moment the time slice is used up;
step S22: if the second command packet has an unfinished identifier, continuing to execute the second command packet according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used up.
In step S21, during the process of executing the first command packet, the duration that the first virtual machine has occupied in the current time unit is monitored, where the first command packet is from the first virtual machine.
Specifically, the time occupied by each virtual machine may be timed. For example, the scheduler may be configured with a timing unit that may be used to time the time occupied by each virtual machine within each time unit.
The time occupied by the virtual machine may refer to the execution duration of a command packet from the virtual machine.
If it is detected that the time slice corresponding to the first virtual machine is used up, execution of the first command packet can be stopped and the execution information of the first command packet saved. Here, the time slice being used up may mean that the duration the first virtual machine has occupied in the current time unit reaches the length of the time slice corresponding to the first virtual machine.
The execution information in the embodiment of the invention may include the execution position and the context environment at the moment the time slice is used up.
Specifically, the execution position when the time slice is used up is the position, within the command packet, of the command symbol being executed by the physical engine at that moment; more specifically, it is recorded which line of command symbols in the command packet the physical engine has reached when the time slice corresponding to the virtual machine is used up.
The context environment when the time slice is used up is the set of environment parameters of the physical engine executing the command packet at that moment.
Specifically, executing a command packet on the physical engine requires the participation of a plurality of units, for example operation units, registers, and the like; taking graphics drawing as an example, the execution process may involve a vertex shader, a rasterization module, a pixel shader, a texture sampling module, a pixel testing module, and so on. Within the execution of one command packet, as different command symbols are executed, the parameters adopted by these units differ. That is, the context environment refers to the parameters adopted by the units participating in the execution of the command packet, and these parameters vary as execution of the packet proceeds.
In the scheme of the embodiment of the invention, if a command packet ends execution because its time slice is used up, the execution position and the context environment at that moment are saved. On the one hand, saving the execution position allows execution to continue from that position in a following time unit. On the other hand, saving the context allows the current context to be restored before the next execution, which facilitates the normal execution of the command packet.
In step S21, after stopping the execution of the first command packet, a second command packet may also be loaded, the second command packet being from the second virtual machine. The second command packet may be a command packet with the forefront ordering in a command queue corresponding to the second virtual machine.
In step S22, it may first be determined whether the second command packet has an unfinished identifier, which may be used to indicate whether the command packet is being executed for the first time.
If the second command packet has an unfinished identifier, it indicates that the second command packet was executed in a previous time unit but did not finish. Therefore, the second command packet can continue to be executed according to its execution information until the time slice corresponding to the second virtual machine is used up.
Specifically, the context environment at the end of the previous execution of the second command packet can be restored according to the context environment in its execution information; execution can then jump to the command symbol indicated by the execution position and continue from there.
It should be noted that, in the solution of the embodiment of the present invention, the end of execution of the command packet (or the end of execution) and the completion of execution of the command packet are different.
Specifically, completion of a command packet's execution means that all the command symbols in the packet related to realizing the GPU function have been executed; for example, referring to fig. 3, completion may mean that all the main command symbols have been executed. The end of a command packet's execution means that execution stops because the time slice is used up, and a command packet whose execution has ended has not necessarily completed. In other words, the end of execution is a suspension of the command packet, whereas completion is its termination.
With continued reference to fig. 2, in step S22, timing of the duration occupied by the second virtual machine starts when execution of the second command packet starts. As in step S21, if the duration occupied by the second virtual machine reaches the length of its time slice, it may be determined that the time slice corresponding to the second virtual machine is used up, and execution of the second command packet ends.
Further, if execution of the second command packet completes before the time slice corresponding to the second virtual machine in the current time unit is used up, a new command packet may be loaded from the command queue corresponding to the second virtual machine and executed, until that time slice is used up.
In step S22, if the loaded second command packet has no unfinished identifier, the second command packet may be executed directly.
The absence of an unfinished identifier indicates that the command packet has not been executed before; that is, a command packet without an unfinished identifier may be regarded as not yet executed.
Further, directly executing the second command packet means executing from the starting position of the main command symbols in the second command packet; in other words, there is no need to restore a context or jump to an execution position. For details on the command packet and the main command symbols, reference may be made to the description below with respect to fig. 3.
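The load-time decision of steps S21/S22 — resume an unfinished packet from its saved execution information, or start a fresh packet from the beginning — could be sketched as follows; the data structures and names are hypothetical simplifications:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionInfo:
    position: int   # index of the last executed main command symbol
    context: dict   # saved parameters of the participating units

@dataclass
class CommandPacket:
    symbols: list                       # main command symbols (opaque here)
    unfinished: bool = False            # models the unfinished identifier
    info: Optional[ExecutionInfo] = None

def load_packet(packet: CommandPacket, engine_context: dict) -> int:
    """Return the index at which execution of `packet` should (re)start."""
    if packet.unfinished and packet.info is not None:
        engine_context.clear()
        engine_context.update(packet.info.context)  # restore the saved context
        return packet.info.position + 1             # resume after the last symbol
    return 0                                        # first execution: start at 0
```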
As can be seen from the above, the embodiment of the present invention has specifically described the switching process between a command packet from a first virtual machine and a command packet from a second virtual machine. The scheme of the embodiment of the invention provides a finer scheduling method: a single time unit is allocated to the virtual machines with time slices as the granularity, switching between virtual machines depends on the time slices rather than on the length of any command packet, and switching can occur inside a command packet.
Because virtual machine switching may occur inside a command packet in the scheme of the embodiment of the invention, the embodiment of the invention also provides a new command packet format, so that execution of a command packet can be suspended and later resumed.
Referring to fig. 3, fig. 3 is a schematic diagram of a command packet according to an embodiment of the present invention.
As shown in fig. 3, the command packet 30 in the embodiment of the present invention may include: a first auxiliary command symbol 31, main command symbols 32, and a second auxiliary command symbol 33, where the first auxiliary command symbol 31 is located before the main command symbols 32 and the second auxiliary command symbol 33 is located after them.
Specifically, the first auxiliary command symbol 31 may be used to read the execution information of the previous execution of the command packet, the main command symbols 32 may be used to implement the GPU function, and the second auxiliary command symbol 33 may be used to save the execution information of the current execution of the command packet.
The provision of the first auxiliary command symbol 31 and the second auxiliary command symbol 33 facilitates switching virtual machines inside a command packet.
Specifically, assume that when the physical engine has executed main command symbol m, the time slice corresponding to the virtual machine from which the command packet 30 comes is used up; it may then be determined whether execution of the command packet 30 is complete.
If it is determined that the command packet 30 has not finished executing, i.e., 1 ≤ m < n, execution may jump to the second auxiliary command symbol 33 in response to the time slice being used up.
The second auxiliary command symbol 33 may include a "storage command symbol" and a "storage space"; executing the storage command symbol saves the context environment and the execution position (i.e., m) at the time main command symbol m was executed into the storage space of the second auxiliary command symbol 33. Further, the command packet 30 may also be marked with the unfinished identifier described above.
If it is determined that execution of the command packet 30 is complete, that is, the execution position of the command packet 30 is main command symbol n, the command packet 30 may be marked with an execution completion identifier.
When the command packet 30 is executed again in a following time unit, the first auxiliary command symbol 31 is executed first. The first auxiliary command symbol 31 includes a "restore command symbol" and a "storage space"; executing the first auxiliary command symbol 31 may mean reading the execution information from the storage space, restoring the context environment according to the execution information, and jumping to main command symbol m. The main command symbols following main command symbol m may then be executed.
In the scheme of the embodiment of the invention, the scheduler needs to switch between virtual machines according to time slices. In order to save and restore the context correctly in cooperation with the scheduler, auxiliary command symbols are added on the basis of the conventional command symbols (namely, the main command symbols above) to assist the scheduler in its scheduling work, ensuring the execution integrity of a command packet even when the same command packet is executed discontinuously.
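The cooperation of the auxiliary command symbols with the scheduler can be modeled abstractly. The following sketch is hypothetical: main command symbols are reduced to list items, the "storage space" to a dict, and "execution" to a trivial context update, but it mimics the save-on-exhaustion and restore-on-resume behavior of fig. 3:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    main: list                      # main command symbols (here: opaque items)
    storage: Optional[dict] = None  # models the auxiliary symbols' "storage space"
    unfinished: bool = False        # models the unfinished identifier

def run_slice(packet: Packet, budget: int, context: dict) -> bool:
    """Execute up to `budget` main command symbols of `packet`.

    Returns True if the packet completed, False if the slice was used up
    mid-packet (execution information saved, unfinished identifier set).
    """
    start = 0
    if packet.unfinished and packet.storage is not None:
        # first auxiliary command symbol: restore context, jump past symbol m
        context.update(packet.storage["context"])
        start = packet.storage["position"] + 1
    executed = 0
    for i in range(start, len(packet.main)):
        if executed == budget:
            # second auxiliary command symbol: save position m and context
            packet.storage = {"position": i - 1, "context": dict(context)}
            packet.unfinished = True
            return False
        context["last"] = packet.main[i]   # stand-in for real execution
        executed += 1
    packet.unfinished = False              # all main command symbols executed
    return True
```

For instance, a packet of five symbols run with a budget of two symbols suspends after symbol 2, and a later call resumes from symbol 3 and completes.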
Referring to fig. 4, fig. 4 is a flowchart illustrating another method for scheduling resources of a physical display card according to an embodiment of the present application. The resource scheduling method provided by the embodiment of the application is described in detail below with reference to fig. 4. The scheme shown in fig. 4 may include steps S41 to S48.
Step S41, executing the current command packet.
Step S42, judging whether the time slice of the current virtual machine is used up.
The current command packet refers to a command packet executed by the current physical engine, and the current virtual machine refers to a virtual machine from which the current command packet comes. In the process of step S41, step S42 may be performed.
If the time slice of the current virtual machine is used up, step S43 is executed; if not, step S41 continues, that is, the current command packet continues to be executed.
In other words, the duration occupied by the current virtual machine is timed during execution of the current command packet, and step S43 is executed when the time slice of the current virtual machine is used up.
Step S43, judging whether the current command packet has finished executing.
If the current command packet has finished executing, step S45 is performed; if not, step S44 is performed, followed by step S45.
Step S44, executing the second auxiliary command symbol.
Specifically, the execution information of the current command packet is saved by executing the second auxiliary command symbol.
After step S44 and before step S45, the current command packet may also be marked with the unfinished identifier.
Step S45, obtaining a command packet of the next virtual machine.
If step S45 is performed directly after step S43, the current command packet may be marked with the execution completion identifier before step S45 is performed.
Further, after the command packet of the next virtual machine is acquired, the command packet may be used as a current command packet, and the virtual machine may be used as a current virtual machine.
After step S45 is executed, step S46 is continued.
Step S46, judging whether the current command packet has an unfinished identifier.
If the determination result is yes, step S48 is executed, and if the determination result is no, step S47 is executed.
Step S47, skip the first auxiliary command symbol.
If the current command packet does not have the unfinished identifier, indicating that it is being executed for the first time, the first auxiliary command symbol is skipped, and the process returns to step S41 to start execution from the first main command symbol in the current command packet.
Step S48, executing the first auxiliary command symbol.
If the current command packet has the unfinished identifier, the context environment at the end of its previous execution is restored by executing the first auxiliary command symbol, and execution jumps to the position where the previous execution ended. The process then returns to step S41 to continue executing the current command packet.
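The overall flow of fig. 4 across one time unit can be approximated with an abstract tick model — a hypothetical simplification in which each command packet is just a count of remaining work, so packet completion (steps S43/S45) and mid-packet suspension (step S44) both fall out of simple arithmetic:

```python
def run_time_unit(queues: dict, slices: dict) -> None:
    """One time unit of the fig. 4 flow, measured in abstract 'ticks'.

    queues: vm -> list of remaining-tick counts, one per pending command packet
    slices: vm -> time-slice budget (in ticks) for this time unit
    """
    for vm, budget in slices.items():
        while budget > 0 and queues[vm]:
            work = queues[vm][0]
            used = min(work, budget)
            budget -= used
            if used == work:
                queues[vm].pop(0)            # packet completed: load the next one
            else:
                queues[vm][0] = work - used  # slice used up mid-packet: progress saved
```

For example, with queues {"vm1": [3, 2], "vm2": [10]} and 4-tick slices, vm1 completes its first packet, partially executes the second, and vm2's long packet is suspended with 6 ticks remaining rather than blocking vm1.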
For more details regarding the method illustrated in fig. 4, reference is made to the above description regarding fig. 1 to 3, and no further description is given here.
By the above method, virtual machines can be switched inside a command packet, and during switching the context environment in the physical engine, such as the states of the operation units and registers, can be saved and restored as required, ensuring the execution integrity of the command packet. The scheme of the embodiment of the invention is not limited by the execution time of command packets; different time slices can be allocated to the virtual machines as required, so that a server manufacturer can configure the virtual machines more flexibly to meet the requirements of different clients.
It will be appreciated that in a specific implementation, the method may be implemented in a software program running on a processor integrated within a chip or a chip module; alternatively, the method may be implemented in hardware or a combination of hardware and software, for example, implemented in a dedicated chip or chip module, or implemented in a dedicated chip or chip module in combination with a software program.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a resource scheduling device of a physical display card in an embodiment of the present invention, where the device may be deployed on a server, the physical display card provides GPU functions for a plurality of virtual machines, where the plurality of virtual machines includes at least a first virtual machine and a second virtual machine, and the device shown in fig. 5 may include:
the first processing module 51 is configured to, in a single time unit, if it is detected that the time slice corresponding to the first virtual machine is used up and a first command packet from the first virtual machine has not finished executing, store execution information of the first command packet and load a second command packet from the second virtual machine, where the execution information includes: the execution position and the context environment at the moment the time slice is used up;
the second processing module 52 is configured to, if the second command packet has an unfinished identifier, continue executing the second command packet according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used up;
where the length of the single time unit is determined according to the frame rate of the physical display card, the single time unit includes a plurality of time slices, and the plurality of time slices are in one-to-one correspondence with the plurality of virtual machines.
The apparatus may be, for example, a scheduler.
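As a purely hypothetical numeric illustration of how the single time unit may be derived from the frame rate: one natural choice is to take the time unit as one frame period (the reciprocal of the frame rate) and divide it among the virtual machines. The even split below is an assumption for simplicity; as noted above, the scheme also allows unequal slices per virtual machine. The function name and the 60 Hz / 4-VM figures are illustrative, not values from the disclosure.

```python
def time_slice_us(frame_rate_hz: float, num_vms: int) -> float:
    """Length of each VM's time slice, in microseconds.

    The single time unit is taken as one frame period (1 / frame rate)
    and, for this illustration, split evenly into one slice per VM.
    """
    time_unit_us = 1_000_000.0 / frame_rate_hz  # one frame period
    return time_unit_us / num_vms

# e.g. at 60 Hz with 4 virtual machines:
# time unit ~ 16666.7 microseconds, each slice ~ 4166.7 microseconds
```

Tying the time unit to the frame rate in this way gives every virtual machine a bounded share of the physical engine within each displayed frame, which is what prevents one long command packet from blocking the others.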
For more details on the working principle, working method, and beneficial effects of the resource scheduling device for the physical display card in the embodiment of the present application, reference may be made to the above description of the resource scheduling method for the physical display card, which is not repeated here.
The embodiment of the application also provides a computer-readable storage medium on which a computer program is stored, and the computer program, when run by a processor, performs the steps of the resource scheduling method of the physical display card. The storage medium may include a ROM, a RAM, a magnetic disk, an optical disk, or the like. The storage medium may also include a non-volatile memory or a non-transitory memory, or the like.
The embodiment of the application also provides a server, which includes a memory and a processor, where the memory stores a computer program capable of running on the processor, and the processor, when running the computer program, performs the steps of the resource scheduling method of the physical display card.
It should be appreciated that, in the embodiment of the present application, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should also be appreciated that the memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or the computer program are loaded or executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example by wired or wireless means from one website, computer, server, or data center to another.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other manners. For example, the device embodiments described above are merely illustrative: the division of the units is only a division by logical function, and other divisions may be adopted in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. The units described as separate may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, each unit may exist physically on its own, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware, or in hardware plus software functional units. For a device or product applied to or integrated on a chip, each module/unit it contains may be implemented in hardware such as a circuit, or at least some of the modules/units may be implemented as a software program running on a processor integrated inside the chip, with the remaining modules/units (if any) implemented in hardware such as a circuit. For a device or product applied to or integrated in a chip module, each module/unit it contains may likewise be implemented in hardware such as a circuit, and different modules/units may be located in the same component (such as a chip or a circuit module) or in different components of the chip module; alternatively, at least some of the modules/units may be implemented as a software program running on a processor integrated inside the chip module, with the remaining modules/units (if any) implemented in hardware such as a circuit. For a device or product applied to or integrated in a terminal, each module/unit it contains may be implemented in hardware such as a circuit, and different modules/units may be located in the same component (such as a chip or a circuit module) or in different components of the terminal; alternatively, at least some of the modules/units may be implemented as a software program running on a processor integrated inside the terminal, with the remaining modules/units (if any) implemented in hardware such as a circuit.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In this context, the character "/" indicates that the associated objects before and after it are in an "or" relationship.
The term "plurality" as used in the embodiments of the present application means two or more. The first, second, etc. descriptions in the embodiments of the present application are only used for illustrating and distinguishing the description objects, and no order is used, nor is the number of the devices in the embodiments of the present application limited, and no limitation on the embodiments of the present application should be construed.
Although the present application is disclosed above, the present application is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the application, and the scope of the application should therefore be determined by the appended claims.