CN116521376B - Resource scheduling method and device for physical display card, storage medium and terminal - Google Patents


Info

Publication number
CN116521376B
CN116521376B (application CN202310794980.6A)
Authority
CN
China
Prior art keywords
virtual machine
command packet
command
weight value
execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310794980.6A
Other languages
Chinese (zh)
Other versions
CN116521376A (en)
Inventor
王彦杰
刘运兵
赵旋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Li Computing Technology Shanghai Co ltd
Original Assignee
Li Computing Technology Shanghai Co ltd
Nanjing Lisuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Li Computing Technology Shanghai Co ltd, Nanjing Lisuan Technology Co ltd filed Critical Li Computing Technology Shanghai Co ltd
Priority to CN202310794980.6A priority Critical patent/CN116521376B/en
Publication of CN116521376A publication Critical patent/CN116521376A/en
Application granted granted Critical
Publication of CN116521376B publication Critical patent/CN116521376B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A resource scheduling method and device for a physical display card, a storage medium, and a terminal. The method includes: within a single time unit, if it is detected that the time slice corresponding to a first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, saving execution information of the first command packet and loading a second command packet from a second virtual machine, where the execution information includes the execution position and the context environment at the moment the time slice is used up; if the second command packet has an unfinished identifier, continuing to execute the second command packet according to its execution information until the time slice corresponding to the second virtual machine is used up. The length of a single time unit is determined according to the frame rate of the physical display card; a single time unit includes a plurality of time slices, and the time slices correspond one-to-one with the virtual machines. The invention provides a more optimized resource scheduling method that helps satisfy the GPU function requirements of each virtual machine evenly.

Description

Resource scheduling method and device for physical display card, storage medium and terminal
Technical Field
The present invention relates to the field of graphics processing technologies, and in particular, to a method and apparatus for scheduling resources of a physical graphics card, a storage medium, and a terminal.
Background
Graphics processing unit (GPU) virtualization is a technology that divides physical display card resources into multiple virtual display card resources and allocates these virtual resources to multiple virtual machines.
Specifically, multiple virtual machines may be installed on a host, and many of their workloads use GPU functionality, such as graphics rendering, general-purpose computing, and artificial intelligence. GPU virtualization can virtualize several lightweight display cards on one physical display card, and a virtual machine can accelerate GPU-related functions through its lightweight display card.
However, a lightweight display card has no physical engine of its own; acceleration of graphics rendering, general-purpose computing, artificial intelligence, and other GPU functions is ultimately provided by the physical display card. One physical display card therefore needs to provide GPU services to multiple virtual machines in a time-division-multiplexed manner. When existing resource scheduling methods are used to schedule physical display card resources and virtual machine concurrency is high, some virtual machines are still prone to stalling, so the effect of resource scheduling needs further optimization.
Disclosure of Invention
The technical aim of the invention is to provide a more optimized resource scheduling method for the situation that a physical display card provides services of GPU functions for a plurality of virtual machines.
In view of the above, the present invention provides a resource scheduling method for a physical display card, where the physical display card provides GPU function services for a plurality of virtual machines including at least a first virtual machine and a second virtual machine. The method includes: within a single time unit, if it is detected that the time slice corresponding to the first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, saving execution information of the first command packet and loading a second command packet from the second virtual machine, where the execution information includes the execution position and the context environment at the moment the time slice is used up; if the second command packet has an unfinished identifier, continuing to execute the second command packet according to its execution information until the time slice corresponding to the second virtual machine is used up. The length of the single time unit is determined according to the frame rate of the physical display card; the single time unit includes a plurality of time slices, and the time slices correspond one-to-one with the virtual machines.
Optionally, the length of the time slice corresponding to each virtual machine is determined according to the weight value of the virtual machine.
Optionally, the weight value of the virtual machine is calculated according to the fixed weight value of the virtual machine and the dynamic weight value of the virtual machine; the fixed weight value depends on a preset reference weight value, the dynamic weight value depends on the length of a command queue corresponding to the virtual machine, and the command queue is used for storing command packets to be executed from the virtual machine.
Optionally, continuing to execute the second command packet according to its execution information includes: restoring, from the context environment in the execution information, the context environment in effect when execution of the second command packet last ended; and jumping to the command indicated by the execution position to continue executing the second command packet.
Optionally, the command packet includes a first auxiliary command symbol, a main command symbol, and a second auxiliary command symbol, where the first auxiliary command symbol is located before the main command symbol, and the second auxiliary command symbol is located after the main command symbol; the first auxiliary command symbol is used for reading the execution information when the last execution of the command packet is finished, the main command symbol is used for realizing the GPU function, and the second auxiliary command symbol is used for storing the execution information when the current execution of the command packet is finished.
Optionally, the method further includes: if the second command packet has a not-executed identifier, starting execution from the beginning of the main command symbol in the second command packet.
Optionally, the method further includes: if the first command packet finishes executing, skipping the second auxiliary command symbol.
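The packet layout and start rules described above can be sketched in Python. This is purely illustrative: the class, field, and status names are assumptions, not taken from the patent, and the restore preamble in a real packet would also reposition execution at the saved command:

```python
class CommandPacket:
    """Illustrative layout: [first_aux | main commands | second_aux].
    All names and status values here are assumptions for illustration."""
    def __init__(self, main_commands):
        self.first_aux = "RESTORE_EXEC_INFO"   # reads execution info saved last time
        self.main = list(main_commands)        # command symbols doing the GPU work
        self.second_aux = "SAVE_EXEC_INFO"     # runs only if the slice expires mid-packet
        self.status = "not_executed"           # -> "unfinished" -> "finished"

def commands_to_issue(packet):
    """Start rules from the text: a fresh packet skips the restore preamble
    and begins at the main body; an unfinished packet runs the preamble
    first to restore the saved position and context."""
    if packet.status == "not_executed":
        return packet.main[:]
    return [packet.first_aux] + packet.main

fresh = CommandPacket(["DRAW_A", "DRAW_B"])
resumed = CommandPacket(["DRAW_A", "DRAW_B"])
resumed.status = "unfinished"
```

If the packet completes within its time slice, the save postamble (`second_aux`) is skipped, since there is no execution information worth storing.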
The invention also provides a resource scheduling device for a physical display card, where the physical display card provides GPU function services for a plurality of virtual machines including at least a first virtual machine and a second virtual machine. The device includes: a first processing module configured to, within a single time unit, if it is detected that the time slice corresponding to the first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, save execution information of the first command packet and load a second command packet from the second virtual machine, where the execution information includes the execution position and the context environment at the moment the time slice is used up; and a second processing module configured to, if the second command packet has an unfinished identifier, continue executing the second command packet according to its execution information until the time slice corresponding to the second virtual machine is used up. The length of the single time unit is determined according to the frame rate of the physical display card; the single time unit includes a plurality of time slices, and the time slices correspond one-to-one with the virtual machines.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the resource scheduling method of a physical display card described above.
The invention also provides a server including a memory and a processor, the memory storing a computer program runnable on the processor, where the processor, when running the computer program, performs the steps of the above resource scheduling method for a physical display card.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
in the scheme of the invention, a single time unit includes a plurality of time slices, the length of the single time unit is determined according to the frame rate of the physical display card, and the time slices correspond one-to-one with the virtual machines. Within a single time unit, if it is detected that the time slice corresponding to a first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, the execution information of the first command packet is saved and a second command packet from a second virtual machine is loaded; if the second command packet has an unfinished identifier, it continues to execute according to its execution information until the time slice corresponding to the second virtual machine is used up.
In the above scheme, a single time unit is divided into a plurality of time slices that are allocated to the virtual machines, so that within each time slice a command packet from the corresponding virtual machine is executed. In other words, the single time unit is partitioned at the finer granularity of virtual machines (time slices), so that a command packet from every virtual machine is executed within every time unit, satisfying the GPU function requirements of the virtual machines evenly.
Because the technical scheme of the invention schedules at virtual-machine granularity, virtual machines may be switched in the middle of a command packet. Therefore, when a command packet has not finished executing but its execution ends because the time slice is used up, the execution information of the command packet is saved, so that at the next execution the previous context environment can be restored and the previous execution position located, ensuring the integrity of a command packet executed across time units.
Further, in the scheme of the invention, the time slice corresponding to a virtual machine is determined according to the virtual machine's weight value. This allows the time within a single time unit to be allocated flexibly to different users, offering better flexibility.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a resource scheduling method of a physical display card in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for scheduling resources of a physical display card according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a command packet in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart of another method for scheduling resources of a physical graphic card according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a resource scheduling device of a physical display card in an embodiment of the present invention.
Detailed Description
As described in the background art, the existing scheduling method of physical graphics card resources needs to be further optimized.
Take graphics drawing as an example. An operating system runs on each virtual machine, hosting applications such as graphic design programs or games. Through the display card driver, an application generates command packets for graphics drawing; since the drawn content and the size of the drawing area differ, the time each command packet needs on the physical engine also differs. To let the physical engine divide its time as required and provide GPU services to multiple virtual machines, the physical display card is configured with a scheduler that controls how the physical engine switches among the virtual machines.
In the prior art, the scheduler of a physical display card typically switches at command-packet granularity: after one command packet from one virtual machine finishes executing, the scheduler switches to a command packet from another virtual machine. That is, physical display card resources are allocated to different virtual machines at the granularity of the time each command packet requires.
However, because the content of each command packet differs, so do their execution times; the time apportioned to each virtual machine is uneven, and the switching time between virtual machines cannot be controlled. If one command packet takes a long time to execute, command packets from all other virtual machines must wait, so the GPU functions of some virtual machines stall at times, hurting user experience.
In view of this, in the scheme of the present invention, a single time unit includes a plurality of time slices, the length of the single time unit is determined according to the frame rate of the physical display card, and the time slices correspond one-to-one with the virtual machines. Within a single time unit, if it is detected that the time slice corresponding to a first virtual machine is used up while a first command packet from the first virtual machine has not finished executing, the execution information of the first command packet is saved and a second command packet from a second virtual machine is loaded; if the second command packet has an unfinished identifier, it continues to execute according to its execution information until the time slice corresponding to the second virtual machine is used up.
In the above scheme, a single time unit is divided into a plurality of time slices that are allocated to the virtual machines, so that within each time slice a command packet from the corresponding virtual machine is executed. In other words, the single time unit is partitioned at the finer granularity of virtual machines, so that a command packet from every virtual machine is executed within every time unit, satisfying the GPU function requirements of the virtual machines evenly.
Because scheduling is performed at the finer granularity of virtual machines, virtual machines are switched inside command packets. Therefore, in the scheme of the embodiments of the invention, when a command packet has not finished executing but its execution ends because the time slice is used up, its execution information is saved, so that at the next execution the previous context environment is restored and the previous execution position located, ensuring the execution integrity of the command packet.
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a resource scheduling method of a physical display card in an embodiment of the present invention. The following describes, in a non-limiting manner, an application scenario and a specific scheme of an embodiment of the present invention with reference to fig. 1.
In the application scenario of the embodiment of the invention, a Host machine (Host) of the server may be configured with a physical graphics card and a plurality of virtual machines, where the physical graphics card provides services of GPU functions for the plurality of virtual machines. The virtual machine in the embodiment of the invention refers to a virtual machine with GPU function requirements. For example, a plurality of virtual machines with GPU functionality requirements may be considered a virtual machine group, which includes a plurality of virtual machines.
Further, the physical graphics card provides services of GPU functions for each virtual machine in the virtual machine group. Each virtual machine can generate a command packet through a display card driver, and a physical engine of a physical display card executes the command packet to realize the GPU function of the virtual machine.
In an implementation, a queue may be set up for each virtual machine, with queues corresponding one-to-one to virtual machines. Each queue stores the pending command packets from its virtual machine; such a queue may be called a command queue, and it may be a first-in, first-out queue.
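A per-VM FIFO command queue of this kind can be sketched minimally in Python; the class and method names are illustrative assumptions, not part of the patent:

```python
from collections import deque

class CommandQueue:
    """Hypothetical per-VM first-in, first-out queue of pending command packets."""
    def __init__(self, vm_id):
        self.vm_id = vm_id
        self._packets = deque()

    def enqueue(self, packet):
        self._packets.append(packet)      # newest packet goes to the back

    def dequeue(self):
        return self._packets.popleft()    # oldest packet is executed first

    def __len__(self):
        return len(self._packets)         # queue length later feeds the dynamic weight

q = CommandQueue(vm_id=1)
q.enqueue("draw_frame_0")
q.enqueue("draw_frame_1")
first = q.dequeue()
```

The queue length exposed here is the quantity the dynamic weight value (described below) depends on.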
Further, for command packets from multiple virtual machines, the scheduler of the physical display card schedules the physical engine to control the physical engine to execute the command packets from different virtual machines at different times, thereby realizing time-division multiplexing of the physical display card resources.
In the scheme of the embodiments of the invention, a single time unit is distributed among the virtual machines of the virtual machine group. Specifically, a single time unit includes a plurality of time slices, and the time slices correspond one-to-one with the virtual machines. A time slice is a portion of a single time unit, and each time slice is shorter than the time unit itself; that is, time slices are finer-grained than time units.
As shown in fig. 1, the virtual machine group includes 3 virtual machines, that is, virtual machine 1, virtual machine 2, and virtual machine 3, respectively, and the scheduler allocates each time unit to virtual machine 1, virtual machine 2, and virtual machine 3, respectively. Thus, command packets from 3 virtual machines can be executed in each time unit.
It should be noted that, fig. 1 describes an example in which the number of virtual machines is 3, but the solution provided by the embodiment of the present invention does not limit the number of virtual machines.
In the scheme of the embodiments of the invention, the length of a single time unit may be determined according to the frame rate of the physical display card. Denote the length of a single time unit as T and the frame rate of the physical display card as X, with X measured in frames per second (FPS); then T = 1/X.
In practice, the length of a single time unit is typically on the order of milliseconds, for example 16 milliseconds. Different time units may have the same length, while the time slice of a given virtual machine may have the same or a different length in different time units.
Further, the scheduler may allocate a single time unit to multiple virtual machines.
As one possible implementation, a single time unit is allocated equally to multiple virtual machines. That is, the length of the time slice corresponding to each virtual machine in a single time unit may be T/N, where N is the number of virtual machines.
As another possible implementation, the length of the time slices corresponding to each virtual machine may depend on the weight value of the virtual machine. Specifically, the greater the weight value, the greater the length of the time slice corresponding to the virtual machine. That is, the greater the weight value, the longer the virtual machine occupies the physical engine in a single time unit. Wherein the weight value may be a normalized weight value. For example, the sum of the weight values of multiple virtual machines within a virtual machine group may be 1. Thus, the length of the time slice corresponding to each virtual machine in a single time unit may be a×t, where a is a weight value of the virtual machine.
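Both allocation rules above (the equal T/N split and the weight-proportional a×T split) can be sketched together; the function name and dict layout are illustrative assumptions:

```python
def time_slice_lengths(t_ms, weights):
    """Each VM's slice is a*T, where a is its normalized weight.
    Equal weights reduce to the T/N equal split described above."""
    total = sum(weights.values())
    return {vm: t_ms * w / total for vm, w in weights.items()}

# vm1 has twice the weight of vm2 and vm3, so it gets half the time unit
slices = time_slice_lengths(16.0, {"vm1": 2, "vm2": 1, "vm3": 1})
```

Note that the slices always sum to T, so every virtual machine is guaranteed some execution time in every time unit.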
In a specific implementation, the physical graphics card may be configured with a weight register, which may be used to store weight values for each virtual machine.
For example, the weight value of a virtual machine may simply be its fixed weight value, which depends on a preset reference weight value.
Specifically, the fixed weight value may be calculated from the reference weight value, which is preset; for example, a server vendor may set a virtual machine's reference weight value according to the user's tier: the higher the tier, the larger the reference weight value and hence the larger the fixed weight value. Here, a user refers to a user of a virtual machine.
Further, the reference weight value of each virtual machine in the virtual machine group can be normalized, so that the fixed weight value of each virtual machine is obtained.
In practice, when the membership of the virtual machine group changes, the fixed weight values of the virtual machines in the group may be updated according to the reference weight values of the virtual machines in the changed group. A change may be, without limitation, that the command queues of one or more virtual machines are emptied so those virtual machines are removed, or that a new virtual machine is added to the group.
With this scheme, the server vendor can flexibly allocate the time within a single time unit to different users, that is, divide different time slices among different virtual machines according to actual requirements, providing better flexibility.
Also exemplary, the weight value of the virtual machine may be calculated according to a fixed weight value of the virtual machine and a dynamic weight value of the virtual machine. Wherein the dynamic weight value may depend on a length of a command queue corresponding to the virtual machine. For example, the weight value of the virtual machine may be obtained by performing weighted summation on a fixed weight value and a dynamic weight value, where a weight coefficient corresponding to the fixed weight value may be greater than a weight coefficient of the dynamic weight value.
The length of the command queue may refer to the number of command packets to be executed in the command queue. Since the dynamic weight value depends on the length of the command queue to which the virtual machine corresponds, the dynamic weight value may be dynamically changed. Specifically, the longer the length of the command queue, the greater the dynamic weight value of the virtual machine.
With this scheme, time slices are allocated on the basis of the fixed weight values and then fine-tuned using the dynamic weight values, so that the time slices of virtual machines with more pending command packets are lengthened appropriately, improving the command packet execution efficiency of the virtual machines as a whole.
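A minimal sketch of the combined weight follows. The patent only states that the weighted sum gives the fixed part the larger coefficient and that a longer queue yields a larger dynamic part; the specific coefficient `alpha`, the queue cap, and all names are illustrative assumptions:

```python
def combined_weight(fixed_w, queue_len, alpha=0.8, queue_cap=16):
    """Weighted sum of fixed and dynamic weights. alpha > 0.5 keeps the
    fixed weight dominant, as the text suggests; alpha=0.8 and
    queue_cap=16 are made-up example values."""
    dynamic_w = min(queue_len, queue_cap) / queue_cap  # longer queue -> larger dynamic weight
    return alpha * fixed_w + (1 - alpha) * dynamic_w

w = combined_weight(fixed_w=0.5, queue_len=8)   # dynamic part = 8/16 = 0.5
```

In a full implementation the combined weights of all virtual machines in the group would then be re-normalized before computing time slices.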
In a specific implementation, before determining the weight values of the current virtual machines, it may first be determined whether they all belong to the same user. If so, the weight values may be calculated from both the fixed and dynamic weight values of the virtual machines; if not, the weight values may be determined from the fixed weight values alone. This avoids the situation where one virtual machine's overly long command queue inflates its weight value, giving it an overly long time slice and degrading the experience of the users of the other virtual machines.
In addition, in the scheme of the embodiments of the invention, the order of the virtual machines may be fixed. That is, the order of the virtual machines within the group is fixed, and within a single time unit the scheduler executes command packets from the virtual machines in that order. For example, the order may depend on the virtual machine's identification (ID), which uniquely identifies the virtual machine: the smaller the ID, the earlier that virtual machine's command packets are executed.
Alternatively, the order between the plurality of virtual machines may also depend on the weight values of the virtual machines. For example, the larger the weight value, the earlier the command packet of the virtual machine is executed.
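The two ordering rules (ascending ID, or descending weight) amount to a simple sort; the function signature and dict fields are illustrative assumptions:

```python
def execution_order(vms, by="id"):
    """Order VMs within a time unit: by ascending ID (fixed order),
    or by descending weight value."""
    if by == "id":
        return sorted(vms, key=lambda v: v["id"])
    return sorted(vms, key=lambda v: -v["weight"])

vms = [{"id": 1, "weight": 0.2}, {"id": 2, "weight": 0.5}, {"id": 3, "weight": 0.3}]
by_id = [v["id"] for v in execution_order(vms)]
by_weight = [v["id"] for v in execution_order(vms, by="weight")]
```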
The solution provided by the embodiment of the present invention is further described below with reference to fig. 2 to 5.
Referring to fig. 2, fig. 2 is a flow chart of a resource scheduling method of a physical display card according to an embodiment of the invention. The method may be performed by the server described above, and details regarding the server may be described with reference to fig. 1. Fig. 2 illustrates an example of switching between a first virtual machine and a second virtual machine, and describes and illustrates a resource scheduling method provided by an embodiment of the present invention in a non-limiting manner.
The resource scheduling method of the physical display card shown in fig. 2 may include the following steps:
step S21: in a single time unit, if it is detected that a time slice corresponding to a first virtual machine is used and a first command packet from the first virtual machine is not executed, storing execution information of the first command packet and loading a second command packet from a second virtual machine, wherein the execution information includes: the execution position and the context environment when the time slices are used;
Step S22: and if the second command packet has an unfinished identifier, continuing to execute the second command packet according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used.
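Steps S21 and S22 can be illustrated with a small Python simulation. Packets are modeled as lists of commands, each costing one time tick; `saved` stands in for the stored execution information (packet plus resume position). Everything here is an illustrative assumption — in the patent this logic runs in the display card's scheduler, and the saved information also includes the context environment:

```python
from collections import deque

def run_time_unit(queues, slices, saved):
    """One time unit of the S21/S22 loop over all VMs.
    queues: vm -> deque of packets; slices: vm -> tick budget;
    saved: vm -> (packet, resume_position), the saved execution info."""
    executed = []
    for vm, budget in slices.items():
        # S22: resume an unfinished packet if execution info was saved
        packet, pos = saved.pop(vm, (None, 0))
        if packet is None:
            if not queues[vm]:
                continue
            packet, pos = queues[vm].popleft(), 0
        while budget > 0 and pos < len(packet):
            executed.append((vm, packet[pos]))
            pos += 1
            budget -= 1
        if pos < len(packet):          # S21: slice used up mid-packet
            saved[vm] = (packet, pos)  # save the execution position
    return executed

queues = {"vm1": deque([["a", "b", "c"]]), "vm2": deque([["x"]])}
slices = {"vm1": 2, "vm2": 2}
saved = {}
unit1 = run_time_unit(queues, slices, saved)  # vm1's packet is preempted after "b"
unit2 = run_time_unit(queues, slices, saved)  # vm1 resumes at "c" next time unit
```

The simulation shows the key property of the scheme: vm1's three-command packet spans two time units, yet vm2 still runs within the first time unit instead of waiting behind it.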
In step S21, during the process of executing the first command packet, the duration that the first virtual machine has occupied in the current time unit is monitored, where the first command packet is from the first virtual machine.
Specifically, the time occupied by each virtual machine may be timed. For example, the scheduler may be configured with a timing unit that may be used to time the time occupied by each virtual machine within each time unit.
The time occupied by the virtual machine may refer to the execution duration of a command packet from the virtual machine.
If it is detected that the time slice corresponding to the first virtual machine has been used up, the execution of the first command packet may be stopped, and the execution information of the first command packet may be saved. The time slice corresponding to the first virtual machine being used up may mean that the duration occupied by the first virtual machine in the current time unit has reached the length of that time slice.
The execution information in the embodiment of the invention can include the execution position and the context when the time slice is used.
Specifically, the execution position when the time slice is used up is the position, within the command packet, of the command symbol being executed by the physical engine at that moment. More specifically, when the time slice corresponding to the virtual machine is used up, it is recorded which row of command symbols in the command packet the physical engine has reached.
The context when the time slice is used is a set of external environment parameters of the physical engine executing the command packet when the time slice corresponding to the virtual machine is used.
Specifically, executing a command packet on the physical engine requires the participation of a plurality of units, for example an operation unit, registers, and the like; within the execution of a single command packet, different command symbols are executed and the parameters adopted by these units differ accordingly. That is, the context refers to the parameters adopted by the units participating in the execution of the command packet, and these parameters vary as the execution of the command packet progresses.
More specifically, taking graphics drawing as an example, the execution process involves a vertex shader, a rasterization module, a pixel shader, a texture sampling module, a pixel testing module, and the like; within the execution of the same command packet, different command symbols are executed and the parameters adopted by these units differ accordingly.
In the scheme of the embodiment of the invention, if the execution of a command packet ends because the time slice is used up, the execution position and the context environment of the command packet at that moment are saved. On the one hand, saving the execution position allows execution to continue from that position in the next time unit. On the other hand, saving the context allows the current context to be restored before the next execution, which facilitates the normal execution of the command packet.
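A minimal sketch of saving and restoring this execution information might look as follows. The `ExecutionInfo` structure, the function names, and the idea of modeling unit parameters as a dictionary are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionInfo:
    position: int  # row of the last command symbol executed when the slice ran out
    context: dict = field(default_factory=dict)  # parameters of the participating units

def save_execution_info(position, unit_params):
    # Snapshot taken the moment the time slice is used up; copy the
    # parameters so later mutation of the live units does not corrupt it.
    return ExecutionInfo(position=position, context=dict(unit_params))

def restore_execution_info(info, units):
    # Restore the unit parameters, then report the next command symbol
    # from which execution should continue in the next time unit.
    units.update(info.context)
    return info.position + 1

info = save_execution_info(41, {"reg0": 7, "alu_mode": "f32"})
units = {}
next_pos = restore_execution_info(info, units)  # resume at command symbol 42
```

Copying the context on save (rather than keeping a reference) is what lets the engine be handed to another virtual machine in the meantime.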
In step S21, after the execution of the first command packet is stopped, a second command packet may also be loaded, the second command packet coming from the second virtual machine. The second command packet may be the front-most command packet in the command queue corresponding to the second virtual machine.
In step S22, it may first be determined whether the second command packet has an unfinished identifier, which may be used to indicate whether the command packet is being executed for the first time.
If the second command packet has an unfinished identifier, this indicates that the second command packet was executed in a previous time unit but did not finish. Therefore, the second command packet may continue to be executed according to its execution information until the time slice corresponding to the second virtual machine is used up.
Specifically, the context environment at the end of the last execution of the second command packet may be restored according to the context environment in its execution information; it is then possible to jump to the command symbol indicated by the execution position and continue to execute the second command packet.
It should be noted that, in the solution of the embodiment of the present invention, the end of execution of the command packet (or the end of execution) and the completion of execution of the command packet are different.
Specifically, the completion of execution of a command packet means that all the command symbols in the command packet related to implementing the GPU function have been executed; for example, referring to fig. 3, completion may mean that all the main command symbols have been executed. The end of execution of a command packet means that execution stops because the time slice is used up; when execution ends in this way, the command packet has not necessarily been executed completely. In other words, the end of command packet execution refers to suspension of execution, while the completion of command packet execution refers to its termination.
With continued reference to fig. 2, in step S22, timing of the duration occupied by the second virtual machine starts when execution of the second command packet starts. As in step S21, if the duration occupied by the second virtual machine reaches the length of the time slice corresponding to the second virtual machine, it may be determined that this time slice is used up, and the execution of the second command packet ends.
Further, if the execution of the second command packet is completed but the time slice corresponding to the second virtual machine in the current time unit is not yet used up, a new command packet may be loaded from the command queue corresponding to the second virtual machine and executed, until the time slice corresponding to the second virtual machine is used up.
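This refill behavior can be sketched as a small loop over the virtual machine's command queue. The per-packet cost function and all names are illustrative assumptions; in the patent's actual scheme a packet may also be suspended mid-execution, which this simplified sketch omits.

```python
from collections import deque

def drain_slice(queue, slice_len, packet_cost):
    """Execute whole packets from `queue` until the remaining slice cannot
    fit the next one. Returns the number of packets fully executed."""
    executed = 0
    while queue and slice_len >= packet_cost(queue[0]):
        pkt = queue.popleft()          # load the front-most command packet
        slice_len -= packet_cost(pkt)  # "execute" it, spending slice budget
        executed += 1
    return executed

q = deque(["p1", "p2", "p3"])
n = drain_slice(q, 5, lambda p: 2)  # 5 ms of slice left, 2 ms per packet
# n == 2; "p3" stays queued for the virtual machine's next time slice
```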
In step S22, if the loaded second command packet has an unexecuted identifier, the second command packet may be executed directly.
The unexecuted identifier may be used to indicate that the command packet has never been executed; for example, a command packet that carries no unfinished identifier may be regarded as having the unexecuted identifier.
Further, directly executing the second command packet may refer to executing from the starting position of the main command symbols in the second command packet; in other words, there is no need to restore a context or jump to an execution position. For details on the command packet and the main command symbols, reference may be made to the description of fig. 3 below.
As can be seen from the above, the embodiment of the present invention specifically describes the switching process between a command packet from a first virtual machine and a command packet from a second virtual machine. The scheme of the embodiment of the invention provides a finer-grained scheduling method: a single time unit is distributed among the virtual machines with the time slice as the granularity, switching between virtual machines depends on the time slices rather than being limited by the length of a command packet, and switching between virtual machines may therefore occur inside a command packet.
Because the virtual machine switching occurs in a certain command packet in the scheme of the embodiment of the invention, the embodiment of the invention also provides a new command packet format so as to restore the execution of the command packet.
Referring to fig. 3, fig. 3 is a schematic diagram of a command packet according to an embodiment of the present invention.
As shown in fig. 3, the command packet 30 in the embodiment of the present invention may include: a first auxiliary command symbol 31, main command symbols 32, and a second auxiliary command symbol 33, wherein the first auxiliary command symbol 31 is located before the main command symbols 32, and the second auxiliary command symbol 33 is located after the main command symbols 32.
Specifically, the first auxiliary command symbol 31 may be used to read the execution information of the last execution of the command packet, the main command symbols 32 may be used to implement the GPU function, and the second auxiliary command symbol 33 may be used to save the execution information of the current execution of the command packet.
Switching virtual machines inside a command packet is facilitated by providing the first auxiliary command symbol 31 and the second auxiliary command symbol 33.
Specifically, assume that the time slice corresponding to the virtual machine from which the command packet 30 comes is used up while the physical engine is executing main command symbol m; it may then be determined whether the execution of the command packet 30 is completed.
If it is determined that the command packet 30 has not finished executing, i.e. 1 ≤ m < n, a jump may be made to the second auxiliary command symbol 33 in response to the time slice being used up.
The second auxiliary command symbol 33 may include a "storage command symbol" and a "storage space"; executing the storage command symbol saves the context and the execution position (i.e. m) at the time main command symbol m was being executed into the storage space of the second auxiliary command symbol 33. Further, the command packet 30 may also be marked with the unfinished identifier described above.
If it is determined that the execution of the command packet 30 is completed, that is, the execution position of the command packet 30 is main command symbol n, the command packet 30 may be marked with a completion identifier.
When the command packet 30 is executed again in the next time unit, the first auxiliary command symbol 31 is executed first. The first auxiliary command symbol 31 includes a "resume command symbol" and a "storage space"; executing the first auxiliary command symbol 31 may refer to reading the execution information from its storage space, restoring the context according to the execution information, and jumping to main command symbol m. The main command symbols following main command symbol m may then be executed.
In the scheme of the embodiment of the invention, the scheduler needs to switch between virtual machines according to time slices. In order to cooperate with the scheduler in correctly saving and restoring the context, auxiliary command symbols are added on the basis of the conventional command symbols (namely, the main command symbols above) so as to assist the scheduler in completing its scheduling work and to guarantee the execution integrity of a command packet even when the same command packet is executed discontinuously.
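The packet format of fig. 3 can be sketched as follows; the class, the method names, and the representation of the storage space as a Python tuple are illustrative assumptions.

```python
class CommandPacket:
    """Sketch of fig. 3: first auxiliary (restore) symbol, main command
    symbols implementing the GPU function, second auxiliary (store) symbol."""

    def __init__(self, main_symbols):
        self.main_symbols = main_symbols  # symbols implementing the GPU function
        self.storage = None               # storage space of the auxiliary symbols
        self.incomplete = False           # the "unfinished" identifier

    def second_auxiliary(self, position, context):
        # Executed when the time slice is used up mid-packet: save the
        # execution position and a copy of the context, mark unfinished.
        self.storage = (position, dict(context))
        self.incomplete = True

    def first_auxiliary(self):
        # Executed first in the next time unit: read the saved execution
        # information and report where (and with what context) to resume.
        position, context = self.storage
        return position + 1, context

packet = CommandPacket(["draw", "blit", "sync", "flush"])
packet.second_auxiliary(1, {"reg0": 3})    # slice used up at main symbol 1
resume_at, ctx = packet.first_auxiliary()  # next time unit: resume at symbol 2
```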
Referring to fig. 4, fig. 4 is a flowchart of another resource scheduling method of a physical display card according to an embodiment of the present invention. The resource scheduling method provided by the embodiment of the present invention is described in detail below with reference to fig. 4. The scheme shown in fig. 4 may include steps S41 to S48.
Step S41, executing the current command packet.
Step S42, judging whether the time slice of the current virtual machine is used up.
The current command packet refers to a command packet executed by the current physical engine, and the current virtual machine refers to a virtual machine from which the current command packet comes. In the process of step S41, step S42 may be performed.
If the time slice of the current virtual machine is used up, step S43 is executed; if not, step S41 continues to be executed, that is, the current command packet continues to be executed.
In other words, the duration occupied by the current virtual machine is timed during the execution of the current command packet, and step S43 is executed once the time slice of the current virtual machine is used up.
Step S43, judging whether the execution of the current command packet is completed.
If the execution of the current command packet is completed, step S45 is performed; if not, step S44 is performed, and step S45 is performed after step S44.
Step S44, executing the second auxiliary command symbol.
Specifically, the execution information of the current command packet is saved by executing the second auxiliary command symbol.
After step S44 and before step S45, the current command packet may also be marked with the unfinished identifier.
Step S45, obtaining a command packet of the next virtual machine.
If step S45 is performed directly after step S43, the current command packet may be marked with an execution-completion identifier before step S45 is performed.
Further, after the command packet of the next virtual machine is acquired, that command packet may be taken as the current command packet, and that virtual machine may be taken as the current virtual machine.
After step S45 is executed, step S46 is continued.
Step S46, judging whether the current command packet has an unfinished identifier.
If the determination result is yes, step S48 is executed, and if the determination result is no, step S47 is executed.
Step S47, skipping the first auxiliary command symbol.
If the current command packet does not have the unfinished identifier, this indicates that the current command packet is being executed for the first time; the first auxiliary command symbol is therefore skipped, and the flow returns to step S41 to start execution from the first main command symbol in the current command packet.
Step S48, executing the first auxiliary command symbol.
If the current command packet has the unfinished identifier, the context environment at the end of the last execution is restored by executing the first auxiliary command symbol, and a jump is made to the position reached when the last execution ended. The flow then returns to step S41 to continue executing the current command packet.
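The loop of steps S41 to S48 can be sketched as follows. A command packet is modeled here as a dictionary holding its main command symbols, the unfinished identifier, and a storage slot for execution information; the one-symbol-per-budget-tick cost model and all names are illustrative assumptions.

```python
def make_packet(symbols):
    return {"symbols": symbols, "incomplete": False, "storage": None}

def run_time_slice(packet, budget, engine_state):
    """Execute `packet` until it completes or `budget` symbols are spent.
    Returns (finished, remaining_budget)."""
    if packet["incomplete"]:                 # S46/S48: execute first auxiliary
        pos, ctx = packet["storage"]         #   read saved execution information
        engine_state.update(ctx)             #   restore the context
        start, packet["incomplete"] = pos + 1, False
    else:                                    # S47: first run, skip the restore
        start = 0
    for pos in range(start, len(packet["symbols"])):
        if budget == 0:                      # S42: time slice used up mid-packet
            packet["storage"] = (pos - 1, dict(engine_state))  # S44: save info
            packet["incomplete"] = True
            return False, 0
        engine_state["last"] = packet["symbols"][pos]          # S41: execute
        budget -= 1
    return True, budget                      # S43: packet completed

pkt = make_packet(["draw", "blit", "sync", "flush"])
state = {}
done, _ = run_time_slice(pkt, 2, state)      # slice ends inside the packet
done2, left = run_time_slice(pkt, 5, state)  # next time unit resumes it
```

The first call suspends after "blit"; the second restores the context, resumes at "sync", and finishes with slice budget to spare, which a scheduler could then spend on the next packet in the queue (the case of step S45).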
For more details regarding the method illustrated in fig. 4, reference is made to the above description regarding fig. 1 to 3, and no further description is given here.
By the above method, virtual machines can be switched inside a command packet, and context such as the operation unit and registers in the physical engine can be saved and restored as required during switching, so that the execution integrity of the command packet is guaranteed. The scheme of the embodiment of the invention is not limited by the execution time of a command packet; different time slices can be allocated to each virtual machine according to requirements, so that a server manufacturer can configure virtual machines more flexibly to meet the requirements of different clients.
It will be appreciated that in a specific implementation, the method may be implemented in a software program running on a processor integrated within a chip or a chip module; alternatively, the method may be implemented in hardware or a combination of hardware and software, for example, implemented in a dedicated chip or chip module, or implemented in a dedicated chip or chip module in combination with a software program.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a resource scheduling device of a physical display card in an embodiment of the present invention, where the device may be deployed on a server, the physical display card provides GPU functions for a plurality of virtual machines, where the plurality of virtual machines includes at least a first virtual machine and a second virtual machine, and the device shown in fig. 5 may include:
the first processing module 51 is configured to, in a single time unit, if it is detected that the time slice corresponding to the first virtual machine is used up and a first command packet from the first virtual machine has not finished executing, store execution information of the first command packet and load a second command packet from the second virtual machine, where the execution information includes: the execution position and the context environment when the time slice is used up;
the second processing module 52 is configured to, if the second command packet has an incomplete identifier, continue executing the second command packet according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used;
the length of the single time unit is determined according to the frame rate of the physical display card, the single time unit comprises a plurality of time slices, and the time slices are in one-to-one correspondence with the virtual machines.
The apparatus may be, for example, a scheduler.
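As a numeric illustration of how the single time unit and the time slices relate, the sketch below derives the time unit from the frame rate and splits it among virtual machines in proportion to their weight values. The proportional-split rule is an illustrative assumption; the embodiment only states that the slice length is determined from the weight value.

```python
def time_slices(frame_rate_hz, weights):
    """Split one frame's worth of time into per-VM time slices,
    proportional to each virtual machine's weight value."""
    time_unit_ms = 1000.0 / frame_rate_hz  # one time unit per displayed frame
    total = sum(weights.values())
    return {vm: time_unit_ms * w / total for vm, w in weights.items()}

slices = time_slices(60, {"vm1": 3, "vm2": 1})
# time unit ~= 16.67 ms -> vm1 ~= 12.5 ms, vm2 ~= 4.17 ms
```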
For more matters such as the working principle, the working method and the beneficial effects of the resource scheduling device for the physical display card in the embodiment of the present application, reference may be made to the above description related to the resource scheduling method for the physical display card, which is not repeated here.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which, when run by a processor, performs the steps of the resource scheduling method of the physical display card described above. The storage medium may include ROM, RAM, magnetic disks or optical disks, and the like. The storage medium may also include a non-volatile memory (non-volatile) or a non-transitory memory (non-transitory), and the like.
The embodiment of the application also provides a server, which comprises a memory and a processor, wherein the memory stores a computer program which can be run on the processor, and the processor executes the steps of the resource scheduling method of the physical display card when running the computer program.
It should be appreciated that in the embodiment of the present application, the processor may be a central processing unit (central processing unit, abbreviated as CPU), and the processor may also be other general purpose processors, digital signal processors (digital signal processor, abbreviated as DSP), application specific integrated circuits (application specific integrated circuit, abbreviated as ASIC), off-the-shelf programmable gate arrays (field programmable gate array, abbreviated as FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It should also be appreciated that the memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM for short), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (double data rate SDRAM, DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (direct rambus RAM, DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer program may be stored in or transmitted from one computer readable storage medium to another, for example, by wired or wireless means from one website, computer, server, or data center.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other manners. For example, the device embodiments described above are merely illustrative; for example, the division of the units is only one logic function division, and other division modes can be adopted in actual implementation; for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units. For example, for each device or product applied to or integrated on a chip, each module/unit included in the device or product may be implemented in hardware such as a circuit, or at least part of the modules/units may be implemented in software program, where the software program runs on a processor integrated inside the chip, and the rest (if any) of the modules/units may be implemented in hardware such as a circuit; for each device and product applied to or integrated in the chip module, each module/unit contained in the device and product can be realized in a hardware manner such as a circuit, different modules/units can be located in the same component (such as a chip, a circuit module and the like) or different components of the chip module, or at least part of the modules/units can be realized in a software program, the software program runs on a processor integrated in the chip module, and the rest (if any) of the modules/units can be realized in a hardware manner such as a circuit; for each device, product, or application to or integrated with the terminal, each module/unit included in the device, product, or application may be implemented by using hardware such as a circuit, different modules/units may be located in the same component (for example, a chip, a circuit module, or the like) or different components in the terminal, or at least part of the modules/units may be implemented by using a software program, where the software program runs on a processor integrated inside the terminal, and the remaining (if any) part of the modules/units may be implemented by using hardware such as a circuit.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, the character "/" indicates that the front and rear associated objects are an "or" relationship.
The term "plurality" as used in the embodiments of the present application means two or more. The first, second, etc. descriptions in the embodiments of the present application are only used for illustrating and distinguishing the description objects, and no order is used, nor is the number of the devices in the embodiments of the present application limited, and no limitation on the embodiments of the present application should be construed.
Although the present application is disclosed above, the present application is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application shall be subject to the scope defined by the appended claims.

Claims (8)

1. A resource scheduling method of a physical display card, characterized in that the physical display card provides GPU function services for a plurality of virtual machines, the plurality of virtual machines at least comprising a first virtual machine and a second virtual machine, and the method comprises the following steps:
In a single time unit, if it is detected that a time slice corresponding to a first virtual machine is used up and a first command packet from the first virtual machine has not finished executing, storing execution information of the first command packet and loading a second command packet from a second virtual machine, wherein the execution information includes: the execution position and the context environment when the time slice is used up;
if the second command packet has an unfinished identifier, continuing to execute the second command packet according to the execution information of the second command packet until the time slice corresponding to the second virtual machine is used;
the length of the single time unit is determined according to the frame rate of the physical display card, the single time unit comprises a plurality of time slices, and the time slices are in one-to-one correspondence with the virtual machines;
wherein the length of the time slice corresponding to each virtual machine is determined according to the weight value of the virtual machine; before the weight values of the plurality of virtual machines are determined, it is judged whether the users corresponding to the plurality of virtual machines are the same user; if yes, the weight value of each virtual machine is calculated according to the fixed weight values and the dynamic weight values of the plurality of virtual machines, wherein the weight coefficient corresponding to the fixed weight value is larger than the weight coefficient corresponding to the dynamic weight value; and if no, the weight value of each virtual machine is determined according to the fixed weight values of the plurality of virtual machines;
The fixed weight value depends on a preset reference weight value, the dynamic weight value depends on the length of a command queue corresponding to the virtual machine, and the command queue is used for storing command packets to be executed from the virtual machine.
2. The resource scheduling method of claim 1, wherein continuing to execute the second command packet according to the execution information of the second command packet comprises:
restoring the context environment when the last execution of the second command packet is finished according to the context environment in the execution information;
and jumping to the command symbol indicated by the execution position and continuing to execute the second command packet.
3. The resource scheduling method of claim 1, wherein the command packet includes a first auxiliary command symbol, a main command symbol, and a second auxiliary command symbol, the first auxiliary command symbol being located before the main command symbol, the second auxiliary command symbol being located after the main command symbol;
the first auxiliary command symbol is used for reading the execution information when the last execution of the command packet is finished, the main command symbol is used for realizing the GPU function, and the second auxiliary command symbol is used for storing the execution information when the current execution of the command packet is finished.
4. A resource scheduling method according to claim 3, characterized in that the method further comprises:
and if the second command packet has a non-execution identifier, starting to execute from the starting position of the main body command symbol in the second command packet.
5. A resource scheduling method according to claim 3, characterized in that the method further comprises:
and if the first command packet is executed, skipping over the second auxiliary command symbol.
6. A resource scheduling device of a physical display card, wherein the physical display card provides GPU function services for a plurality of virtual machines, the plurality of virtual machines including at least a first virtual machine and a second virtual machine, the device comprising:
the first processing module is configured to store execution information of a first command packet and load a second command packet from a second virtual machine if it is detected that a time slice corresponding to the first virtual machine is used and the first command packet from the first virtual machine is not executed in a single time unit, where the execution information includes: the execution position and the context environment when the time slices are used;
the second processing module is used for continuing to execute the second command packet according to the execution information of the second command packet if the second command packet has an incomplete identifier until the time slice corresponding to the second virtual machine is used;
The length of the single time unit is determined according to the frame rate of the physical display card, the single time unit comprises a plurality of time slices, and the time slices are in one-to-one correspondence with the virtual machines;
wherein the length of the time slice corresponding to each virtual machine is determined according to the weight value of the virtual machine,
the apparatus further comprises: before determining the weight values of the plurality of virtual machines, judging whether the users corresponding to the plurality of virtual machines are the same user, if yes, calculating the weight value of each virtual machine according to the fixed weight value and the dynamic weight value of the plurality of virtual machines, wherein the weight coefficient corresponding to the fixed weight value is larger than the weight coefficient of the dynamic weight value, and if no, determining the weight value of each virtual machine according to the fixed weight value of the plurality of virtual machines;
the fixed weight value depends on a preset reference weight value, the dynamic weight value depends on the length of a command queue corresponding to the virtual machine, and the command queue is used for storing command packets to be executed from the virtual machine.
7. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when run by a processor, performs the steps of the resource scheduling method of a physical display card according to any one of claims 1 to 5.
8. A server comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, characterized in that the processor, when executing the computer program, performs the steps of the resource scheduling method of a physical display card according to any one of claims 1 to 5.
CN202310794980.6A 2023-06-29 2023-06-29 Resource scheduling method and device for physical display card, storage medium and terminal Active CN116521376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310794980.6A CN116521376B (en) 2023-06-29 2023-06-29 Resource scheduling method and device for physical display card, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN116521376A CN116521376A (en) 2023-08-01
CN116521376B true CN116521376B (en) 2023-11-21

Family

ID=87390605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310794980.6A Active CN116521376B (en) 2023-06-29 2023-06-29 Resource scheduling method and device for physical display card, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN116521376B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111506385A (en) * 2019-01-31 2020-08-07 ATI Technologies ULC Engine preemption and recovery
CN114008588A (en) * 2019-06-26 2022-02-01 ATI Technologies ULC Sharing multimedia physical functions in a virtualized environment of processing units
CN114327843A (en) * 2020-09-29 2022-04-12 Huawei Technologies Co., Ltd. Task scheduling method and device
CN114816777A (en) * 2021-01-29 2022-07-29 Shanghai Zhenliang Intelligent Technology Co., Ltd. Command processing device, method, electronic device and computer readable storage medium
CN115586955A (en) * 2022-10-19 2023-01-10 Hunan University Command execution method and device, computer equipment and storage medium
CN115756730A (en) * 2022-11-17 2023-03-07 Shanghai Iluvatar CoreX Semiconductor Co., Ltd. Virtual machine scheduling method and device, GPU and electronic equipment
CN116185554A (en) * 2021-11-29 2023-05-30 Huawei Technologies Co., Ltd. Configuration device, scheduling device, configuration method and scheduling method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474490B2 (en) * 2017-06-29 2019-11-12 Advanced Micro Devices, Inc. Early virtualization context switch for virtualized accelerated processing device

Also Published As

Publication number Publication date
CN116521376A (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN111450524B (en) Information processing method and device in cloud game, cloud game server and medium
EP3073374A1 (en) Thread creation method, service request processing method and related device
CN108064377B (en) Management method and device for multi-system shared memory
CN109726005B (en) Method, server system and computer readable medium for managing resources
CN106796530B (en) A kind of virtual method, device and electronic equipment, computer program product
CN111324432B (en) Processor scheduling method, device, server and storage medium
CN111274019A (en) Data processing method and device and computer readable storage medium
US20180108109A1 (en) Gpu resource allocation method and system
CN106897299B (en) Database access method and device
WO2016202154A1 (en) Gpu resource allocation method and system
CN114448909B (en) Network card queue polling method and device based on ovs, computer equipment and medium
CN114880259A (en) Data processing method, device, system, electronic equipment and storage medium
CN109542829B (en) Control method and device of GPU (graphics processing Unit) equipment in multiple systems and electronic equipment
CN108241522B (en) Sleep state switching method and device in virtualization environment and electronic equipment
CN116521376B (en) Resource scheduling method and device for physical display card, storage medium and terminal
CN114780463A (en) Interrupt control method, device, distributed system and storage medium
CN109766168B (en) Task scheduling method and device, storage medium and computing equipment
CN111310638B (en) Data processing method, device and computer readable storage medium
CN113051059A (en) Multi-GPU task real-time scheduling method and device
CN114816777A (en) Command processing device, method, electronic device and computer readable storage medium
CN111338769A (en) Data processing method and device and computer readable storage medium
CN114116220B (en) GPU sharing control method, GPU sharing control device and storage medium
US11189003B2 (en) Graphics processing method and related apparatus, and device for unidirectionally transmitting calling information of a graphics API to a client
CN110796587B (en) Drawcall call processing method, device, terminal and storage medium
CN113590289A (en) Job scheduling method, system, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240521

Address after: No. 3, 1 1, Fangchun Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 201207

Patentee after: Li Computing Technology (Shanghai) Co.,Ltd.

Country or region after: China

Address before: Room 2794, Hatching Building, No. 99 Tuanjie Road, Nanjing Area, Nanjing (Jiangsu) Pilot Free Trade Zone, Jiangsu Province, 210031

Patentee before: Nanjing Lisuan Technology Co.,Ltd.

Country or region before: China

Patentee before: Li Computing Technology (Shanghai) Co.,Ltd.