CN117350916B - Method, electronic device and medium for managing GPU kernel driver based on state machine - Google Patents

Method, electronic device and medium for managing GPU kernel driver based on state machine

Info

Publication number
CN117350916B
CN117350916B (Application CN202311643594.3A)
Authority
CN
China
Prior art keywords
gpu
state machine
state
gpu kernel
virtualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311643594.3A
Other languages
Chinese (zh)
Other versions
CN117350916A (en)
Inventor
Request not to publish the inventor's name
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Muxi Technology Beijing Co ltd
Original Assignee
Muxi Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Muxi Technology Beijing Co ltd filed Critical Muxi Technology Beijing Co ltd
Priority to CN202311643594.3A priority Critical patent/CN117350916B/en
Publication of CN117350916A publication Critical patent/CN117350916A/en
Application granted granted Critical
Publication of CN117350916B publication Critical patent/CN117350916B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4498Finite state machines
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to a method, an electronic device and a medium for managing a GPU kernel driver based on a state machine, wherein the method comprises the following steps: S1, creating a state machine process, obtaining the GPU kernel driver version to be installed, and monitoring changes of a preset configuration file; S2, obtaining the current GPU virtualization configuration information and reading the updated configuration information; if they are consistent, executing S3, otherwise executing S4; S3, obtaining the current GPU kernel driver version and comparing it with the version to be installed; if they are inconsistent, entering S4, otherwise entering S5; S4, uninstalling the GPU kernel driver and the virtualization driver; if virtualization is enabled, installing the virtualization driver first and then the GPU kernel driver, otherwise directly installing the GPU kernel driver, and then executing S5; S5, if the state changes, jumping to the corresponding state. The method can reasonably manage the GPU kernel driver based on the GPU kernel driver state and ensure its normal operation.

Description

Method, electronic device and medium for managing GPU kernel driver based on state machine
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an electronic device, and a medium for managing GPU kernel drivers based on a state machine.
Background
A graphics processing unit (GPU) kernel driver is a stateful process. To use GPU hardware, a GPU kernel driver that interacts with the hardware must be installed: a user process first interacts with the GPU kernel driver, and the GPU kernel driver in turn interacts with the GPU hardware. The GPU kernel driver is therefore critical, and its security must be ensured as far as possible, so managing it reasonably is particularly important. When the GPU kernel driver version or its configuration information changes, the driver must be uninstalled and reinstalled. In the prior art, once the need to uninstall and reinstall the GPU kernel driver is detected, the operation is executed immediately without considering the state of the GPU kernel driver, which may leave the driver state in disorder and may even render some driver states unusable. Therefore, how to reasonably manage the GPU kernel driver based on the GPU kernel driver state and ensure its normal operation is a technical problem that needs to be solved.
Disclosure of Invention
The invention aims to provide a method, an electronic device and a medium for managing a GPU kernel driver based on a state machine, which can reasonably manage the GPU kernel driver based on the GPU kernel driver state and ensure its normal operation.
According to a first aspect of the present invention, there is provided a method for managing GPU kernel drivers based on a state machine, comprising:
step S1, entering a state machine initialization state, and judging whether a state machine process already exists; if so, ending the flow; otherwise, creating the state machine process, obtaining the GPU kernel driver version to be installed, and monitoring changes of a preset configuration file, wherein the preset configuration file stores GPU virtualization configuration information, and the state machine is a state machine arranged in the GPU device for managing the GPU kernel driver;
step S2, entering a state machine configuration state, and obtaining the current GPU virtualization configuration information of the GPU device, wherein the virtualization configuration information comprises virtualization enable information and virtualization partition-count information; reading the updated GPU virtualization configuration information from the preset configuration file; if the current GPU virtualization configuration information is consistent with the updated GPU virtualization configuration information, executing step S3; otherwise, updating the current GPU virtualization configuration information to the updated GPU virtualization configuration information and executing step S4;
step S3, obtaining the current GPU kernel driver version of the GPU device and comparing it with the GPU kernel driver version to be installed; if the two are inconsistent, entering step S4; otherwise, entering step S5;
step S4, entering a state machine reload state, and, on the premise that all GPU processes have ended, uninstalling all GPU kernel drivers and virtualization drivers of the GPU device; if GPU virtualization is in the on state, installing the virtualization driver according to the current GPU virtualization configuration information and then installing the GPU kernel driver; if GPU virtualization is in the off state, directly installing the GPU kernel driver; then executing step S5;
step S5, entering a state machine waiting state, and, if the state changes, jumping to the corresponding state of the state machine.
According to a second aspect of the present invention, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being arranged to perform the method according to the first aspect of the invention.
According to a third aspect of the present invention there is provided a computer readable storage medium storing computer executable instructions for performing the method of the first aspect of the present invention.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By means of the above technical scheme, the method, the electronic device and the medium for managing a GPU kernel driver based on a state machine achieve considerable technical progress and practicality, have broad industrial value, and provide at least the following beneficial effects:
the method manages the GPU kernel driver through the state machine, so that unloading and reloading of the GPU kernel driver are performed in a clear GPU kernel driver state, the condition of the GPU kernel driver is not disordered, the GPU kernel driver can be reasonably managed based on the GPU kernel driver, and the normal operation of the GPU kernel driver is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a method for managing GPU kernel drivers based on a state machine according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a state machine architecture for managing GPU kernel drivers according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The embodiment of the invention provides a method for managing a GPU kernel driver based on a state machine, which, as shown in FIG. 1, comprises the following steps:
Step S1, entering the state machine initialization state (Init), and judging whether a state machine process already exists; if so, ending the flow; otherwise, creating the state machine process, obtaining the GPU kernel driver version to be installed, and monitoring changes of a preset configuration file, wherein the preset configuration file stores GPU virtualization configuration information, and the state machine is a state machine arranged in the GPU device for managing the GPU kernel driver.
It should be noted that, before step S1 is performed, a state machine for managing the GPU kernel driver needs to be built on the GPU device. The state machine includes an initialization state, a configuration state, a reload state and a waiting state, as shown in FIG. 2.
Step S2, entering the state machine configuration state (Config), and obtaining the current GPU virtualization configuration information of the GPU device, wherein the virtualization configuration information comprises virtualization enable information and virtualization partition-count information; reading the updated GPU virtualization configuration information from the preset configuration file; if the current GPU virtualization configuration information is consistent with the updated GPU virtualization configuration information, executing step S3; otherwise, updating the current GPU virtualization configuration information to the updated GPU virtualization configuration information and executing step S4.
Step S3, obtaining the current GPU kernel driver version of the GPU device and comparing it with the GPU kernel driver version to be installed; if the two are inconsistent, entering step S4; otherwise, entering step S5.
Step S4, entering the state machine reload state (Reload), and, on the premise that all GPU processes have ended, uninstalling all GPU kernel drivers and virtualization drivers of the GPU device; if GPU virtualization is in the on state, installing the virtualization driver according to the current GPU virtualization configuration information and then installing the GPU kernel driver; if GPU virtualization is in the off state, directly installing the GPU kernel driver; then executing step S5.
Step S5, entering the state machine waiting state (Wait); if the state changes, jumping to the corresponding state of the state machine. As shown in FIG. 2, in the waiting state, if a modification (Modified) occurs, the state machine jumps to the configuration state.
Through steps S1 to S5, the GPU kernel driver can be reasonably managed based on the GPU kernel driver state, and its normal operation is ensured.
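The following is a minimal Go sketch of the four-state machine of FIG. 2 and its transitions. The predicate and action functions are hypothetical placeholders standing in for steps S1 to S5; a real implementation would query the configuration file, sysfs and the driver installer instead.

```go
package main

import "fmt"

// State enumerates the four states of the driver-management state machine
// shown in FIG. 2: initialization (Init), configuration (Config),
// reload (Reload) and wait (Wait).
type State int

const (
	StateInit State = iota
	StateConfig
	StateReload
	StateWait
)

// Hypothetical stubs for the checks and actions of steps S1-S5.
func configChanged() bool   { return false }
func versionMismatch() bool { return false }
func reloadDrivers()        { fmt.Println("reloading GPU kernel driver") }
func waitForChange()        { /* block until the preset configuration file is modified */ }

// run sketches the transition loop of FIG. 2.
func run() {
	s := StateInit
	for i := 0; i < 4; i++ { // bounded here only so the sketch terminates
		switch s {
		case StateInit: // step S1
			s = StateConfig
		case StateConfig: // steps S2-S3
			if configChanged() || versionMismatch() {
				s = StateReload
			} else {
				s = StateWait
			}
		case StateReload: // step S4
			reloadDrivers()
			s = StateWait
		case StateWait: // step S5: a Modified event jumps back to Config
			waitForChange()
			s = StateConfig
		}
	}
}

func main() { run() }
```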
As an embodiment, the GPU device may be a computer with a GPU, specifically a computer with a GPU in a K8s (Kubernetes) cluster. The GPU virtualization configuration information is {A_1, A_2, …, A_m, …, A_M}, where A_m is the GPU virtualization configuration information of the m-th node with a GPU in the K8s cluster, m ranges from 1 to M, and M is the total number of computers with a GPU in the K8s cluster. A_m = (A1_m, A2_m), where A1_m is the virtualization enable flag corresponding to A_m and A2_m is the virtualization partition count corresponding to A_m; A2_m is valid when A1_m is the on flag. A1_m may be set to "0" or "1". If A1_m is set to "0", the node does not enable virtualization and A2_m is invalid. If A1_m is set to "1", the node enables virtualization, A2_m is valid, and A2_m is the number of virtual GPUs to be constructed for each GPU on the node. Each computer may include multiple GPUs, and the virtualization configuration information corresponding to the GPUs on the same node is the same. Preferably, A2_m is an integer power of 2; A2_m may be set to 0, 1, 2, 4, 8, etc.
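For illustration only, the per-node configuration A_m = (A1_m, A2_m) could be represented and parsed as in the sketch below; the JSON field names and file format are assumptions, since the patent does not fix a serialization.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeVirtConfig is a sketch of A_m: a per-node virtualization enable flag
// (A1_m) and the number of virtual GPUs to build per physical GPU (A2_m).
type NodeVirtConfig struct {
	Enable     string `json:"enable"`     // A1_m: "1" = virtualization on, "0" = off
	Partitions int    `json:"partitions"` // A2_m: valid only when Enable == "1"
}

func main() {
	// Hypothetical content of the preset configuration file for one node.
	raw := []byte(`{"enable":"1","partitions":4}`)

	var cfg NodeVirtConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	if cfg.Enable == "1" {
		fmt.Printf("virtualization on, %d virtual GPUs per GPU\n", cfg.Partitions)
	} else {
		fmt.Println("virtualization off, partition count ignored")
	}
}
```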
Storing the GPU virtualization configuration information in the preset configuration file can specifically be realized through the following steps:
Step S10, creating a configmap resource named driver-config in the K8s cluster, wherein driver-config stores the virtualization configuration information {A_1, A_2, …, A_m, …, A_M}.
The configmap resource is a resource specially used for storing configuration information in the K8s cluster.
Step S20, setting a daemon process set {P_1, P_2, …, P_m, …, P_M} and creating a daemonset task, wherein P_m is the daemon running on the m-th node with a GPU in the K8s cluster.
The daemon is a process that runs continuously for a long time. The daemonset task creates a Pod for each node with a GPU, and P_m runs in the corresponding Pod; a Pod is the smallest scheduling unit in the K8s cluster.
Step S30, the daemonset task mounts driver-config as the preset configuration file accessible by each P_m.
It should be noted that mounting driver-config as the preset configuration file accessible by each P_m establishes a mapping between driver-config and the preset configuration file. The preset configuration file accessible by each P_m must be contained in the container of the Pod corresponding to P_m; only after driver-config is mounted into the Pod corresponding to P_m can P_m access the corresponding virtualization configuration information. After the mount operation, from the perspective of P_m, reading and writing the preset configuration file amounts to reading and writing driver-config.
Step S40, when the virtualized configuration information in the driver-config changes, the preset configuration files synchronously change.
It should be noted that, after the driver-config and the preset file establish the mapping relationship, if the virtualized configuration information in the driver-config changes, the virtualized configuration information in the preset file also changes synchronously.
Through the steps S10-S40, the virtualized configuration information can be configured based on configmap resources, and then the GPU virtualized configuration of each node with the GPU can be automatically acquired and updated in real time based on the daemon set, so that the efficiency and the accuracy of the GPU virtualized configuration in the K8S cluster are improved.
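As a minimal sketch of how P_m could detect the change of step S40 and trigger the state machine's Modified transition, the daemon could watch the mounted preset configuration file with the fsnotify library; the mount path is a hypothetical assumption.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	const presetConfig = "/etc/driver-config/config" // hypothetical mount path

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(presetConfig); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// Kubernetes updates mounted ConfigMaps via symlink swaps, so in
			// practice Remove/Create events may need to be handled as well.
			if ev.Op&fsnotify.Write != 0 {
				log.Printf("configuration changed: %s", ev.Name)
				// trigger the Modified transition of the state machine here
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```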
As an embodiment, in step S1, judging whether a state machine process already exists, ending the flow if so, and otherwise creating the state machine process includes:
Step S11, inquiring whether a state machine process file exists under a preset directory; if it exists, the state machine process already exists and the flow is ended; if it does not exist, creating the state machine process file under the preset directory, setting up the write-lock procedure, and executing step S12.
It should be noted that if the state machine process file already exists in the preset directory, a state machine is already managing the kernel driver, and the file does not need to be created repeatedly.
Step S12, if the state machine process file lock cannot be acquired, returning an error identifier; if the state machine process file lock is acquired successfully, writing the state machine process identifier into the state machine process file, then unlocking the state machine process file and closing the state machine process file.
It should be noted that if the state machine process file lock cannot be acquired, two or more processes may have been started at the same time, but only one of them will finally succeed. This ensures that only one state machine runs on a node to manage the GPU kernel driver: multiple state machines would cause management conflicts, so only one state machine is allowed, which ensures the reliability and accuracy of GPU kernel driver management.
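A minimal sketch of the single-instance guarantee of steps S11 and S12, assuming a Linux host, an exclusive flock on the state machine process file, and a hypothetical preset directory:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	const procFile = "/var/run/gpu-state-machine.pid" // assumed preset directory/file

	f, err := os.OpenFile(procFile, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Non-blocking exclusive lock: if another instance holds it, give up.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		fmt.Fprintln(os.Stderr, "another state machine instance is already running")
		os.Exit(1)
	}
	// Write the state machine process identifier, then release the lock (step S12).
	fmt.Fprintf(f, "%d\n", os.Getpid())
	_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}
```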
As an embodiment, in step S1, the state machine initialization state further includes:
Step S13, creating a first interface, a second interface and a third interface for interaction with the outside, wherein the first interface is used for providing state information of the current state machine to the outside; the second interface is used for providing the current GPU kernel driver version and the current GPU virtualization configuration information to the outside; and the third interface is used for receiving a stop instruction.
It should be noted that the outside may refer to other containers. Through the first interface, the state information of the current state machine can be provided to the outside in any state of the state machine. Through the second interface, the current GPU kernel driver version and the current GPU virtualization configuration information can be provided to the outside in any state of the state machine. The third interface may receive the stop instruction in any state of the state machine, but the corresponding operation is only performed after the state machine enters the waiting state.
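For illustration, the three interfaces of step S13 could be exposed as in the sketch below. The assumption that they are HTTP endpoints of the state machine process, as well as all paths and names, are hypothetical and not taken from the patent.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// stateMachineAPI wires the three interfaces to callbacks of the state machine.
type stateMachineAPI struct {
	state       func() string // first interface: current state machine state
	version     func() string // second interface: installed GPU kernel driver version
	virtConfig  func() string // second interface: current virtualization configuration
	requestStop func()        // third interface: stop request, acted on in the Wait state
}

func (a *stateMachineAPI) routes() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/state", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"state": a.state()})
	})
	mux.HandleFunc("/driver", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{
			"version":        a.version(),
			"virtualization": a.virtConfig(),
		})
	})
	mux.HandleFunc("/stop", func(w http.ResponseWriter, r *http.Request) {
		// The stop instruction may arrive in any state, but the corresponding
		// operation is deferred until the state machine reaches the Wait state.
		a.requestStop()
		w.WriteHeader(http.StatusAccepted)
	})
	return mux
}

func main() {
	api := &stateMachineAPI{
		state:       func() string { return "Wait" },
		version:     func() string { return "1.2.3" },
		virtConfig:  func() string { return "enable=1,partitions=4" },
		requestStop: func() { log.Println("stop requested") },
	}
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", api.routes()))
}
```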
As an embodiment, the method further includes step S10: if the state machine receives a stop instruction based on the third interface, judging whether the state machine is currently in the waiting state; if so, executing step S20; otherwise, executing step S20 after the state machine enters the waiting state. Step S20, the state machine process is stopped, the state machine process file lock is released (Release), and the flow returns to step S1, as shown in FIG. 2.
It should be noted that in other states the kernel driver may be in the middle of installation; executing the stop instruction at that moment could leave the kernel driver in an erroneous state and introduce operational risk, so the operation corresponding to the stop instruction is executed only when the state machine is in the waiting state.
As an embodiment, in step S2, obtaining the current GPU virtualization configuration information of the GPU device includes:
Step S21, reading the GPU device files under the /sys/bus/pci/devices directory, judging whether the GPU device currently has virtualization enabled, and if so, obtaining the virtualization partition-count information and generating the current GPU virtualization configuration information.
The GPU device files are stored at a fixed location under /sys/bus/pci/devices, so they can be read directly from /sys/bus/pci/devices.
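A sketch of step S21 under the assumption that GPU virtualization is exposed through SR-IOV, so that the partition count of each device can be read from its sriov_numvfs attribute; the vendor ID and the choice of attribute are assumptions, since the patent only speaks of reading GPU device files.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

const gpuVendorID = "0x1234" // hypothetical GPU vendor ID

func main() {
	devices, err := filepath.Glob("/sys/bus/pci/devices/*")
	if err != nil {
		panic(err)
	}
	for _, dev := range devices {
		vendor, err := os.ReadFile(filepath.Join(dev, "vendor"))
		if err != nil || strings.TrimSpace(string(vendor)) != gpuVendorID {
			continue // not one of our GPUs
		}
		raw, err := os.ReadFile(filepath.Join(dev, "sriov_numvfs"))
		if err != nil {
			fmt.Printf("%s: virtualization off\n", filepath.Base(dev))
			continue
		}
		n, _ := strconv.Atoi(strings.TrimSpace(string(raw)))
		fmt.Printf("%s: virtualization on, %d partitions\n", filepath.Base(dev), n)
	}
}
```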
As an embodiment, in step S3, obtaining the current GPU kernel driver version of the GPU device includes:
Step S31, judging whether the GPU kernel driver is currently installed on the GPU device; if so, reading the currently installed GPU kernel driver version information in /sys/module, wherein /sys/module is a preset file storing the GPU kernel driver version information; if not, returning the current GPU kernel driver version as empty.
As an embodiment, the step S4 includes:
Step S41, querying all running GPU processes, and either sending a stop instruction to all running GPU processes to end them or waiting for all running GPU processes to end; while waiting, the GPU device is not allowed to add new GPU processes.
In order to further ensure that the currently installed GPU kernel driver version is correct, the step S4 further includes, after installing the GPU kernel driver:
and step S42, judging whether the currently installed GPU kernel driving version is the GPU kernel driving version to be installed, if so, executing the step S5, otherwise, executing the step S4 again.
It should be noted that some exemplary embodiments are described as a process or a method depicted as a flowchart. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The embodiment of the invention also provides electronic equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the methods of embodiments of the present invention.
The embodiment of the invention also provides a computer readable storage medium, which stores computer executable instructions for executing the method according to the embodiment of the invention.
The method manages the GPU kernel driver through a state machine, so that uninstallation and reinstallation of the GPU kernel driver are performed in a well-defined driver state and the driver state does not become disordered; the GPU kernel driver can thus be reasonably managed based on the GPU kernel driver state, and its normal operation is ensured.
The present invention is not limited to the above-mentioned embodiments; any modifications, equivalent replacements and improvements made to the above embodiments without departing from the spirit and scope of the invention fall within the scope of the invention.

Claims (10)

1. A method for managing GPU kernel drivers based on a state machine, comprising:
step S1, entering a state machine initialization state, and judging whether a state machine process already exists; if so, ending the flow; otherwise, creating the state machine process, obtaining the GPU kernel driver version to be installed, and monitoring changes of a preset configuration file, wherein the preset configuration file stores GPU virtualization configuration information, and the state machine is a state machine arranged in the GPU device for managing the GPU kernel driver;
step S2, entering a state machine configuration state, and obtaining the current GPU virtualization configuration information of the GPU device, wherein the virtualization configuration information comprises virtualization enable information and virtualization partition-count information; reading the updated GPU virtualization configuration information from the preset configuration file; if the current GPU virtualization configuration information is consistent with the updated GPU virtualization configuration information, executing step S3; otherwise, updating the current GPU virtualization configuration information to the updated GPU virtualization configuration information and executing step S4;
step S3, obtaining the current GPU kernel driver version of the GPU device and comparing it with the GPU kernel driver version to be installed; if the two are inconsistent, entering step S4; otherwise, entering step S5;
step S4, entering a state machine reload state, and, on the premise that all GPU processes have ended, uninstalling all GPU kernel drivers and virtualization drivers of the GPU device; if GPU virtualization is in the on state, installing the virtualization driver according to the current GPU virtualization configuration information and then installing the GPU kernel driver; if GPU virtualization is in the off state, directly installing the GPU kernel driver; then executing step S5;
step S5, entering a state machine waiting state, and, if the state changes, jumping to the corresponding state of the state machine.
2. The method according to claim 1, characterized in that,
in step S1, judging whether a state machine process already exists, ending the flow if so, and otherwise creating the state machine process comprises:
step S11, inquiring whether a state machine process file exists under a preset directory; if it exists, the state machine process already exists and the flow is ended; if it does not exist, creating the state machine process file under the preset directory, setting up the write-lock procedure, and executing step S12;
step S12, if the state machine process file lock cannot be acquired, returning an error identifier; if the state machine process file lock is acquired successfully, writing the state machine process identifier into the state machine process file, then unlocking the state machine process file and closing the state machine process file.
3. The method according to claim 2, characterized in that,
in the step S1, in the state machine initialization state, the method further includes:
step S13, creating a first interface, a second interface and a third interface for interaction with the outside, wherein the first interface is used for providing state information of a current state machine to the outside; the second interface is used for providing the current GPU kernel driving version and the current GPU virtualization configuration information to the outside; the third interface is used for receiving a stop instruction.
4. The method according to claim 3, characterized in that,
the method further comprises step S10: if the state machine receives a stop instruction based on the third interface, judging whether the state machine is currently in the waiting state; if so, executing step S20; otherwise, executing step S20 after the state machine enters the waiting state;
and step S20, stopping the state machine process, releasing the file lock of the state machine process, and returning to the step S1.
5. The method according to claim 1, characterized in that,
in step S2, obtaining the current GPU virtualization configuration information of the GPU device comprises:
step S21, reading the GPU device files under the /sys/bus/pci/devices directory, judging whether the GPU device currently has virtualization enabled, and if so, obtaining the virtualization partition-count information and generating the current GPU virtualization configuration information.
6. The method according to claim 1, characterized in that,
in step S3, obtaining the current GPU kernel driver version of the GPU device comprises:
step S31, judging whether the GPU kernel driver is currently installed on the GPU device; if so, reading the currently installed GPU kernel driver version information in /sys/module, wherein /sys/module is a preset file storing the GPU kernel driver version information; if not, returning the current GPU kernel driver version as empty.
7. The method according to claim 1, characterized in that,
the step S4 comprises:
step S41, querying all running GPU processes, and either sending a stop instruction to all running GPU processes to end them or waiting for all running GPU processes to end; while waiting, the GPU device is not allowed to add new GPU processes.
8. The method according to claim 1, characterized in that,
the step S4 further comprises, after installing the GPU kernel driver:
step S42, judging whether the currently installed GPU kernel driver version is the GPU kernel driver version to be installed; if so, executing step S5; otherwise, executing step S4 again.
9. An electronic device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being arranged to perform the method of any of the preceding claims 1-8.
10. A computer readable storage medium, characterized in that computer executable instructions are stored for performing the method of any of the preceding claims 1-8.
CN202311643594.3A 2023-12-04 2023-12-04 Method, electronic device and medium for managing GPU kernel driver based on state machine Active CN117350916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311643594.3A CN117350916B (en) 2023-12-04 2023-12-04 Method, electronic device and medium for managing GPU kernel driver based on state machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311643594.3A CN117350916B (en) 2023-12-04 2023-12-04 Method, electronic device and medium for managing GPU kernel driver based on state machine

Publications (2)

Publication Number Publication Date
CN117350916A CN117350916A (en) 2024-01-05
CN117350916B true CN117350916B (en) 2024-02-02

Family

ID=89356082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311643594.3A Active CN117350916B (en) Method, electronic device and medium for managing GPU kernel driver based on state machine

Country Status (1)

Country Link
CN (1) CN117350916B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2579721A (en) * 2015-12-02 2020-07-01 Imagination Tech Ltd GPU virtualisation
CN114662088A (en) * 2020-12-23 2022-06-24 英特尔公司 Techniques for providing access to kernel and user space memory regions
CN115859269A (en) * 2021-09-24 2023-03-28 辉达公司 Secure execution of multiple processor devices using trusted execution environment
CN116339825A (en) * 2021-12-14 2023-06-27 三星电子株式会社 System, method, and device for accessing a computing device kernel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11340958B2 (en) * 2020-07-08 2022-05-24 Vmware, Inc. Real-time simulation of compute accelerator workloads for distributed resource scheduling

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2579721A (en) * 2015-12-02 2020-07-01 Imagination Tech Ltd GPU virtualisation
CN114662088A (en) * 2020-12-23 2022-06-24 英特尔公司 Techniques for providing access to kernel and user space memory regions
CN115859269A (en) * 2021-09-24 2023-03-28 辉达公司 Secure execution of multiple processor devices using trusted execution environment
CN116339825A (en) * 2021-12-14 2023-06-27 三星电子株式会社 System, method, and device for accessing a computing device kernel

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Container Scheduling and Deployment Based on Heterogeneous Computing Platforms; Lu Lintong; China Master's Theses Full-text Database (Information Science and Technology); I139-366 *

Also Published As

Publication number Publication date
CN117350916A (en) 2024-01-05

Similar Documents

Publication Publication Date Title
US20130047160A1 (en) Systems and methods for modifying an operating system for a virtual machine
JP6089064B2 (en) Method, computer system and memory device for updating software components
US7809985B2 (en) Offline hardware diagnostic environment
BRPI0911610B1 (en) computer-implemented method for an application layer to initiate and manage the creation, operation, and decommissioning of one or more virtual machines
US11327738B2 (en) Software and firmware updates in a combined single pane of glass interface
US11334341B2 (en) Desired state model for managing lifecycle of virtualization software
US11269609B2 (en) Desired state model for managing lifecycle of virtualization software
US8484616B1 (en) Universal module model
US9672047B1 (en) Systems and methods for accessing a bootable partition on a serial peripheral interface device
US20070028228A1 (en) Software upgrades with user advisement
CN117350916B (en) Method, electronic device and medium for managing GPU kernel drive based on state machine
US11221842B2 (en) Systems and methods for executing and verifying system firmware update before committing firmware update to motherboard
CN116880877A (en) Virtual machine enhancement tool upgrading method and device, computer equipment and storage medium
US10838737B1 (en) Restoration of memory content to restore machine state
CN111427588A (en) Suspending installation of firmware packages
US20230161643A1 (en) Lifecycle management for workloads on heterogeneous infrastructure
US11354109B1 (en) Firmware updates using updated firmware files in a dedicated firmware volume
US11461131B2 (en) Hosting virtual machines on a secondary storage system
US11720386B2 (en) Validation and pre-check of combined software/firmware updates
US11204704B1 (en) Updating multi-mode DIMM inventory data maintained by a baseboard management controller
CN117519984A (en) K8 s-based GPU virtualization dynamic configuration method, electronic equipment and medium
US10521155B2 (en) Application management data
US11836500B2 (en) Systems and methods for basic input/output system driver offline protocol
CN113377566B (en) UEFI-based server starting method, device and storage medium
US11915003B2 (en) Process parasitism-based branch prediction method and device for serverless computing, electronic device, and non-transitory readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant