CN117648211B - Runtime unified interface, server and calling method of an artificial intelligence framework


Info

Publication number: CN117648211B
Authority: CN (China)
Prior art keywords: interface, runtime, framework, board card, memory
Legal status: Active (assumed; not a legal conclusion)
Application number: CN202410116709.1A
Other languages: Chinese (zh)
Other versions: CN117648211A
Inventor: Liu Hui (刘辉)
Current assignee: Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original assignee: Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202410116709.1A
Published as application CN117648211A; application granted and published as CN117648211B

Landscapes

  • Stored Programmes (AREA)

Abstract

The invention relates to the technical field of artificial intelligence frameworks, and discloses a runtime unified interface of an artificial intelligence framework, a server, and a calling method. The runtime unified interface comprises a docking framework interface and a runtime base interface. The runtime base interface is used for connecting with the runtime interface of the board card device. The docking framework interface is used for indicating to the artificial intelligence framework the interfaces included in the runtime unified interface, and for calling the runtime interface of the board card device through the runtime base interface when a call request is received from the artificial intelligence framework. By providing the runtime unified interface, the invention shortens the time needed to dock a board card device with an artificial intelligence framework and improves the user experience.

Description

Runtime unified interface, server and calling method of an artificial intelligence framework
Technical Field
The invention relates to the technical field of artificial intelligence frameworks, and in particular to a runtime unified interface of an artificial intelligence framework, a server, and a calling method.
Background
An artificial intelligence (AI) framework is a software toolkit for developing and deploying artificial intelligence models. An AI framework provides a series of libraries, APIs and tools that help developers perform tasks such as data processing, model construction, training and inference in an artificial intelligence project. Commonly used AI frameworks include the open-source machine learning framework TensorFlow, the open-source deep learning framework PyTorch, and the PaddlePaddle ("flying paddle") framework. An AI framework is usually docked with an accelerator card (also called a board card device) to accelerate tasks and thereby improve computing efficiency and performance.
However, limited by the manpower and cost an accelerator card manufacturer can invest, the runtime interface of a given class of accelerator card may not be adapted to the runtime interface expected by every AI framework. At present, when an accelerator card is docked with a different type of AI framework, it must be adapted according to the type of that AI framework so that the runtime interface of the accelerator card can be accessed by the framework. As a result, after a user changes the type of AI framework, it takes a long time to dock the accelerator card with the new framework, which affects the user experience.
Specifically, as shown in fig. 1, after the card vendor or the user accesses the runtime interface 100 of the board card device to the custom runtime interface 200 according to the interface functions, the PaddlePaddle framework 300 can complete logical interaction with the board card device by calling the custom runtime interface 200, thereby completing use of the board card device. However, the runtime interface defined by PaddlePaddle is applicable only to the PaddlePaddle framework and not to other types of AI framework, so it cannot solve the problem that docking an accelerator card with an AI framework takes a long time.
Disclosure of Invention
In view of the above, the invention provides a runtime unified interface of an artificial intelligence framework, a server and a calling method, so as to solve the problem that the docking time of an accelerator card and an AI framework is long, affecting the user experience.
In a first aspect, the present invention provides a runtime unified interface for an artificial intelligence framework, the runtime unified interface comprising a docking framework interface and a runtime base interface. The runtime base interface is used for connecting with the runtime interface of the board card device. The docking framework interface is used for indicating to the artificial intelligence framework the interfaces included in the runtime unified interface, and for calling the runtime interface of the board card device through the runtime base interface when a call request is received from the artificial intelligence framework.
After the runtime base interface is connected with the runtime interface of the board card device, if the docking framework interface receives a call request from the AI framework, it can call the runtime interface of the board card device through the runtime base interface. According to this embodiment, by providing a runtime unified interface for the AI framework, a user only needs to connect the runtime interface of the board card device into the runtime unified interface according to the interface requirements to complete docking of the board card device with various AI frameworks, so that the AI framework completes the corresponding data processing through the board card device. This shortens the docking time of the board card device and the AI framework and improves the user experience.
In an alternative embodiment, a plurality of board card devices are connected to the runtime base interface, each of the plurality of board card devices is configured with a device number, and the runtime base interface comprises a device management module. The device management module is used for determining the device number of the target board card device, where the target board card device is the one of the plurality of board card devices that executes the call request.
In this embodiment, by providing the device management module, the board card devices accessing the runtime base interface can be managed, making it convenient for a user to check related information of the accessed board card devices.
In an alternative embodiment, the runtime base interface further comprises a memory management module. The memory management module is used for allocating memory of the target board card device, or allocating memory of the artificial intelligence framework, so as to synchronously or asynchronously transmit data from the artificial intelligence framework to the target board card device, or from the target board card device to the artificial intelligence framework; the memory of the artificial intelligence framework is the memory of the server where the artificial intelligence framework is located. The memory management module is further used for releasing the memory of the target board card device after data has been synchronously or asynchronously transmitted from the artificial intelligence framework to the target board card device, and releasing the memory of the artificial intelligence framework after data has been synchronously or asynchronously transmitted from the target board card device to the artificial intelligence framework.
In this embodiment, by providing the memory management module, the memory of the board card device or the memory of the server where the AI framework is located can be managed more conveniently and rapidly. Synchronous transmission ensures that data is transmitted between the sender and receiver at a controllable speed, reducing data loss and errors and improving transmission stability, while asynchronous transmission can tolerate a certain amount of delay and jitter and therefore offers higher fault tolerance and flexibility.
In an alternative embodiment, the runtime base interface further comprises a first stream management module. The first stream management module is used for creating streams, where a created stream stores a plurality of operation tasks from the artificial intelligence framework. The first stream management module is also used for sending the created streams to the board card device, so that the board card device processes operation tasks on the same stream in sequence and processes operation tasks on different streams in parallel.
In this embodiment, the operation tasks issued by the AI framework to the board card device are completed on the basis of streams, so that timing control of the operation tasks can be achieved.
In an alternative embodiment, the runtime base interface further comprises an event management module. The event management module is used for creating an event on a created stream, where the created event is used for determining whether the operation tasks on the stream have finished executing. The event management module is further used for triggering the first stream management module to synchronize the stream after the recorded event has executed, to indicate that all operation tasks before the event on that stream have finished. The first stream management module synchronizes the stream after the event has executed, and destroys the stream after all operation tasks on the created stream have finished, to free the server resources occupied by the created stream.
In this embodiment, by providing the event management module, the execution status of the operation tasks on a stream can be known more conveniently.
In an alternative embodiment, the runtime base interface further comprises a version management module; the version management module is used for acquiring the runtime version number and the driver version number of the board card device.
In this embodiment, the version management module obtains the runtime version number and the driver version number of the board card device, so that the AI framework can conveniently check the version information of the runtime interface of the currently connected board card device.
In an alternative embodiment, the runtime unified interface further comprises an extension interface, the extension interface comprising a second stream management module configured with a plurality of streams created in advance. The second stream management module is used for sending a target stream to the artificial intelligence framework after receiving instruction information for creating a stream from the artificial intelligence framework, where the target stream is one or more of the plurality of streams. The second stream management module is further used for acquiring the default stream of the board card device, acquiring the current stream used by the board card device, and setting or switching the current stream used by the board card device, so as to facilitate calling of the board card device by the artificial intelligence framework.
In this embodiment, when the AI framework applies for a stream from the back end (the runtime unified interface), the runtime unified interface directly hands one of the pre-applied streams to the AI framework, thereby saving the time otherwise spent frequently creating and destroying streams.
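For illustration only, the pre-created stream pool described above can be sketched as follows. The class and the integer stream handles are assumptions of this sketch, not the patent's actual interface; a real implementation would hold opaque stream handles obtained from the board card runtime.

```cpp
#include <cstddef>
#include <deque>

// Illustrative sketch of the second stream management module's pool of
// pre-created streams: the framework "acquires" a ready-made stream and
// "releases" it back instead of creating/destroying one each time.
class StreamPool {
public:
    explicit StreamPool(int n) {
        for (int i = 0; i < n; ++i) pool_.push_back(i);  // pre-create handles
    }
    // Hand a pre-applied stream to the framework; -1 if the pool is empty.
    int acquire() {
        if (pool_.empty()) return -1;
        int s = pool_.front();
        pool_.pop_front();
        return s;
    }
    // Return a stream to the pool instead of destroying it.
    void release(int s) { pool_.push_back(s); }
    std::size_t available() const { return pool_.size(); }
private:
    std::deque<int> pool_;
};
```

Acquiring and releasing here is constant-time bookkeeping, which is the point of the pool: the cost of creating and destroying streams is paid once, up front.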
In an alternative embodiment, the extension interface further comprises a cache module, and pre-applied memory is configured in a cache pool of the cache module. The cache module is used for sending a target memory to the artificial intelligence framework when the artificial intelligence framework applies for memory, where the target memory is memory in the cache pool. The cache module is further used for marking the target memory as unused when the artificial intelligence framework releases it, saving the target memory in the cache pool again, and actually releasing the target memory only after it has finished being used.
In this embodiment, the cache mechanism avoids frequently applying for and releasing memory from the board card device.
In an alternative embodiment, the cache module is further configured to record, through an event, the time node at which the target memory is used, so as to release the target memory after it has been used.
In this embodiment, the AI framework only needs to call the memory release interface directly when releasing memory; the cache module defers the actual release until after the memory has finished being used.
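The caching behavior above can be sketched in a few lines. This is an assumption-laden illustration: the size-bucketed bookkeeping and the use of host `operator new` as a stand-in for a board card device allocation are choices of this sketch, not the patent's implementation.

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Illustrative cache pool: "released" memory is only marked unused and
// kept in the pool, so a later application of the same size reuses it
// instead of going back to the (here simulated) board card device.
class CachePool {
public:
    void *allocate(std::size_t size) {
        auto it = free_.find(size);
        if (it != free_.end() && !it->second.empty()) {
            void *p = it->second.back();   // reuse a cached block
            it->second.pop_back();
            return p;
        }
        ++device_allocs_;                  // fall back to a real allocation
        return ::operator new(size);
    }
    // Mark the block unused and keep it in the cache pool for reuse.
    void release(void *p, std::size_t size) { free_[size].push_back(p); }

    int device_allocations() const { return device_allocs_; }
private:
    std::map<std::size_t, std::vector<void *>> free_;  // size -> unused blocks
    int device_allocs_ = 0;
};
```

Note that the second application for the same size never reaches the underlying allocator, which is exactly the saving the text describes.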
In an alternative embodiment, the runtime unified interface further comprises a debug interface, the debug interface comprising a log module; and the log module is used for providing logs of different grades.
In this embodiment, the logs are divided into different levels; in a debugging scenario, all logs corresponding to a target level can be output to obtain more detailed log information and assist in analyzing the problem.
In an alternative embodiment, the debug interface further comprises an error checking module; the error checking module is used for converting the received execution result from the board card device into a status return value and sending the status return value to the artificial intelligence framework.
In this embodiment, by providing the error checking module, the AI framework can conveniently learn the completion status of an operation task.
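A minimal sketch of this conversion follows. The vendor result codes and the enumerators are invented for illustration; only the pattern — mapping a board card runtime's execution result onto the unified interface's own status return value — comes from the text.

```cpp
// Unified-interface status return values (illustrative enumerators).
enum class TGStatus { Success = 0, InvalidDevice = 1, Unknown = 2 };

// Hypothetical vendor result codes returned by a board card runtime.
constexpr int VENDOR_OK = 0;
constexpr int VENDOR_BAD_DEVICE = 100;

// Convert the execution result from the board card device into the
// status return value handed back to the AI framework.
TGStatus to_status(int vendor_result) {
    switch (vendor_result) {
        case VENDOR_OK:         return TGStatus::Success;
        case VENDOR_BAD_DEVICE: return TGStatus::InvalidDevice;
        default:                return TGStatus::Unknown;
    }
}
```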
In an alternative embodiment, the debug interface further comprises a trace debug module; the trace debug module is used for outputting call stack information when the runtime unified interface fails.
In this embodiment, by providing the trace debug module, call stack information can be output when the runtime unified interface fails, so as to assist in analyzing the cause of the failure.
In a second aspect, the present invention provides a server comprising the runtime unified interface of the first aspect or any implementation manner corresponding to the first aspect.
In a third aspect, the present invention provides a method for calling the runtime interface of a board card device, applied to the docking framework interface of the runtime unified interface of the first aspect or any corresponding implementation manner, where the runtime unified interface includes a docking framework interface and a runtime base interface, and the runtime base interface is connected to the runtime interface of the board card device. The method includes: receiving a call request from an artificial intelligence framework; and calling the runtime interface of the board card device through the runtime base interface.
After the runtime base interface is connected with the runtime interface of the board card device, if the docking framework interface receives a call request from the artificial intelligence framework, the docking framework interface can call the runtime interface of the board card device through the runtime base interface, so that the artificial intelligence framework completes logical interaction with the runtime interface of the board card device and can thus use the board card device.
In an alternative embodiment, the method further comprises: compiling the runtime unified interface into a first file, wherein the first file comprises a header file and a dynamic link library file, the header file is used for indicating interfaces included in the runtime unified interface to the artificial intelligence framework, and the dynamic link library file is used for connecting the interfaces included in the runtime unified interface.
In this embodiment, the runtime unified interface is compiled into the first file, and when the AI framework needs to call the corresponding interface, the runtime unified interface can be added to the AI framework by loading the dynamic link library file, so that the AI framework is convenient to call the runtime interface of the board card device.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the related art, the drawings that are required to be used in the description of the embodiments or the related art will be briefly described, and it is apparent that the drawings in the description below are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a schematic illustration of an accelerator card interfacing with a flying paddle frame;
FIG. 2 is a schematic diagram of a runtime unified interface of an artificial intelligence framework in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a runtime unified interface of another artificial intelligence framework in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of a method of invoking a runtime interface of a board card device according to an embodiment of the invention;
FIG. 5 is a flow diagram of a method for an artificial intelligence framework to use a runtime unified interface in accordance with an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The following describes in detail the runtime unified interface of the AI framework provided by the present invention with reference to the accompanying drawings.
As shown in fig. 2, the runtime unified interface 400 of the AI framework includes a docking framework interface 410 and a runtime base interface 420.
Wherein the runtime base interface 420 is configured to connect with the runtime interface 100 of the board card device. The docking framework interface 410 is used to indicate to the artificial intelligence (AI) framework 500 the interfaces that the runtime unified interface 400 includes, and to invoke the runtime interface 100 of the board card device through the runtime base interface 420 when a call request is received from the AI framework 500.
Specifically, since common AI frameworks have highly similar requirements for runtime interfaces, the interfaces each common AI framework needs to be supported are first surveyed, and the minimum set of interfaces required by the common AI frameworks is then summarized; the interfaces in this minimum set serve as the runtime base interface, the minimum set being the intersection of the interface sets corresponding to the common AI frameworks. The runtime interface 100 of the board card device configures its base interface file according to the requirements of the runtime base interface 420, so that the runtime base interface 420 can be connected with the runtime interface 100 of the board card device.
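For illustration only, the minimum interface set can be pictured as a table of function pointers that a board card vendor fills in. The struct layout and the stub runtime below are assumptions of this sketch, not the patent's actual definition, though the type names echo the TGStatus/TGDevice names appearing in the code fragments later in the description.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative types (the "custom type" wrappers described in the text).
using TGStatus = int32_t;   // 0 = success, nonzero = error code (assumed)
using TGDevice = int32_t;   // wrapped device number
using TGStream = void *;    // opaque stream handle
using TGEvent  = void *;    // opaque event handle

constexpr TGStatus TG_SUCCESS = 0;

// Sketch of the runtime base interface as a vendor-fillable table
// covering the device/memory/stream/event modules named in the text.
struct TGRuntimeBaseInterface {
    // Device management
    TGStatus (*init_)(void);
    TGStatus (*set_device_)(TGDevice device);
    TGStatus (*get_device_)(TGDevice *device);
    TGStatus (*get_device_count_)(int *count);
    TGStatus (*deinit_)(void);
    // Memory management
    TGStatus (*allocate_)(TGDevice device, void **ptr, std::size_t size);
    TGStatus (*deallocate_)(TGDevice device, void *ptr);
    // Stream and event management
    TGStatus (*create_stream_)(TGDevice device, TGStream *stream);
    TGStatus (*destroy_stream_)(TGDevice device, TGStream stream);
    TGStatus (*create_event_)(TGDevice device, TGEvent *event);
};

// A trivial stub stands in for a real board card runtime so the sketch
// is self-contained; a vendor would point these entries at its own API.
inline TGStatus stub_set_device(TGDevice) { return TG_SUCCESS; }

inline TGRuntimeBaseInterface make_stub_runtime() {
    TGRuntimeBaseInterface rt{};   // unfilled entries stay null
    rt.set_device_ = stub_set_device;
    return rt;
}
```

A vendor that fills every slot of such a table has, in effect, connected its runtime interface to the runtime base interface.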
The docking framework interface 410 indicates to the AI framework 500 the interfaces that the runtime unified interface 400 includes; after the AI framework 500 sends a call request, the docking framework interface 410 calls the runtime interface 100 of the board card device through the runtime base interface 420, so that the AI framework 500 completes logical interaction with the runtime interface 100 of the board card device and can thus use the board card device. By way of example, the AI framework may be, but is not limited to, the open-source deep learning framework PyTorch, the PaddlePaddle framework, or another AI framework. The board card device may be, but is not limited to, the Cambricon MLU370, the Muxi MXC500, or the NVIDIA A30.
Illustratively, the runtime unified interface 400 may be compiled into a first file, and the AI framework 500 invokes the runtime base interface 420 by loading a dynamic link library (.so) file. The first file includes a header file and a dynamic link library file; the header file indicates to the AI framework 500 the interfaces included in the runtime unified interface 400, and the dynamic link library file connects the interfaces included in the runtime unified interface 400.
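As a hedged sketch of how a framework might load such a dynamic link library at run time, the following uses the POSIX dlopen/dlsym mechanism. The library path and the exported symbol name ("SetDevice") are assumptions for illustration, not names defined by the patent.

```cpp
#include <dlfcn.h>

// Illustrative types matching the patent's snippets.
using TGStatus = int;
using TGDevice = int;
using SetDeviceFn = TGStatus (*)(TGDevice);

// Load the runtime unified interface .so and resolve one entry point.
// Returns nullptr if the library or the symbol cannot be found.
SetDeviceFn load_set_device(const char *so_path) {
    void *handle = dlopen(so_path, RTLD_NOW);   // load the .so file
    if (!handle) return nullptr;
    // Resolve the docking framework entry point by its exported name.
    return reinterpret_cast<SetDeviceFn>(dlsym(handle, "SetDevice"));
}
```

In this scheme the header file gives the framework the function signatures at compile time, while dlopen/dlsym binds them to the vendor's implementation at run time.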
After the runtime base interface 420 is connected with the runtime interface 100 of the board card device, if the docking framework interface 410 receives a call request from the AI framework 500, the docking framework interface 410 can call the runtime interface 100 of the board card device by calling the runtime base interface 420. According to this embodiment, by providing the runtime unified interface 400 of the AI framework, a user only needs to connect the runtime interface 100 of the board card device into the runtime unified interface 400 according to the interface requirements to complete docking of the board card device with various AI frameworks, so that the AI framework can complete the corresponding data processing through the board card device. This shortens the docking time of the board card device and the AI framework and improves the user experience.
The runtime base interface 420 is described in detail below.
In particular, the runtime base interface 420 may include at least one of a device management module, a memory management module, a first stream management module, an event management module, and a version management module.
Illustratively, the device management module (SetDevice) is configured to determine the device number of the target board card device, where a plurality of board card devices are connected to the runtime base interface 420, each of the plurality of board card devices is configured with a device number, and the target board card device is the board card device selected from the plurality to execute the call request.
Specifically, the interfaces and functions of the device management module may be as shown in table 1. As in table 1, the device management module may also be used for initializing the board card device (Init), for obtaining the device number of the board card device currently connected to the runtime base interface 420 (GetDevice), for obtaining the number of board card devices currently connected to the runtime base interface 420 (GetDeviceCount), and for releasing the resources obtained at initialization (DeInit) after the target board card device has executed the corresponding call request.
TABLE 1

Interface        Function
Init             Initialize the board card device
SetDevice        Determine the device number of the target board card device
GetDevice        Obtain the device number of the currently connected board card device
GetDeviceCount   Obtain the number of currently connected board card devices
DeInit           Release the resources obtained at initialization
Illustratively, partial code 1 of the device management module is as follows:
/* Set current device: determine the device number of the target board card device */
TGStatus (*set_device_)(const TGDevice device);
Here the device number obtained from the target board card device is represented by a runtime unified interface custom type, so that parameter types are unified, and an operation result (such as setting success or setting failure) is returned.
Wherein the runtime unified interface custom type can be an encapsulation of the plain data type (int). The device number obtained from the board card device is uniformly converted, which makes it convenient to dock the interface with the runtime interface of the AI framework.
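A minimal sketch of this "custom type as an encapsulation of int" idea might look as follows; the field and function names are illustrative, not the patent's.

```cpp
#include <cstdint>

// Thin wrapper encapsulating a plain int device number in the unified
// interface's own custom type, so that all vendors' device numbers are
// presented to the AI framework in one uniform parameter type.
struct TGDevice {
    int32_t id;   // raw device number obtained from the board card runtime
};

// Uniformly convert a vendor device number (plain int) to the custom type.
inline TGDevice to_tg_device(int vendor_device_id) {
    return TGDevice{static_cast<int32_t>(vendor_device_id)};
}
```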
Specifically, taking the Cambricon board card as an example, the runtime interface provided by the Cambricon board card also includes a device management module; for example, the Cambricon interface for setting the device number used by the board card, i.e., for determining the device number of the target board card device, is cnrtSetDevice.
Illustratively, partial code 2 of the device management module is as follows:
TGStatus SetDevice(const TGDevice device) {
    DEBUG_LOG();
    CHECK_RT_SUCCESS(cnrtSetDevice(device));
    return ret;
}
The function of this code is to wrap the interface of the board card device (e.g., cnrtSetDevice of the Cambricon MLU370) into the functionally corresponding interface (e.g., SetDevice) in the runtime unified interface.
Specifically, the device management module is a basic interface of the board card device, so that the device management module of the board card device is directly converted and packaged, and the runtime interface of the board card device can be accessed into the runtime unified interface.
In this embodiment, by setting the device management module, the board device accessing to the runtime basic interface 420 can be managed, so that the user can check the related information of the accessed board device conveniently.
Illustratively, the memory management module is configured to allocate (apply for) memory of the target board card device (Device) or allocate memory of the AI framework 500, in order to transfer data from the AI framework 500 to the target board card device or from the target board card device to the AI framework 500. The memory of the AI framework 500 is the memory of the server (Host) where the AI framework 500 is located.
Illustratively, partial code 3 of the memory management module may be as follows:
TGStatus Allocate(const TGDevice device, void **ptr, size_t size);
This converts the board card device's memory-application interface into the function implementation of the runtime unified interface custom-type memory-application interface.
Specifically, the interface cnrtMalloc(ptr, size) of the memory management module in the Cambricon MLU370 docks with the Device/Host Memory Allocate interface of the memory management module.
Optionally, the memory management module may further be configured to release (Deallocate) the memory of the target board card device (Device) or release the memory of the server where the AI framework 500 is located. The memory management module may also transmit data from the target board card device to the AI framework synchronously or asynchronously (Sync_D2H/Async_D2H), and transmit data from the AI framework to the target board card device synchronously or asynchronously (Sync_H2D/Async_H2D).
Specifically, the interfaces and functions of the memory management module may be as shown in table 2.
TABLE 2

Interface                 Function
Allocate (Device/Host)    Allocate memory on the target board card device or on the server where the AI framework is located
Deallocate (Device/Host)  Release memory on the target board card device or on the server where the AI framework is located
Sync_H2D / Async_H2D      Transmit data from the AI framework (Host) to the board card device synchronously / asynchronously
Sync_D2H / Async_D2H      Transmit data from the board card device to the AI framework (Host) synchronously / asynchronously
In this embodiment, by providing the memory management module, the memory of the board card device or the memory of the server where the AI framework is located can be managed more conveniently and rapidly.
It should be appreciated that in synchronous transmission, the sender (the artificial intelligence framework or the board card device) and the receiver (the board card device or the artificial intelligence framework) must keep synchronized in time: each time the sender sends data, the receiver must be ready to receive and respond immediately, and the sender must wait for the receiver's response before sending the next piece of data. In asynchronous transmission, there is no strict timing constraint between sender and receiver; each can send or receive data at its own pace. Synchronous transmission ensures that data moves between sender and receiver at a controllable speed, reducing data loss and errors and improving transmission stability, while asynchronous transmission tolerates a certain amount of delay and jitter and therefore offers higher fault tolerance and flexibility.
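The synchronous-versus-asynchronous distinction above can be sketched with a plain byte buffer standing in for board card memory. The function names mirror the Sync_H2D/Async_H2D idea from the table but are assumptions of this sketch, not the patent's signatures.

```cpp
#include <future>
#include <vector>

using Buffer = std::vector<unsigned char>;

// Synchronous host-to-device copy: the caller blocks until the data
// has fully arrived, like a blocking Sync_H2D transfer.
void sync_h2d(Buffer &device, const Buffer &host) {
    device = host;   // completes before returning
}

// Asynchronous host-to-device copy: returns immediately; the caller
// synchronizes later (here via the returned future), like Async_H2D.
std::future<void> async_h2d(Buffer &device, const Buffer &host) {
    return std::async(std::launch::async, [&device, &host] { device = host; });
}
```

The caller of sync_h2d can use the destination buffer immediately; the caller of async_h2d must wait on the future (or, in the patent's terms, synchronize the stream) before reading it.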
The first stream management module is used for creating streams (CreateStream) and for sending the created streams to the board card device, so that the board card device processes operation tasks on the same stream in sequence and processes operation tasks on different streams in parallel. A created stream stores a plurality of operation tasks from the AI framework; an operation task may be an operation of reading, writing, or deleting data.
Illustratively, partial code 4 of the first stream management module may be as follows:
TGStatus (*create_stream_)(const TGDevice device, TGStream *stream);
This converts the board card device's stream-creation interface into the function implementation of the runtime unified interface custom-type stream-creation interface.
Optionally, the first stream management module may also be used to destroy streams (DestroyStream) and to synchronize streams (SyncStream), i.e. to wait for the operation tasks on a given stream to complete.
Specifically, the interfaces and functions of the first flow management module may be as shown in table 3.
TABLE 3

Interface      Function
CreateStream   Create a stream for storing operation tasks from the AI framework
DestroyStream  Destroy a stream and release the resources it occupies
SyncStream     Wait for the operation tasks on a given stream to complete
In this embodiment, the operation tasks issued by the AI framework to the board card device are completed on the basis of streams, so that timing control of the operation tasks can be achieved. After the operation tasks on a stream are completed, destroying the stream promptly releases the system resources it occupies, avoiding waste and improving system performance and responsiveness. It also prevents sensitive data possibly contained in the stream (such as personal information or confidential documents) from being improperly accessed or leaked, improving data security, and it keeps the code clearer and more readable, reducing unnecessary complexity and improving maintainability.
The specific method of destroying the stream depends on the programming language and framework used. In general, depending on the type and context of the stream object used, an appropriate method or function may be called to destroy the stream and release the associated resources. Such as closing file streams, closing network connections, freeing memory, etc.
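The ordering contract described above — sequential within a stream, independent across streams — can be modeled in a few lines. This single-threaded simulation is illustrative only; real board card streams execute on hardware, and the class here models nothing but the ordering semantics.

```cpp
#include <functional>
#include <queue>

// Minimal simulation of a stream: operation tasks run strictly in the
// order they were enqueued, and separate SimStream instances are
// independent of one another (so they could proceed in parallel).
class SimStream {
public:
    void enqueue(std::function<void()> task) { tasks_.push(std::move(task)); }

    // Drain the stream, like SyncStream: waits until every queued
    // operation task has executed, in submission order.
    void synchronize() {
        while (!tasks_.empty()) {
            tasks_.front()();
            tasks_.pop();
        }
    }
private:
    std::queue<std::function<void()>> tasks_;
};
```

Synchronizing one stream never touches the tasks of another, which is what lets a board card device process different streams in parallel.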
The event management module is illustratively used to create an event (Create Event), where the created event is used to determine whether a plurality of operation tasks on a stream have been executed to completion. Specifically, after an event (Event) is recorded (Record) on a stream, since the operation tasks on the same stream are executed sequentially, the event being triggered means that the operation tasks before the event have been executed to completion, thereby providing a synchronization function for the operation tasks preceding the event.
Optionally, the event management module may also be used to destroy events (Destroy Event); to synchronize events (Sync Event), i.e., to indicate that the operation tasks preceding the event have been fully executed; to record events on a stream (Record Event); to synchronize the stream after waiting for the recorded event to be executed (triggered) (Wait Event); and to query event status (Query Event) to determine whether the event has been executed. Specifically, the interfaces and functions of the event management module may be as shown in Table 4.
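The Record / Query / Wait semantics just described can be sketched as follows. This is an illustrative model, not the patent's code: recording an event enqueues a marker task on the stream, and because tasks on one stream run in order, the marker completing implies every task recorded before it has finished.

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <future>
#include <queue>
#include <thread>

// An event is a handle to a marker on a stream: Query polls whether the
// marker has been reached, Wait blocks until it has.
struct Event {
    std::shared_future<void> done;
    bool Query() const {  // Query Event: non-blocking completion check
        return done.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
    }
    void Wait() const { done.wait(); }  // Wait Event / Sync Event
};

class Stream {
public:
    void Enqueue(std::function<void()> task) { tasks_.push(std::move(task)); }

    // Record Event: the returned event triggers only after all tasks enqueued
    // before this call have executed.
    Event Record() {
        auto p = std::make_shared<std::promise<void>>();
        Event e{p->get_future().share()};
        tasks_.push([p] { p->set_value(); });
        return e;
    }

    // Drive the stream: run the queued tasks strictly in order.
    void RunAll() {
        while (!tasks_.empty()) {
            tasks_.front()();
            tasks_.pop();
        }
    }

private:
    std::queue<std::function<void()>> tasks_;
};
```

When `e.Wait()` returns, every task enqueued before `Record()` is complete, while tasks enqueued after the marker may still be pending — the synchronization point sits exactly at the event.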
By way of example, the portion of code 5 of the event management module may be as follows:
TGStatus CreateEvent(const TGDevice device, TGEvent *event) {
    DEBUG_LOG();
    CHECK_RT_SUCCESS(cnrtNotifierCreate(reinterpret_cast<cnrtNotifier_t *>(event)));
    return ret;  // ret is assumed to be declared and set by the CHECK_RT_SUCCESS macro
}

That is, the interface with which the board card device creates an event is converted into the function implementation of the create-event interface of the custom type of the runtime unified interface.
That is, the custom-type creation-event interface (cnrtNotifier_t) of the board card device (e.g., the Cambricon MLU370) is packaged as Create Event in the runtime unified interface; upon docking, the handle can be assigned by forced type conversion.
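The wrapping pattern described here — storing a vendor-specific handle behind the unified interface's opaque type and bridging the two with a forced pointer conversion — can be sketched with a mocked vendor API. All vendor names below are stand-ins for a real runtime such as cnrt; the mock is illustrative, not the patent's code.

```cpp
#include <cassert>

// Mocked vendor-side types (stand-ins for cnrtNotifier_t and friends).
struct VendorNotifier { int id; };
using vendorNotifier_t = VendorNotifier *;   // the vendor's handle type
using TGEvent = void *;                      // the unified interface's opaque handle
enum TGStatus { TG_SUCCESS = 0, TG_ERROR = 1 };

// Mocked vendor call (in the patent this role is played by cnrtNotifierCreate).
int vendorNotifierCreate(vendorNotifier_t *n) {
    *n = new VendorNotifier{7};
    return 0;  // 0 == vendor success code in this mock
}

// Unified-interface wrapper: the vendor handle is written straight into the
// opaque TGEvent slot via a forced pointer conversion, so the AI framework
// never sees the vendor type.
TGStatus CreateEvent(TGEvent *event) {
    if (vendorNotifierCreate(reinterpret_cast<vendorNotifier_t *>(event)) != 0)
        return TG_ERROR;
    return TG_SUCCESS;
}
```

The same cast is applied in reverse when the unified interface later passes the opaque handle back into vendor calls, which is what keeps the wrapper zero-copy.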
Specifically, the event management module creates an event on a stream created by the first stream management module and, after the event has been executed (triggered), triggers the first stream management module to synchronize the stream. In response, the first stream management module destroys the stream after determining that all operation tasks on the created stream have been executed, releasing the server resources occupied by the created stream.
Table 4
In this embodiment, by setting the event management module, the execution situation of the operation task on the stream can be more conveniently understood.
Illustratively, the version management module is configured to obtain the runtime version number of the board card device (Get Runtime Version) and to obtain the driver version number of the board card device (Get Driver Version). Specifically, the interfaces and functions of the version management module may be as shown in Table 5.
In this embodiment, the version management module is set to obtain the runtime version number and the drive version number of the board card device, so that the AI framework can conveniently check the version information of the runtime interface of the currently connected board card device.
Table 5
Illustratively, the version management module is one of the basic interfaces in the runtime base interface; its setting is similar to that of the modules described above and will not be repeated here.
To increase the efficiency of the AI framework operating the board card device, in some alternative embodiments, as shown in fig. 3, the runtime unified interface 400 of the AI framework further includes an expansion interface 430 and/or a debug interface 440. That is, the runtime unified interface 400 of the AI framework may include an expansion interface 430 or a debug interface 440, and may also include the expansion interface 430 and the debug interface 440.
Specifically, the expansion interface 430 and/or the debug interface 440 are provided directly to the AI framework side for use and do not interface with the runtime interface of the board card device themselves; instead, the expansion interface 430 and/or the debug interface 440 may invoke the runtime interface 100 of the board card device through the runtime base interface 420.
Expansion interface 430 and debug interface 440 are described in detail below.
Illustratively, the expansion interface 430 may include at least one of a second stream management module 431, a caching module 432, and an event module 433. Illustratively, the second stream management module 431 may be connected to a runtime interface of the board device through the first stream management module, the buffer module 432 may be connected to a runtime interface of the board device through the memory management module, and the event module 433 may be connected to a runtime interface of the board device through the event management module.
Specifically, the second stream management module 431 is provided with a stream pool configured with a plurality of streams created in advance, for example N pre-created streams, where N is an integer that a developer may configure empirically, such as 16, 32 or 64. The second stream management module 431 is configured to send a target stream to the AI framework 500 after receiving indication information for creating a stream from the AI framework 500, where the target stream is one or more of the plurality of streams, and the plurality of streams may be reused cyclically. Illustratively, when all streams in the stream pool have been used by the AI framework and a further stream is requested, the second stream management module 431 may send the first stream, starting again from the head of the stream pool, to the AI framework for cyclic reuse.
In this embodiment, when the AI framework applies for a stream from the back end (the runtime unified interface), the runtime unified interface 400 directly takes out a pre-applied stream from a plurality of streams to the AI framework, thereby saving the time of frequently creating and destroying streams.
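The pool-and-wrap-around behaviour can be sketched as follows. Names are illustrative, not from the patent: N streams are created once up front, requests are served from the pool, and when all N have been handed out the pool wraps back to the head and reuses streams cyclically instead of creating new ones.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A pre-created stream handle; in a real backend this would wrap the board
// card device's stream object.
struct StreamHandle { int id; };

class StreamPool {
public:
    // Create all N streams up front, so no per-request creation cost remains.
    explicit StreamPool(int n) {
        for (int i = 0; i < n; ++i) streams_.push_back(StreamHandle{i});
    }

    // Serves the AI framework's "create stream" indication from the pool.
    StreamHandle *Acquire() {
        StreamHandle *s = &streams_[next_];
        next_ = (next_ + 1) % streams_.size();  // cyclic reuse from the head
        return s;
    }

private:
    std::vector<StreamHandle> streams_;
    std::size_t next_ = 0;
};
```

The AI framework pays the stream-creation cost exactly once, at pool construction, which is the saving the paragraph above describes.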
Optionally, the second stream management module 431 may also be used to obtain the default stream of the board card device (Get Default Stream), to obtain the stream currently used by the board card device (Get Current Stream), and to set or switch the stream currently used by the board card device (Set Current Stream). The board card device is connected with the runtime unified interface.
Specifically, after receiving the call request, the runtime unified interface may call the stream-creation interface (cnrtQueueCreate) of the docked board card device's runtime interface to obtain the stream information of the board card device; in addition, the obtained stream information may be assigned to the Default Stream for use by the AI framework as the default stream. In the initial state, the stream currently used by the board card device may be the Default Stream; the AI framework may switch the current stream of the board card device to a specified stream by calling Set Current Stream, and the AI framework may also call the first stream management module to create a stream and use the newly created stream as the board card device's current stream for later logic.
Specifically, the interfaces and functions of the second stream management module 431 may be as shown in Table 6. In this embodiment, by providing the second stream management module 431, it is possible to obtain the default stream of the board card device, obtain the stream currently used by the board card device, and set or switch the stream currently used by the board card device, so that the AI framework operates the board card device more efficiently.
Table 6
Specifically, the cache pool of the cache module 432 is configured with pre-applied memory, and the cache module 432 is configured to send a target memory to the AI framework 500 when the AI framework 500 applies for memory. The target memory is memory in the cache pool. That is, when the AI framework 500 needs to apply for memory, the cache module 432 directly selects matching memory from the cache pool, according to its allocation algorithm, for use by the AI framework 500. In addition, when the AI framework 500 releases the memory, the cache module 432 marks it as unused and keeps it in the cache pool; the memory in the cache pool is released only when the program exits.
In this embodiment, frequent application and release of the memory from the board card device end can be avoided through a cache (cache) mechanism.
Specifically, the cache module 432 stores the memory allocation situation of the cache pool in a linked list. When memory needs to be allocated, the linked list is traversed to find a block of suitable size to return, and the linked list information is then updated. When the AI framework releases memory, the linked list is updated to mark the node corresponding to that address space as unused; if the nodes immediately before and after that node are also in an unused state, they may be merged into a single linked-list node.
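The linked-list bookkeeping just described can be sketched as follows. This is illustrative, not the patent's code: the cache pool is one pre-applied region tracked as a list of {offset, size, used} nodes; allocation scans for a free node of sufficient size (first fit here), splitting off any remainder, and freeing marks the node unused and merges it with unused neighbours.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

struct Node { std::size_t off, size; bool used; };

class CachePool {
public:
    // The whole pool starts as a single unused node covering the region.
    explicit CachePool(std::size_t bytes) { nodes_.push_back({0, bytes, false}); }

    // Returns an offset into the pooled region, or SIZE_MAX on failure.
    std::size_t Alloc(std::size_t n) {
        for (auto it = nodes_.begin(); it != nodes_.end(); ++it) {
            if (it->used || it->size < n) continue;
            if (it->size > n)  // split off the unused remainder
                nodes_.insert(std::next(it), {it->off + n, it->size - n, false});
            it->size = n;
            it->used = true;
            return it->off;
        }
        return static_cast<std::size_t>(-1);
    }

    void Free(std::size_t off) {
        for (auto it = nodes_.begin(); it != nodes_.end(); ++it) {
            if (it->off != off || !it->used) continue;
            it->used = false;
            auto nxt = std::next(it);      // merge with the next neighbour
            if (nxt != nodes_.end() && !nxt->used) {
                it->size += nxt->size;
                nodes_.erase(nxt);
            }
            if (it != nodes_.begin()) {    // merge with the previous neighbour
                auto prv = std::prev(it);
                if (!prv->used) {
                    prv->size += it->size;
                    nodes_.erase(it);
                }
            }
            return;
        }
    }

    std::size_t FreeNodeCount() const {
        std::size_t c = 0;
        for (const auto &n : nodes_) c += !n.used;
        return c;
    }

private:
    std::list<Node> nodes_;
};
```

The neighbour-merging step is what keeps the free list from fragmenting into many small unusable nodes after repeated allocate/release cycles.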
When the memory of the cache module 432 is insufficient, memory can be applied for from the board card device again, and the newly applied memory is stored in the cache pool to meet the memory application requirements of the AI framework. When the cache pool cannot satisfy the size requested by the AI framework and the board card device does not have enough free memory either, the cache module 432 performs a task synchronization operation and waits for the asynchronous operations to complete; once the memory occupied by the asynchronous operations is in an unused state, the unused memory in the cache pool is returned to the board card device, and memory is then applied for from the board card device again.
The AI framework generally operates asynchronously: after memory is allocated, an asynchronous operation uses the memory, and the memory is then released. Because an asynchronous operation does not execute immediately when its command is issued, but at a later point according to the order of the operation tasks on the stream, the cache module cannot determine the point at which the memory is actually released.
Based on the above asynchronous operation scenario, the cache module is further configured to record, through an event, the point at which the target memory finishes being used, and to release the target memory only after it has been used. That is, when releasing memory the AI framework only needs to call the memory release interface directly; the cache module releases the memory after it has actually been used, so the release is deferred.
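The deferred-release idea can be sketched as follows (illustrative names, not the patent's code): when the framework "frees" a buffer that a pending asynchronous task still uses, the cache queues the actual release as a marker behind that task on the stream, so the buffer is only marked unused once the stream executes past the marker.

```cpp
#include <cassert>
#include <functional>
#include <queue>

// A single-stream model: tasks run strictly in the order they were enqueued,
// and a "free" becomes just one more task queued behind the pending work.
class DeferredFreeStream {
public:
    void Enqueue(std::function<void()> task) { tasks_.push(std::move(task)); }

    // Called at the framework's free(): instead of releasing immediately,
    // queue the release behind every task already on the stream (this plays
    // the role of the event recorded by the cache module).
    void FreeAfterPendingWork(bool *in_use) {
        tasks_.push([in_use] { *in_use = false; });
    }

    // Drive the stream in order.
    void RunAll() {
        while (!tasks_.empty()) {
            tasks_.front()();
            tasks_.pop();
        }
    }

private:
    std::queue<std::function<void()>> tasks_;
};
```

From the framework's point of view the free call returns immediately; the in-order stream guarantees the memory is never reclaimed underneath a still-pending asynchronous task.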
Since the AI frameworks 500 in common use at present place no further requirements on events, the event management module in the runtime base interface can meet the requirements of existing AI frameworks 500; the event module 433 is configured in the expansion interface for docking when an AI framework 500 has new requirements.
Illustratively, debug interface 440 includes at least one of log module 441, error checking module 442, and trace debug module 443.
Specifically, the log (Log) module 441 is used to provide logs of different levels. Illustratively, the level of the logs may be set by an environment variable.
In this embodiment, the logs are set to different levels, and in a debug scenario, all the logs corresponding to the target level may be called out to obtain more detailed log information to assist in analyzing the problem.
For example, in a debugging scenario the parameters of each interface need to be known, whereas in normal execution they do not. A level is therefore assigned, through the log module, to the log entries that carry the interface parameters; in the debugging scenario the parameters are output by enabling the logs of that level. This prevents the interface parameters from being output during normal execution and avoids affecting the performance of the server.
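An environment-variable-controlled log level of the kind described can be sketched as follows. The variable name TG_LOG_LEVEL and the level values are assumptions for the example, not taken from the patent: a message is emitted only when its level does not exceed the level selected at process start, so verbose output such as per-interface parameters stays off outside debug scenarios.

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

enum LogLevel { LOG_ERROR = 0, LOG_INFO = 1, LOG_DEBUG = 2 };

// Read the level from the environment once per call; unset means errors only.
int CurrentLogLevel() {
    const char *env = std::getenv("TG_LOG_LEVEL");  // hypothetical variable name
    return env ? std::atoi(env) : LOG_ERROR;
}

bool LogEnabled(LogLevel level) { return level <= CurrentLogLevel(); }

// Emit the message only when its level is enabled.
void Log(LogLevel level, const char *msg) {
    if (LogEnabled(level)) std::fprintf(stderr, "%s\n", msg);
}
```

Running with `TG_LOG_LEVEL=2` would surface the debug-level parameter dumps; running without it leaves only error logs active.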
Specifically, the error (Error) checking module 442 is used to convert the execution results received from the board card device into a status return value and to send the status return value to the artificial intelligence framework.
In this embodiment, by providing the error checking module 442, the AI framework 500 can be facilitated to know the completion of the operation task.
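The conversion performed by the error checking module can be sketched as follows. The status codes, the vendor result values, and the strings are all illustrative, not a real device's: raw execution results from the board card are folded into a unified status value, with a string lookup so that detailed error text can be shown when something goes wrong.

```cpp
#include <cassert>
#include <cstring>

enum TGStatus { TG_SUCCESS = 0, TG_ERROR_INVALID = 1, TG_ERROR_DEVICE = 2 };

// Map a raw vendor result code onto the unified status space, so every
// backend reports errors to the AI framework in the same vocabulary.
TGStatus ToStatus(int vendor_result) {
    switch (vendor_result) {
        case 0:  return TG_SUCCESS;
        case 11: return TG_ERROR_INVALID;  // hypothetical vendor code
        default: return TG_ERROR_DEVICE;
    }
}

// String form of a status, suitable for display on an error.
const char *StatusString(TGStatus s) {
    switch (s) {
        case TG_SUCCESS:       return "success";
        case TG_ERROR_INVALID: return "invalid argument";
        default:               return "device error";
    }
}
```

The framework then only ever branches on TGStatus values, while the string table supplies the detailed text mentioned in the next paragraph.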
Optionally, the error checking module 442 is further configured to retrieve the character string information corresponding to an error when the error occurs in the board card device, and to display that character string information on the display device, so that a user can conveniently learn the details of the error. The display device may be the display device of the server where the runtime unified interface is located.
Specifically, the trace debug module 443 is configured to output call stack information to assist in analyzing a cause of a failure when a runtime unified interface fails (Core Dump).
In this embodiment, there is also provided a method for calling the runtime interface of a board card device, which may be applied to the docking framework interface of the runtime unified interface, where the runtime base interface of the runtime unified interface is connected with the runtime interface of the board card device. Fig. 4 is a flowchart of a method for calling the runtime interface of a board card device according to an embodiment of the present invention; as shown in fig. 4, the method includes the following steps:
step S401, a call request from an artificial intelligence framework is received.
The call request is request information for calling a target interface, where the target interface is one of the interfaces of the runtime unified interface; for example, the target interface may be any module in the runtime base interface, any module in the expansion interface, or any module in the debugging interface.
Step S402, calling a runtime interface of the board card equipment through the runtime base interface.
After the runtime base interface is connected with the runtime interface of the board card device, if the docking framework interface receives a call request from the artificial intelligence framework, it can call the runtime interface of the board card device through the runtime base interface. The artificial intelligence framework can thereby complete logical interaction with the runtime interface of the board card device, and in turn complete its use of the board card device.
In some optional embodiments, the method for calling the runtime interface of the board card device further includes: the runtime unified interface is compiled into a first file. The first file comprises a header file and a dynamic link library file, wherein the header file is used for indicating interfaces included in the unified interface in the operation process to the artificial intelligent framework, and the dynamic link library file is used for connecting the interfaces included in the unified interface in the operation process.
In this embodiment, the runtime unified interface is compiled into the first file, and when the AI framework needs to call the corresponding interface, the runtime unified interface can be added to the AI framework by loading the dynamic link library file, so that the AI framework is convenient to call the runtime interface of the board card device.
In this embodiment, there is also provided a method for using a runtime unified interface by an artificial intelligence framework, which may be used to configure a processor, a computer, or a server, etc. of the artificial intelligence framework and the runtime unified interface, and fig. 5 is a schematic flow chart of a method for using a runtime unified interface by an artificial intelligence framework according to an embodiment of the present invention, as shown in fig. 5, the method includes the following steps:
in step S501, the storage location of the first file is specified by an environment variable.
The first file is determined through a unified interface in compiling operation. The first file comprises a header file and a dynamic link library file, wherein the header file is used for indicating interfaces included in the unified interface in the running process, and the dynamic link library file is used for connecting the interfaces included in the unified interface in the running process.
Specifically, the runtime unified interface docks with the runtime interface of the board card device by configuring the basic interface file of that device. When a plurality of board card devices dock with the runtime unified interface, the basic interface files corresponding to the different devices can be distinguished by defining a back-end name (Define BACKEND_NAME). At compile time, the basic interface file may be selected by specifying BACKEND_NAME, so that only the runtime unified interface of the specified board card device is compiled.
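The compile-time selection can be sketched as a preprocessor definition supplied by the build. The macro name follows the text's BACKEND_NAME; the fallback value and the accessor are illustrative: passing `-DBACKEND_NAME="..."` to the compiler is what picks which backend's basic interface file ends up in the build.

```cpp
#include <cassert>
#include <cstring>

// The build system passes -DBACKEND_NAME="mlu370" (or similar) so that only
// the base-interface file of the specified board card device is compiled in.
#ifndef BACKEND_NAME
#define BACKEND_NAME "default_backend"  // illustrative fallback when the build sets nothing
#endif

// Report which backend this binary was compiled for.
const char *SelectedBackend() { return BACKEND_NAME; }
```

Because the choice happens at preprocessing time, the resulting library contains exactly one backend's glue code, matching the "compiles only the runtime unified interface of the specified board device" behaviour above.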
Step S502, loading dynamic link library files in the form of plug-ins.
Specifically, when the artificial intelligence framework issues an operation task, the dynamic link library file is loaded through the specified path of the first file (BACKEND_PATH), and the back-end interface (the runtime unified interface) corresponding to the dynamic link library file is then added into the artificial intelligence framework, so that the artificial intelligence framework uses the runtime unified interface.
Illustratively, the dynamically linked library file may be loaded by dynamically loading a function (dlopen) of the shared library.
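The loading step can be sketched with the POSIX dlopen call mentioned above. The BACKEND_PATH variable name follows the text; everything else is illustrative, and passing a null path to dlopen opens the running program itself, which is enough to show the call pattern without shipping a real backend .so.

```cpp
#include <cassert>
#include <cstdlib>
#include <dlfcn.h>

// Open the backend plug-in named by an environment variable (e.g. BACKEND_PATH).
// RTLD_NOW resolves all symbols immediately, so a broken backend fails fast
// at load time instead of at first call.
void *LoadBackend(const char *path_env_name) {
    const char *path = std::getenv(path_env_name);
    return dlopen(path, RTLD_NOW);  // path == nullptr opens the main program
}
```

In the real flow, dlsym would then be used on the returned handle to resolve the back-end interface's entry points before registering them with the framework.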
Step S503, calling the runtime interface of the board card equipment through the unified runtime interface to enable the board card equipment to execute the operation task issued by the artificial intelligent framework.
Specifically, after the runtime unified interface is accessed to the artificial intelligent framework, the runtime interface of the board card device can be called through the runtime unified interface, so that the board card device can execute the operation task issued by the artificial intelligent framework.
According to the method for using the runtime unified interface by the artificial intelligent framework, the runtime unified interface can be conveniently and rapidly added to the artificial intelligent framework by loading the dynamic link library file, and then the artificial intelligent framework can complete logic interaction with the runtime interface of the board card equipment through the runtime unified interface, so that the board card equipment is used.
In addition, when the AI framework is the PaddlePaddle framework, the runtime unified interface provided by the application is compatible with the runtime interface defined by PaddlePaddle, so the runtime unified interface can be added to PaddlePaddle directly, or with a small amount of modification specific to PaddlePaddle.
The embodiment of the invention also provides a server with the unified interface in the running process shown in the figure 2 or the figure 3.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a server according to an alternative embodiment of the present invention. As shown in fig. 6, the server apparatus includes: one or more processors 610, a memory 620, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the server device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple server devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 610 is illustrated in fig. 6.
The processor 610 may be a central processor, a network processor, or a combination thereof. The processor 610 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 620 stores instructions executable by the at least one processor 610 to cause the at least one processor 610 to perform methods illustrated by implementing the embodiments described above.
Memory 620 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the server device, or the like. In addition, memory 620 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 620 may optionally include memory located remotely from processor 610, which may be connected to the server device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 620 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; memory 620 may also include a combination of the types of memory described above.
The server apparatus further comprises input means 630 and output means 640. The processor 610, memory 620, input devices 630, and output devices 640 may be connected by a bus or other means, for example in fig. 6.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments described above may be implemented in hardware or firmware, or realized as computer code recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium and downloaded over a network to be stored on a local storage medium, so that the method described herein can be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above kinds of memory. It will be appreciated that the computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor or hardware, implements the methods illustrated in the above embodiments.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the claims.

Claims (13)

1. The runtime unified interface of the artificial intelligent framework is characterized by comprising a docking framework interface, a runtime base interface, an expansion interface and a debugging interface;
The runtime base interface is used for being connected with a runtime interface of the board card equipment, and the runtime base interface is an interface in an intersection set of interface sets corresponding to a plurality of artificial intelligent frameworks;
the interface of the docking framework is used for indicating interfaces included in the unified interface of the runtime to the artificial intelligent framework, and is used for calling the runtime interface of the board card equipment through the runtime base interface when receiving a call request from the artificial intelligent framework, wherein the call request is request information for calling a target interface, the target interface is one of the unified interfaces of the runtime, and the expansion interface or the debugging interface calls the runtime interface of the board card equipment through the runtime base interface when the target interface is the expansion interface or the debugging interface;
the expansion interface comprises a second stream management module and a cache module, wherein the second stream management module is configured with a plurality of streams which are created in advance, and a memory which is applied in advance is configured in a cache pool of the cache module;
The second flow management module is used for sending a target flow to the artificial intelligence framework after receiving the indication information of creating the flow from the artificial intelligence framework, wherein the target flow is one or more of the flows;
The second flow management module is further configured to obtain a default flow of the board card device, obtain a flow currently used by the board card device, and set or switch the flow currently used by the board card device, so that the artificial intelligence framework invokes the board card device;
The cache module is used for sending a target memory to the artificial intelligent framework when the artificial intelligent framework applies for the memory, wherein the target memory is the memory in the cache pool;
the cache module is further configured to mark the target memory as unused when the artificial intelligence framework releases the target memory, restore the target memory in a cache pool, and release the target memory after the target memory is used.
2. The runtime unified interface of claim 1, wherein the number of said board card devices connected to said runtime based interface is a plurality, each of said plurality of board card devices configured with a device number, said runtime based interface comprising a device management module;
The device management module is configured to determine a device number of a target board device, where the target board device is a board device that executes the call request in the plurality of board devices.
3. The runtime unified interface of claim 2, wherein the runtime based interface further comprises a memory management module;
the memory management module is configured to allocate a memory of the target board card device or allocate a memory of the artificial intelligent frame, so as to synchronously or asynchronously transfer data from the artificial intelligent frame to the target board card device or synchronously or asynchronously transfer data from the target board card device to the artificial intelligent frame, where the memory of the artificial intelligent frame is a memory of a server where the artificial intelligent frame is located;
The memory management module is further configured to release the memory of the target board card device after the data is synchronously or asynchronously transferred from the artificial intelligence frame to the target board card device, and to release the memory of the artificial intelligence frame after the data is synchronously or asynchronously transferred from the target board card device to the artificial intelligence frame.
4. A runtime unified interface as claimed in any one of claims 1 to 3, wherein the runtime base interface further comprises a first flow management module;
The first flow management module is used for creating a flow, wherein the created flow is used for storing a plurality of operation tasks from the artificial intelligence framework;
the first stream management module is further configured to send the created stream to the board card device, so that the board card device processes the operation tasks on the same stream in sequence, and processes the operation tasks on different streams in parallel.
5. The runtime unified interface of claim 4, wherein the runtime based interface further comprises an event management module;
the event management module is used for creating an event on the created stream, wherein the created event is used for determining whether a plurality of operation tasks on the stream are executed to be completed;
The event management module is further configured to trigger the first stream management module to synchronize a stream after the recorded event is executed, so as to indicate that all operation tasks on the stream where the event is located before the event have been executed;
the first flow management module is used for synchronizing the flows after the event is executed, and destroying the flows after all operation tasks on the created flows are executed, so as to release the resources of the server occupied by the created flows.
6. A runtime unified interface as claimed in any one of claims 1 to 3, wherein the runtime base interface further comprises a version management module;
the version management module is used for acquiring the runtime version number and the drive version number of the board card equipment.
7. The runtime unified interface of claim 1, wherein,
The cache module is further configured to record, through an event, the time point at which the target memory is used, so as to release the target memory after the target memory has been used.
8. A runtime unified interface as claimed in any one of claims 1 to 3, wherein the debug interface comprises a log module;
The log module is used for providing logs of different grades.
9. The runtime unified interface of claim 8, wherein the debug interface further comprises an error checking module;
The error checking module is used for converting the execution result received from the board card equipment into a state return value and sending the state return value to the artificial intelligent framework.
10. The runtime unified interface of claim 8, wherein the debug interface further comprises a trace debug module;
and the tracking and debugging module is used for outputting call stack information when the unified interface fails in the running process.
11. A server, characterized in that the server comprises a runtime unified interface as claimed in any of claims 1 to 10.
12. A method for calling a runtime interface of a board card device, wherein the method is applied to a docking framework interface of a runtime unified interface according to any one of claims 1 to 10, the runtime unified interface including the docking framework interface, a runtime base interface, an extension interface, and a debug interface, the runtime base interface connecting the runtime interface of the board card device, the method comprising:
Receiving a call request from the artificial intelligent framework, wherein the call request is request information for calling a target interface, and the target interface is one of the unified interfaces in the running process;
and calling the runtime interface of the board card equipment through the runtime base interface.
13. The method according to claim 12, wherein the method further comprises:
compiling the runtime unified interface into a first file, wherein the first file comprises a header file and a dynamic link library file, the header file being used to indicate, to the artificial intelligence framework, the interfaces included in the runtime unified interface, and the dynamic link library file being used to link against the interfaces included in the runtime unified interface.
CN202410116709.1A 2024-01-29 2024-01-29 Runtime unified interface, server and calling method of artificial intelligent framework Active CN117648211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410116709.1A CN117648211B (en) 2024-01-29 2024-01-29 Runtime unified interface, server and calling method of artificial intelligent framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410116709.1A CN117648211B (en) 2024-01-29 2024-01-29 Runtime unified interface, server and calling method of artificial intelligent framework

Publications (2)

Publication Number Publication Date
CN117648211A CN117648211A (en) 2024-03-05
CN117648211B true CN117648211B (en) 2024-05-24

Family

ID=90045388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410116709.1A Active CN117648211B (en) 2024-01-29 2024-01-29 Runtime unified interface, server and calling method of artificial intelligent framework

Country Status (1)

Country Link
CN (1) CN117648211B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111338631A (en) * 2018-12-18 2020-06-26 北京奇虎科技有限公司 Generation method and device of universal interface framework and computing equipment
WO2020199469A1 (en) * 2019-04-04 2020-10-08 平安科技(深圳)有限公司 Interface call recording method, apparatus, device, and storage medium based on django framework
CN112698939A (en) * 2020-12-01 2021-04-23 武汉虹信科技发展有限责任公司 Operation maintenance method and system for ATCA architecture core network
CN115827285A (en) * 2023-02-23 2023-03-21 苏州浪潮智能科技有限公司 Cross-platform communication method, system, device, equipment and medium
CN117291260A (en) * 2023-09-27 2023-12-26 中科曙光国际信息产业有限公司 Deep learning framework adaptation method, deep learning framework adaptation device, deep learning framework adaptation equipment, deep learning framework adaptation storage medium and deep learning framework adaptation product
CN117407195A (en) * 2023-10-31 2024-01-16 浙江讯盟科技有限公司 Interface system, integration method and device of application system and third party system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PYNQ Environment on Versal Devices; Nemanja Filipović et al.; 2023 31st Telecommunications Forum (TELFOR); 2024-01-01; pp. 1-4 *
Research on domestically produced high-performance intelligent computing servers; Jin Wenbing et al.; Fire Control & Command Control; 2022-11-30; Vol. 47, No. 11; p. 140 *

Also Published As

Publication number Publication date
CN117648211A (en) 2024-03-05

Similar Documents

Publication Publication Date Title
WO2019095936A1 (en) Method and system for building container mirror image, and server, apparatus and storage medium
US6074427A (en) Apparatus and method for simulating multiple nodes on a single machine
EP0622714A1 (en) Integrated automation development system and method
US20030233634A1 (en) Open debugging environment
CN108628626B (en) Development environment building method, code updating method and device
US11010144B2 (en) System and method for runtime adaptable applications
CN111651169B (en) Block chain intelligent contract operation method and system based on web container
CN109542464B (en) Development and deployment system, method and storage medium of IoT (Internet of things) equipment script program
CN110083366B (en) Application running environment generation method and device, computing equipment and storage medium
CN117648211B (en) Runtime unified interface, server and calling method of artificial intelligent framework
CN115633073B (en) Micro-service calling method, electronic device, system and readable storage medium
CN1988479A Method for recording system information and object heap
CN109739666A (en) Striding course call method, device, equipment and the storage medium of singleton method
US7552440B1 (en) Process communication multiplexer
TW200834419A (en) Method and apparatus for administering a process filesystem with respect to program code conversion
CN114428702A (en) Information physical test system containing general interface module
CN106922189B (en) Equipment agent device and control method thereof
US7702764B1 (en) System and method for testing network protocols
US20140298303A1 (en) Method of processing program and program
Dantam et al. Unix philosophy and the real world: Control software for humanoid robots
EP4394601A1 (en) Systems, methods, and apparatus for intermediary representations of workflows for computational devices
CN117376229B (en) FTP file system software cross debugging method and system based on embedded equipment
Boulifa et al. Model generation for distributed Java programs
CN117251118B (en) Virtual NVMe simulation and integration supporting method and system
US20240220266A1 (en) Systems, methods, and apparatus for intermediary representations of workflows for computational devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant