CN116136737A - IO processing method and system

IO processing method and system

Info

Publication number
CN116136737A
Authority
CN
China
Prior art keywords
hardware
queue
request
driving module
state
Prior art date
Legal status
Pending
Application number
CN202111357287.XA
Other languages
Chinese (zh)
Inventor
缪勰
汤晨
曹树烽
温从洋
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111357287.XA
Publication of CN116136737A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4411 Configuring for operating with peripheral devices; Loading of device drivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiment of the application provides an IO processing method and system, relating to the technical field of terminal operating systems. In the method, a first driving module is arranged in the business service and processes the IO requests of applications, and a second driving module is arranged in the driving framework and performs initialization, unloading, IO error processing and the like on the hardware device. The hardware device is driven through this separated driving architecture, so that IPC is reduced and IO performance is improved.

Description

IO processing method and system
Technical Field
The embodiment of the application relates to the technical field of terminal operating systems, in particular to an IO processing method and system.
Background
Under the microkernel architecture, the driver of a hardware device and the service that processes IO run in different user-mode processes. When an application's IO access request to the hardware device is processed, IPC (inter-process communication) occurs between the IO-processing service and the driver of the hardware device, which reduces IO performance.
For example, in the CDC (Cockpit Domain Controller) intelligent cockpit scenario of an autopilot system, a large number of ko (kernel module) and so (shared object) files need to be loaded during system start-up to support the running of service Apps (Applications) such as the instrument cluster, the reversing image and the 360-degree surround view. IO (input/output) performance can seriously affect the initialization of service components, and thereby the starting speed of the system, the performance of virtualized storage, and the like. In addition, the Android virtual machine of the vehicle central control domain has high requirements on IO performance, so IO performance also influences the starting time of the system, the response speed of Apps, automatic driving and the like, and further influences the user experience.
Disclosure of Invention
In order to solve the above technical problems, the application provides an IO processing method and system. In the method, the part of the hardware device driver that processes IO requests can be placed in the business service, so that the redundant code of the driving framework is bypassed and IPC calls between the business service and the hardware device driver are reduced, thereby improving IO performance.
In a first aspect, an embodiment of the present application provides an IO processing system.
The IO processing system is connected with a hardware device and comprises a preset business service and a driving framework, wherein a first driving module is arranged in the preset business service, and a second driving module is arranged in the driving framework; the first driving module comprises a hardware protocol;
the first driving module is used for:
responding to a received first IO request, and encapsulating the first IO request according to a hardware protocol of the hardware equipment to generate a second IO request;
sending the second IO request to the hardware device by operating a hardware queue of the hardware device;
the second driving module is used for:
and carrying out connection management and IO error processing on the hardware equipment.
In the embodiment of the application, the driving module of the hardware device is constructed in a split driving architecture and includes a first driving module deployed in a preset business service and a second driving module deployed in the driving framework. The first driving module processes the IO request according to the hardware protocol, and can bypass the lengthy code of the driving framework and process the IO request within the process of the preset business service, thereby reducing IPC between the business service and the hardware device driver and providing an efficient IO processing path. Moreover, the first driving module can issue the IO request directly to the hardware device through the hardware queue of the hardware device, which further reduces IPC and improves IO performance. In addition, the second driving module is arranged in the driving framework to manage the connection between the hardware device and the IO processing system and to handle error-reporting IO requests, so that the open-source driving framework can be used to implement the logic that is strongly coupled with the driving framework; meanwhile, because the first driving module is deployed in the preset business service, the first driving module can be prevented from being polluted by the open source.
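For illustration only, the following is a minimal C sketch of how the responsibilities of the two driving modules could be split; the structure and function names (high_driver, low_driver, hw_queue and so on) are assumptions introduced for this sketch and are not defined by the embodiments of the application.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical handle for one hardware queue shared between the two modules. */
struct hw_queue;

/* First driving module (High Driver): lives in the business-service process
 * (for example an FS). It only builds protocol-compliant commands and
 * operates hardware queues; it never calls the driving framework per IO. */
struct high_driver {
    /* Encapsulate a first IO request into a protocol-specific second IO
     * request and write it into the hardware queue. */
    int (*submit_io)(struct hw_queue *q, const void *io_req, size_t len);
    /* Harvest IO results after an interrupt from the device. */
    int (*poll_completion)(struct hw_queue *q, void *result, size_t len);
};

/* Second driving module (Low Driver): lives in the driving-framework
 * (Devhost) process and owns device lifecycle and error handling. */
struct low_driver {
    int (*init_device)(void *dev);                        /* connection management: initialize */
    int (*remove_device)(void *dev);                      /* connection management: unload     */
    int (*handle_io_error)(void *dev, uint32_t error_no); /* IO error processing               */
};
```

In this split, only the low_driver callbacks execute inside the driving framework process, while submit_io and poll_completion run entirely within the process of the preset business service.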
In one possible implementation manner, the hardware device includes a hard disk, and the preset business service includes: a plurality of file system services corresponding to the hard disk, wherein the first driving modules are respectively arranged in the plurality of file system services;
and the first driving modules arranged in the plurality of file system services of the hard disk operate different hardware queues of the hard disk.
In one possible implementation, the number of hardware queues of the hard disk operated by the first driver module disposed in the high priority file system service is greater than the number of hardware queues of the hard disk operated by the first driver module disposed in the low priority file system service.
In one possible implementation manner, the second driving module is specifically configured to:
and distributing the hardware queues for operation to a plurality of first driving modules in different file system services of the hard disk based on the number of the hardware queues of the hard disk.
In one possible implementation, the hardware device includes a physical network card, and the preset business service includes a network service;
the first driving module is disposed in the network service, and is specifically configured to:
operate different hardware queues in response to IO requests from different applications.
In one possible implementation, the first driving module disposed in the network service is further configured to:
and determining the number of the hardware queues corresponding to the application according to the priority of the application accessing the network service based on the number of the hardware queues of the physical network card.
In one possible implementation manner, the driving framework is internally provided with a plurality of second driving modules, and the plurality of second driving modules are used for carrying out connection management and IO error processing on hardware devices with different protocol versions.
In one possible implementation, the first driver module includes hardware protocols of one or more protocol versions of a hardware device.
In one possible implementation manner, the second driving module is specifically configured to:
and responding to the initialization request of the hardware equipment, creating a hardware queue object for a hardware queue of the hardware equipment and writing the hardware queue object into a first memory.
The first driving module is specifically configured to:
and writing the second IO request into a hardware queue of the hardware device by calling the hardware queue object in the first memory.
In one possible implementation manner, the second driving module is specifically configured to:
setting a queue state of a hardware queue of the hardware device and a hardware state of the hardware device to the first memory;
the first driving module is specifically configured to:
responding to a received first IO request, and acquiring a hardware state of the hardware device and a queue state of a target hardware queue in the first memory, wherein the target hardware queue is a hardware queue corresponding to a physical address accessed by the first IO request;
and under the condition that the hardware state and the queue state of the target hardware queue are both preset states, the first IO request is packaged according to the hardware protocol of the hardware equipment, and a second IO request is generated.
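As an illustration of the state check described above, a minimal C sketch is given below; the shared-memory layout, the field names and the encoding of the preset state are assumptions made for this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define STATE_READY 1u   /* assumed encoding of the preset (usable) state */

/* Assumed per-device layout in the first memory (shared memory), written by
 * the second driving module and only read on the IO path. */
struct shared_dev_info {
    volatile uint32_t hw_state;        /* hardware state of the device       */
    volatile uint32_t queue_state[64]; /* queue state of each hardware queue */
};

/* True when both the device and the target hardware queue are in the preset
 * state, i.e. the first IO request may be encapsulated into a second one. */
static bool may_encapsulate(const struct shared_dev_info *info, unsigned target_queue)
{
    return info->hw_state == STATE_READY &&
           info->queue_state[target_queue] == STATE_READY;
}
```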
In one possible implementation, the first driving module is further configured to:
receiving an interrupt of the hardware device;
and according to the interrupt, acquiring an IO result of the hardware equipment responding to the second IO request from the hardware queue of the operation.
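A minimal C sketch of such interrupt-driven completion harvesting is given below; the completion-ring layout and all names are assumptions for this sketch, since the real format is defined by the hardware protocol.

```c
#include <stdint.h>

/* Assumed completion-ring layout; the real format is defined by the device. */
struct completion_ring {
    volatile uint32_t head;   /* advanced by the hardware              */
    uint32_t          tail;   /* advanced by the first driving module  */
    uint32_t          depth;  /* number of slots in use (<= 256)       */
    struct { uint32_t io_id; uint32_t error_no; } slot[256];
};

/* Called from the interrupt handler of the first driving module: drain every
 * IO result the hardware placed in the operated hardware queue and hand each
 * one back towards the target application. */
static int drain_completions(struct completion_ring *cq,
                             void (*deliver)(uint32_t io_id, uint32_t error_no))
{
    int n = 0;
    while (cq->tail != cq->head) {
        uint32_t idx = cq->tail % cq->depth;
        deliver(cq->slot[idx].io_id, cq->slot[idx].error_no);
        cq->tail++;
        n++;
    }
    return n;
}
```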
In one possible implementation, the first driving module is further configured to:
and sending the IO result to a target application, wherein the target application triggers the first IO request.
In one possible implementation, the first driving module is further configured to:
and under the condition that the error number in the IO result is matched with the number in the preset number list, repackaging the first IO request according to the hardware protocol of the hardware equipment, and retransmitting the packaged first IO request to the hardware equipment through the re-operation of the hardware queue of the hardware equipment.
In one possible implementation,
the first driving module is further configured to:
under the condition that the error number in the IO result is matched with the number in the preset number list, the error number is sent to the second driving module;
the second driving module is specifically configured to:
determining an error hardware object in the hardware device according to the error number, and updating the state of the hardware object of the hardware device in a first memory into an unavailable state;
resetting the hardware object of the hardware device according to the hardware protocol;
and acquiring the state of the hardware object after the reset operation, so as to update the state of the hardware object in the first memory under the condition that the state of the hardware object is changed.
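For illustration only, the following C sketch outlines the error-handling flow described above; the preset error-number list, the state encodings and the callback names are assumptions of this sketch rather than values defined by the embodiments.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define STATE_UNAVAILABLE 0u   /* assumed encoding of the unavailable state */

/* Assumed preset error-number list; the real list is device specific. */
static const uint32_t preset_error_list[] = { 0x0281u, 0x0282u };

/* First driving module side: check whether an error number should be reported. */
static bool error_in_preset_list(uint32_t error_no)
{
    for (size_t i = 0; i < sizeof(preset_error_list) / sizeof(preset_error_list[0]); i++)
        if (preset_error_list[i] == error_no)
            return true;
    return false;
}

/* Second driving module side: handle an error reported for one hardware object. */
struct hw_object {
    volatile uint32_t state_in_shm;   /* state of this hardware object kept in the first memory */
};

static void handle_io_error(struct hw_object *obj,
                            void (*reset_per_protocol)(struct hw_object *),
                            uint32_t (*read_hw_state)(struct hw_object *))
{
    obj->state_in_shm = STATE_UNAVAILABLE;   /* mark the erroneous hardware object unavailable */
    reset_per_protocol(obj);                 /* reset it according to the hardware protocol    */
    uint32_t new_state = read_hw_state(obj); /* read the state after the reset operation       */
    if (new_state != obj->state_in_shm)      /* update the first memory only on a change       */
        obj->state_in_shm = new_state;
}
```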
In a second aspect, an embodiment of the present application provides an IO processing method, which is applied to an IO processing system, where the IO processing system is connected to a hardware device, and the IO processing system includes: the system comprises a preset business service and a driving frame, wherein a first driving module is arranged in the preset business service, and a second driving module is arranged in the driving frame; the first driving module comprises a hardware protocol;
the first driving module responds to the received first IO request, encapsulates the first IO request according to a hardware protocol of the hardware device, and generates a second IO request;
the first driving module sends the second IO request to the hardware equipment through operating a hardware queue of the hardware equipment;
and the second driving module is used for carrying out connection management and IO error processing on the hardware equipment.
In one possible implementation manner, the hardware device includes a hard disk, and the preset business service includes: a plurality of file system services corresponding to the hard disk, wherein the first driving modules are respectively arranged in the plurality of file system services;
the first driving modules arranged in the plurality of file system services of the hard disk operate different hardware queues of the hard disk.
In one possible implementation, the number of hardware queues of the hard disk operated by the first driver module disposed in the high priority file system service is greater than the number of hardware queues of the hard disk operated by the first driver module disposed in the low priority file system service.
In one possible implementation manner, the second driving module performs connection management on the hardware device, including:
the second driving module allocates hardware queues for operation to the plurality of first driving modules placed in different file system services of the hard disk based on the number of the hardware queues of the hard disk.
In one possible implementation, the hardware device includes a physical network card, and the preset business service includes a network service;
the first driving module arranged in the network service operates different hardware queues in response to IO requests from different applications.
In one possible implementation, the method further includes:
the first driving module arranged in the network service determines the number of hardware queues corresponding to the application according to the priority of the application accessing the network service based on the number of the hardware queues of the physical network card.
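A minimal C sketch of such a priority-based queue assignment is shown below; the priority levels and the proportions are assumptions chosen only to illustrate the idea.

```c
#include <stdint.h>

/* Assumed priority levels of applications accessing the network service. */
enum app_priority { APP_PRIO_LOW = 0, APP_PRIO_MID = 1, APP_PRIO_HIGH = 2 };

/* Decide how many hardware queues of the physical network card an application
 * gets, based on its priority and on the total number of hardware queues the
 * card reports. The proportions below are assumptions for this sketch. */
static uint32_t queues_for_app(enum app_priority prio, uint32_t total_hw_queues)
{
    switch (prio) {
    case APP_PRIO_HIGH: return total_hw_queues / 2;                            /* half         */
    case APP_PRIO_MID:  return total_hw_queues / 4 ? total_hw_queues / 4 : 1;  /* a quarter    */
    default:            return 1;                                              /* at least one */
    }
}
```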
In one possible implementation manner, the driving frame has a plurality of the second driving modules built therein, and the method further includes:
and the plurality of second driving modules respectively carry out connection management and IO error processing on hardware devices with different protocol versions.
In one possible implementation, the first driver module includes hardware protocols of one or more protocol versions of a hardware device.
In one possible implementation manner, the second driving module performs connection management on the hardware device, including:
and the second driving module responds to the initialization request of the hardware equipment, creates a hardware queue object for a hardware queue of the hardware equipment and writes the hardware queue object into a first memory.
The first driving module sends the second IO request to the hardware device by operating a hardware queue of the hardware device, including:
and the first driving module writes the second IO request into a hardware queue of the hardware device by calling the hardware queue object in the first memory.
In one possible implementation manner, the second driving module performs connection management on the hardware device, including:
the second driving module sets the queue state of the hardware queue of the hardware device and the hardware state of the hardware device in the first memory;
the first driving module, in response to the received first IO request, encapsulates the first IO request according to a hardware protocol of the hardware device, and generates a second IO request, including:
the first driving module responds to a received first IO request, and obtains the hardware state of the hardware device and the queue state of a target hardware queue in the first memory, wherein the target hardware queue is a hardware queue corresponding to a physical address accessed by the first IO request;
and the first driving module encapsulates the first IO request according to the hardware protocol of the hardware equipment under the condition that the hardware state and the queue state of the target hardware queue are both preset states, and generates a second IO request.
In one possible implementation, the method further includes:
the first driving module receives the interrupt of the hardware equipment;
and the first driving module acquires an IO result of the hardware equipment responding to the second IO request from the operated hardware queue according to the interrupt.
In one possible implementation, the method further includes:
and the first driving module sends the IO result to a target application, wherein the target application triggers the first IO request.
In one possible implementation, the method further includes:
and under the condition that the error number in the IO result is matched with the number in the preset number list, the first driving module repackages the first IO request according to the hardware protocol of the hardware equipment, and retransmits the packaged first IO request to the hardware equipment through the hardware queue of the hardware equipment.
In one possible implementation, the method further includes:
under the condition that the error number in the IO result is matched with the number in the preset number list, the first driving module sends the error number to the second driving module;
the second driving module performs IO error processing on the hardware device, including:
the second driving module determines an error hardware object in the hardware device according to the error number, and updates the state of the hardware object of the hardware device in the first memory into an unavailable state; and carrying out reset operation on the hardware object of the hardware device according to the hardware protocol, and acquiring the state of the hardware object after the reset operation so as to update the state of the hardware object in the first memory under the condition that the state of the hardware object is changed.
Any implementation manner of the second aspect and the second aspect corresponds to any implementation manner of the first aspect and the first aspect, respectively. The technical effects corresponding to the second aspect and any implementation manner of the second aspect may be referred to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which are not described herein.
In a third aspect, embodiments of the present application provide a chip. The chip comprises at least one processor, a first driving module and a second driving module. The first driving module and the processor can implement the second aspect and the method implemented by the first driving module in any implementation manner of the second aspect; the second driving module and the processor may implement the second aspect and a method implemented by the second driving module in any implementation manner of the second aspect.
Any implementation manner of the third aspect and any implementation manner of the third aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. The technical effects corresponding to the third aspect and any implementation manner of the third aspect may be referred to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which are not described herein.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium. The computer readable storage medium stores a computer program which, when run on a computer or processor, causes the computer or processor to perform the method of the first aspect or any one of the possible implementations of the first aspect.
Any implementation manner of the fourth aspect and any implementation manner of the fourth aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. Technical effects corresponding to any implementation manner of the fourth aspect may be referred to the technical effects corresponding to any implementation manner of the first aspect, and are not described herein.
In a fifth aspect, embodiments of the present application provide a computer program product. The computer program product comprises a software program which, when executed by a computer or processor, causes the method of the first aspect or any one of the possible implementations of the first aspect to be performed.
Any implementation manner of the fifth aspect and any implementation manner of the fifth aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. Technical effects corresponding to any implementation manner of the fifth aspect may be referred to the technical effects corresponding to any implementation manner of the first aspect, and are not described herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of an exemplary system architecture;
FIG. 2 is a schematic diagram of an exemplary autopilot system;
FIG. 3 is a diagram of an exemplary system architecture;
FIG. 4 is a flow chart of an exemplary illustrated IO processing method;
FIG. 5 is a schematic diagram illustrating the process of initializing one type of hardware;
FIG. 6 is a schematic diagram illustrating one example of the processing of IO requests;
FIG. 7 is a schematic diagram illustrating one example of handling IO errors;
FIG. 8 is a schematic diagram of an IO processing system according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
In the CDC (Cockpit Domain Controller) intelligent cockpit scenario of an autopilot system, a large number of ko (kernel module) and so (shared object) files need to be loaded during system start-up to support the running of service Apps (Applications) such as the instrument cluster, the reversing image and the 360-degree surround view. IO (input/output) performance can seriously affect the initialization of business components, and thereby the starting speed of the system, the performance of virtualized storage, and the like. In addition, the Android virtual machine of the vehicle central control domain has high requirements on IO performance, so IO performance also influences the starting time of the system, the response speed of Apps, automatic driving and the like, and further influences the user experience.
As shown in fig. 1 (1), a communication architecture diagram of the seL4 operating system (a microkernel operating system) is exemplarily shown. In this architecture, DDEkit serves as the driver running environment and reuses the open-source code of Linux drivers to provide an environment compatible with Linux driver code. Among them, DDE (Device Driver Environment) is a dynamic data exchange mechanism. Furthermore, the seL4 operating system provides a separate user-state service to execute the code of DDEkit, which runs as a so in the service process space of the user state. Through the DDE/Linux module, DDEkit can load Linux drivers and convert the Linux interface into the interface required by the seL4 operating system.
Although the seL4 operating system provides a driving framework compatible with the Linux interface, the FS (File System) and DDEkit are in different process spaces. Therefore, after an App sends an IO request to the FS, the call path from the FS to the hardware driver (such as the Linux driver shown in fig. 1) is long when the IO is processed, and IPC is required, which may bring about a serious IO performance problem. Moreover, DDEkit reuses open-source code; if it were linked into the FS service process, the FS would be contaminated by the open source.
As shown in fig. 1 (2), a communication architecture diagram in a QNX operating system (embedded real-time operating system) is exemplarily shown. In this architecture, each drive is a user-state resource management process, here shown as a block device drive (e.g., disk drive). The driving framework module is a driving framework in the QNX operating system and can support driving of various hardware, the IO-blk.so comprises block device driving provided by a device manufacturer, the fs.so comprises a certain file system, and the IO-blk.so supports an interface of the driving framework module. Both the block device driver and the FS are loaded in the dynamic library form within the process space of the block device driver. In the QNX operating system, app may send an IO request to a process driven by the block device, where the fs.so and the IO-blk.so in the process driven by the block device process the IO request, and the IO-blk.so issues the IO request to hardware to perform data reading and writing.
In the architecture shown in fig. 1 (2), each block device starts a service process, and when there are multiple block devices (for example, two blocks of disks) in the system, two block device drivers as shown in fig. 1 (2) need to be configured, and two service processes are started, and file systems on different disks run in different process spaces to provide IO services to apps.
In addition, in the architecture of fig. 1 (2), whenever a manufacturer's hardware such as a disk is accessed, a block device driver needs to be configured for that manufacturer's disk, and the manufacturer's disk driver must be ported into a so that implements the specific interface before the hardware driver can run in this architecture. As a result, the development cost of hardware drivers in the system is high and the southbound compatibility is poor. In addition, in this system, when the file system or the device driver fails, none of the partitions on the whole disk can work, and the system reliability is poor.
As shown in fig. 1 (3), a communication architecture diagram of an operating system is exemplarily shown. In this architecture, all hardware drivers run in the address space of the driver framework. And the App forwards all IO requests of the hardware to the driving framework through the FS, and the driving framework calls the hardware driving operation hardware to perform IO operation.
However, in this architecture, the FS and the driving framework are in different process spaces, so IPC from the FS to the driving framework is required, which affects IO performance. In addition, in the process of issuing an IO, the IO path is long and must pass through the code of the whole driving framework, which causes a serious IO performance problem.
The embodiment of the application provides an IO processing method which can be applied to an IO subsystem under a microkernel system architecture. The IO processing method can be applied to CDC intelligent cabin scenes of an automatic driving system, and of course, the IO processing method in the embodiment of the application can also be applied to other application scenes with high IO performance requirements, and the application is not limited.
Illustratively, the microkernel represents an operating system kernel capable of providing necessary services, and services such as a file system, a device driver, and the like are all running in a user mode.
The IO processing method may be deployed in a server or a terminal device under a microkernel system architecture, where the terminal device may be a cellular phone (cellular phone), a tablet computer (pad), a wearable device, or an internet of things device.
The IO processing method is applicable to an automatic driving system under a microkernel system architecture, wherein the automatic driving system can comprise a host machine, hardware equipment connected with the host machine and the like. Fig. 2 is a schematic structural diagram of an autopilot system according to an embodiment of the present application. Referring to fig. 2, illustratively, the host may include, but is not limited to: the system comprises an application layer, a microkernel service layer, a physical interface and the like, wherein a host and hardware are in communication connection through the physical interface.
Illustratively, the application layer optionally includes one or more application programs. Such as App 0-Appn shown in fig. 2.
Illustratively, the microkernel services layer (also known as the microkernel's user-oriented basic services) includes, but is not limited to: file System (FS) services, network services (networks), drive frameworks (Devhost), process scheduling services, memory management services, power management services, security services, etc.
Under the microkernel architecture of the embodiment of the application, each function involved in the microkernel service layer can provide corresponding service in an App mode by transplanting the function from the kernel mode to the user mode under the microkernel architecture.
Illustratively, the application layer and microkernel service layer of embodiments of the present application may be implemented in a user state.
Illustratively, the FS service provides services such as opening/closing/reading/writing files to apps used by users. Illustratively, the hardware corresponding to the FS service is optionally a hard disk.
Illustratively, a network service is used for providing a service for reading and writing network data to an App used by a user. The hardware corresponding to the network service is optionally a physical network card, for example.
Illustratively, the driving framework (Devhost) is optionally used to run the drivers of the respective hardware devices.
Illustratively, a process scheduling service is used to schedule processes serviced in the microkernel service layer.
Illustratively, the memory management service is used for managing the memory used by the microkernel service layer.
Illustratively, a power management service is used for power management.
Illustratively, the security service is used for performing security management on the services in the microkernel service layer.
Under the microkernel architecture, the FS service and the network service process a large amount of data IO and carry heavy IO tasks. The hardware driver runs in the process space of the Devhost, and the Devhost, the FS service and the network service are in different user-mode processes. Therefore, when the system processes IO, IPC occurs between the FS service and the Devhost and between the network service and the Devhost; this IPC, together with scheduling delay, memory copies and the like, causes unnecessary overhead and reduces IO performance.
The application layer and the microkernel service layer shown in fig. 2 do not constitute a specific limitation on the device. In other embodiments of the present application, the apparatus may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components.
Fig. 3 (1) illustrates a system architecture diagram according to an embodiment of the present application, based on the architecture of fig. 2.
In fig. 3 (1), the present application employs a split driving architecture, in which the hardware driver is split into two parts that run in different process spaces. The hardware driver is illustratively implemented as a lower driver (also known as a Low Driver module) and an upper driver (also known as a High Driver module).
The hardware Driver of a hardware device may include a Low Driver module and at least one High Driver module.
The Low Driver module is illustratively configured in Devhost in the microkernel architecture.
The Low Driver module is optionally used for performing the logic operations that are strongly coupled with the driving framework, such as connection management and IO error processing for the hardware device, so that better universality is obtained and the hardware of different manufacturers can be supported. The connection management may include initializing the hardware device and unloading the hardware device.
The High Driver module is configured in a service with heavy IO tasks in the microkernel service layer (including but not limited to the FS service, the network service, etc.), so as to improve IO performance.
Illustratively, the High Driver module of the hard disk may be configured in the FS service; the High Driver module of the physical network card may be configured in the network service.
The High Driver module is optionally configured to encapsulate the IO request according to a hardware protocol, and interact with the hardware through a hardware queue in the hardware to issue the IO request to the hardware for performing an IO operation.
The High Driver module is optionally configured to receive an interrupt notification sent by the hardware, so as to receive an IO processing result returned by the hardware.
Optionally, in one possible implementation, the High Driver module may be configured with the hardware protocols of multiple pieces of hardware supported by the system, so that an IO request is encapsulated using the hardware protocol of the specific hardware accessed by the App's IO request. In this way, one High Driver module can be shared by multiple pieces of hardware. For example, the High Driver modules of hard disks with different protocol versions may be identical.
Alternatively, in another possible implementation, the High Driver module may be configured with the hardware protocol of a single piece of hardware to encapsulate the IO requests for the hardware accessed by an App, so as to implement a High Driver module that supports a certain piece of hardware. In this case, the High Driver modules of hard disks with different protocol versions may be different, and the hardware protocols in these High Driver modules may be different.
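For illustration only, a multi-protocol High Driver module could keep a small table of the hardware protocols it is configured with and select the entry matching the hardware actually accessed; the sketch below assumes such a table, and all names in it are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor of one hardware protocol version configured in a
 * High Driver module that supports several protocol versions at once. */
struct hw_protocol {
    uint32_t protocol_version;
    int (*encapsulate)(const void *first_io_req, void *second_io_req, size_t buf_len);
};

/* Pick the protocol matching the hardware actually targeted by the App's IO
 * request; returns NULL when the version is not configured in this module. */
static const struct hw_protocol *
select_protocol(const struct hw_protocol *table, size_t n, uint32_t version)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].protocol_version == version)
            return &table[i];
    return NULL;
}
```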
For example, the hardware may include a hardware queue, and the Low Driver module may be configured to abstract the hardware queue of the hardware and write the abstract hardware queue to the shared memory.
For example, the High Driver module may call an abstracted hardware queue in the shared memory to operate the hardware queue in the hardware, so that the App may perform IO operations on the hardware. The High Driver module may pass the hardware queue through the shared memory to the process of the business service (e.g., FS) to issue the IO to the hardware by directly operating the hardware queue.
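A minimal C sketch of how the High Driver module might submit an encapsulated IO request through a hardware queue object in the shared memory is shown below; the ring-plus-doorbell layout is an assumption (loosely modeled on common storage controllers) and is not prescribed by the embodiments.

```c
#include <stdint.h>
#include <string.h>

/* Assumed hardware-queue object that the Low Driver module creates in the
 * shared memory during initialization; the High Driver module only calls it. */
struct hw_queue_obj {
    volatile uint32_t *doorbell;   /* mapped device register for this queue */
    uint8_t           *ring;       /* submission ring, also mapped          */
    uint32_t           entry_size; /* size of one protocol command          */
    uint32_t           depth;      /* number of ring slots                  */
    uint32_t           tail;       /* next free slot                        */
};

/* Issue one encapsulated (second) IO request directly to the hardware from
 * the business-service process, without any IPC to the driving framework. */
static int hwq_submit(struct hw_queue_obj *q, const void *cmd, uint32_t cmd_len)
{
    if (cmd_len > q->entry_size)
        return -1;
    memcpy(q->ring + (size_t)q->tail * q->entry_size, cmd, cmd_len);
    q->tail = (q->tail + 1) % q->depth;
    *q->doorbell = q->tail;        /* notify the device that a new entry exists */
    return 0;
}
```

Because the queue object, the ring and the doorbell mapping are all prepared by the Low Driver module during initialization, hwq_submit involves no IPC on the fast path.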
In fig. 1 (3), all hardware drivers (e.g., block device drivers) run in the driving framework, and the hardware drivers and the FS run in different process spaces. Therefore, when an IO operation is performed, the lengthy code of the driving framework must be run, process interaction between the FS and the driving framework is unavoidable, and each time an IO request is issued the FS needs to call the driving framework through a remote method to deliver the IO to the hardware driver, so IO performance is reduced.
In this embodiment of the present application, in a microkernel scenario, a split driving architecture is adopted, and a High Driver module for processing an IO request is deployed in an FS or a network service, so that a Driver module (for example, a High Driver module) for processing an IO request is located in a process of the FS. When the High Driver module issues an IO request, the High Driver module bypasses the lengthy code of the driving framework (such as Devhost), and can provide an efficient IO processing mode for a service process (such as an FS process). In addition, the High Driver module processes IO in the process of the FS, and does not need to carry out inter-process communication with the FS, so that a large number of IPCs under microkernels can be reduced, and complicated processing flows in a driving frame are reduced, thereby improving IO performance; in addition, the High Driver module can directly issue IO requests to the hardware through a hardware queue of the hardware, so that IPC is further reduced, and IO performance is improved.
In addition, in the split driving architecture provided by the embodiment of the application, the Low Driver module can be deployed in the open-source Devhost, so that, as open-source driver code, it remains in the independent process of the driving framework; the high-performance High Driver module resides in microkernel services with heavier data IO tasks such as the FS and the network service, so that the FS and the self-developed High Driver module run in an independent process space, only the code of the driving framework process space needs to be open-sourced, and the High Driver module is prevented from being polluted by the open source.
In this example and the following examples, the High Driver module is mainly implemented in the FS service as an example, and when the High Driver module is implemented in the network service, the principle is similar, and the same parts are not described in detail.
Optionally, in order to improve the IO performance, a Low Driver module corresponding to each protocol version of hardware accessed in the system may be configured in the Devhost. The type of hardware may include, but is not limited to, a hard disk, a physical network card, etc.
For example, in the case that the system supports multiple hardware, multiple Low Driver modules corresponding to different hardware may be configured in the Devhost. For example, a Low Driver module 1 bound to the hard disk 1, a Low Driver module 2 bound to the hard disk 2, a Low Driver module 3 bound to the physical network card 1, a Low Driver module 4 bound to the physical network card 2, and the like. Because of the different modes of initializing, unloading, IO error processing and the like of different hardware, the Low Driver modules corresponding to different hardware in the Devhost are different, in other words, the Low Driver modules can be bound with the hardware, and one Low Driver module supports the hardware of one protocol version.
The protocol versions of the hard disk 1 and the hard disk 2 are different, and may be hard disk products of different vendors, for example.
The protocol versions of the physical network card 1 and the physical network card 2 are different, for example, the physical network card products of different manufacturers can be used.
For example, for hardware of different protocol versions, a Low Driver module matched with the protocol version may be configured separately.
For example, when the type of the accessed hardware is a hard disk, a High Driver module that can be used to process the IO request to the hard disk may be configured in the file system service (i.e., FS) in fig. 2; when the type of the accessed hardware is a physical network card, a High Driver module that can be used to process the IO request for the physical network card may be configured in the network service in fig. 2.
Optionally, a High Driver module supporting hard disks with multiple protocol versions can be configured, that is, a hardware protocol of the hard disk with multiple protocol versions is configured in one High Driver module; optionally, a High Driver module supporting multiple protocol versions of the physical network card may be configured, that is, a hardware protocol of the physical network card with multiple protocol versions is configured in one High Driver module. The system of the embodiment of the application can support the hard disk with multiple protocol versions and the physical network card with multiple protocol versions.
As shown in fig. 3 (2), taking a system accessing a hard disk as an example, under a microkernel architecture, different partitions of the same hard disk may correspond to different FS, and then the High Driver modules provided in the embodiments of the present application may be respectively configured in multiple FS corresponding to the hard disk in the system. In fig. 3 (2), illustratively, two partitions of the hard disk correspond to FS1 and FS2, respectively, and then the same High Driver module that can be used to process the IO request of the hard disk is pre-deployed in both FS1 and FS2. Similarly, a Low Driver module corresponding to the hard disk is pre-deployed in the Devhost. In the system of the embodiment of the present application, a hardware Driver of a protocol version of hardware may include a Low Driver module and one or more High Driver modules.
For example, app1 and App2 access partition 1 and partition 2 of the hard disk, respectively, partition 1 corresponding to FS1 and partition 2 corresponding to FS2. The IO requests of App1 and App2 are sent to FS1 and FS2, respectively. The High Driver module in the FS1 may encapsulate the IO request received by the FS1 for the partition 1 according to the protocol of the hard disk, and send the IO request to the hardware queue 1 to the hardware queue 4 of the hard disk by calling the hardware queue interface. And the High Driver module positioned in the FS2 also encapsulates the IO request of the partition 2 received by the FS2 according to the protocol of the hard disk, and sends the IO request to the hard disk hardware queues 5-6 by calling the hardware queue interface.
For example, the Low Driver module may allocate the hardware queues used by the High Driver module in FS1 and the High Driver module in FS2 to isolate the hardware queues of different partitions of the hard disk from each other based on the total number of hardware queues of the hard disk. For example, the IO request for partition 1 may be written into the hardware queue 1 to the hardware queue 4 corresponding to FS1 (or partition 1); the IO request for partition 2 may be written to the corresponding hardware queue 5-6 of FS2 (or partition 2).
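For illustration only, the queue assignment performed by the Low Driver module could look like the following C sketch; weighting the shares by file system priority is an assumption of this sketch.

```c
#include <stdint.h>

/* Assumed description of one file system service bound to a hard-disk partition. */
struct fs_client {
    uint32_t priority;     /* higher value = higher service priority         */
    uint32_t first_queue;  /* filled in: first hardware queue it may operate */
    uint32_t queue_count;  /* filled in: how many hardware queues it owns    */
};

/* Sketch of how the Low Driver module could split the disk's hardware queues
 * among the High Driver modules in FS1..FSn so that the queue ranges of
 * different partitions never overlap. Weighting by priority is an assumption. */
static void assign_hw_queues(struct fs_client *fs, uint32_t fs_count, uint32_t total_queues)
{
    uint32_t weight_sum = 0, next = 0;

    for (uint32_t i = 0; i < fs_count; i++)
        weight_sum += fs[i].priority;

    for (uint32_t i = 0; i < fs_count; i++) {
        uint32_t left  = total_queues - next;
        uint32_t share = weight_sum ? (total_queues * fs[i].priority) / weight_sum : 1;

        if (share == 0)
            share = 1;             /* every partition keeps at least one queue */
        if (share > left)
            share = left;          /* never hand out more queues than exist    */

        fs[i].first_queue = next;  /* e.g. FS1 gets queues 1..4, FS2 gets 5..6 */
        fs[i].queue_count = share;
        next += share;
    }
}
```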
After the hardware processes the IO request sent by the High Driver module, the processing result may be notified to the FS by means of an interrupt notification, so that the High Driver module reads the processing result from the hardware queue.
For the same parts as those of fig. 3 (1) in fig. 3 (2), reference may be made to the description of fig. 3 (1), and a detailed description thereof will be omitted.
In the system of fig. 1 (3), since the driving framework includes all the drivers of the hardware, the IO requests of each App of the application layer for any one hardware and each storage unit (e.g. partition) of the hardware are issued to the same driving framework through the FS for processing, which results in a larger IO pressure and a lower IO performance of the driving framework. In addition, for the IO requests of the same block device, the IO requests of different apps are distributed through a software queue, so that the IO requests of different applications are mutually influenced, and the IO isolation is poor. For another example, when a low priority application issues a large number of IO requests, the IO requests of a high priority application may also be affected, and deterministic latency may not be achieved.
For example, the C disk and the D disk in the computer are different partitions of the hard disk, the hard disk has 10 hardware queues, if the application layer is always performing a large number of IO reads and writes on the D disk, according to the system of fig. 1 (3), all 10 queues may be occupied by the IO request of the D disk, and the C disk needs to preempt the queue resources with the D disk to perform IO processing, so that the IO performance is reduced, the IO isolation is poor, and the system is blocked.
For example, IO isolation may be used to represent the degree to which the IO of different processes, and the IO of different devices, influence each other. The higher the degree of mutual influence between IOs, the worse the IO isolation.
In the system of fig. 3 (2), the hardware queues of different partitions of the hard disk are independent and isolated, and according to the above examples of the C disk and the D disk, the Low Driver module may allocate 8 hardware queues to the C disk and 2 hardware queues to the D disk, so even if the application layer performs a large number of IO reads and writes to the D disk, the IO request to the D disk may only be issued to the two hardware queues, but not written to the 8 hardware queues allocated to the C disk. The system of the embodiment of the application can ensure isolation among queues from a hardware level through isolating hardware queues of hardware, the IO in the hardware queues corresponding to different partitions cannot be mutually influenced, IO performance is improved, and isolation of IO accessing different partitions of the hardware is ensured.
In the system of fig. 3 (2), the High Driver modules are respectively deployed in the FS corresponding to the different partitions of the hard disk, so that the IO requests for the different partitions of the hard disk can be processed in parallel by using the High Driver modules deployed in the different FS, and thus, the IO requests are processed by scheduling the multiple partitions of the hard disk in parallel, and the IO processing speed is increased.
In the system of fig. 3 (2), for example, a greater number of hardware queues may be dynamically allocated to the High Driver module in the FS with a higher priority of service requirements to accelerate the IO processing speed of the High priority service, for example, different apps access data of different partitions, and then an App accessing a partition corresponding to the FS with a higher priority will not be interfered by an App accessing a partition corresponding to the FS with a lower priority, which is more beneficial to ensuring deterministic latency.
By way of example, in the system of fig. 3 (2), by respectively configuring High Driver modules for FS corresponding to different partitions of the hardware, not only can parallel access be performed on different partitions of the hardware, but also data access of other non-failed partitions by an application is not affected in the case that a part of the partitions in the hardware are damaged.
Referring to fig. 4 in conjunction with fig. 3, a flowchart of an IO processing method of one embodiment of the present application is exemplarily shown.
The process may include the steps of:
s101, initializing hardware by a Low Driver module.
Illustratively, after hardware (e.g., a disk) is connected to the host system, the Devhost may read the hardware kind from the disk via a physical interface. The hardware kind here is distinguished from the hardware type described above: the kind of hardware refers to the manufacturer of the hardware, or to the protocol version of the hardware, etc.
It should be noted that the hardware connected to the host is not limited to hardware devices inserted from the outside through a physical interface, but may also include a solid-state disk integrated in the host, a storage device based on a SAN (Storage Area Network), and the like, which is not limited in this application.
In this example, after the Devhost reads the hardware kind, it may call the Low Driver module in its process that matches the hardware kind, for example the Low Driver module of vendor A, to initialize the disk.
For example, a plurality of Low Driver modules may be pre-deployed in the Devhost, where each Low Driver module corresponds to a protocol version of hardware supported by the system, and operations such as initializing, unloading, and IO error processing may be performed on the protocol version of hardware.
For example, the Low Driver module may be bound to the vendor's hardware, and the Low Driver module configures the initialization operation logic of the vendor's hardware. Different vendors 'hardware may differ in the initialization process, and then the hardware initialization logic configured by the Low Driver module bound to different vendors' hardware may differ. In the Low Driver modules corresponding to the hardware of different manufacturers, the different settings can be made according to the initialization process of the hardware provided by the manufacturer, which is not limited in the application.
For example, as can be seen from the description of fig. 3 (2), one hard disk may correspond to a plurality of FS, and then FS are independent from each other and not multiplexed with each other in different protocol versions or between hard disks of different vendors.
For example, two partitions of hard disk 1 correspond to FS1 and FS2, respectively; the two partitions of the hard disk 2 correspond to FS3 and FS4, respectively. Not only are the FS corresponding to different partitions of the hard disk of the same protocol version different, but also the FS are mutually independent between the hard disks of different protocol versions. Thus, IO isolation between different hard disks can be ensured.
Illustratively, FIG. 5 shows a schematic diagram of the Low Driver module initializing hardware.
As shown in fig. 5, the Low Driver module may read information such as hardware information, hardware state, queue information, etc. from the accessed hardware through a physical interface, and the hardware may be illustrated as a disk.
Illustratively, the hardware information may include, but is not limited to: register information, interface type, interface rate, etc.
The hardware state may include, for example, the state of each piece of hardware to which the above-described hardware information relates.
For example, hardware states may include, but are not limited to: the state of the device and the states of all components in the device.
For example, device states may include, but are not limited to: a normal state, a powered-down state, a starting-up state, a started-and-operable state, a state in which an IO is being processed, an error state indicating an IO processing error, an error state indicating a device fault, and the like.
For example, the status of components within a disk may include, but is not limited to: normal state, fault state, etc.
For example, components within a disk may include, but are not limited to: a LUN (Logical Unit Number) device, a target, a bus, and a scsi host.
Illustratively, the queue information may include, but is not limited to: information on hardware queues, etc., such as the number of hardware queues that a disk includes, the maximum number of IO requests that each hardware queue can store, etc.
Illustratively, the process of initializing the hardware by the Low Driver module may further include, but is not limited to: allocating a fixed-size section of shared memory to the hardware, and writing preset information of the hardware into the shared memory.
Illustratively, the preset information optionally includes, but is not limited to: the hardware information, the hardware state, and the queue information that the Low Driver module reads from the hardware.
Optionally, considering that the device state in the hardware state may change as the hardware initialization process proceeds, the Low Driver module may update the device state in the shared memory in real time both during the initialization process and after the initialization is completed.
For example, during the initialization process, the Low Driver module may set the device state in the shared memory to the being-started state; after the initialization is completed, the Low Driver module may update the device state in the shared memory to the state of being able to work after startup.
Illustratively, the initialization process may further include, but is not limited to: the Low Driver module creating, in the shared memory, an abstracted hardware queue (e.g., a hardware queue object) for each hardware queue in the hardware according to the information of the hardware queues read from the hardware; the abstracted hardware queue serves as a hardware queue interface through which the hardware queue in the hardware can be accessed. The preset information may also include the abstracted hardware queue.
Illustratively, the initialization process may further include, but is not limited to: the Low Driver module sets the queue state of each hardware queue in the shared memory.
The queue information in the shared memory may also include, for example, information of an abstracted hardware queue, such as information of a hardware queue object.
For example, the initial state of a hardware queue in the hardware is an empty queue; after the hardware initialization is completed, the High Driver module may call the abstracted hardware queue to write IO requests into the hardware queue in the hardware.
Illustratively, the queue status may include, but is not limited to: an available state, or an unavailable state, etc.
For example, in fig. 3 (2), FS1 corresponds to partition 1 of the disk, and partition 1 corresponds to hardware queues 1-4; if a granule in partition 1 is damaged, the Low Driver module may set the queue state of the hardware queue corresponding to that granule to the unavailable state.
For example, if a hardware queue in the hardware has not failed, the queue state of that hardware queue is the available state.
Illustratively, although the example of fig. 5 shows the hardware information and the queue information, as well as the hardware state and the queue state, separately in the shared memory, a hardware queue is itself a component of the hardware; the hardware information may therefore also include the queue information, and the hardware state may also include the queue state.
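Illustratively, the layout of the shared memory described above can be sketched in the C language as follows. The structure and field names (for example, hw_shared_mem and hw_queue_obj) and the fixed queue bound are assumptions made only for this illustration and are not limited in this application.

#include <stdint.h>

#define MAX_HW_QUEUES 16            /* assumed upper bound, for illustration only */

/* Possible device states (see the state descriptions above). */
enum hw_dev_state {
    HW_STATE_NORMAL,
    HW_STATE_POWER_DOWN,
    HW_STATE_STARTING,              /* being started during initialization  */
    HW_STATE_READY,                 /* can work after startup               */
    HW_STATE_IO_BUSY,               /* an IO is being processed             */
    HW_STATE_IO_ERROR,              /* error while processing an IO         */
    HW_STATE_DEV_FAILED,            /* the device has failed                */
};

enum hw_queue_state { QUEUE_AVAILABLE, QUEUE_UNAVAILABLE };

/* Abstracted hardware queue (hardware queue object) created by the Low Driver
 * module; it acts as the call interface for one hardware queue in the device. */
struct hw_queue_obj {
    uint32_t queue_id;              /* index of the queue in the hardware    */
    uint32_t depth;                 /* max number of IO requests it can hold */
    enum hw_queue_state state;      /* queue state set by the Low Driver     */
    uint64_t doorbell_addr;         /* register used to notify the hardware  */
};

/* Preset information written into the shared memory during initialization. */
struct hw_shared_mem {
    char              model[32];            /* hardware information          */
    uint32_t          interface_type;       /* interface type                */
    uint32_t          interface_rate_mbps;  /* interface rate                */
    enum hw_dev_state dev_state;            /* updated in real time          */
    uint32_t          nr_queues;            /* number of hardware queues     */
    struct hw_queue_obj queues[MAX_HW_QUEUES];
};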
S103, the Low Driver module informs the High Driver module that hardware initialization is completed.
After the Low Driver module completes the initialization of the hardware, the Low Driver module may notify the High Driver module corresponding to the hardware.
Optionally, if each of the plurality of partitions of the hard disk corresponds to one FS, the Low Driver module may notify the High Driver module in the FS corresponding to each partition of the hard disk; that is, the Low Driver module may notify the plurality of High Driver modules corresponding to the hardware.
Because the Low Driver module is deployed in Devhost, belongs to Devhost, and runs in the Devhost process, while the High Driver module is deployed in the FS, belongs to the FS, and runs in the FS process, communication between the Low Driver module and the High Driver module may be implemented directly through communication between Devhost and the FS. In the following embodiments, for convenience of explanation, this communication is simply described as communication between the Low Driver module and the High Driver module, and the Devhost to which the Low Driver module belongs and the FS or Network to which the High Driver module belongs are not repeatedly mentioned.
For example, as shown in fig. 3 (2), the Low Driver module may send a notification message indicating that the initialization of the hardware is completed to the High Driver module in FS1 and the High Driver module in FS2, so as to share the information of the hardware in the shared memory with the High Driver modules. The High Driver module in FS2 is handled in the same way as the High Driver module in FS1.
The Low Driver module writes preset information such as the hardware state and the queue state into the shared memory allocated to the hardware. The queue state and the hardware state are controlled by the Low Driver module, and the High Driver module only reads them; in a specific implementation, the hardware state and the queue state in the shared memory may be protected by a read-write lock.
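Illustratively, the read-write protection mentioned above can be sketched as follows, reusing the hw_shared_mem sketch given earlier; the use of pthread read-write locks is only one possible choice and is not limited in this application.

#include <pthread.h>

/* One possible scheme: the Low Driver module takes the write lock when it
 * updates the hardware or queue state, and the High Driver module takes the
 * read lock when it reads the state.  Because the two modules run in
 * different processes, a real implementation would place the lock inside the
 * shared memory and initialize it with the PTHREAD_PROCESS_SHARED attribute;
 * a plain static lock is shown here only to keep the sketch short. */
static pthread_rwlock_t state_lock = PTHREAD_RWLOCK_INITIALIZER;

void low_driver_update_dev_state(struct hw_shared_mem *shm, enum hw_dev_state s)
{
    pthread_rwlock_wrlock(&state_lock);
    shm->dev_state = s;                 /* only the Low Driver module writes */
    pthread_rwlock_unlock(&state_lock);
}

enum hw_dev_state high_driver_read_dev_state(const struct hw_shared_mem *shm)
{
    enum hw_dev_state s;
    pthread_rwlock_rdlock(&state_lock);
    s = shm->dev_state;                 /* the High Driver module only reads */
    pthread_rwlock_unlock(&state_lock);
    return s;
}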
S105, the Low Driver module allocates a hardware queue to the High Driver module.
For example, after receiving a notification message indicating that hardware initialization is completed, the High Driver module may communicate with the Low Driver module, and the Low Driver module may allocate a hardware queue to the High Driver module.
For example, with continued reference to fig. 3 (2), the Low Driver module may allocate hardware queues to the High Driver module in FS1 and the High Driver module in FS2 according to the priority of the partition 1 corresponding to FS1 and the priority of the partition 2 corresponding to FS 2.
For example, the total number of hardware queues of the disk is 6, the priority of partition 1 is higher than that of partition 2, and the Low Driver module may allocate a larger number of hardware queues, here 4 hardware queues, to the High Driver module in FS1, and allocate a smaller number of hardware queues, here 2 hardware queues, to the High Driver module in FS 2.
For example, the High Driver module in the FS may issue an IO request to a hardware queue corresponding to the High Driver module through an abstracted hardware queue, where the hardware queue is a hardware queue allocated to the High Driver module by the Low Driver module of the disk.
It should be noted that the correspondence between different partitions of the disk and the hardware queues may be flexibly configured according to service requirements; in other words, the Low Driver module may dynamically allocate a greater number of hardware queues to the High Driver module in the FS corresponding to the partition with the higher priority, so as to accelerate the processing of high-priority services.
Illustratively, after disk initialization is completed, the Low Driver module allocates 4 hardware queues to the High Driver module in FS1 in fig. 3 (2), and allocates 2 hardware queues to the High Driver module in FS 2. However, with the change of the service, for example, when the IO priority of the partition 2 is higher, the Low Driver module may flexibly adjust the number of hardware queues allocated to the High Driver module in the FS2, for example, to allocate 2 hardware queues to the High Driver module in the FS1 and allocate 4 hardware queues to the High Driver module in the FS 2.
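Illustratively, the priority-based allocation described above can be sketched as follows; the function name and the two-thirds split are assumptions made only for this illustration, and the actual allocation policy is not limited in this application.

/* Split nr_total hardware queues between the High Driver modules of two
 * partitions according to partition priority; the higher-priority partition
 * receives the larger share.  With nr_total = 6 this yields 4 and 2 queues,
 * matching the example above.  Calling the routine again with the new
 * priorities implements the dynamic re-allocation described above. */
void alloc_queues_by_priority(unsigned nr_total,
                              unsigned prio_fs1, unsigned prio_fs2,
                              unsigned *nr_fs1, unsigned *nr_fs2)
{
    if (prio_fs1 == prio_fs2)
        *nr_fs1 = nr_total / 2;                       /* equal shares        */
    else if (prio_fs1 > prio_fs2)
        *nr_fs1 = (nr_total * 2 + 2) / 3;             /* roughly two thirds  */
    else
        *nr_fs1 = nr_total - (nr_total * 2 + 2) / 3;  /* the smaller share   */
    *nr_fs2 = nr_total - *nr_fs1;
}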
For example, when the accessed hardware is a physical network card, a High Driver module may be configured in the network service (also referred to as Network) in fig. 2, and the physical network card may also include hardware queues. In S105, the Low Driver module bound to the physical network card may allocate all the hardware queues in the physical network card to the High Driver module in the network service.
Considering IO isolation of different applications, for example, when App1 and App2 both need network access, the High Driver module in the network service can allocate hardware queues for responding to the IO requests of App1 and App2 according to their different priorities. Specifically, the High Driver module in the network service may determine to allocate different numbers of hardware queues in the physical network card to App1 and App2, so that when different applications access the network, the High Driver module operates different hardware queues of the physical network card to respond to the IO requests of the different applications, thereby ensuring IO isolation of application network access. Further, the High Driver module may allocate a greater number of hardware queues in the physical network card to the App with the higher priority, so that the network access of the high-priority App is not interfered with by the low-priority App.
Illustratively, after the network service receives a registration request initiated by an App for using the network, the High Driver module in the network service, in response to the registration request, allocates a certain number of hardware queues to the App according to the network usage priority of the App and based on the number of hardware queues of the physical network card.
For example, a network game App and a video App are both accessing the network through the network service: the user plays an online game with the network game App while downloading a video with the video App. The High Driver module in the network service may operate a larger number of hardware queues of the physical network card to issue the IO requests of the higher-priority network game App, and operate a smaller number of hardware queues of the physical network card to issue the IO requests of the lower-priority video App, so that the online game of the network game App does not stall while the video download of the video App is not affected.
The process in which the High Driver module in the network service issues IO to the physical network card through the hardware queues is the same as the process in which the High Driver module in the FS issues IO to the disk through the hardware queues: the hardware queues in the hardware are operated by means of the abstracted hardware queues to perform operations such as data writing and reading.
S201, the App sends an IO request to the hardware to the High Driver module.
For example, in connection with fig. 3, a user may trigger an IO operation such as reading or writing a file in an App. In response to the IO operation, the App may call a function (including but not limited to open, read, and write) conforming to an interface of the operating system (including but not limited to a posix interface) to issue an IO request to the FS for performing the read/write operation, for example: reading or writing a disk file, or network communication (in which case the App issues the IO request to the network service), and the like.
Illustratively, the App accesses an address in the disk, and the IO request may be sent to the FS corresponding to the partition to which the address belongs. The FS performs address conversion on the received IO request, converting the address from the data structure of the file system to the data structure of the disk, for example, from a user-mode virtual address to a physical address of the disk; then, the FS calls a function within the FS process to send the address-converted IO request to the High Driver module in the FS process.
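Illustratively, from the App side the flow of S201 may look like the following C fragment; the file path and buffer size are hypothetical and used only for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* The App only uses the posix-style interface; the FS converts the file
     * offset into a disk physical address and forwards the request to the
     * High Driver module running in the same FS process. */
    int fd = open("/data/example.txt", O_RDONLY);   /* hypothetical path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));         /* triggers an IO request */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}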
S203, the High Driver module acquires a hardware state and a queue state.
For example, after receiving an IO request subjected to address conversion by an FS on the App side, the High Driver module may acquire a hardware state of hardware accessed by the IO request, and in a case where the hardware state is a normal state, the High Driver module may acquire a queue state.
Illustratively, FIG. 6 shows a schematic diagram of a High Driver module handling IO requests.
Referring to fig. 6, the High Driver module may obtain the status information of the disk in the shared memory, including but not limited to: the LUN state, the target state, the bus state, the scsi host state, and the device state (here, the disk state).
In the case where the states of the above components in the hardware are all normal, with continued reference to fig. 6, the High Driver module may also obtain the queue state of the hardware from the shared memory.
Of course, the hardware queue is also a component of hardware, and in other embodiments, the queue status may be considered as a component status.
The normal state is, for example, one of a plurality of states of hardware.
For example, referring to the above, the hardware state may include, but is not limited to, a device state, a state of each component within the device, and then in the case where the device state and the state of each component within the device are both normal states, the state of the hardware may be determined to be the normal state.
Illustratively, the hardware (e.g., disk) in FIG. 6 includes n hardware queues, such as queues 1-n shown in the structure of the hardware of FIG. 6; in the process of initializing hardware, the Low Driver module creates n abstracted hardware queues (queues 1 to n shown in the shared memory in fig. 6) for n hardware queues, wherein the n hardware queues in the hardware are in one-to-one correspondence with the n abstracted hardware queues in the shared memory; in the process of initializing hardware, the Low Driver module allocates four hardware queues, such as queues 1-4, in the hardware to the partition corresponding to the High Driver module; in addition, the shared memory stores state information of n hardware queues of the hardware.
Alternatively, the Low Driver module may create a hardware queue object for a plurality of hardware queues in the hardware, and use the hardware queue object to operate the plurality of hardware queues in the hardware.
Illustratively, although the shared memory of fig. 6 includes all hardware queues that are abstracted by hardware, the High Driver module (for example, the High Driver module in FS1 in fig. 3 (2)) may only operate on queues 1 to 4 in the dashed line box in the shared memory to issue IO requests to queues 1 to 4 in the hardware structure.
Illustratively, the High Driver module in FIG. 6 may obtain the status information of the hardware queues (here including queues 1-4) in the shared memory.
The target hardware queue is a hardware queue corresponding to a physical address accessed by the IO request.
Illustratively, in FIG. 6, the High Driver module may, in response to the IO request, obtain in shared memory a queue state of a target hardware queue (e.g., queue 1) corresponding to a physical address accessed by the IO request.
If the queue state is the available state, the queue state is normal; if the queue state is the unavailable state, the queue state is abnormal.
For example, if one hardware queue can store 32 requests and the hardware queue is already full (indicating that earlier IO requests have not yet been processed), the queue state of the queue is still the available state, and the High Driver module may wait for the hardware queue to free space before writing the IO request into it.
Optionally, in the case that the hardware state or the queue state is abnormal, in other words, in the case that the state of any component or queue in the hardware is not normal, the High Driver module may notify the Low Driver module of the abnormal object (component or queue) in the device and wait for the processing result returned by the Low Driver module for the abnormal situation; before the Low Driver module returns the processing result, the High Driver module temporarily does not issue the currently received IO request. Illustratively, the processing result may include, but is not limited to: continuing to process the IO request, or refusing to process the IO request. For example, the processing result may be sent by the Low Driver module to the High Driver module in the form of a notification.
Optionally, if the High Driver module receives a notification from the Low Driver module refusing to process the IO request, the High Driver module may return an IO error (for example, data cannot be accessed, or the data is erroneous) to the App through the FS.
Optionally, after receiving the notification indicating that an abnormal object exists in the device, the Low Driver module may reset the abnormal object (a component or a hardware queue in the device). If the state of the abnormal object returns to the normal state after the reset, the Low Driver module notifies the High Driver module to continue processing the IO request; if the abnormal object is still faulty after the reset, the Low Driver module marks the state of the abnormal object in the shared memory as the unavailable state and notifies the High Driver module not to issue the IO request.
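Illustratively, the checks of S203 can be sketched as follows, reusing the hw_shared_mem sketch given earlier; the helper name is an assumption made only for this illustration.

#include <stdbool.h>

/* Return true if the IO request may be encapsulated and issued (S205);
 * return false if the High Driver module must first report the abnormal
 * object to the Low Driver module and wait for its processing result. */
bool high_driver_check_states(const struct hw_shared_mem *shm,
                              unsigned target_queue)
{
    /* 1. The hardware (device) state must be a normal/working state. */
    if (shm->dev_state != HW_STATE_NORMAL && shm->dev_state != HW_STATE_READY)
        return false;

    /* 2. The queue state of the target hardware queue, i.e. the queue that
     *    corresponds to the physical address accessed by the IO request,
     *    must be the available state. */
    if (target_queue >= shm->nr_queues)
        return false;
    return shm->queues[target_queue].state == QUEUE_AVAILABLE;
}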
S205, under the condition that the hardware state and the queue state are normal, the High Driver module encapsulates the IO request according to the hardware protocol.
For example, a set of hardware protocols of the hardware may include two parts, one of which is a protocol for issuing an IO, i.e., a hardware protocol in S205, and the other of which is a protocol for managing a hardware device, i.e., a hardware protocol in S307 described below.
Illustratively, in S205, the High Driver module may encapsulate the received IO request into a hardware command (including, but not limited to, a read command, a write command, an IO location (e.g., a physical address), an identification bit, etc.) according to the hardware's protocol for issuing IO (including, but not limited to, an IO protocol, a USB interface protocol, a peripheral bus protocol, etc.); the High Driver module then encapsulates the hardware command into a hardware-readable IO request according to that protocol for issuing IO.
Optionally, in the case that the High Driver module is configured with a hardware protocol of a plurality of hardware supported by the system, the High Driver module may determine, according to a protocol version and a hardware type of the hardware accessed by the received IO request, a hardware protocol matching the protocol version and the hardware type, so as to encapsulate the IO request according to the hardware protocol.
Alternatively, in the case of a hardware protocol configured with one hardware, the High Driver module may encapsulate the received IO request according to the configured hardware protocol.
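Illustratively, the encapsulation of S205 can be sketched as follows; the command layout is a simplified assumption and does not correspond to any particular disk protocol.

#include <stdint.h>
#include <string.h>

enum io_op { IO_READ = 0, IO_WRITE = 1 };

/* IO request after address conversion by the FS. */
struct io_request {
    enum io_op op;
    uint64_t   phys_addr;    /* physical address on the disk */
    uint32_t   length;
    void      *data;
};

/* Hardware-readable command built according to the "issue IO" part of the
 * hardware protocol; the field layout here is purely illustrative. */
struct hw_command {
    uint8_t  opcode;         /* read command or write command  */
    uint8_t  flags;          /* identification bit(s)          */
    uint16_t reserved;
    uint32_t length;
    uint64_t phys_addr;      /* IO location                    */
    uint64_t data_ptr;       /* host buffer for the transfer   */
};

void high_driver_encapsulate(const struct io_request *req, struct hw_command *cmd)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->opcode    = (req->op == IO_READ) ? 0x02 : 0x01;   /* protocol-specific */
    cmd->flags     = 0x1;                                   /* e.g. a valid bit  */
    cmd->length    = req->length;
    cmd->phys_addr = req->phys_addr;
    cmd->data_ptr  = (uint64_t)(uintptr_t)req->data;
}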
S207, the High Driver module sends the encapsulated IO request to the hardware.
The High Driver module may call a hardware queue object in the shared memory to write the encapsulated IO request into a hardware queue in the hardware to send the IO request to the hardware.
For example, referring to fig. 6, if the target hardware queue corresponding to the address accessed by the IO request is queue 1 in the hardware, the High Driver module may call queue 1 in the shared memory (an abstracted hardware queue, for example, a hardware queue object) to control queue 1 in the hardware, so as to write the encapsulated IO request into queue 1 in the hardware.
Illustratively, queue 1 in shared memory is the call interface for queue 1 in the hardware.
For example, referring to fig. 2, the host is connected to the hardware through a physical interface, and then the High Driver module may send the encapsulated IO request to the hardware through the physical interface.
For example, after writing the encapsulated IO request into the queue 1 in the hardware, the High Driver module may send a notification message to the hardware, where the notification message may carry the identification information of the queue 1 in the hardware, so that the hardware may read the IO request written by the High Driver module in the queue 1 for processing.
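Illustratively, S207 can be sketched as follows, reusing the hw_command sketch above; modelling the notification message as a doorbell register write is an assumption made only for this illustration.

#include <stdint.h>
#include <string.h>

/* Minimal host-side view of one hardware queue; real layouts are device-specific. */
struct hw_queue_ring {
    struct hw_command *slots;       /* device-visible submission slots */
    uint32_t           depth;
    uint32_t           tail;        /* next free slot                  */
    volatile uint32_t *doorbell;    /* mapped notification register    */
};

/* Copy the encapsulated IO request into the next free slot of the hardware
 * queue and then notify the hardware which queue has new work, mirroring the
 * notification message carrying the queue identification described above. */
int high_driver_submit(struct hw_queue_ring *q, const struct hw_command *cmd)
{
    uint32_t slot = q->tail % q->depth;

    memcpy(&q->slots[slot], cmd, sizeof(*cmd));
    q->tail++;

    *q->doorbell = q->tail;         /* hardware reads the queue and processes */
    return 0;
}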
S209, the hardware processes the IO request.
For example, referring to fig. 6, if the IO request is a read request, after receiving the notification message sent by the High Driver module, the hardware may extract the IO request written by the High Driver module from queue 1 in the hardware; the hard disk may then read the data at the physical address accessed by the IO request and write the read data, as the IO result, into queue 1 in the hardware.
For example, if the IO request is a write request, after receiving the notification message sent by the High Driver module, the hardware may extract the IO request written by the High Driver module from queue 1 in the hardware; the hard disk may then extract the data to be written and the physical address to be written to from the IO request, write the data to that physical address in the hardware, and write an IO result indicating that the IO request succeeded into queue 1 in the hardware.
Illustratively, if the hard disk encounters an error while responding to an IO request in queue 1 in the hardware, the hard disk may write an IO result representing the IO error into queue 1 in the hardware; the IO result may include an error number, for example.
S211, the hardware sends an interrupt notice to the High Driver module.
For example, referring to fig. 3, after processing some of the IO requests, the hardware may send an interrupt notification to the FS, and the High Driver module within the FS process may receive the interrupt notification.
The interrupt notification is used to remind the High Driver module to read the IO result of the hardware in response to the IO request from the hardware queue.
Illustratively, because interrupts consume considerable system resources, the hardware may send an interrupt notification to the FS after processing multiple IO requests.
Illustratively, the interrupt notification may carry the number of IO requests that the hardware has processed.
In this embodiment of the present application, the interrupt notifications of the hardware for IO requests may be sent to the FS process where the High Driver module is located, and the High Driver module processes the interrupt notifications in a unified manner.
S213, the High Driver module obtains the IO result from the hardware queue.
For example, the High Driver module may call a hardware queue object in the shared memory to read the IO result from the hardware queue in the hardware after receiving the interrupt notification.
For example, IO results may be divided into two types according to whether the IO request is in error.
Illustratively, an IO result used to indicate that the IO request is free of errors may be referred to as IO normal; such an IO result may include a read result, a write result, and the like.
An IO result used to represent an IO request error may be referred to as an IO error; such an IO result may include an error number.
For example, the error number may identify the error type.
In one possible implementation manner, when the High Driver module determines that the IO result is that the IO is normal, the execution goes to S301, and the High Driver module sends the IO result read from the hardware queue and in response to the IO request to the App.
Optionally, in most cases no exception occurs for an IO request issued to the hardware; the IO request then only needs to be processed by the High Driver module, and the IO result is returned to the App without the Devhost or the Low Driver module participating in the processing.
For example, referring to fig. 3, a High Driver module located in the FS process may send the IO result to the App through the FS.
Alternatively, in another possible implementation, when the IO request is in error, if it is a simple error (including but not limited to a check error, etc.) that does not involve a hardware state change, the error IO request may also be processed by the High Driver module.
For example, the High Driver module may be configured with a preset number list; the High Driver module can process the IO errors represented by the error numbers in the preset number list and may also be configured with a processing manner for each such IO error.
For example, in the case where an error number in the IO result read by the High Driver module from the hardware queue matches a number in the preset number list (for example, a check error), the IO error is, with reference to fig. 4, an IO error within the preset processing range of the High Driver module. The processing manner of the High Driver module for the IO error may include re-issuing the IO request: after S213, the flow goes to S203 to re-encapsulate the IO request and issue it to the hardware again.
Optionally, in another possible implementation manner, in the case that the IO result read by the High Driver module from the hardware queue is an IO error that requires the hardware to be reset, the IO result may be processed by the Low Driver module.
For example, referring to fig. 4, if the error number in the IO result is not in the preset number list, which indicates that the IO error is not within the preset processing range of the High Driver module (e.g., a hardware error), the High Driver module executes S303.
S303, the High Driver module notifies the Low Driver module to process IO errors.
For example, an error number in the IO results may be included in the notification.
In the embodiment of the application, after receiving the interrupt notification of the hardware, the High Driver module can read the hardware queue to obtain the IO result of the IO request and determine, based on the IO result, whether the IO request is in error. If the IO request is not in error, the High Driver module can return the IO result to the upper-layer application. If the IO request is in error and the IO error is a simple error (such as a check error) that does not involve a change of the device state, the High Driver module can process the IO error itself and only needs to issue the IO request again. If the High Driver module determines, based on the error number in the IO result, that the IO error needs to be processed by the Low Driver module, for example a hardware error, the High Driver module may notify the Low Driver module by means of IPC to perform the corresponding processing.
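Illustratively, the three handling paths described above (returning the result, re-issuing the request, or handing the error over to the Low Driver module) can be sketched as follows; the error number value, the hook functions, and their names are assumptions made only for this illustration.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct io_result {
    bool     is_error;
    uint32_t error_no;      /* identifies the error type        */
    /* read/write payload omitted for brevity */
};

/* Hooks assumed to exist elsewhere in the FS process (illustrative only). */
void return_result_to_app(const struct io_result *res);   /* S301, via the FS     */
void reissue_io_request(void);                             /* re-encapsulate, send */
void notify_low_driver(uint32_t error_no);                 /* S303, via IPC        */

/* Preset number list: IO errors that the High Driver module handles itself
 * (simple errors such as a check error); the value 0x10 is assumed. */
static const uint32_t preset_errnos[] = { 0x10 };

static bool in_preset_list(uint32_t e)
{
    for (size_t i = 0; i < sizeof(preset_errnos) / sizeof(preset_errnos[0]); i++)
        if (preset_errnos[i] == e)
            return true;
    return false;
}

/* Called for each IO result read from the hardware queue after an interrupt. */
void high_driver_complete(const struct io_result *res)
{
    if (!res->is_error)
        return_result_to_app(res);
    else if (in_preset_list(res->error_no))
        reissue_io_request();
    else
        notify_low_driver(res->error_no);
}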
After the Low Driver module receives the notification of the IO error from the High Driver module, reference may be made to the schematic diagrams of the Low Driver module processing the IO error shown in fig. 4 and fig. 7.
Referring to fig. 4 and fig. 7, after S303, since the IO error is a hardware error, that is, the location accessed by the IO request is faulty, in order to avoid the same type of IO error occurring repeatedly, the Low Driver module may perform S305 before resetting the hardware.
S305, the Low Driver module updates the hardware state and/or the queue state in the shared memory.
For example, the Low Driver module may determine the erroneous object within the device based on the error number in the IO result read from the hardware queue; the erroneous object may be a component within the device and/or a hardware queue.
The Low Driver module may change the state of the erroneous object in the shared memory, for example, to a fault state or to the unavailable state, so that when the High Driver module receives an IO request from the App side again and the hardware accessed by that IO request relates to the erroneous object, an IO error is directly returned to the App through the FS.
Illustratively, as shown in fig. 7, the shared memory stores the states of a plurality of components of the hard disk, and the Low Driver module may change the states of the erroneous components in the device to the unavailable state.
For example, if an error occurs in a certain granule of a certain partition in the hard disk, the Low Driver module may mark the segment of addresses corresponding to that granule in the hardware queue in the shared memory as the unavailable state. Illustratively, a group of granules in a partition corresponds to one hardware queue.
S307, the Low Driver module resets the hardware object with error in the hardware according to the hardware protocol.
For example, referring to fig. 4 and fig. 7, the Low Driver module may determine, according to the error number in the IO result, the hardware object (a component within the device and/or a hardware queue) in which the error occurs, reset the erroneous object in the hardware according to the hardware protocol, and, after the reset, obtain the state information of the reset object from the hardware. Referring to fig. 4, S305 may therefore also be performed after S307; for example, the Low Driver module may update the re-acquired state information of the reset object into the shared memory.
The hardware protocol in S307 may include a protocol for managing hardware devices of the hardware, for example.
For example, the Low Driver module may reset the LUN of the hard disk, and after the reset, obtain the state of the LUN from the hard disk, and if the LUN is operating normally, update the LUN in the shared memory from the unavailable state to the available state.
For the operations of resetting and updating the states of components in the device such as the target, the bus, and the scsi host in fig. 7, reference may be made to the above examples, and details are not repeated here.
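Illustratively, S305 and S307 can be sketched as follows, reusing the hw_shared_mem sketch given earlier; for brevity only the hardware-queue case is modelled, and the hook functions for the management part of the hardware protocol are assumptions made only for this illustration.

#include <stdbool.h>
#include <stdint.h>

/* Assumed hooks for the management part of the hardware protocol. */
bool hw_reset_object(uint32_t object_id);             /* S307: reset the object    */
bool hw_read_object_state_ok(uint32_t object_id);      /* re-read state after reset */

/* S305 + S307: mark the erroneous object unavailable, reset it, and write the
 * refreshed state back into the shared memory.  The return value indicates
 * whether the High Driver module may continue issuing IO requests. */
bool low_driver_handle_io_error(struct hw_shared_mem *shm,
                                uint32_t object_id, uint32_t error_no)
{
    (void)error_no;   /* in a full implementation the object is derived from it */

    /* S305: reject new IO requests that touch this object for now. */
    shm->queues[object_id].state = QUEUE_UNAVAILABLE;

    /* S307: reset the object and re-acquire its state from the hardware. */
    if (hw_reset_object(object_id) && hw_read_object_state_ok(object_id)) {
        shm->queues[object_id].state = QUEUE_AVAILABLE;   /* update shared memory */
        return true;      /* notify the High Driver module to continue            */
    }
    return false;         /* object stays unavailable; IO requests are rejected   */
}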
If, after all the components in the hardware device (which may include the hardware queues) are reset, the components still do not return to a normal operating state, the Low Driver module may unload the hardware device.
In other embodiments, the triggering of hardware unloading includes, but is not limited to: every component in the device being abnormal, or the user actively triggering the unloading of the hardware; this is not limited in this application.
The unloading process of the Low Driver module to the hardware can be understood as the reverse process of the process of initializing the hardware by the Low Driver module.
For example, when the Low Driver module unloads the hardware, the data in the abstracted hardware queues in the shared memory may be flushed to the disk, and then the connection between the hardware and the host is disconnected.
Specifically, the unloading process performed by the Low Driver module may include, but is not limited to, the following steps (a simplified sketch follows these steps):
updating the state of the hardware queue of the hardware in the shared memory into an unavailable state;
storing the data of the hardware queue object in the shared memory into a hardware queue in the hardware;
releasing the shared memory allocated to the hardware;
the information about the hardware in the hardware directory of Devhost is deleted. Illustratively, after the hardware accesses the system, the Devhost may write information of the accessed hardware (e.g., identification information of the hardware) to the hardware directory.
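Illustratively, the above unloading steps can be sketched as follows, reusing the hw_shared_mem sketch given earlier; the hook functions are assumptions made only for this illustration and depend on the actual hardware and Devhost implementation.

#include <stdint.h>

/* Assumed hooks (illustrative only). */
void flush_queue_objects_to_hardware(struct hw_shared_mem *shm);
void free_shared_memory(struct hw_shared_mem *shm);
void devhost_remove_hw_dir_entry(const char *hw_id);

/* Unloading is roughly the reverse of initialization. */
void low_driver_unload(struct hw_shared_mem *shm, const char *hw_id)
{
    /* 1. Mark every hardware queue unavailable so that no new IO is issued. */
    for (uint32_t i = 0; i < shm->nr_queues; i++)
        shm->queues[i].state = QUEUE_UNAVAILABLE;

    /* 2. Store the data of the hardware queue objects in the shared memory
     *    back into the hardware queues (flush pending entries to the disk). */
    flush_queue_objects_to_hardware(shm);

    /* 3. Release the shared memory allocated to the hardware. */
    free_shared_memory(shm);

    /* 4. Delete the information about the hardware from the hardware
     *    directory of Devhost. */
    devhost_remove_hw_dir_entry(hw_id);
}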
An IO processing system provided in the embodiment of the present application is described below. As shown in fig. 8:
Fig. 8 is a schematic structural diagram of an IO processing system according to an embodiment of the present application. As shown in fig. 8, the IO processing system 500 may include: processor 501, transceiver 505, and optionally memory 502.
The transceiver 505 may be referred to as a transceiver unit, a transceiver circuit, etc. for implementing a transceiver function. The transceiver 505 may include a receiver, which may be referred to as a receiver or a receiving circuit, etc., for implementing a receiving function, and a transmitter; the transmitter may be referred to as a transmitter or a transmitting circuit, etc., for implementing a transmitting function.
The memory 502 may store a computer program or software code or instructions 504, which computer program or software code or instructions 504 may also be referred to as firmware. The processor 501 may control the MAC layer and the PHY layer by running a computer program or software code or instructions 503 therein or by calling a computer program or software code or instructions 504 stored in the memory 502 to implement the IO processing method provided in the embodiments of the present application. The processor 501 may be a central processing unit (central processing unit, CPU), and the memory 502 may be, for example, a read-only memory (ROM), or a random access memory (random access memory, RAM).
The processor 501 and transceiver 505 described herein may be implemented on an integrated circuit (integrated circuit, IC), analog IC, radio frequency integrated circuit RFIC, mixed signal IC, application specific integrated circuit (application specific integrated circuit, ASIC), printed circuit board (printed circuit board, PCB), electronic device, or the like.
The IO processing system 500 may further include an antenna 506, and the modules included in the IO processing system 500 are only exemplary, and are not limited in this application.
As described above, the IO processing system in the above embodiment description may be an automatic driving system, but the scope of the IO processing system described in the present application is not limited thereto, and the structure of the IO processing system may not be limited by fig. 8. The IO processing system may be a stand-alone device or may be part of a larger device. For example, the implementation form of the IO processing system may be:
(1) A stand-alone integrated circuit IC, or chip, or a system-on-a-chip or subsystem; (2) A set of one or more ICs, optionally including storage means for storing data, instructions; (3) modules that may be embedded within other devices; (4) an in-vehicle apparatus, etc.; (5) others, and so forth.
For the case where the implementation form of the IO processing system is a chip or a chip system, reference may be made to the schematic diagram of the chip shown in fig. 9. The chip shown in fig. 9 includes a processor 601 and an interface 602. Wherein the number of processors 601 may be one or more, and the number of interfaces 602 may be a plurality. Alternatively, the chip or system of chips may include a memory 603.
All relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
Based on the same technical idea, the embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program includes at least one piece of code, and the at least one piece of code is executable by a computer to control the computer to implement the above-mentioned method embodiments.
Based on the same technical idea, the embodiments of the present application also provide a computer program for implementing the above-mentioned method embodiments when the computer program is executed by a terminal device.
The program may be stored in whole or in part on a storage medium that is packaged with the processor, or in part or in whole on a memory that is not packaged with the processor.
Based on the same technical conception, the embodiment of the application also provides a chip which comprises a network port controller and a processor. The network port controller and the processor can realize the method embodiment.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read Only Memory, ROM), erasable programmable read-only memory (Erasable Programmable ROM, EPROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable ROM, EEPROM), a register, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (31)

1. An IO processing system, wherein the IO processing system is connected to a hardware device, the IO processing system comprising: the system comprises a preset business service and a driving frame, wherein a first driving module is arranged in the preset business service, and a second driving module is arranged in the driving frame; the first driving module comprises a hardware protocol;
The first driving module is used for:
responding to a received first IO request, and encapsulating the first IO request according to a hardware protocol of the hardware equipment to generate a second IO request;
the second IO request is sent to the hardware equipment through operating a hardware queue of the hardware equipment;
the second driving module is used for:
and carrying out connection management and IO error processing on the hardware equipment.
2. The system of claim 1, wherein the hardware device comprises a hard disk, and wherein the preset business service comprises: a plurality of file system services corresponding to the hard disk, wherein the first driving modules are respectively arranged in the plurality of file system services;
and the first driving module is arranged in a plurality of file system services of the hard disk and is used for operating different hardware queues of the hard disk.
3. The system of claim 2, wherein
the number of the hardware queues of the hard disk operated by the first driving module arranged in the file system service with high priority is greater than the number of the hardware queues of the hard disk operated by the first driving module arranged in the file system service with low priority.
4. The system according to claim 2 or 3, wherein the second driving module is specifically configured to:
and distributing the hardware queues for operation to a plurality of first driving modules in different file system services of the hard disk based on the number of the hardware queues of the hard disk.
5. The system of claim 1, wherein the hardware device comprises a physical network card and the preset business service comprises a network service;
the first driving module is disposed in the network service, and is specifically configured to:
in responding to IO requests from different applications, different hardware queues are operated.
6. The system of claim 5, wherein the first driver module disposed in the network service is further configured to:
and determining the number of the hardware queues corresponding to the application according to the priority of the application accessing the network service based on the number of the hardware queues of the physical network card.
7. The system of claim 1, wherein the driving framework has a plurality of second driving modules built therein, and the plurality of second driving modules are used for performing connection management and IO error processing on hardware devices with different protocol versions.
8. The system of claim 1, wherein the first driver module comprises a hardware protocol of one or more protocol versions of a hardware device.
9. The system of claim 1, wherein
the second driving module is specifically configured to:
and responding to the initialization request of the hardware equipment, creating a hardware queue object for a hardware queue of the hardware equipment and writing the hardware queue object into a first memory.
The first driving module is specifically configured to:
and writing the second IO request into a hardware queue of the hardware device by calling the hardware queue object in the first memory.
10. The system of claim 9, wherein
the second driving module is specifically configured to:
setting a queue state of a hardware queue of the hardware device and a hardware state of the hardware device to the first memory;
the first driving module is specifically configured to:
responding to a received first IO request, and acquiring a hardware state of the hardware device and a queue state of a target hardware queue in the first memory, wherein the target hardware queue is a hardware queue corresponding to a physical address accessed by the first IO request;
And under the condition that the hardware state and the queue state of the target hardware queue are both preset states, the first IO request is packaged according to the hardware protocol of the hardware equipment, and a second IO request is generated.
11. The system of claim 1, wherein the first drive module is further configured to:
receiving an interrupt of the hardware device;
and according to the interrupt, acquiring an IO result of the hardware equipment responding to the second IO request from the hardware queue of the operation.
12. The system of claim 11, wherein the first drive module is further configured to:
and sending the IO result to a target application, wherein the target application triggers the first IO request.
13. The system of claim 11, wherein the first drive module is further configured to:
and under the condition that the error number in the IO result is matched with the number in the preset number list, repackaging the first IO request according to the hardware protocol of the hardware equipment, and retransmitting the packaged first IO request to the hardware equipment through the re-operation of the hardware queue of the hardware equipment.
14. The system of claim 13, wherein
the first driving module is further configured to:
under the condition that the error number in the IO result is matched with the number in the preset number list, the error number is sent to the second driving module;
the second driving module is specifically configured to:
determining an error hardware object in the hardware device according to the error number, and updating the state of the hardware object of the hardware device in a first memory into an unavailable state;
resetting the hardware object of the hardware device according to the hardware protocol;
and acquiring the state of the hardware object after the reset operation, so as to update the state of the hardware object in the first memory under the condition that the state of the hardware object is changed.
15. An IO processing method is applied to an IO processing system, the IO processing system is connected with hardware equipment, and the IO processing system comprises: the system comprises a preset business service and a driving frame, wherein a first driving module is arranged in the preset business service, and a second driving module is arranged in the driving frame; the first driving module comprises a hardware protocol;
The first driving module responds to the received first IO request, encapsulates the first IO request according to a hardware protocol of the hardware device, and generates a second IO request;
the first driving module sends the second IO request to the hardware equipment through operating a hardware queue of the hardware equipment;
and the second driving module is used for carrying out connection management and IO error processing on the hardware equipment.
16. The method of claim 15, wherein the hardware device comprises a hard disk, and wherein the preset business service comprises: a plurality of file system services corresponding to the hard disk, wherein the first driving modules are respectively arranged in the plurality of file system services;
the first driving module is arranged in a plurality of file system services of the hard disk, and the hardware queues of the operated hard disk are different.
17. The method of claim 16, wherein
the number of the hardware queues of the hard disk operated by the first driving module arranged in the file system service with high priority is greater than the number of the hardware queues of the hard disk operated by the first driving module arranged in the file system service with low priority.
18. The method according to claim 16 or 17, wherein the second driver module performs connection management on the hardware device, including:
the second driving module allocates hardware queues for operation to the plurality of first driving modules placed in different file system services of the hard disk based on the number of the hardware queues of the hard disk.
19. The method of claim 15, wherein the hardware device comprises a physical network card and the pre-set business service comprises a network service;
the first driving module arranged in the network service responds to IO requests from different applications, and the operating hardware queues are different.
20. The method of claim 19, wherein the method further comprises:
the first driving module arranged in the network service determines the number of hardware queues corresponding to the application according to the priority of the application accessing the network service based on the number of the hardware queues of the physical network card.
21. The method of claim 15, wherein the drive frame has a plurality of the second drive modules built-in, the method further comprising:
And the plurality of second driving modules respectively carry out connection management and IO error processing on hardware devices with different protocol versions.
22. The method of claim 15, wherein the first driver module comprises a hardware protocol of one or more protocol versions of a hardware device.
23. The method of claim 15, wherein
the second driving module performs connection management on the hardware device, including:
and the second driving module responds to the initialization request of the hardware equipment, creates a hardware queue object for a hardware queue of the hardware equipment and writes the hardware queue object into a first memory.
The first driving module sends the second IO request to the hardware device by operating a hardware queue of the hardware device, including:
and the first driving module writes the second IO request into a hardware queue of the hardware device by calling the hardware queue object in the first memory.
24. The method of claim 23, wherein
the second driving module performs connection management on the hardware device, including:
the second driving module is used for setting the queue state of the hardware queue of the hardware device and the hardware state of the hardware device to the first memory;
The first driving module, in response to the received first IO request, encapsulates the first IO request according to a hardware protocol of the hardware device, and generates a second IO request, including:
the first driving module responds to a received first IO request, and obtains the hardware state of the hardware device and the queue state of a target hardware queue in the first memory, wherein the target hardware queue is a hardware queue corresponding to a physical address accessed by the first IO request;
and the first driving module encapsulates the first IO request according to the hardware protocol of the hardware equipment under the condition that the hardware state and the queue state of the target hardware queue are both preset states, and generates a second IO request.
25. The method of claim 15, wherein the method further comprises:
the first driving module receives the interrupt of the hardware equipment;
and the first driving module acquires an IO result of the hardware equipment responding to the second IO request from the operated hardware queue according to the interrupt.
26. The method of claim 25, wherein the method further comprises:
And the first driving module sends the IO result to a target application, wherein the target application triggers the first IO request.
27. The method of claim 25, wherein the method further comprises:
and under the condition that the error number in the IO result is matched with the number in the preset number list, the first driving module repackages the first IO request according to the hardware protocol of the hardware equipment, and retransmits the packaged first IO request to the hardware equipment through the hardware queue of the hardware equipment.
28. The method of claim 27, wherein the method further comprises:
under the condition that the error number in the IO result is matched with the number in the preset number list, the first driving module sends the error number to the second driving module;
the second driving module performs IO error processing on the hardware device, including:
the second driving module determines an error hardware object in the hardware device according to the error number, and updates the state of the hardware object of the hardware device in the first memory into an unavailable state; and carrying out reset operation on the hardware object of the hardware device according to the hardware protocol, and acquiring the state of the hardware object after the reset operation so as to update the state of the hardware object in the first memory under the condition that the state of the hardware object is changed.
29. An electronic device, comprising: a memory and a processor, the memory and the processor coupled; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the IO processing method of any one of claims 15 to 28.
30. A computer readable storage medium comprising a computer program which, when run on a computer or processor, causes the computer or processor to perform the IO processing method of any one of claims 15 to 28.
31. A chip comprising one or more interface circuits and one or more processors; the interface circuit is configured to receive a signal from a memory of an electronic device and to send the signal to the processor, the signal including computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the IO processing method of any one of claims 15 to 28.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111357287.XA CN116136737A (en) 2021-11-16 2021-11-16 IO processing method and system

Publications (1)

Publication Number Publication Date
CN116136737A true CN116136737A (en) 2023-05-19

Family

ID=86326897

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998047074A1 (en) * 1997-04-15 1998-10-22 Microsoft Corporation File system primitive allowing reprocessing of i/o requests by multiple drivers in a layered driver i/o system
US20110119665A1 (en) * 2009-11-15 2011-05-19 Santos Jose Renato G Switching between direct mode and indirect mode for virtual machine I/O requests
CN102609298A (en) * 2012-01-11 2012-07-25 中国科学技术大学苏州研究院 Network card virtualizing system and network card virtualizing method on basis of hardware array expansion
US9256440B1 (en) * 2009-03-30 2016-02-09 Amazon Technologies, Inc. Facilitating device driver interactions
CN112148422A (en) * 2019-06-29 2020-12-29 华为技术有限公司 IO processing method and device
CN113297122A (en) * 2020-02-21 2021-08-24 英特尔公司 Influencing processor throttling based on serial bus aggregation IO connection management
CN113312155A (en) * 2021-07-29 2021-08-27 阿里云计算有限公司 Virtual machine creation method, device, equipment, system and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination