CN117891567B - Data processing method, device, system and storage medium - Google Patents

Data processing method, device, system and storage medium

Info

Publication number
CN117891567B
CN117891567B
Authority
CN
China
Prior art keywords
data
transmitted
queue
virtual queue
bitmap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410303359.XA
Other languages
Chinese (zh)
Other versions
CN117891567A (en)
Inventor
郭敬宇
徐源浩
苏广峰
亓开元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Jinan data Technology Co ltd
Original Assignee
Inspur Jinan data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Jinan data Technology Co ltd filed Critical Inspur Jinan data Technology Co ltd
Priority to CN202410303359.XA priority Critical patent/CN117891567B/en
Publication of CN117891567A publication Critical patent/CN117891567A/en
Application granted granted Critical
Publication of CN117891567B publication Critical patent/CN117891567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)

Abstract

Embodiments of the present disclosure relate to the field of computer technology, and in particular to a data processing method, apparatus, system, and storage medium. The main steps of the method include: receiving a request for issuing data to be transmitted from a generic block device layer, and writing the data to be transmitted into a virtual queue; updating stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue; and, when a preset notification sending condition is met, sending a data receiving notification to a back-end device in the VirtIO network architecture, the back-end device being configured to acquire the updated bitmap information to be transmitted and to read the data to be transmitted from the corresponding virtual queue according to the data receiving notification. With this method, the system overhead caused by frequent interrupts between the client and the host can be reduced, and network performance is improved.

Description

Data processing method, device, system and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, apparatus, system, and storage medium.
Background
In a network architecture based on VirtIO (virtual input/output), when an I/O request operation occurs, the front-end driver writes the request data into a virtual queue and triggers a VM exit (the virtual machine exit mechanism) to notify the back-end device; after the back-end device has read the request data from the virtual queue, it injects an interrupt notification into the virtual machine, so that a VCPU (virtual central processing unit) in the virtual machine triggers an interrupt handler to clean up the cached data of the virtual queue.
However, under heavy I/O request load, the front-end driver on the client side frequently triggers virtual machine exits to notify the back-end device to read the request data, and the host side just as frequently injects interrupt notifications; the VCPU on the client side must then respond to and handle each interrupt. This increases the overhead of both the client and the host and leads to poor network performance.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a data processing method, device, system and storage medium that improve the network performance of the VirtIO architecture.
In a first aspect, an embodiment of the present disclosure provides a data processing method applied to a front-end driver in a VirtIO network architecture, the method including: receiving a request for issuing data to be transmitted from a generic block device layer, and writing the data to be transmitted into a virtual queue;
updating stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue; and, when a preset notification sending condition is met, sending a data receiving notification to a back-end device in the VirtIO network architecture, the back-end device being configured to acquire the updated bitmap information to be transmitted and to read the data to be transmitted from the corresponding virtual queue according to the data receiving notification. The notification sending condition includes that the amount of data to be transmitted accumulated in the virtual queue reaches a quantity threshold N, or that the time interval over which the virtual queue has been receiving data to be transmitted reaches a time threshold T, where N is an integer greater than 1 and T is greater than 0.
In some embodiments, the data processing method may further comprise the following step: when the preset notification sending condition is met, writing the updated bitmap information to be transmitted into a first area of a PCI capability register (a set of standard registers defined in the PCI/PCIe (peripheral component interconnect / peripheral component interconnect express) specifications), the first area being used to allow the back-end device to acquire the updated bitmap information to be transmitted.
In some embodiments, the data processing method may further comprise the following steps: in response to client startup or device hot-add, mapping the first area into an address space preset by the client of the VirtIO network architecture to obtain a file descriptor for performing input/output read/write operations on the first area; and creating, in the client memory, a to-be-transmitted bitmap area for storing the bitmap information to be transmitted.
In some embodiments, the VirtIO network architecture includes a first timer counter, and the data processing method may further include the following steps: when the first item of data to be transmitted is written into a completely empty virtual queue, starting the first timer counter;
acquiring the recording result of the first timer counter; and determining, according to the recording result, whether the notification sending condition is met.
In some embodiments, the number threshold is less than or equal to the queue depth; the data processing method may further include the steps of: determining a quantity threshold according to the total quantity of data to be transmitted received in the time threshold and the single page size value of the memory page of the operating system; the time threshold is a preset maximum time for causing service delay perception or affecting the use experience of the user.
In some embodiments, the virtual queue includes a flow control buffer provided with a clock multiplier register for recording data representing the number of units of time for data transfer; the data processing method may further include the steps of: determining a time threshold based on the product of the unit interval of data transmission and the clock multiplier register value; the back-end equipment is used for determining the data processing speed of the host in a preset evaluation period, and increasing the clock multiplier register value when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period; and when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period, reducing the value of the clock multiplier register.
In some embodiments, writing the data to be transmitted into the virtual queue includes: writing a plurality of items of data to be transmitted into the queues of the virtual queue in ascending order of queue number, where the bits of the bitmap information to be transmitted, from low to high, correspond one-to-one to the queues of the virtual queue in ascending order of queue number.
In some embodiments, the data processing method may further comprise the steps of: receiving an interrupt notification injected from a back-end device; determining a virtual queue which finishes data reading according to the interrupt notification; and cleaning the read virtual queue cache data and updating the bitmap information to be transmitted.
In some embodiments, flushing the cached data of the virtual queue according to the interrupt notification includes: determining, according to the interrupt notification, the virtual queue that has finished being read, and cleaning the cached data of that virtual queue.
In a second aspect, an embodiment of the present disclosure provides a data processing method applied to a back-end device in a VirtIO network architecture, the data processing method including: receiving a data receiving notification sent by a front-end driver in the VirtIO network architecture, where the front-end driver is configured to write data to be transmitted into a virtual queue according to a received request for issuing the data to be transmitted from a generic block device layer, to update stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue, and to send the data receiving notification when a preset notification sending condition is met; acquiring the updated bitmap information to be transmitted; and reading the data to be transmitted from the corresponding virtual queue according to the updated bitmap information to be transmitted.
The notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of receiving the data to be transmitted by the virtual queue reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
In some embodiments, the bits of the bitmap information to be transmitted, from low to high, correspond one-to-one to the queues of the virtual queue in ascending order of queue number; and reading the data to be transmitted from the corresponding virtual queue according to the updated bitmap information to be transmitted includes: confirming the data writing state of the virtual queue bit by bit, from the low-order bit to the high-order bit, according to the updated bitmap information to be transmitted, and determining the queue number information of the queues into which data to be transmitted has been written; and reading the data to be transmitted from the virtual queues corresponding to that queue number information.
In some embodiments, the data processing method further comprises the steps of: and writing the read data to be transmitted into a storage device of a host in the VirtIO network architecture.
In some embodiments, the data processing method further comprises the following steps: after the read data to be transmitted has been written into the storage device of the host, updating stored read bitmap information, where the read bitmap information represents the data reading state of the virtual queue; and after the data to be transmitted corresponding to the data receiving notification has been written into the storage device, writing the updated read bitmap information into a second area of the PCI register and injecting an interrupt notification to the front-end driver; the front-end driver is configured to acquire the read bitmap information in the second area according to the interrupt notification and to clean the cached data of the virtual queues corresponding to the read bitmap information.
In some embodiments, the number of bits in the read bitmap information corresponds to the number of queues of the virtual queue, and each bit in the read bitmap information represents the data reading state of one queue of the virtual queue.
In some embodiments, the data processing method further comprises the steps of: determining the data processing speed of a host in a virtual input/output (VirtIO) network architecture in a preset evaluation period; the clock multiplier register value is determined according to the data processing speed and is used for enabling the front-end driver to determine a time threshold corresponding to the notification sending condition.
In some embodiments, the VirtIO network architecture includes a second timer counter configured to record the data processing amount and processing time of the host, and determining the data processing speed of the host within the preset evaluation period includes:
determining the data processing speed within the current evaluation period according to the sampled recording result of the second timer counter within that evaluation period.
In some embodiments, determining the clock multiplier register value from the data processing speed includes: when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period, increasing the value of a clock multiplier register;
and when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period, reducing the value of the clock multiplier register.
In some embodiments, increasing the clock multiplier register value includes: increasing the clock multiplier register value according to the increased proportional value of the data processing speed; reducing the clock multiplier register value, comprising: the clock multiplier register value is reduced in accordance with the reduced scale value of the data processing speed.
In a third aspect, embodiments of the present disclosure provide a data processing apparatus for use in a front-end driver in a VirtIO network architecture, the apparatus comprising: a request receiving module, configured to receive a request for issuing data to be transmitted from the generic block device layer and to write the data to be transmitted into the virtual queue; a to-be-transmitted bitmap updating module, configured to update stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue; and a notification sending module, configured to send a data receiving notification to the back-end device in the VirtIO network architecture when a preset notification sending condition is met, the back-end device being configured to acquire the updated bitmap information to be transmitted according to the data receiving notification and to read the data to be transmitted from the corresponding virtual queue.
The notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of receiving the data to be transmitted by the virtual queue reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
In a fourth aspect, an embodiment of the present disclosure provides a data processing apparatus applied to a back-end device in a VirtIO network architecture, the apparatus including: a notification receiving module, configured to receive a data receiving notification sent by a front-end driver in the VirtIO network architecture, where the front-end driver is configured to write data to be transmitted into the virtual queue according to a received request for issuing the data to be transmitted from the generic block device layer, to update stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue, and to send the data receiving notification when a preset notification sending condition is met; a to-be-transmitted bitmap information acquisition module, configured to acquire the updated bitmap information to be transmitted; and a data reading module, configured to read the data to be transmitted from the corresponding virtual queue according to the updated bitmap information to be transmitted.
The notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of receiving the data to be transmitted by the virtual queue reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
In a fifth aspect, embodiments of the present disclosure provide a data processing system for use in a VirtIO network architecture, the system comprising a client, a host, and a virtual queue; the client comprises a front-end driver, a PCI register and a generic block device layer, where the front-end driver is configured to perform the steps of the data processing method provided in any embodiment of the first aspect, the PCI register is used to transfer device and parameter information between the client and the host, and the generic block device layer is configured to generate a data issuing request according to a data operation request of the generic block device and to send the data issuing request to the front-end driver.
The host comprises a back-end device for performing the steps of the data processing method provided in any embodiment of the second aspect of the disclosure; the virtual queues are used for data transfer between the client and the host.
In a sixth aspect, embodiments of the present disclosure provide a front-end driver of a VirtIO network architecture, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the data processing method provided in any embodiment of the first aspect when the computer program is executed.
In a seventh aspect, embodiments of the present disclosure provide a back-end device of a VirtIO network architecture, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the data processing method provided in any embodiment of the second aspect when the computer program is executed.
In an eighth aspect, embodiments of the present disclosure provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the data processing method provided in any of the embodiments of the first or second aspects.
According to the data processing method, device, system, equipment and medium described above, the front-end driver receives the request for issuing data to be transmitted, writes the data to be transmitted into the virtual queue, updates the bitmap information to be transmitted, and sends a data receiving notification to the back-end device only when the preset notification sending condition is met. Because the data to be transmitted corresponding to multiple I/O requests is delivered to the back-end device through a single data receiving notification, the front-end driver is prevented from frequently triggering virtual machine exits to notify the back-end device to read the data to be transmitted, which reduces system overhead and improves network performance. Likewise, after the back-end device has read the data to be transmitted, it injects a single interrupt notification to the front-end driver, which reduces the system overhead caused by frequent interrupts.
The data writing state and the data reading state of the virtual queue are respectively represented by the bitmap information to be transmitted and the bitmap information to be read, so that the data is more compact when the PCI register transmits the VirtIO configuration, the utilization efficiency of the configuration space is higher, different bitmap values are used for representing different state types, the expression is concise, the writing and reading conditions of the queue data can be finely distinguished, the number of the target queue can be determined through simple shift operation, and the efficiency of inquiring the queue to be processed is improved.
Secondly, a data receiving notification is sent as soon as either the quantity threshold or the time threshold is reached. This avoids the situation in which, because the data to be transmitted arrives at long intervals, the quantity-accumulation condition would only be reached after an excessive wait, causing service delay and a poor user experience; it also avoids the situation in which, when data to be transmitted arrives in a large concentrated burst, the virtual queue would already be overloaded by the time the time-interval condition is reached, leaving a backlog of data that cannot be transmitted in time.
By adjusting the value of the clock multiplier register according to the data processing speed of the host, the front-end driver adjusts the time threshold value in the notification sending condition according to the value of the clock multiplier register, so that more accurate and fine notification sending control can be realized, self-adaptive tuning of the time threshold value according to the load condition of the host is realized, and the data processing efficiency is improved.
Drawings
FIG. 1 is a diagram of an application environment for a data processing method in some embodiments;
FIG. 2 is a flow chart of a data processing method applied to a front-end driver in some embodiments;
FIG. 3 is a flow chart of data processing steps applied to a backend device in some embodiments;
FIG. 4 is a flow chart of steps involved in reading data to be transferred in some embodiments;
FIG. 5 is a flow chart of steps involved in backend device injection interrupt notification in some embodiments;
FIG. 6 is a block diagram of a data processing apparatus applied to a front end driver in some embodiments;
FIG. 7 is a block diagram of a data processing apparatus applied to a back-end device in some embodiments;
FIG. 8 is a block diagram of a data processing system in some embodiments;
FIG. 9 is an internal block diagram of a front end driver in some embodiments;
fig. 10 is an internal block diagram of a backend device in some embodiments.
Detailed Description
In order to make the technical scheme and advantages of the present disclosure more apparent, the embodiments of the present disclosure and related technical contents are further described in detail below with reference to the accompanying drawings and the description. It should be understood that the following description is only for illustrating the technical solutions of the embodiments of the present disclosure, and is not intended to limit more possible implementations of the present disclosure.
It is noted that relational terms such as "first" and "second" herein are used solely to distinguish one article, state, or action from another, and do not necessarily indicate or imply any relative importance or order. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that the objects included are not limited to those listed herein. The term "plurality" and its variants denote two or more objects.
The following first explains some terms in the embodiments of the present disclosure. It should be noted that these explanations are for the convenience of those skilled in the art, and do not limit the scope of the present application.
(1) VirtIO
VirtIO is an I/O paravirtualization solution: a set of generic I/O device virtualization programs, i.e. an abstraction of a set of generic I/O devices in a paravirtualized hypervisor. It provides a communication framework and programming interface between upper-layer applications and the various hypervisor-virtualized devices (KVM, Xen, VMware, etc.), which reduces the compatibility problems caused by crossing platforms and greatly improves driver development efficiency.
The VirtIO architecture may include a front-end driver running in the client, a back-end device running in the host, and virtual queues.
Front-end driver: the driver inside the client (virtual machine) that corresponds to the device emulated by VirtIO. It receives user-mode requests, encapsulates the requests according to the transport protocol, rewrites the I/O operations, and sends notifications to the QEMU (Quick Emulator, an open-source virtual machine emulator used to emulate computer systems of different architectures) back-end device.
Back-end device: created in the QEMU of the host; it receives the I/O requests sent by the front-end driver, parses them according to the transport protocol, operates on the physical device, and notifies the front end through the interrupt mechanism.
Virtual queues: a ring buffer shared between the virtual machine and QEMU, used as VirtIO queues according to the transport protocol. A device has several queues, each handling a different data transfer.
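For illustration only, the shared ring buffer can be pictured through the descriptor layout defined by the public VirtIO specification's split virtqueue; the following minimal C sketch reproduces that published layout and is not a structure taken from the present disclosure.

```c
/* Minimal sketch of a split-ring descriptor as defined by the public VirtIO
 * specification, shown only to make the shared ring-buffer idea concrete.
 * It is not a structure taken from the present disclosure. */
#include <stdint.h>

struct vring_desc {
    uint64_t addr;   /* guest-physical address of the buffer            */
    uint32_t len;    /* length of the buffer in bytes                   */
    uint16_t flags;  /* NEXT / WRITE flags defined by the specification */
    uint16_t next;   /* index of the next descriptor in a chain         */
};
```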
(2) Host and client
A host refers to a computer running virtualization software (it may be a physical machine or another virtual machine) that provides the physical resources and the virtualization software environment for creating and managing virtual machines.
A guest refers to a virtual machine running in a virtualized environment that can run different operating systems and applications, isolated from each other. Clients can be created, started, paused, resumed, and deleted on the host, with which they communicate and share resources through virtualization software.
In a first aspect, embodiments of the present disclosure provide a data processing method. The method may be applied in the application environment shown in fig. 1. Data interaction between the client 110 and the host 130 is performed through the shared memory of the virtual queue 120. The device characteristics and configuration information in the VirtIO network architecture are stored in the PCI register 140, where the PCI register 140 may be a set of standard registers defined in the PCI/PCIe specifications, and the stored information may include the general characteristics and proprietary characteristics of VirtIO, device status information of the client 110 and the host 130, and the like.
Specifically, the client 110 includes a generic block device 111 and a front-end driver 112, and the host 130 includes a back-end device 131 and a storage device 132. When the generic block device 111 has an I/O request, the generic block device layer transmits the corresponding request to the front-end driver 112, and the front-end driver 112 writes the data to be transmitted into the virtual queue 120, so that the back-end device 131 can read the data to be transmitted from the virtual queue 120 and complete the data transfer. When a preset notification condition is met, the front-end driver 112 sends a corresponding data receiving notification to the back-end device 131 and updates the relevant device characteristic/status information in the PCI register 140, so that the back-end device 131 obtains the receiving information corresponding to the data receiving notification and then reads the data to be transmitted from the virtual queue 120; the storage device 132 is configured to store the data to be transmitted read by the back-end device 131. The arrows in the figure indicate the flow of data transmission/notification information between the devices/modules.
It should be noted that, the method for performing data processing between the client and the host provided by the embodiments of the present disclosure may be performed by a server. A user may interact with the server through a terminal device to receive and transmit information, and the terminal device may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop and desktop computers, and the like.
Taking the front-end driver 112 in the VirtIO network architecture of fig. 1 as an example, in some embodiments, as shown in fig. 2, the data processing method includes steps S201 to S203, which may be performed by the front-end driver 112. The respective steps are explained below.
Step S201, receiving a request for issuing the data to be transmitted from the universal block device layer, and writing the data to be transmitted into the virtual queue.
The generic block device layer is used to process I/O operations related to the general block device 111, where the general block device 111 may include a hard disk, a floppy disk drive, a flash memory, a USB disk, an SD card, and the like. When the general block device 111 performs an I/O operation, the generic block device layer generates an issuing request corresponding to the data to be transmitted and transmits the issuing request to the front-end driver 112. The data to be transmitted refers to the data corresponding to the I/O operation performed by the general block device 111. The request for issuing the data to be transmitted may be a request for issuing a single piece of message data, or a request for issuing multiple pieces of message data. The virtual queue 120 includes a plurality of queues for temporarily storing the data to be transmitted.
Specifically, upon receiving a request from the generic block device layer for issuing data to be transmitted, the front-end driver 112 writes the data to be transmitted corresponding to that request into the virtual queue 120.
Step S202, updating the stored waiting bitmap information.
The bitmap information to be transmitted is used to represent the data writing state of the virtual queue 120, where the data writing state includes a non-empty (data written) state and an empty (no data written) state. The number of bits in the bitmap information to be transmitted corresponds to the number of queues in the virtual queue 120, and each bit represents the data writing state of one queue; in other words, one bitmap can represent the data writing state of all queues in the virtual queue 120, where the non-empty state may be represented by a bit value of 1 and the empty state by a bit value of 0. For example: when the virtual queue 120 contains 8 queues, the bitmap information to be transmitted also has 8 bits. If the bitmap information to be transmitted is 0000 0001, only the queue with queue number 0 in the virtual queue 120 contains data, and the other queues contain no data; if the bitmap information to be transmitted is 0001 1111, the queues with queue numbers 0 through 4 in the virtual queue 120 contain data, and the queues with queue numbers 5 through 7 contain no data.
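As a minimal C sketch of this bit-per-queue encoding (the variable and function names are assumptions for illustration, not part of the disclosure), setting and clearing the bit that mirrors one queue's writing state might look as follows:

```c
/* Hypothetical sketch of the bit-per-queue encoding described above: bit i
 * mirrors the data writing state of the queue with queue number i. The
 * variable and function names are assumptions for illustration only. */
#include <stdint.h>

static uint8_t pending_bitmap;   /* 8 queues -> 8 bits, e.g. 0001 1111 */

static void mark_queue_written(unsigned queue_index)
{
    pending_bitmap |= (uint8_t)(1u << queue_index);    /* non-empty: bit = 1 */
}

static void mark_queue_empty(unsigned queue_index)
{
    pending_bitmap &= (uint8_t)~(1u << queue_index);   /* empty: bit = 0 */
}
```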
Specifically, the front end driver 112 updates the pending bitmap information according to the data writing state of each queue in the virtual queue after writing the pending data into the virtual queue 120.
Step S203, when the preset notification sending condition is met, sending a data receiving notification to the back-end equipment in the VirtIO network architecture.
The notification sending condition includes that the number of data to be transmitted accumulated in the virtual queue 120 reaches a number threshold N, or that a time interval in which the virtual queue 120 receives the data to be transmitted reaches a time threshold T, where N is an integer greater than 1, and T is greater than 0.
The specific values of the number threshold N and the time threshold T may be set empirically by those skilled in the art, or may be set comprehensively by the front-end driver 112 or the back-end device 131 according to the current I/O device throughput, system configuration, and the like.
The time threshold T may be the maximum time interval that causes traffic delay perception or may affect the user's usage experience, and in some cases, the time threshold T may be 1ms.
Specifically, in the case where any one of the number threshold N and the time threshold T is satisfied, the front-end driver 112 issues a data reception notification to the back-end device 131.
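A minimal sketch of this dual-threshold test is shown below; the constant values and the helper name are illustrative assumptions only, with the 1 ms figure merely echoing the example above.

```c
/* Minimal sketch of the dual-threshold test: notify as soon as either the
 * accumulated amount reaches N or the elapsed interval reaches T. The
 * constant values and the helper name are illustrative assumptions; the
 * 1 ms figure only echoes the example above. */
#include <stdbool.h>
#include <stdint.h>

#define COUNT_THRESHOLD_N   32u      /* assumed N, must not exceed the queue depth */
#define TIME_THRESHOLD_T_US 1000u    /* assumed T: 1 ms expressed in microseconds  */

static bool should_notify(uint32_t accumulated_requests, uint64_t elapsed_us)
{
    return accumulated_requests >= COUNT_THRESHOLD_N ||
           elapsed_us >= TIME_THRESHOLD_T_US;
}
```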
The back-end device 131 is configured to acquire the updated bitmap information to be transmitted and to read the data to be transmitted from the corresponding virtual queue according to the data receiving notification. The data receiving notification indicates that there is data to be transmitted in the current virtual queue, but it does not carry specific virtual queue number information; the queue numbers that have data to be received are indicated by the bitmap information to be transmitted.
In some embodiments, the data processing method may further comprise the steps of: and when the preset notification sending condition is met, writing the updated bitmap information to be transmitted into a first area of the PCI register. The first area is used for enabling the back-end equipment to acquire updated waiting bitmap information.
The back-end device 131 may include a KVM module and a QEMU module. In some embodiments, during the device initialization phase, the KVM module registers a watch on the first area, and when the updated bitmap information to be transmitted is written into the first area, the KVM module detects the change. The KVM module passes the change information concerning the bitmap information to be transmitted to the QEMU module; after the back-end device receives the data receiving notification, the QEMU module determines, based on the updated bitmap information to be transmitted, the queue number information of the virtual queues from which data needs to be read, and reads the data to be transmitted from the virtual queue 120 according to that queue number information.
In some embodiments, when the client starts up or a device is hot-added, the front-end driver performs device initialization: it maps the first area of the PCI register into an address space preset by the client of the VirtIO network architecture, obtaining a file descriptor that the driver program can use to perform input/output read/write operations on the first area, and it creates a to-be-transmitted bitmap area (referred to below as gbm) in the client memory. The to-be-transmitted bitmap area has the same size as the first area and is used to record the data writing state of each queue in the virtual queue; whenever the data writing state of a queue in the virtual queue 120 changes, the corresponding bit of gbm is changed, and when the preset notification sending condition is met, gbm is flushed into the first area of the PCI register. During the device initialization phase, the back-end device registers a watch on the first area; after the updated gbm has been flushed into the first area, the back-end device detects the write and determines that there is data to be transmitted that needs to be processed.
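The initialization and flush flow described above might be sketched as follows, assuming a memory-mappable resource file exposing the first area; the resource path, offset, size and names are hypothetical placeholders rather than values from the disclosure.

```c
/* Hedged sketch of the initialisation and flush flow described above: map the
 * first area of the PCI capability space into the client address space, keep
 * a shadow bitmap (gbm) in client memory, and flush it into the mapped area
 * when the notification condition is met. The resource path, offset, size and
 * names are hypothetical placeholders, not values from the disclosure. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>

#define FIRST_AREA_SIZE 8                /* assumed: one byte per group of 8 queues */

static volatile uint8_t *first_area;     /* mapped first area of the PCI register */
static uint8_t gbm[FIRST_AREA_SIZE];     /* shadow pending bitmap in client memory */

static int map_first_area(const char *resource_path, off_t offset)
{
    int fd = open(resource_path, O_RDWR | O_SYNC);    /* file descriptor for I/O */
    if (fd < 0)
        return -1;
    first_area = mmap(NULL, FIRST_AREA_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, offset);        /* offset assumed page-aligned */
    return first_area == MAP_FAILED ? -1 : fd;
}

static void flush_pending_bitmap(void)
{
    /* the back-end device watches this area and reacts to the write */
    memcpy((void *)first_area, gbm, FIRST_AREA_SIZE);
}
```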
Through steps S201 to S203, the front end driver 112 receives the request for issuing the data to be transmitted, writes the data to be transmitted into the virtual queue 120, updates the bitmap information to be transmitted, and sends a data receiving notification to the back end device 131 when a preset notification sending condition is satisfied, and sends the data to be transmitted corresponding to the plurality of I/O requests to the back end device 131 through one data receiving notification, so that the front end driver 112 can be prevented from frequently triggering VM exit to notify the back end device 131 to read the data to be transmitted, thereby reducing system overhead and improving network performance.
Meanwhile, the data writing state of the virtual queue is represented by the bitmap information to be transmitted including the multi-bit bitmap number, so that the data is more compact when the PCI register 140 transmits the VirtIO configuration, the utilization efficiency of configuration space is higher, the bitmap values 0 and 1 are used for representing the empty state and the non-empty state, the expression is concise, whether the queue has data or not can be finely distinguished, the virtual queue number corresponding to the data to be transmitted can be determined through simple shift operation, and the efficiency of inquiring the queue to be processed is improved.
One of the quantity accumulation condition (the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N) and the time interval condition (the time interval of the virtual queue receiving the data to be transmitted reaches a time threshold value T) is met, namely, a data receiving notification is sent out, so that the situation that the quantity accumulation condition is needed to be achieved after the data to be transmitted is too long, service delay and poor user experience are caused, and the situation that the virtual queue is overloaded when the time interval condition is reached under the condition that the data to be transmitted is centralized and large in quantity is input, and the situation that the data to be transmitted cannot be transmitted in time is avoided.
In some embodiments, the VirtIO network architecture may include a first timer counter, and the data processing method may further include the following step: when the first item of data to be transmitted is written into a completely empty virtual queue, starting the first timer counter.
A completely empty virtual queue means that, at the current moment, the data writing state of every queue in the virtual queue 120 is the empty state. The first timer counter is set in the client of the VirtIO network architecture and is used to record the amount of data to be transmitted accumulated in the virtual queue and the time interval over which the virtual queue has been receiving data to be transmitted. When the first item of data to be transmitted is written into the completely empty virtual queue, the first timer counter is started; after the data to be transmitted in the virtual queue has been read and the cached data has been cleaned, the recording result of the first timer counter is reset.
In some alternative embodiments, when the generic block device layer issues a request for the delivery of data to be transferred to the front-end driver while the virtual queue 120 is in the full empty state, the first timer counter is started.
In some embodiments, the data processing method may further comprise the steps of: acquiring a recording result of the first timing counter; and determining whether the notification sending condition is met according to the recording result.
Specifically, the front end driver 112 acquires the recording result of the first timer counter, and determines that the notification issue condition is satisfied when it is determined that one of the number accumulation condition or the time interval condition is satisfied, based on the recording result of the first timer counter.
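A hedged sketch of such a first timer counter is given below; the structure and field names are assumptions used only to make the start/record/reset lifecycle concrete.

```c
/* Illustrative sketch of the first timer counter: started when the first
 * request is written into a completely empty virtual queue, consulted to
 * decide whether to notify, and reset after the queue buffers are cleaned.
 * The structure and field names are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct first_timer_counter {
    bool            running;       /* true once the first write has been seen  */
    uint32_t        accumulated;   /* requests written since the counter began */
    struct timespec started;       /* time of the first write into empty queue */
};

static void counter_on_write(struct first_timer_counter *c)
{
    if (!c->running) {                              /* queue was completely empty */
        clock_gettime(CLOCK_MONOTONIC, &c->started);
        c->running = true;
        c->accumulated = 0;
    }
    c->accumulated++;
}

static void counter_reset(struct first_timer_counter *c)
{
    c->running = false;                             /* called after cache cleanup */
    c->accumulated = 0;
}
```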
In some embodiments, the data processing method may further comprise the steps of: and determining a quantity threshold according to the queue depth of the virtual queue and the attribute information of the operating system.
In some embodiments, the number threshold is less than or equal to the queue depth, and the data processing method may further include the steps of: and determining a quantity threshold according to the total quantity of the data to be transmitted received in the time threshold and the single page size value of the memory page of the operating system.
The time threshold is a preset maximum time for causing service delay perception or affecting the use experience of the user.
The queue depth refers to the maximum number of packets that can be accommodated in a queue for storing and processing packets in a network device or system. In general, the size of the queue depth can affect the performance and traffic handling capabilities of the network device. In some cases the value of the queue depth may be 32, 64 or other values. In the case where the queue depth value of the virtual queue is 64, the virtual queue may temporarily store 64 caches of data to be transferred.
The operating system attribute information may include an operating system memory page single page size.
Specifically, the number threshold needs to be less than or equal to the queue depth, and when the number threshold is greater than the queue depth, there may be a queue overflow or packet loss, which results in network congestion and performance degradation.
The front-end driver 112 or the back-end device 131 may determine the quantity threshold according to the operating system attribute information; specifically, it may determine the total amount X of data to be transmitted received within the time threshold and the single-page size Y of an operating system memory page. The time threshold may be the maximum time T that causes service delay to be perceived or affects the user experience, and the total amount X of data to be transmitted is determined from the transmission speed of the data to be transmitted and the time threshold T. Since page alignment is usually applied before the data is written into the virtual queue, the data is scattered into multiple buffers, so the quantity threshold N can be determined by dividing the total amount X of data received during the time threshold by the single-page size Y of an operating system memory page.
In other examples, the quantity threshold may also be determined empirically by a technician based on operating system attribute information and virto network performance.
The quantity threshold value is determined according to the queue depth of the virtual queue and the attribute information of the operating system, so that the situation that network congestion and poor performance are caused by the fact that the quantity threshold value is larger than the queue depth can be avoided, and meanwhile, the suitability of the quantity threshold value and a current operating system can be enhanced by considering the attribute information of the operating system.
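As a worked illustration of this rule (the figures and the clamping policy are assumptions consistent with the constraints above, not prescribed values), the threshold N = X / Y might be computed as follows. For instance, with X = 128 KiB expected within T and Y = 4096 bytes, N evaluates to 32, which fits a queue depth of 64.

```c
/* Worked sketch of the quantity-threshold rule: N is the total amount of data
 * X expected within the time threshold divided by the single-page size Y of
 * an operating-system memory page, kept within the queue depth and above 1.
 * The figures and the clamping policy are illustrative assumptions. */
#include <stdint.h>

static uint32_t quantity_threshold(uint64_t total_bytes_x,   /* X within T   */
                                   uint32_t page_size_y,     /* Y, e.g. 4096 */
                                   uint32_t queue_depth)     /* e.g. 32 or 64 */
{
    uint32_t n = (uint32_t)(total_bytes_x / page_size_y);
    if (n > queue_depth)
        n = queue_depth;      /* N must not exceed the queue depth */
    if (n < 2)
        n = 2;                /* N is an integer greater than 1    */
    return n;
}
```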
In some embodiments, the virtual queue includes a flow control buffer provided with a clock multiplier register for recording data representing the number of units of time for data transmission, and the data processing method may further include the steps of: the time threshold is determined based on the product of the unit interval of data transfer and the clock multiplier register value.
The back-end equipment is used for determining the data processing speed of the host in a preset evaluation period, and increasing the clock multiplier register value when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period; and when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period, reducing the value of the clock multiplier register.
The data processing speed of the host reflects, to a certain extent, the throughput of the I/O device: when the throughput of the I/O device is low, the data processing speed of the host is limited by the I/O device. When the data processing speed is low, i.e. smaller than a first threshold, lowering the time threshold according to the data processing speed shortens the interval at which the front-end driver sends data receiving notifications to the back-end device, improving the response speed and overall performance of the system. Similarly, when the data processing speed of the host is high, i.e. greater than a second threshold, the throughput of the I/O device is at a peak; raising the time threshold according to the data processing speed then lengthens the interval at which the front-end driver 112 sends data receiving notifications to the back-end device 131, so that a data receiving notification is issued only after data to be transmitted has accumulated for a period of time, which fully exploits the advantages of batch processing and improves data processing efficiency. The first threshold and the second threshold are not limited here and may be configured by those skilled in the art according to experience and actual needs.
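A minimal sketch of this adaptive tuning, assuming a simple single-step adjustment (the step size and names are illustrative assumptions; the disclosure also mentions scaling the adjustment by the proportional change in speed), is shown below.

```c
/* Hedged sketch of the adaptive tuning described above: the time threshold is
 * the product of the unit transfer interval and the clock multiplier register
 * value, and the back end nudges that value up or down as the host's data
 * processing speed rises or falls between evaluation periods. The names and
 * the single-step adjustment are assumptions. */
#include <stdint.h>

static uint32_t clock_multiplier = 10;   /* value held in the flow-control buffer */

static uint64_t time_threshold_us(uint64_t unit_interval_us)
{
    return unit_interval_us * clock_multiplier;   /* T = unit interval x multiplier */
}

static void adjust_multiplier(uint64_t current_speed, uint64_t previous_speed)
{
    if (current_speed > previous_speed)
        clock_multiplier++;                       /* host faster: batch for longer  */
    else if (current_speed < previous_speed && clock_multiplier > 1)
        clock_multiplier--;                       /* host slower: notify sooner     */
}
```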
In some embodiments, writing the data to be transmitted into the virtual queue includes: writing a plurality of items of data to be transmitted into the queues of the virtual queue in ascending order of queue number, where the bits of the bitmap information to be transmitted, from low to high, correspond one-to-one to the queues of the virtual queue in ascending order of queue number.
Specifically, the queues are prioritized for data writing in ascending order of queue number: the smaller the queue number, the higher the priority for data writing, and the front-end driver preferentially writes data into the queue with the highest priority, as illustrated in the sketch below.
Because the data writing order of the virtual queues is prioritized in this way, when the back-end device 131 obtains the updated bitmap information to be transmitted it determines the queues containing data to be transmitted through shift operations, which in general proceed from the low-order bit to the high-order bit; since the low-to-high bits of the bitmap correspond one-to-one to the queues in ascending order of queue number, the queues that contain data are queried first and the scan can terminate early at queues without data, which further improves query efficiency.
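The sketch below illustrates that write-priority rule; queue_has_room() is an assumed capacity check, not an interface defined by the disclosure.

```c
/* Sketch of the write-priority rule above: place each request into the
 * lowest-numbered queue that still has room, so the low bits of the bitmap
 * fill first and the back end can stop scanning early. queue_has_room() is an
 * assumed capacity check, not an interface defined by the disclosure. */
#include <stdbool.h>

extern bool queue_has_room(unsigned queue_index);   /* assumed helper */

static int pick_write_queue(unsigned queue_count)
{
    for (unsigned i = 0; i < queue_count; i++)      /* ascending queue number */
        if (queue_has_room(i))
            return (int)i;
    return -1;                                      /* every queue is full */
}
```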
In some embodiments, the data processing method may further comprise the steps of: receiving an interrupt notification injected from the back-end equipment, and determining a virtual queue for completing data reading according to the interrupt notification; and cleaning the read virtual queue cache data and updating the bitmap information to be transmitted.
The interrupt notification is used to indicate that the back-end device 131 has completed the read and write operations on the data to be transmitted.
Specifically, after the back-end device 131 has finished reading and storing the data to be transmitted in the virtual queue, it sends an interrupt notification to the front-end driver 112. The front-end driver 112 receives the interrupt notification and clears the cached data of the virtual queue according to it, so that the virtual queue 120 changes from the non-empty state back to the completely empty state. Correspondingly, since the bitmap information to be transmitted represents the data writing state of the virtual queue, the bitmap information to be transmitted is updated after the cached data has been cleared, for example from 0001 1111 to 0000 0000.
In some embodiments, the interrupt notification is sent via a irqfd mechanism (a communication mechanism that provides a shortcut for the backend device to send the notification to the front-end driver).
In some examples, the interrupt notification may carry queue number information of the virtual queue that has completed reading data, and the front end driver 112 determines, according to the interrupt notification, the queue number information that has completed reading data, and then clears the buffered data in the corresponding queue.
In other examples, the queue number information of the queues that have completed reading may be stored in the PCI register in a bitmap or other form; the front-end driver 112 may then read the corresponding queue number information from the PCI register according to the interrupt notification, determine the queues that have completed reading, and clear their cached data.
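Combining the two variants above, a hedged sketch of the front-end interrupt path might look as follows; both helper functions are assumptions standing in for the PCI-register read and the cache cleanup described in the text.

```c
/* Hedged sketch of the front-end interrupt path covering both variants above:
 * on the injected notification, read the completion bitmap from the second
 * PCI area, free the buffers of every queue whose bit is set, and clear the
 * matching bits of the pending bitmap. Both helper functions are assumptions
 * standing in for the PCI read and the cache cleanup described in the text. */
#include <stdint.h>

extern uint8_t read_second_area_bitmap(void);            /* assumed PCI-register read */
extern void    free_queue_buffers(unsigned queue_index); /* assumed cache cleanup     */

static void on_backend_interrupt(uint8_t *pending_bitmap, unsigned queue_count)
{
    uint8_t done = read_second_area_bitmap();            /* queues already read       */
    for (unsigned i = 0; i < queue_count; i++) {
        if (done & (1u << i)) {
            free_queue_buffers(i);                       /* clean the cached data     */
            *pending_bitmap &= (uint8_t)~(1u << i);      /* queue is empty again      */
        }
    }
}
```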
In a second aspect, embodiments of the present disclosure provide a data processing method. The method is applied to the back-end device 131 in the VirtIO network architecture shown in fig. 1. As shown in fig. 1 and 3, the data processing method includes steps S301 to S303, which may be performed by the back-end device 131. The respective steps are explained below.
Step S301, receiving a data receiving notification sent by the front-end driver 112 in the VirtIO network architecture.
The front-end driver 112 is configured to write the data to be transmitted into the virtual queue 120 according to the received request for issuing the data to be transmitted from the generic block device layer, and to update the stored bitmap information to be transmitted, where the bitmap information to be transmitted represents the data writing state of the virtual queue, the number of bits in the bitmap corresponds to the number of queues in the virtual queue, and each bit represents the data writing state of one queue; the data receiving notification is sent when a preset notification sending condition is met. The notification sending condition includes that the amount of data to be transmitted accumulated in the virtual queue reaches a quantity threshold N, or that the time interval over which the virtual queue has been receiving data to be transmitted reaches a time threshold T, where N is an integer greater than 1 and T is greater than 0.
Step S302, acquiring the updated bitmap information to be transmitted.
Step S303, reading the data to be transmitted from the corresponding virtual queue according to the updated bitmap information to be transmitted.
Through steps S301 to S303, the back-end device 131 receives the data receiving notification sent by the front-end driver 112, acquires the updated bitmap information to be transmitted, determines from it the queues of the virtual queue 120 that contain data to be transmitted, and then reads the data to be transmitted from those queues. Because the front-end driver 112 sends a data receiving notification to the back-end device 131 only when the preset notification sending condition is met, and the data to be transmitted corresponding to multiple I/O requests is delivered through a single data receiving notification, the front-end driver 112 is prevented from frequently triggering virtual machine exits to notify the back-end device 131 to read the data to be transmitted, which reduces system overhead and improves network performance.
The operations performed after the front-end driver 112 receives the issuing request, the notification sending conditions, and the way the back-end device 131 acquires the to-be-transmitted bitmap information are described in the data processing method applied to the front-end driver 112 provided in the foregoing first aspect, and are not repeated here.
In some embodiments, the bitmap numbers of the to-be-transmitted bitmap information, from the low bit to the high bit, correspond one-to-one to the virtual queues in ascending order of queue number information; as shown in fig. 4, step S303 may include step S401 and step S402.
Step S401, confirming the data writing state of the virtual queues bit by bit from the low bit to the high bit according to the updated bitmap information to be transmitted, and determining the queue number information written with the data to be transmitted.
As described above, the to-be-transmitted bitmap information is used to represent the data writing state of the virtual queue, and the back-end device 131 may determine the queue number information of the queues into which data to be transmitted has been written by performing a shift operation on the updated to-be-transmitted bitmap information.
In some specific embodiments, since the to-be-transmitted bitmap information uses 1 and 0 to represent the non-empty state and the empty state of a queue respectively, the value of each bitmap number can be determined by performing a shift operation and a bitwise AND operation on the to-be-transmitted bitmap information, thereby confirming the writing state of the queue. For example, with the queue number information expressed as queue_index and the to-be-transmitted bitmap information expressed as bitmap, the data writing state of queue queue_index is the non-empty state when (1 << queue_index) & bitmap > 0 is satisfied. Here "(1 << queue_index)" means that the number 1 is shifted left by queue_index bits; when the queue number information is 2, shifting 1 left by 2 bits gives 100 in binary. "& bitmap" means that the shifted result and the to-be-transmitted bitmap information are combined by a bitwise AND, whose result bit is 1 only when both compared bits are 1 and is 0 otherwise. If the to-be-transmitted bitmap information is 1100, the bitwise AND of 0100 and 1100 is 0100; since 0100 > 0, the data writing state of the queue whose queue number information is 2 is the non-empty state.
Step S402, reading the data to be transmitted from the virtual queue corresponding to the queue number information.
By performing a shift operation on the to-be-transmitted bitmap information, each bitmap number can be determined more quickly than with conventional arithmetic or other logical operations, and the storage space required to determine the bitmap numbers is also significantly reduced.
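As a minimal sketch of the bit-by-bit confirmation in steps S401 and S402 (the function and parameter names are illustrative, not the patented code):

#include <stdint.h>
#include <stdio.h>

/* Scan the to-be-transmitted bitmap from the low bit to the high bit; every
 * set bit identifies a queue whose data writing state is non-empty and whose
 * data to be transmitted should therefore be read. */
static void scan_to_be_transmitted(uint64_t bitmap, unsigned num_queues)
{
    for (unsigned queue_index = 0; queue_index < num_queues; queue_index++) {
        if (((UINT64_C(1) << queue_index) & bitmap) > 0) {
            /* e.g. bitmap 1100 and queue_index 2: (1 << 2) & 1100 = 0100 > 0 */
            printf("read data to be transmitted from queue %u\n", queue_index);
        }
    }
}

int main(void)
{
    scan_to_be_transmitted(0x0C, 4); /* 0x0C is binary 1100: queues 2 and 3 */
    return 0;
}

Calling scan_to_be_transmitted(0x0C, 4) reports queues 2 and 3 as non-empty, matching the 1100 example above.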
In some embodiments, the data processing method may further include the following step: writing the read data to be transmitted to the storage device 132 of the host.
The storage device 132 may be any of a plurality of types of physical or virtual storage devices. It may be a block device, for example a local disk, a SAN (Storage Area Network) disk, or an iSCSI disk, or it may be a network file system, a distributed storage system, or the like.
In some embodiments, the data processing method may further include the following step: transmitting the read data to be transmitted to a switch or other data forwarding device.
In some embodiments, as shown in fig. 5, the data processing method may further include step S501 and step S502.
In step S501, after the read data to be transmitted is written into the storage device 132 of the host 130, the read bitmap information is updated.
The read bitmap information is used for representing data reading states of the virtual queues, wherein the data reading states comprise a read state and an unread state.
In some embodiments, the number of bitmaps of the read bitmap information corresponds to the number of queues of the virtual queue, and one bitmap number in the read bitmap information corresponds to the data reading state of one queue. Specifically, one bitmap may be used to represent the data reading state of all queues in the virtual queue: a bitmap number with a value of 1 indicates the read state, and a bitmap number with a value of 0 indicates the unread state. For example, when the number of queues in the virtual queue is 8, the number of bitmaps of the read bitmap information is also 8, and read bitmap information of 0001 1111 indicates that the queue data of queue numbers 0 to 4 in the virtual queue has been read.
In some specific embodiments, during the device initialization phase, the back-end device 131 sets a second area in the PCI register for storing the updated read bitmap information and initializes a bitmap (denoted hbm) of the same size as the second area for recording the data reading state of each queue in the virtual queue, while the front-end driver maps the second area to a file descriptor that the driver can use for I/O read and write. When all the data to be transmitted in the virtual queue has been read and written by the back-end device, hbm is flushed into the second area.
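A sketch of this bookkeeping, under assumed names (read_state, mark_queue_read, flush_read_bitmap) and with the second area already mapped:

#include <stdint.h>

/* hbm mirrors the data reading state of every queue (1 = read). After all
 * data to be transmitted referenced by one data receiving notification has
 * been read and written to storage, hbm is flushed into the second area of
 * the PCI register so the front-end driver can observe it. */
struct read_state {
    uint64_t hbm;                    /* one bit per queue */
    volatile uint64_t *second_area;  /* mapped second area of the PCI register */
};

static void mark_queue_read(struct read_state *s, unsigned queue_index)
{
    s->hbm |= UINT64_C(1) << queue_index;
}

static void flush_read_bitmap(struct read_state *s)
{
    *s->second_area = s->hbm; /* expose the read bitmap to the front-end driver */
    s->hbm = 0;               /* reset for the next batch (an assumption here) */
}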
In step S502, after the data to be transmitted corresponding to the data receiving notification is written into the storage device 132, the updated read bitmap information is written into the second area of the PCI register, and the interrupt notification is injected into the front-end driver 112.
That the data to be transmitted corresponding to the data receiving notification has been written into the storage device 132 means that the back-end device 131 has completed the read and write operations on the data to be transmitted in the virtual queue.
In some embodiments, during the device initialization phase, the host allocates an irqfd and registers it with the KVM module of the back-end device 131 when the PCI device is emulated, so as to notify the KVM module to inject an interrupt into the front-end driver 112 when the back-end device 131 has processed the data to be transmitted.
In the related art, irqfds are allocated in the same number as the number of queues of the virtual queue, and after the Qemu module of the back-end device 131 completes the read and write operations on the data to be transmitted in a certain queue, it notifies KVM to inject an interrupt using the irqfd corresponding to that queue; when data to be transmitted exists in a plurality of queues, interrupts are therefore injected frequently. In the embodiment of the present disclosure, by contrast, only one dedicated irqfd is used, and an interrupt notification is injected into the front-end driver 112 once after the data to be transmitted in the virtual queue has been read and written by the back-end device 131, which avoids the increased system overhead and reduced network performance caused by frequent interrupts.
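The single dedicated irqfd can be sketched with the standard Linux eventfd/KVM interfaces as follows; vm_fd, the GSI value and the function names are assumptions, and error handling is reduced to a minimum:

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <unistd.h>

/* Allocate one eventfd and register it with KVM as an irqfd, so that writing
 * to the eventfd injects the corresponding guest interrupt. */
static int register_single_irqfd(int vm_fd, uint32_t gsi)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;
    struct kvm_irqfd irqfd = { .fd = (uint32_t)efd, .gsi = gsi };
    if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0) {
        close(efd);
        return -1;
    }
    return efd;
}

/* Signal the eventfd once after every queue referenced by one data receiving
 * notification has been processed: one interrupt injection for the batch. */
static void notify_front_end_once(int efd)
{
    uint64_t one = 1;
    (void)write(efd, &one, sizeof(one));
}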
The front end driver 112 is configured to obtain the read bitmap information in the second area according to the interrupt notification, and clear cache data of the virtual queue corresponding to the read bitmap information.
In some embodiments, the front-end driver 112 triggers an interrupt handling function, obtains the read bitmap information from the PCI register, obtains all the processed queue number information via a shift operation, and clears the buffered data of the corresponding queue space.
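A guest-side sketch of this interrupt handling path (the helpers read_second_area and clear_queue_buffers are placeholders for driver internals, not the actual front-end code):

#include <stdint.h>

static uint64_t read_second_area(void)      { return 0x1F; } /* placeholder */
static void clear_queue_buffers(unsigned q) { (void)q; }     /* placeholder */

/* On receiving the interrupt notification, fetch the read bitmap from the
 * second area, clear the cached data of every queue marked as read, and
 * update the to-be-transmitted bitmap accordingly. */
static void handle_backend_interrupt(unsigned num_queues,
                                     uint64_t *to_be_transmitted_bitmap)
{
    uint64_t read_bitmap = read_second_area();
    for (unsigned q = 0; q < num_queues; q++) {
        if ((UINT64_C(1) << q) & read_bitmap) {
            clear_queue_buffers(q);
            *to_be_transmitted_bitmap &= ~(UINT64_C(1) << q); /* 0001 1111 -> 0000 0000 */
        }
    }
}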
In some embodiments, the data processing method may further comprise the steps of: determining the data processing speed of the host in a preset evaluation period, and determining the value of a clock multiplier register according to the data processing speed.
The clock multiplier register value is used to cause the front-end driver 112 to determine a time threshold corresponding to the notification issue condition.
The evaluation period may be a period preset by a technician for evaluating the data processing speed of the host. In some examples, the evaluation period may be the whole period from when the I/O device starts data transmission (I/O device start) to when the data transmission is completed (I/O device unload); in other examples, it may be a specific part of that period, for example the period from the start to the end of data transmission may be divided equally into a plurality of evaluation periods; in still other examples, the data processing speed may be evaluated at regular intervals, each interval being one evaluation period, for example every 5 ns (nanoseconds). The length of the evaluation period may be set according to actual needs and is not particularly limited here.
In some embodiments, the clock multiplier register is used to store a multiplier value (clock multiplier register value), which may be the number of units of time for data transfer.
The multiplier value is used to enable the front-end driver 112 to adjust the time threshold corresponding to the notification sending condition, that is, to adjust the timing interval of the first timing counter, thereby changing the time interval at which the front-end driver 112 sends the data receiving notification. A flow control buffer area for flow control may be set in the virtual queue, and the clock multiplier register is a configurable field in that flow control buffer area.
In some embodiments, the clock multiplier register value may be represented by CMR, which may range in value from 1 to 1000000.
In some particular embodiments, the front-end driver 112 determining the time threshold corresponding to the notification sending condition based on the clock multiplier register value may include: determining the time threshold based on the product of the unit time of data transmission and the clock multiplier register value.
By adjusting the clock multiplier register value according to the data processing speed of the host, the front-end driver 112 adjusts the time threshold value in the notification sending condition according to the clock multiplier register value, thereby realizing more accurate and fine notification sending control, realizing self-adaptive tuning of the time threshold value according to the load condition of the host, and improving the data processing efficiency.
In some embodiments, the VirtIO network architecture includes a second timing counter, and determining the data processing speed of the host 130 in the preset evaluation period includes: determining the data processing speed in the current evaluation period according to the sampling record results of the second timing counter within the evaluation period.
The second timing counter is set in the host 130 of the VirtIO network architecture and is used to record the data processing amount and processing time of the host. In some specific embodiments, the second timing counter is started when a preset evaluation period begins, the counter may be sampled multiple times within the evaluation period, and the data processing speed of the host can be determined from the recorded result of one or more of these samples.
Compared with setting the second timing counter in the client, setting it in the host 130 to perform multiple sampling records reduces the occupation of virtual machine resources and also makes it easier for the back-end device 131 to modify the time threshold and/or the quantity threshold of the first timing counter online.
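As a simple sketch (with assumed field names), the data processing speed for an evaluation period is the ratio of the deltas recorded by two samples of the second timing counter:

#include <stdint.h>

struct counter_sample {
    uint64_t bytes_processed; /* data processing amount recorded by the counter */
    uint64_t time_ns;         /* processing time recorded by the counter */
};

/* Data processing speed over the evaluation period, in bytes per nanosecond. */
static double processing_speed(struct counter_sample first,
                               struct counter_sample last)
{
    uint64_t elapsed = last.time_ns - first.time_ns;
    if (elapsed == 0)
        return 0.0;
    return (double)(last.bytes_processed - first.bytes_processed) /
           (double)elapsed;
}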
In some embodiments, determining the clock multiplier register value based on the data processing speed may include the following two cases.
Case one: the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period.
The current data processing speed is greater than the original data processing speed, which means that the throughput of the I/O device in the current evaluation period is greater than the throughput of the I/O device in the previous evaluation period, and the clock multiplier register value is increased.
And a second case: the current data processing speed in the current evaluation period is smaller than the original data processing speed in the previous evaluation period.
The current data processing speed is smaller than the original data processing speed, which means that the throughput of the I/O device in the current evaluation period is smaller than the throughput of the I/O device in the last evaluation period, and the clock multiplier register value is reduced.
In some embodiments, increasing the clock multiplier register value may include: the clock multiplier register value is increased in accordance with the increased proportional value of the data processing speed. Reducing the clock multiplier register value may include: the clock multiplier register value is reduced in accordance with the reduced scale value of the data processing speed.
In some specific embodiments, the upper limit of the range of CMR is determined from the maximum time value that would cause perceptible service delay or affect the user experience and from the unit time of data transmission; for example, if the maximum time value is 1 ms and the unit time of data transmission is 1 ns, the upper limit of the range of CMR is 1 ms / 1 ns = 1000000.
In some embodiments, the amount by which the clock multiplier register value is increased or decreased is determined from the change in data processing speed between the current evaluation period and the previous evaluation period, that is, from the proportional value by which the data processing speed has increased or decreased. For example, with the original clock multiplier register value denoted CMR0, the changed clock multiplier register value denoted CMR1, the maximum clock multiplier register value denoted CMRmax, and the current data processing speed increased or decreased by Var% compared with the original data processing speed, the value of CMR1 can be expressed by the following formula:
CMR1=CMR0±CMRmax×Var%
in some specific embodiments, the time threshold for notification of an outgoing condition may be expressed by the following formula:
T=CMR1×Δt
where Δt is the unit time of data transmission.
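The adaptive tuning above can be sketched as follows; CMR_MIN, CMR_MAX and the function names are assumptions, with CMR_MAX chosen as in the 1 ms / 1 ns example:

#include <stdint.h>

#define CMR_MIN 1ULL
#define CMR_MAX 1000000ULL /* e.g. 1 ms maximum tolerated delay / 1 ns unit time */

/* CMR1 = CMR0 ± CMRmax × Var%; var_percent is positive when the current data
 * processing speed exceeds the original one and negative otherwise. */
static uint64_t adjust_cmr(uint64_t cmr0, double var_percent)
{
    double cmr1 = (double)cmr0 + (double)CMR_MAX * var_percent / 100.0;
    if (cmr1 < (double)CMR_MIN) cmr1 = (double)CMR_MIN;
    if (cmr1 > (double)CMR_MAX) cmr1 = (double)CMR_MAX;
    return (uint64_t)cmr1;
}

/* T = CMR1 × Δt, with unit_time_ns the unit time of data transmission. */
static uint64_t time_threshold_ns(uint64_t cmr1, uint64_t unit_time_ns)
{
    return cmr1 * unit_time_ns;
}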
It should be understood that although the steps in the flowcharts of figs. 2 to 5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps shown in figs. 2 to 5, as well as the steps involved in other embodiments, are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the foregoing embodiments may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages need not be performed sequentially; they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In a third aspect, an embodiment of the present disclosure provides a data processing apparatus applied to a front-end driver in a VirtIO network architecture. As shown in fig. 6, the data processing apparatus 600 includes a request receiving module 601, a to-be-transmitted bitmap updating module 602, and a notification sending module 603.
The request receiving module 601 is configured to receive a request for issuing to-be-transmitted data from the generic block device layer, and write the to-be-transmitted data into the virtual queue.
The to-be-transmitted bitmap updating module 602 is configured to update the stored to-be-transmitted bitmap information, where the to-be-transmitted bitmap information is used to represent the data writing state of the virtual queue, the number of bitmaps of the to-be-transmitted bitmap information corresponds to the number of queues of the virtual queue, and one bitmap number represents the data writing state of one queue.
The notification sending module 603 is configured to send a data receiving notification to a back-end device in the VirtIO network architecture when a preset notification sending condition is met, where the back-end device is configured to acquire the updated to-be-transmitted bitmap information according to the data receiving notification and read the data to be transmitted from the corresponding virtual queue.
The notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of receiving the data to be transmitted by the virtual queue reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
In some embodiments, the data processing apparatus 600 further includes a to-be-transmitted bitmap writing module configured to write the updated to-be-transmitted bitmap information into a first area of the PCI register when a preset notification sending condition is met, where the first area is used to enable the back-end device to acquire the updated to-be-transmitted bitmap information.
In some embodiments, the data processing apparatus 600 further includes a timing count control module configured to start a first timing counter when the first data to be transmitted is written into the completely empty virtual queue.
In some embodiments, the data processing apparatus 600 further includes a timing counter result acquisition module configured to acquire the recording result of the first timing counter, and a notification condition determination module configured to determine, according to the recording result, whether the notification sending condition is satisfied.
In some embodiments, the data processing apparatus 600 further comprises a quantity threshold determination module for determining a quantity threshold based on the queue depth of the virtual queue and the operating system attribute information.
In some embodiments, the data processing apparatus 600 further includes a time threshold determining module configured to determine the time threshold according to the data processing speed of the host in the VirtIO network architecture, the time threshold being proportional to the data processing speed.
In some embodiments, the request receiving module 601 includes a priority transmission unit configured to write a plurality of pieces of data to be transmitted into the virtual queue in ascending order of the queue number information of the plurality of queues in the virtual queue.
In some embodiments, the data processing apparatus 600 further includes an interrupt notification receiving module configured to receive an interrupt notification injected from the back-end device, and a cache cleaning module configured to clear the cache data of the virtual queue according to the interrupt notification and update the to-be-transmitted bitmap information.
In some embodiments, the cache cleaning module includes a cleaning queue determining unit configured to determine, according to the interrupt notification, the virtual queues in which data reading has been completed and to clear the buffered data of the read virtual queues.
For more specific limitations of the data processing apparatus 600, reference may be made to the above description of a data processing method, and the data processing apparatus 600 may be used to perform further steps of the method in any embodiment of the first aspect of the present disclosure, which is not described here again. The various modules in the data processing apparatus 600 described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In a fourth aspect, an embodiment of the present disclosure provides a data processing apparatus applied to a back-end device in a VirtIO network architecture. As shown in fig. 7, the data processing apparatus 700 includes a notification receiving module 701, a to-be-transmitted bitmap information acquiring module 702, and a data reading module 703. The notification receiving module 701 is configured to receive a data receiving notification sent by a front-end driver in the VirtIO network architecture. The front-end driver is configured to write the data to be transmitted into the virtual queue according to the received issuing request for the data to be transmitted from the generic block device layer, and to update the stored to-be-transmitted bitmap information, where the to-be-transmitted bitmap information is used to represent the data writing state of the virtual queue, the number of bitmaps of the to-be-transmitted bitmap information corresponds to the number of queues of the virtual queue, and one bitmap number represents the data writing state of one queue; a data receiving notification is sent when a preset notification sending condition is met.
The to-be-transmitted bitmap information acquiring module 702 is configured to acquire the updated to-be-transmitted bitmap information.
The data reading module 703 is configured to read the data to be transmitted from the corresponding virtual queue according to the updated to-be-transmitted bitmap information.
The notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of receiving the data to be transmitted by the virtual queue reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
In some embodiments, the data reading module 703 includes a queue determining unit and a data reading unit, where the queue determining unit is configured to determine, according to the updated to-be-transmitted bitmap information, the queue number information of the queues into which data to be transmitted has been written, and the data reading unit is configured to read the data to be transmitted from the virtual queues corresponding to the queue number information.
In some embodiments, the data processing apparatus 700 further comprises a data writing module for writing the read data to be transmitted to the storage device of the host.
In some embodiments, the data processing apparatus 700 further includes a read bitmap information update module and an interrupt notification module.
And the read bitmap information updating module is used for updating the read bitmap information stored in the PCI register area after the read data to be transmitted are written into the storage device of the host, wherein the read bitmap information is used for representing the data reading state of the virtual queue.
And the interrupt notification module is used for writing the updated read bitmap information into the second area of the PCI register after the data to be transmitted corresponding to the data receiving notification is written into the storage device, and injecting the interrupt notification into the front-end driver. The front-end driver is used for acquiring the read bitmap information in the second area according to the interrupt notification and cleaning the cache data of the virtual queue corresponding to the read bitmap information.
In some embodiments, the data processing apparatus 700 further includes a data processing speed determination module and a numerical determination module.
And the data processing speed determining module is used for determining the data processing speed of the host machine in a preset evaluation period.
And the numerical value determining module is used for determining a clock multiplier register numerical value according to the data processing speed, wherein the clock multiplier register numerical value is used for enabling the front-end driver to determine a time threshold corresponding to the notification sending condition.
In some embodiments, the data processing speed determining module is further configured to determine the data processing speed in the current evaluation period according to the result of the sampling record in the evaluation period by the second timer counter.
In some embodiments, the value determination module includes a value increasing unit and a value decreasing unit.
And the numerical value increasing unit is used for increasing the numerical value of the clock multiplier register when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period.
And the value reducing unit is used for reducing the value of the clock multiplier register when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period.
In some embodiments, the value increasing unit is configured to increase the clock multiplier register value in accordance with an increased scale value of the data processing speed. The value reducing unit is used for reducing the clock multiplier register value according to the reduced proportional value of the data processing speed.
For more specific limitations of the data processing apparatus 700, reference may be made to the above description of a data processing method, and the data processing apparatus 700 may be used to perform further steps of the method in any of the embodiments of the second aspect of the present disclosure, which is not described here in detail. The various modules in the data processing apparatus 700 described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In a fifth aspect, embodiments of the present disclosure provide a data processing system applied to a VirtIO network architecture. As shown in fig. 8, the data processing system 800 includes a client 801, a host 802, and a virtual queue 803.
The client 801 includes front end drivers, PCI registers, and generic block device layers.
Wherein the front end driver is configured to perform the steps of the data processing method disclosed in any of the embodiments of the first aspect. The PCI register is used for transmitting equipment and parameter information between the client and the host, and the universal block equipment layer is used for generating a data issuing request according to the data operation request of the universal block equipment and sending the data issuing request to the front-end driver.
The host 802 includes a back-end device for performing the steps of the data processing method disclosed in any of the embodiments of the second aspect.
The virtual queue 803 is used for data transfer between the client and the host.
In a sixth aspect, embodiments of the present disclosure provide a front-end driver of a VirtIO network architecture, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the data processing method disclosed in any of the embodiments of the first aspect when executing the computer program.
In some embodiments, the front end driver may be a server, and an internal structure thereof may be as shown in fig. 9, and in particular, the front end driver 900 may include a processor, a memory, and a network interface connected through a system bus. Wherein the processor is configured to provide computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the data processing method in any of the embodiments of the first aspect of the present disclosure.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with an embodiment of the present disclosure and is not limiting of the front end driver to which an embodiment of the present disclosure is applied, and that a particular front end driver may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In a seventh aspect, embodiments of the present disclosure provide a back-end device of a VirtIO network architecture, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the data processing method disclosed in any of the embodiments of the second aspect when executing the computer program.
In some embodiments, the backend device may be a server, and its internal structure may be as shown in fig. 10, and specifically, the backend device 1000 may include a processor, a memory, and a network interface connected through a system bus. Wherein the processor is configured to provide computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a data processing method in any of the embodiments of the second aspect of the present disclosure.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of a portion of the structure associated with an embodiment of the present disclosure and is not limiting of the backend device to which an embodiment of the present disclosure applies, and that a particular backend device may include more or fewer components than shown, or may combine some components, or have a different arrangement of components.
In an eighth aspect, embodiments of the present disclosure provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a data processing method in any of the embodiments of the first or second aspects of the present disclosure.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments of the present disclosure may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combined technical features, they should be considered to fall within the scope of the disclosure.
The foregoing examples merely represent several embodiments of the present disclosure, and although they are described specifically and in detail, they are not to be construed as limiting the scope of the present disclosure. It should be noted that variations and modifications can be made by those skilled in the art without departing from the spirit of the disclosure, and these fall within the scope of the disclosure. Accordingly, the scope of the present disclosure should be determined by the appended claims.

Claims (23)

1. A data processing method applied to a front-end driver in a VirtIO network architecture, the method comprising:
Receiving a request for issuing data to be transmitted from a general block device layer, and writing the data to be transmitted into a virtual queue;
Updating bitmap information to be transmitted stored in a client memory of the VirtIO network architecture, wherein the bitmap information to be transmitted is used for representing a data writing state of the virtual queue, the bitmap number of the bitmap information to be transmitted corresponds to the queue number of the virtual queue, and one bitmap number represents the data writing state of one queue;
When a preset notification sending condition is met, writing the updated bitmap information to be transmitted into a PCI (peripheral component interconnect) register, and sending a data receiving notification to a back-end device in the VirtIO network architecture, wherein the back-end device is used for acquiring the updated bitmap information to be transmitted from the PCI register, determining queue number information written with the data to be transmitted according to the updated bitmap information to be transmitted, and reading the data to be transmitted from the corresponding virtual queue according to the queue number information;
the notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of the virtual queue receiving the data to be transmitted reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
2. The method according to claim 1, wherein writing the updated bitmap information to be transmitted into the PCI register when a preset notification sending condition is satisfied comprises:
When a preset notification sending condition is met, writing the updated bitmap information to be transmitted into a first area of the PCI register; the first area is used for enabling the back-end device to acquire the updated bitmap information to be transmitted.
3. The method according to claim 2, wherein the method further comprises:
Mapping the first area into an address space preset by the client in response to client start-up or device hot addition, and obtaining a file descriptor for performing input/output read-write operations on the first area;
and creating a to-be-transmitted bitmap area in the client memory for storing the bitmap information to be transmitted.
4. The method of claim 1, wherein the VirtIO network architecture includes a first timing counter; the method further comprises the steps of:
when first data to be transmitted is written into the completely empty virtual queue, starting the first timing counter;
acquiring a recording result of the first timing counter;
and determining whether the notification sending condition is met according to the recording result.
5. The method of claim 1, wherein the quantity threshold is equal to or less than a queue depth; the method further comprises the steps of:
determining a quantity threshold according to the total quantity of data to be transmitted received in the time threshold and a single page size value of a memory page of an operating system;
the time threshold is a preset maximum time for causing service delay perception or affecting user experience.
6. The method of claim 1, wherein the virtual queue includes a flow control buffer provided with a clock multiplier register for recording data representing the number of units of time for data transmission; the method further comprises the steps of:
determining the time threshold according to the product of the unit time interval of data transmission and the value of the clock multiplier register;
The back-end equipment is used for determining the data processing speed of the host in a preset evaluation period, and increasing the clock multiplier register value when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period; and reducing the clock multiplier register value when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period.
7. The method of claim 1, wherein the writing the data to be transmitted into a virtual queue comprises:
and writing a plurality of pieces of data to be transmitted into the virtual queue in ascending order of the queue number information of the queues in the virtual queue, wherein the bitmap numbers of the bitmap information to be transmitted, from the low bit to the high bit, correspond one-to-one to the virtual queues in ascending order of queue number information.
8. The method according to claim 1, wherein the method further comprises:
receiving an interrupt notification injected from the back-end device;
Determining a virtual queue in which data reading is completed according to the interrupt notification;
And clearing the cache data of the read virtual queue and updating the bitmap information to be transmitted.
9. A data processing method applied to a back-end device in a VirtIO network architecture, the method comprising:
Receiving a data receiving notification sent by a front-end driver in the VirtIO network architecture; the front-end driver is used for writing the data to be transmitted into a virtual queue according to a received issuing request for the data to be transmitted from the universal block device layer, and updating bitmap information to be transmitted stored in a client memory of the VirtIO network architecture, wherein the bitmap information to be transmitted is used for representing a data writing state of the virtual queue, the bitmap number of the bitmap information to be transmitted corresponds to the queue number of the virtual queue, and one bitmap number represents the data writing state of one queue, and when a preset notification sending condition is met, the updated bitmap information to be transmitted is written into a PCI register and a data receiving notification is sent;
Acquiring the updated bitmap information to be transmitted from the PCI register;
Determining queue number information written with the data to be transmitted according to the updated bitmap information to be transmitted, and reading the data to be transmitted from the corresponding virtual queue according to the queue number information;
the notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of the virtual queue receiving the data to be transmitted reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
10. The method according to claim 9, wherein the bitmap numbers of the bitmap information to be transmitted, from the low bit to the high bit, correspond one-to-one to the virtual queues in ascending order of queue number information;
the determining the queue number information written with the data to be transmitted according to the updated bitmap information to be transmitted comprises the following steps:
confirming the data writing state of the virtual queues bit by bit from the low bit to the high bit according to the updated bitmap information to be transmitted, and determining the queue number information written with the data to be transmitted;
the reading the data to be transmitted from the corresponding virtual queue comprises the following steps: and reading the data to be transmitted from the virtual queue corresponding to the queue number information.
11. The method according to claim 9, wherein the method further comprises: and writing the read data to be transmitted into a storage device of a host in the VirtIO network architecture.
12. The method of claim 11, wherein the method further comprises: after the read data to be transmitted is written into the storage equipment of the host, updating the stored read bitmap information, wherein the read bitmap information is used for representing the data reading state of the virtual queue;
After the data to be transmitted corresponding to the data receiving notification is written into the storage device, writing updated read bitmap information into a second area of the PCI register, and injecting an interrupt notification to the front-end driver;
The front-end driver is used for acquiring the read bitmap information in the second area according to the interrupt notification and cleaning the cache data of the virtual queue corresponding to the read bitmap information.
13. The method of claim 12, wherein the number of bits of the read bitmap information corresponds to the number of queues of the virtual queue, and wherein one of the number of bitmaps of the read bitmap information corresponds to a data read status of one of the queues of the virtual queue.
14. The method according to claim 9, wherein the method further comprises:
Determining the data processing speed of a host in the VirtIO network architecture in a preset evaluation period;
And determining a clock multiplier register value according to the data processing speed, wherein the clock multiplier register value is used for enabling the front-end driver to determine the time threshold corresponding to the notification sending condition.
15. The method of claim 14, wherein the VirtIO network architecture includes a second timing counter for recording a data processing amount and a processing time of the host, and the determining the data processing speed of the host in the preset evaluation period includes:
And determining the data processing speed in the current evaluation period according to the sampling record result of the second timing counter in the evaluation period.
16. The method of claim 15, wherein said determining a clock multiplier register value based on said data processing speed comprises:
when the current data processing speed in the current evaluation period is greater than the original data processing speed in the previous evaluation period, increasing the value of the clock multiplier register;
and reducing the clock multiplier register value when the current data processing speed in the current evaluation period is smaller than the original data processing speed in the last evaluation period.
17. The method of claim 16, wherein:
The increasing the clock multiplier register value includes: increasing the clock multiplier register value according to an increased proportional value of the data processing speed;
the reducing the clock multiplier register value includes: the clock multiplier register value is reduced according to a reduced scale value of the data processing speed.
18. A data processing apparatus applied to a front-end driver in a VirtIO network architecture, the apparatus comprising:
the request receiving module is used for receiving a request for issuing the data to be transmitted from the universal block device layer and writing the data to be transmitted into the virtual queue;
the to-be-transmitted bitmap updating module is used for updating the bitmap information to be transmitted stored in a client memory of the VirtIO network architecture, wherein the bitmap information to be transmitted is used for representing a data writing state of the virtual queue, the bitmap number of the bitmap information to be transmitted corresponds to the number of queues of the virtual queue, and one bitmap number represents the data writing state of one queue;
The notification sending module is used for writing the updated bitmap information to be transmitted into a PCI register when a preset notification sending condition is met, and sending a data receiving notification to a back-end device in the VirtIO network architecture, wherein the back-end device is used for acquiring the updated bitmap information to be transmitted from the PCI register, determining queue number information written with the data to be transmitted according to the updated bitmap information to be transmitted, and reading the data to be transmitted from the corresponding virtual queue according to the queue number information;
the notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of the virtual queue receiving the data to be transmitted reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
19. A data processing apparatus applied to a back-end device in a VirtIO network architecture, the apparatus comprising:
The notification receiving module is used for receiving a data receiving notification sent by a front-end driver in the VirtIO network architecture; the front-end driver is used for writing the data to be transmitted into a virtual queue according to a received issuing request for the data to be transmitted from the universal block device layer, and updating bitmap information to be transmitted stored in a client memory of the VirtIO network architecture, wherein the bitmap information to be transmitted is used for representing a data writing state of the virtual queue, the bitmap number of the bitmap information to be transmitted corresponds to the number of queues of the virtual queue, and one bitmap number represents the data writing state of one queue, and when a preset notification sending condition is met, the updated bitmap information to be transmitted is written into a PCI register and a data receiving notification is sent out;
the to-be-transmitted bitmap information acquisition module is used for acquiring the updated bitmap information to be transmitted from the PCI register;
The data reading module is used for determining queue number information written with the data to be transmitted according to the updated bitmap information to be transmitted and reading the data to be transmitted from the corresponding virtual queue according to the queue number information;
the notification sending condition includes that the quantity of the data to be transmitted accumulated in the virtual queue reaches a quantity threshold value N, or that the time interval of the virtual queue receiving the data to be transmitted reaches a time threshold value T, wherein N is an integer greater than 1, and T is greater than 0.
20. A data processing system applied to a VirtIO network architecture, wherein the system comprises a client, a host and a virtual queue;
The client comprises a front-end driver, a PCI register and a general block device layer, wherein the front-end driver is used for executing the steps of the method of any one of claims 1-8, the PCI register is used for transmitting device and parameter information between the client and the host, and the general block device layer is used for generating a data issuing request according to a data operation request of the general block device and sending the data issuing request to the front-end driver;
the host comprises a back-end device for performing the steps of the method of any one of claims 9-17;
the virtual queue is used for data transfer between the client and the host.
21. A front-end driver of a VirtIO network architecture comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed by the processor.
22. A backend device of a VirtIO network architecture, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 9 to 17 when the computer program is executed by the processor.
23. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 17.
CN202410303359.XA 2024-03-18 2024-03-18 Data processing method, device, system and storage medium Active CN117891567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410303359.XA CN117891567B (en) 2024-03-18 2024-03-18 Data processing method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410303359.XA CN117891567B (en) 2024-03-18 2024-03-18 Data processing method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN117891567A CN117891567A (en) 2024-04-16
CN117891567B true CN117891567B (en) 2024-06-07

Family

ID=90641599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410303359.XA Active CN117891567B (en) 2024-03-18 2024-03-18 Data processing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN117891567B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618158A (en) * 2015-01-28 2015-05-13 上海交通大学 Embedded network virtualization environment VirtIO (virtual input and output) network virtualization working method
CN113590254A (en) * 2020-04-30 2021-11-02 深信服科技股份有限公司 Virtual machine communication method, device, system and medium
WO2023093634A1 (en) * 2021-11-25 2023-06-01 北京字节跳动网络技术有限公司 Data storage method and apparatus, and readable medium and electronic device
CN116382839A (en) * 2022-12-29 2023-07-04 天翼云科技有限公司 Method and device for detecting state of virtual machine, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8291135B2 (en) * 2010-01-15 2012-10-16 Vmware, Inc. Guest/hypervisor interrupt coalescing for storage adapter virtual function in guest passthrough mode

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618158A (en) * 2015-01-28 2015-05-13 上海交通大学 Embedded network virtualization environment VirtIO (virtual input and output) network virtualization working method
CN113590254A (en) * 2020-04-30 2021-11-02 深信服科技股份有限公司 Virtual machine communication method, device, system and medium
WO2023093634A1 (en) * 2021-11-25 2023-06-01 北京字节跳动网络技术有限公司 Data storage method and apparatus, and readable medium and electronic device
CN116382839A (en) * 2022-12-29 2023-07-04 天翼云科技有限公司 Method and device for detecting state of virtual machine, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Real-Time Virtio-Based Framework for Predictable Inter-VM Communication; Gero Schwäricke et al.; IEEE; 2021-12-07; full text *
Network request performance optimization of the para-virtualization framework Virtio; Liu Yuyan; Niu Baoning; Journal of Chinese Computer Systems (小型微型计算机系统); 2018-01-15 (No. 01); full text *
Disk write optimization mechanism in a multi-virtual-machine environment; Yu Linchen; Liao Xiaofei; Computer Engineering & Science (计算机工程与科学); 2012-10-15 (No. 10); full text *

Also Published As

Publication number Publication date
CN117891567A (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US11960725B2 (en) NVMe controller memory manager providing CMB capability
CN107430493B (en) Sequential write stream management
US9563367B2 (en) Latency command processing for solid state drive interface protocol
US9665440B2 (en) Methods and systems for removing virtual machine snapshots
EP3796168A1 (en) Information processing apparatus, information processing method, and virtual machine connection management program
US20130132960A1 (en) Usb redirection for read transactions
US8356299B2 (en) Interrupt processing method and system
CN107783727B (en) Access method, device and system of memory device
EP4152140A1 (en) Network card and method for network card to process data
US11409466B2 (en) Access control in CMB/PMR virtualization environment
CN112214157A (en) Executing device and method for host computer output and input command and computer readable storage medium
KR20200057311A (en) Storage device throttling amount of communicated data depending on suspension frequency of operation
US10545697B1 (en) Reverse order request queueing by para-virtual device drivers
CN117891567B (en) Data processing method, device, system and storage medium
CN114356219A (en) Data processing method, storage medium and processor
CN116954675A (en) Used ring table updating method and module, back-end equipment, medium, equipment and chip
CN113032088A (en) Dirty page recording method and device, electronic equipment and computer readable medium
CN111666036A (en) Method, device and system for migrating data
CN110825485A (en) Data processing method, equipment and server
KR101559929B1 (en) Apparatus and method for virtualization
CN113093994A (en) Data processing method and device
CN117978754A (en) Data transmission method, device, system, computer equipment and storage medium
US11966743B2 (en) Reverse order queue updates by virtual devices
CN117931406A (en) VIRTIO equipment interrupt method and device, back-end equipment and chip
CN116401079A (en) Data processing method, system and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant