CN109302386B - Server compression and decompression blade, system and compression and decompression method - Google Patents

Server compression and decompression blade, system and compression and decompression method

Info

Publication number
CN109302386B
CN109302386B CN201811058468.0A
Authority
CN
China
Prior art keywords
decompression
compression
blade
hardware
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811058468.0A
Other languages
Chinese (zh)
Other versions
CN109302386A (en)
Inventor
罗禹铭
罗禹城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangyu Safety Technology Shenzhen Co ltd
Original Assignee
Wangyu Safety Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangyu Safety Technology Shenzhen Co ltd filed Critical Wangyu Safety Technology Shenzhen Co ltd
Priority to CN201811058468.0A priority Critical patent/CN109302386B/en
Publication of CN109302386A publication Critical patent/CN109302386A/en
Application granted granted Critical
Publication of CN109302386B publication Critical patent/CN109302386B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04Protocols for data compression, e.g. ROHC

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Sources (AREA)

Abstract

The invention discloses a server compression and decompression blade, a system and a compression and decompression method, wherein the compression and decompression blade comprises: a plurality of hardware compression and decompression modules for compressing and decompressing data; and a PCIe Switch chip connected with the hardware compression and decompression modules. The hardware compression and decompression module supports the I/O virtualization standard SR-IOV and is connected with the PCIe Switch chip through a PCIe slot; the hardware compression and decompression module comprises compression and decompression resources. The compression and decompression blade can provide higher-capacity compression and decompression resources, can realize full cross connection among the X86 computing blades, the compression and decompression blade and its internal modules, and can virtualize one physical compression and decompression function of each hardware compression and decompression module into hundreds of logical compression and decompression functions, greatly facilitating flexible scheduling and sharing of compression and decompression resources.

Description

Server compression and decompression blade, system and compression and decompression method
Technical Field
The invention relates to the technical field of data compression and decompression, in particular to a server compression and decompression blade, a server compression and decompression system and a compression and decompression method.
Background
Currently, a cloud computing server generally provides the compression and decompression functions required by applications either through pure software or by inserting a hardware compression and decompression accelerator card into a PCIe slot of the server. In the prior art, pure-software compression and decompression is implemented mainly by executing X86 instructions, as shown in FIG. 1: the compression and decompression instructions and data are stored in DDR, the X86 core runs software instructions to compress and decompress the data in DDR, and the result is likewise stored back in DDR. Using the enhancement technology provided by Intel, pure-software compression and decompression can meet lower-capacity requirements.
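As a concrete illustration of this pure-software path (not part of the patent itself), the following Python sketch uses the standard zlib library in place of the X86-instruction-based codec; the payload and compression level are arbitrary:

```python
import zlib

# Pure-software compression: the CPU executes codec instructions on data
# held in main memory (DDR); the compressed result is written back to DDR.
data = b"server blade payload " * 1024
compressed = zlib.compress(data, level=6)
restored = zlib.decompress(compressed)

assert restored == data             # lossless round trip
assert len(compressed) < len(data)  # highly repetitive data shrinks
```

The CPU cycles spent inside `zlib.compress` are exactly the cost the hardware accelerator is meant to offload.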
However, if the compression and decompression traffic is higher, hardware must be used, i.e., a hardware compression and decompression accelerator card, as shown in FIG. 1. This structure has the following problems: 1. Although Intel improves data-processing parallelism through SIMD, hyper-threading, out-of-order execution, special instruction-set extensions and the like, software instructions are by nature executed serially, so the achievable parallelism is low and compression and decompression are inefficient. 2. The PCIe expansion card must be inserted into a PCIe slot of one server, can only be used by applications on that server, and cannot be shared by multiple servers or blades, which makes it inconvenient for a cloud computing environment to schedule compression and decompression resources flexibly. Because the resources are bound to a single physical server or blade, a virtual machine with a high-capacity compression and decompression requirement must be scheduled to a server fitted with the hardware accelerator card, which greatly restricts the scheduling flexibility of the virtual machine and may even make scheduling fail.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a server compression and decompression blade, a system and a compression and decompression method, aiming at solving the problems that compression and decompression in the prior art is inefficient and faces safety issues, and that a hardware compression and decompression accelerator card is inconvenient to expand and does not support sharing.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a server compression and decompression blade, wherein the compression and decompression blade comprises: the hardware compression and decompression modules are used for compressing and decompressing data; the PCIeSwitch chip is connected with the hardware compression and decompression modules;
the hardware compression and decompression module supports single-root I/O virtualization of an I/O virtualization standard and is connected with the PCIe Switch chip through a PCIe slot;
the hardware compression and decompression module comprises compression and decompression resources for compressing and decompressing data.
Preferably, in the server compression and decompression blade, the PCIe Switch chip supports multi-root I/O virtualization of the I/O virtualization standard.
Preferably, in the server compression and decompression blade, the compression and decompression blade further comprises a monitoring management module, connected with the plurality of hardware compression and decompression modules, for monitoring the module power supplies and module running states in the compression and decompression blade.
Preferably, in the server compression and decompression blade, the compression and decompression blade supports hot plug.
A server compression and decompression system, wherein the server compression and decompression system comprises: a compression and decompression blade; a blade server backplane connected with the compression and decompression blade through a PCIe slot; and a number of X86 computing blades connected with the blade server backplane through PCIe slots.
Preferably, the server compression/decompression system, wherein the compression/decompression blade includes: the hardware compression and decompression modules are used for compressing and decompressing data; PCIe Switch chip connected with several hardware compression and decompression modules; the monitoring management module is connected with the hardware compression and decompression modules; the monitoring management module is used for monitoring a module power supply and a module running state in the compression and decompression blade;
the hardware compression and decompression module supports single-root I/O virtualization of an I/O virtualization standard and is connected with the PCIe Switch chip through a PCIe slot; the hardware compression and decompression module comprises compression and decompression resources for data compression and decompression;
the PCIe Switch chip supports multi-root I/O virtualization of an I/O virtualization standard and is connected with the blade server backplane through a PCIe slot.
Preferably, in the server compression and decompression system, the compression and decompression blade supports hot plug.
A server compression and decompression method, wherein the compression and decompression method comprises the following steps:
step A, inserting a compression and decompression blade configured with a plurality of hardware compression and decompression modules into a server;
step B, controlling a monitoring management module of the compression and decompression blade to acquire the total capacity and specification of compression and decompression resources on the compression and decompression blade, and reporting them to a cloud operating system through the server;
step C, the cloud operating system carries out unified scheduling according to requirements, schedules the virtual machine to a server inserted with a compression and decompression blade, configures the blade server backplane and the PCIe Switch chip on the compression and decompression blade, and obtains compression and decompression resources on the compression and decompression blade;
and step D, controlling the virtual machine to run, performing compression and decompression operation through the compression and decompression resources acquired by the compression and decompression blade, and releasing the compression and decompression resources after the compression and decompression operation is completed.
Preferably, the server compression and decompression method further includes:
the cloud operating system also allocates the compression and decompression services required by a plurality of X86 computing blades running the virtual machines to a few hardware compression and decompression modules in a centralized manner, and turns off the power of the idle hardware compression and decompression modules;
when the compression and decompression tasks increase, the power of the idle hardware compression and decompression modules is turned on and X86 computing blades are deployed in real time.
Preferably, the server compression and decompression method further includes:
the cloud operating system also adjusts the processing clock frequency of the single hardware compression and decompression module in real time according to the load.
The invention has the beneficial effects that: the compression and decompression blade of the invention can provide larger-capacity compression and decompression resources by stacking a plurality of hardware compression and decompression modules and connecting all of them to the PCIe Switch chip; the PCIe Switch chip supports the I/O virtualization standard MR-IOV, which enables full cross connection between the X86 computing blades, the compression and decompression blade and its internal modules; and one physical compression and decompression function of each hardware compression and decompression module can be virtualized into hundreds of logical compression and decompression functions, greatly facilitating flexible scheduling and sharing of compression and decompression resources.
Drawings
Fig. 1 is a schematic structural diagram of implementing compression and decompression by software and hardware compression and decompression accelerator cards in the prior art.
Fig. 2 is a schematic structural diagram of a preferred embodiment of the server compression/decompression system according to the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of the server compression and decompression system according to the present invention.
Fig. 4 is a flowchart illustrating a server compression/decompression method according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to solve the problems that arise in the prior art when compression and decompression are implemented through software or a hardware compression and decompression accelerator card, the invention provides the server compression and decompression blade, which can greatly improve the compression and decompression processing capacity of the server, reduce power consumption, improve the computing efficiency of the server and improve the overall safety of the cloud computing environment. As shown in FIG. 2, the compression and decompression blade includes: a plurality of hardware compression and decompression modules for compressing and decompressing data; and a PCIe Switch chip connected with the hardware compression and decompression modules.
In particular, since PCIe slots are limited by space constraints in a general server, at most 1 to 2 PCIe hardware compression and decompression accelerator cards can be inserted. In order to provide a higher-capacity compression and decompression service, a plurality of hardware compression and decompression modules are stacked in the compression and decompression blade. The hardware compression and decompression modules on the blade are connected with the PCIe Switch chip through PCIe slots, and their number can be increased or decreased as needed, giving very good expandability.
In the present invention, the hardware compression and decompression module supports the Single Root I/O Virtualization (SR-IOV) standard issued by the PCI-SIG (PCI Special Interest Group), and can virtualize one physical compression and decompression function (Physical Function, PF) into hundreds of logical compression and decompression functions (Virtual Functions, VFs), so that the compression and decompression resources are finely partitioned. The cloud operating system can very flexibly schedule such fine-grained compression and decompression resources to the X86 computing blades in the server. The hardware compression and decompression module is connected with the PCIe Switch chip through a PCIe slot, and the high bandwidth and low latency of PCIe guarantee the service provided by the compression and decompression blade.
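The fine-grained partitioning described here can be pictured with a toy model (all class and field names below are illustrative, not from the patent): one Physical Function exposes many Virtual Functions, and the scheduler hands individual VFs to different compute blades:

```python
from dataclasses import dataclass, field

@dataclass
class CompressionPF:
    """Toy model of one module's SR-IOV Physical Function (names invented)."""
    pf_id: int
    total_vfs: int = 256                          # one PF -> hundreds of VFs
    vf_owner: dict = field(default_factory=dict)  # vf index -> blade id

    def allocate_vf(self, blade: str) -> int:
        """Hand the next free Virtual Function to an X86 compute blade."""
        for vf in range(self.total_vfs):
            if vf not in self.vf_owner:
                self.vf_owner[vf] = blade
                return vf
        raise RuntimeError(f"PF {self.pf_id}: no free VFs")

    def release_vf(self, vf: int) -> None:
        """Return a VF to the pool when its blade is done."""
        self.vf_owner.pop(vf, None)

pf = CompressionPF(pf_id=0)
a = pf.allocate_vf("blade-3")
b = pf.allocate_vf("blade-7")   # the same physical module serves two blades
```

The point of the model is only that VFs, not whole cards, are the unit the cloud operating system schedules.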
In this embodiment, the PCIe Switch chip supports the I/O virtualization standard MR-IOV (Multi-Root I/O Virtualization), which allows any X86 blade on an upstream port to share any compression and decompression module attached to a downstream port.
Preferably, the compression and decompression blade of the present invention further includes a monitoring management module, connected to all of the hardware compression and decompression modules, and configured to monitor the module power supplies and module running states in the compression and decompression blade. The compression and decompression blade supports hot plug, so there is no need to open a compute blade in the server and insert an extra PCIe card: the whole compression and decompression blade is inserted into the server and can start working immediately. When there is no compression and decompression task, the power of the whole compression and decompression blade can be turned off, or the blade can simply be pulled out and its slot assigned to an X86 computing blade.
Further, the present invention also provides a server compression and decompression system having the above compression and decompression blade, as shown in FIG. 2. In addition to the compression and decompression blade, the compression and decompression system comprises a blade server backplane connected with the compression and decompression blade through a PCIe slot, and a number of X86 computing blades connected with the blade server backplane through PCIe slots.
Specifically, the compression/decompression blade includes: the hardware compression and decompression modules are used for compressing and decompressing data; PCIe Switch chip connected with several hardware compression and decompression modules; the monitoring management module is connected with the hardware compression and decompression modules; the monitoring management module is used for monitoring the module power supply and the module running state in the compression and decompression blade. The hardware compression and decompression module supports an I/O virtualization standard SR-IOV and is connected with the PCIe Switch chip through a PCIe slot; the hardware compression and decompression module comprises compression and decompression resources for data compression and decompression; the PCIe Switch chip supports an I/O virtualization standard MR-IOV and is connected with the blade server backplane through a PCIe slot.
In this embodiment, a PCIe Switch chip supporting MR-IOV sits between the multiple X86 computing blades and the multiple hardware compression and decompression modules. One hardware compression and decompression module can serve several X86 computing blades at the same time, unlike the prior-art card-insertion mode, in which a compression and decompression card is bound to the single server holding its motherboard. In this embodiment, one X86 computing blade can also use several hardware compression and decompression modules at the same time, and the multiple X86 computing blades and the multiple hardware compression and decompression modules are connected in a full cross mode. The system can combine a plurality of hardware compression and decompression modules to provide a high-capacity compression and decompression service for one X86 computing blade; conversely, one hardware compression and decompression module can be partitioned at virtual-function granularity and shared among several X86 computing blades.
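The "combine several modules for one blade" direction of this full cross connection can be sketched as a simple greedy allocator (module names and Gbps capacities below are invented for illustration):

```python
def assemble_modules(demand_gbps, modules):
    """Greedy sketch: pick modules (name -> free Gbps) until their
    combined free capacity covers one blade's compression demand."""
    chosen, acc = [], 0
    for mod_id, free in sorted(modules.items(), key=lambda kv: -kv[1]):
        if acc >= demand_gbps:
            break
        chosen.append(mod_id)
        acc += free
    if acc < demand_gbps:
        raise RuntimeError("insufficient compression capacity on this blade")
    return chosen

# Four hardware modules with free capacity; one blade needs 25 Gbps,
# so the allocator spans several modules.
modules = {"m0": 10, "m1": 10, "m2": 10, "m3": 5}
spanned = assemble_modules(25, modules)
```

The opposite direction, sharing one module among blades, is the VF partitioning already described.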
In addition, when the compression and decompression load is light, the cloud operating system can concentrate the compression and decompression services required by a plurality of X86 computing blades onto a few modules and turn off the power of the idle hardware compression and decompression modules, reducing the power consumption of the whole compression and decompression blade; when the compression and decompression tasks become heavy, the power of the idle hardware compression and decompression modules in the blade can be quickly turned on. The processing clock frequency of a single hardware compression and decompression module can also be adjusted to match its load, reducing the running power consumption of the hardware compression and decompression module.
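A minimal sketch of this power policy, assuming an illustrative per-module capacity and nominal clock (both figures are invented, not from the patent):

```python
import math

MODULE_CAPACITY_GBPS = 10.0   # assumed throughput of one hardware module
BASE_CLOCK_MHZ = 800          # assumed nominal processing clock

def plan_power(task_loads_gbps, num_modules):
    """Pack the total load onto as few modules as possible;
    return (active, idle) module index lists."""
    total = sum(task_loads_gbps)
    need = min(num_modules, max(1, math.ceil(total / MODULE_CAPACITY_GBPS)))
    active = list(range(need))             # serve all tasks on a few modules
    idle = list(range(need, num_modules))  # these have their power turned off
    return active, idle

def scale_clock(load_fraction):
    """Match a module's clock to its load (simple clamped linear policy)."""
    return int(BASE_CLOCK_MHZ * max(0.25, min(1.0, load_fraction)))

# Light load: 9.5 Gbps fits on one of four modules; three are powered off.
active, idle = plan_power([3.0, 4.0, 2.5], 4)
```

A real implementation would live in the cloud operating system and drive the monitoring management module; this only shows the shape of the decision.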
The present invention provides an embodiment of a specific application, as shown in FIG. 3. FIG. 3 is a schematic diagram of a server architecture holding 14 hot-pluggable blades. The server is configured with 12 X86 computing blades and 2 compression and decompression blades, interconnected through a blade server backplane that carries a PCIe Switch chip. Each X86 computing blade has two X86 chips and, scheduled by the cloud operating system, can carry multiple virtual machines sharing the compute, storage and network resources on the blade.
Each compression and decompression blade comprises a PCIe Switch chip supporting MR-IOV, four hardware compression and decompression modules and a monitoring management module. The PCIe Switch chip on the compression and decompression blade and the PCIe Switch chip on the blade server backplane are configured by the cloud operating system according to a resource scheduling scheme, and logical connection is established between the X86 computing blades and the compression and decompression blades. The compression and decompression blade supports hot plug and supports full cross connection between the X86 blade and the compression and decompression blade.
The hardware compression and decompression module is provided with a PCIe interface, supports SR-IOV, and virtualizes a single physical compression and decompression resource into hundreds of logical compression and decompression resources, facilitating flexible scheduling by the cloud operating system. The monitoring management module provides auxiliary functions for the compression and decompression blade, such as resource-capability reporting, module power management, module running-state monitoring and power-on self-test.
Further, based on the above embodiments, the present invention further provides a method for compressing and decompressing a server, as shown in fig. 4. The server compression and decompression method comprises the following steps:
s100, inserting a compression and decompression blade configured with a plurality of hardware compression and decompression modules into a server;
step S200, controlling a monitoring management module of the compression and decompression blade to acquire the total capacity and specification of compression and decompression resources on the compression and decompression blade, and reporting the total capacity and specification to a cloud operating system through a server;
step S300, the cloud operating system performs unified scheduling according to the requirements (the virtual machine's requirements for computing, storage, network and compression and decompression resources), schedules the virtual machine to a server inserted with a compression and decompression blade, configures the blade server backplane and the PCIe Switch chip on the compression and decompression blade, and acquires the compression and decompression resources on the compression and decompression blade;
and step S400, controlling the virtual machine to run, performing compression and decompression operation through the compression and decompression resources acquired by the compression and decompression blade, and releasing the compression and decompression resources after the compression and decompression operation is completed.
Preferably, step S300 further includes: the cloud operating system also allocates the compression and decompression services required by a plurality of X86 computing blades running the virtual machines to a few hardware compression and decompression modules in a centralized manner, and turns off the power of the idle hardware compression and decompression modules; when the compression and decompression tasks increase, the power of the idle hardware compression and decompression modules is turned on and X86 computing blades are deployed in real time. In addition, the cloud operating system adjusts the processing clock frequency of a single hardware compression and decompression module in real time according to the load, reducing the running power consumption of the hardware compression and decompression module.
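The overall flow of steps S100-S400, including resource acquisition and release, might be modeled as follows (server names, capacities and the demand figure are hypothetical):

```python
def run_vm_with_compression(vm, servers):
    """End-to-end sketch mirroring steps S100-S400."""
    # S100/S200: each server's monitoring module has already reported its
    # blade's total compression capacity (modeled as a single number here).
    server = next(s for s in servers if s["capacity"] >= vm["demand"])
    # S300: schedule the VM to that server and reserve resources, i.e.
    # configure the backplane and blade PCIe Switch paths.
    server["capacity"] -= vm["demand"]
    vm["host"] = server["name"]
    # S400: the VM runs and compresses via the acquired resources, then
    # releases them when the operation completes.
    server["capacity"] += vm["demand"]
    return vm["host"]

servers = [{"name": "srv-a", "capacity": 5.0},
           {"name": "srv-b", "capacity": 40.0}]
vm = {"demand": 20.0}
host = run_vm_with_compression(vm, servers)   # lands on the server with room
```

Real scheduling would weigh computing, storage and network demands together; the single capacity number stands in for the reported total capacity and specification.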
In summary, the present invention provides a server compression/decompression blade, a system and a compression/decompression method, wherein the compression/decompression blade includes: the hardware compression and decompression modules are used for compressing and decompressing data; PCIe Switch chip connected with several hardware compression and decompression modules; the hardware compression and decompression module supports an I/O virtualization standard SR-IOV and is connected with the PCIe Switch chip through a PCIe slot; the hardware compression and decompression module comprises compression and decompression resources for compressing and decompressing data.
The compression and decompression blade of the invention can provide larger-capacity compression and decompression resources by stacking a plurality of hardware compression and decompression modules and connecting all of them to the PCIe Switch chip. The PCIe Switch chip supports the I/O virtualization standard MR-IOV, which enables full cross connection between the X86 computing blades, the compression and decompression blade and its internal modules, and one physical compression and decompression function of each hardware compression and decompression module can be virtualized into hundreds of logical compression and decompression functions, greatly facilitating flexible scheduling and sharing of compression and decompression resources.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (5)

1. A server compression/decompression blade, comprising: the hardware compression and decompression modules are used for compressing and decompressing data; PCIe Switch chip connected with several hardware compression and decompression modules;
the hardware compression and decompression module supports single-root I/O virtualization of an I/O virtualization standard and is connected with the PCIe Switch chip through a PCIe slot;
the hardware compression and decompression module comprises compression and decompression resources for data compression and decompression;
the PCIe Switch chip supports multi-root I/O virtualization of an I/O virtualization standard; the compression and decompression blade also comprises a monitoring management module which is connected with a plurality of hardware compression and decompression modules and is used for monitoring module power supplies and module running states in the compression and decompression blade;
stacking a plurality of hardware compression and decompression modules in the compression and decompression blade;
the PCIe Switch chip supports any X86 computing blade on an upstream port sharing access to any hardware compression and decompression module on a downstream port;
the hardware compression and decompression module serves a plurality of the X86 computing blades simultaneously; the X86 compute blade uses a plurality of the hardware compression and decompression modules simultaneously; the X86 computing blade is in full cross connection with the hardware compression and decompression module;
the monitoring management module of the compression and decompression blade acquires the total capacity and specification of compression and decompression resources on the compression and decompression blade and reports them to the cloud operating system through the server; the cloud operating system carries out unified scheduling according to requirements, schedules the virtual machine to a server inserted with a compression and decompression blade, configures the blade server backplane and the PCIe Switch chip on the compression and decompression blade, and acquires compression and decompression resources on the compression and decompression blade; compression and decompression operations are performed through the compression and decompression resources acquired by the compression and decompression blade, and the compression and decompression resources are released after the operations are completed;
the cloud operating system also allocates a plurality of X86 computing blades to a few hardware compression and decompression modules in a centralized manner, and closes the power supply of the idle hardware compression and decompression modules; when the compression and decompression tasks are increased, the power supply of the idle hardware compression and decompression module is turned on, and the X86 calculation blade is allocated in real time; the cloud operating system also adjusts the processing clock frequency of the single hardware compression and decompression module in real time according to the load.
2. The server compression and decompression blade of claim 1, wherein the compression and decompression blade supports hot plug.
3. A server compression and decompression system, comprising: a compression and decompression blade; a blade server backplane connected with the compression and decompression blade through a PCIe slot; and a number of X86 computing blades connected with the blade server backplane through PCIe slots;
the compression and decompression blade comprises: the hardware compression and decompression modules are used for compressing and decompressing data; PCIe Switch chip connected with several hardware compression and decompression modules; the monitoring management module is connected with the hardware compression and decompression modules; the monitoring management module is used for monitoring a module power supply and a module running state in the compression and decompression blade;
the hardware compression and decompression module supports single-root I/O virtualization of an I/O virtualization standard and is connected with the PCIe Switch chip through a PCIe slot; the hardware compression and decompression module comprises compression and decompression resources for data compression and decompression;
the PCIe Switch chip supports multi-root I/O virtualization of an I/O virtualization standard and is connected with the blade server back plate through a PCIe slot;
stacking a plurality of hardware compression and decompression modules in the compression and decompression blade;
the PCIe Switch chip supports any X86 computing blade on an upstream port sharing access to any hardware compression and decompression module on a downstream port;
the hardware compression and decompression module serves a plurality of the X86 computing blades simultaneously; the X86 compute blade uses a plurality of the hardware compression and decompression modules simultaneously; the X86 computing blade is in full cross connection with the hardware compression and decompression module;
the monitoring management module of the compression and decompression blade acquires the total capacity and specification of the compression and decompression resources on the blade and reports them to the cloud operating system through the server; the cloud operating system performs unified scheduling on demand: it schedules a virtual machine onto a server fitted with a compression and decompression blade, configures the blade server backplane and the PCIe Switch chip on the blade, and acquires compression and decompression resources on the blade; the virtual machine performs compression and decompression operations through the acquired resources and releases them when the operations are complete;
the cloud operating system also consolidates the plurality of X86 compute blades onto a few hardware compression and decompression modules and powers off the idle hardware compression and decompression modules; when compression and decompression tasks increase, it powers the idle modules back on and reassigns the X86 compute blades in real time; the cloud operating system also adjusts the processing clock frequency of each hardware compression and decompression module in real time according to its load.
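The consolidation and power-management policy described above can be sketched in a few lines. This is a hypothetical illustration only: the patent defines the behavior (pack compute blades onto few modules, power off idle modules, power them back on under load, scale each module's clock with its load) but not any API, so every name below is invented.

```python
# Hypothetical sketch of the claimed consolidation / power-management policy.
# All class and method names are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Module:
    module_id: int
    capacity: int                  # max concurrent blade assignments
    powered: bool = False
    clock_mhz: int = 0
    blades: list = field(default_factory=list)

    def load(self) -> float:
        return len(self.blades) / self.capacity

class CompressionScheduler:
    def __init__(self, modules):
        self.modules = modules

    def assign(self, blade_id: int) -> Module:
        """Pack blades onto already-powered modules first (consolidation);
        power on an idle module only when the powered ones are full."""
        for m in self.modules:
            if m.powered and len(m.blades) < m.capacity:
                m.blades.append(blade_id)
                self._rescale(m)
                return m
        for m in self.modules:
            if not m.powered:                  # task load grew: power one on
                m.powered = True
                m.blades.append(blade_id)
                self._rescale(m)
                return m
        raise RuntimeError("no compression/decompression capacity left")

    def release(self, blade_id: int) -> None:
        for m in self.modules:
            if blade_id in m.blades:
                m.blades.remove(blade_id)
                if not m.blades:               # idle module: cut its power
                    m.powered, m.clock_mhz = False, 0
                else:
                    self._rescale(m)
                return

    def _rescale(self, m: Module) -> None:
        # Clock frequency tracks load in real time (simple linear policy).
        m.clock_mhz = int(200 + 800 * m.load())
```

With two modules of capacity 2, the first two blades land on module 0; module 1 is only powered on for the third blade, and a module's power is cut as soon as its last blade releases it.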
4. The server compression and decompression system according to claim 3, wherein the compression and decompression blade supports hot plugging.
5. A server compression and decompression method, characterized by comprising the following steps:
step A, inserting a compression and decompression blade configured with a plurality of hardware compression and decompression modules into a server;
step B, controlling a monitoring management module of the compression and decompression blade to acquire the total capacity and specification of the compression and decompression resources on the blade and report them to a cloud operating system through the server;
step C, the cloud operating system performs unified scheduling on demand: it schedules a virtual machine onto a server fitted with a compression and decompression blade, configures the blade server backplane and the PCIe Switch chip on the blade, and acquires compression and decompression resources on the blade;
step D, running the virtual machine, performing compression and decompression operations through the resources acquired from the compression and decompression blade, and releasing the resources when the operations are complete; the step C further comprises:
the cloud operating system also consolidates the plurality of X86 compute blades serving the virtual machine onto a few hardware compression and decompression modules and powers off the idle hardware compression and decompression modules;
when compression and decompression tasks increase, it powers the idle modules back on and reassigns the X86 compute blades in real time;
the step C further comprises:
the cloud operating system also adjusts the processing clock frequency of each hardware compression and decompression module in real time according to its load;
the PCIe Switch chip supports multi-root I/O virtualization (MR-IOV) of the I/O virtualization standard;
the compression and decompression blade further comprises a monitoring management module connected to the plurality of hardware compression and decompression modules and used for monitoring the module power supplies and the module running states within the blade;
a plurality of hardware compression and decompression modules are stacked in the compression and decompression blade;
the PCIe Switch chip allows any X86 compute blade on an upstream port to share access to any hardware compression and decompression module on a downstream port;
a hardware compression and decompression module serves a plurality of the X86 compute blades simultaneously; an X86 compute blade uses a plurality of the hardware compression and decompression modules simultaneously; and the X86 compute blades and the hardware compression and decompression modules are fully cross-connected.
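The report → acquire → compress → release flow of steps B-D can be illustrated as follows. This is a sketch under stated assumptions: Python's `zlib` stands in for a hardware compression/decompression module, and all class and method names are hypothetical; the claim specifies only the flow, not an API.

```python
# Illustrative walk-through of steps B-D. zlib stands in for the hardware
# compression/decompression module; names are invented for this sketch.
import zlib

class CompressionBlade:
    def __init__(self, num_modules: int):
        self.total = num_modules          # total resource capacity
        self.free = num_modules

    def report(self) -> dict:
        # Step B: monitoring module reports capacity/spec to the cloud OS.
        return {"total": self.total, "free": self.free, "spec": "deflate"}

    def acquire(self) -> "ModuleHandle":
        # Step C: the cloud OS acquires a resource on the blade.
        if self.free == 0:
            raise RuntimeError("no free compression/decompression resource")
        self.free -= 1
        return ModuleHandle(self)

class ModuleHandle:
    def __init__(self, blade):
        self.blade = blade

    def compress(self, data: bytes) -> bytes:
        # Step D: perform the compression operation.
        return zlib.compress(data)

    def decompress(self, data: bytes) -> bytes:
        return zlib.decompress(data)

    def release(self) -> None:
        # Step D: release the resource once the operation completes.
        self.blade.free += 1

blade = CompressionBlade(num_modules=4)
handle = blade.acquire()
packed = handle.compress(b"payload " * 64)
assert handle.decompress(packed) == b"payload " * 64
handle.release()
```

The key point the flow captures is that resources are held only for the duration of an operation, so the same few hardware modules can be time-shared by many virtual machines.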
CN201811058468.0A 2018-09-11 2018-09-11 Server compression and decompression blade, system and compression and decompression method Active CN109302386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811058468.0A CN109302386B (en) 2018-09-11 2018-09-11 Server compression and decompression blade, system and compression and decompression method

Publications (2)

Publication Number Publication Date
CN109302386A CN109302386A (en) 2019-02-01
CN109302386B true CN109302386B (en) 2020-08-25

Family

ID=65166494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811058468.0A Active CN109302386B (en) 2018-09-11 2018-09-11 Server compression and decompression blade, system and compression and decompression method

Country Status (1)

Country Link
CN (1) CN109302386B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819447A (en) * 2012-05-29 2012-12-12 中国科学院计算技术研究所 Direct I/O virtualization method and device used for multi-root sharing system
CN104601684A (en) * 2014-12-31 2015-05-06 曙光云计算技术有限公司 Cloud server system
CN104750631A (en) * 2013-12-27 2015-07-01 国际商业机器公司 Method and system used for placement of input / output adapter cards in server

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5332000B2 (en) * 2008-12-17 2013-10-30 株式会社日立製作所 COMPUTER COMPUTER DEVICE, COMPOSITE COMPUTER MANAGEMENT METHOD, AND MANAGEMENT SERVER

Similar Documents

Publication Publication Date Title
CN109190420B (en) Server encryption and decryption blade, system and encryption and decryption method
US9710310B2 (en) Dynamically configurable hardware queues for dispatching jobs to a plurality of hardware acceleration engines
US8954997B2 (en) Resource affinity via dynamic reconfiguration for multi-queue network adapters
US10402223B1 (en) Scheduling hardware resources for offloading functions in a heterogeneous computing system
US9804874B2 (en) Consolidation of idle virtual machines on idle logical processors
US20110010721A1 (en) Managing Virtualized Accelerators Using Admission Control, Load Balancing and Scheduling
CN104714846A (en) Resource processing method, operating system and equipment
CN108958924B (en) Memory system with delay profile optimization and method of operating the same
CN109712060B (en) Cloud desktop display card sharing method and system based on GPU container technology
CN109284192B (en) Parameter configuration method and electronic equipment
US9280493B2 (en) Method and device for enumerating input/output devices
CN112352221A (en) Shared memory mechanism to support fast transfer of SQ/CQ pair communications between SSD device drivers and physical SSDs in virtualized environments
CN106789337B (en) Network performance optimization method of KVM
US20140237017A1 (en) Extending distributed computing systems to legacy programs
US8230260B2 (en) Method and system for performing parallel computer tasks
CN109302386B (en) Server compression and decompression blade, system and compression and decompression method
US11334436B2 (en) GPU-based advanced memory diagnostics over dynamic memory regions for faster and efficient diagnostics
CN113568734A (en) Virtualization method and system based on multi-core processor, multi-core processor and electronic equipment
Ekane et al. FlexVF: Adaptive network device services in a virtualized environment
US10649943B2 (en) System and method for I/O aware processor configuration
US20230111884A1 (en) Virtualization method, device, board card and computer-readable storage medium
US10261817B2 (en) System on a chip and method for a controller supported virtual machine monitor
CN111475295B (en) Software and hardware layered management method and device and computer readable storage medium
US10747615B2 (en) Method and apparatus for non-volatile memory array improvement using a command aggregation circuit
Yang et al. On construction of a virtual GPU cluster with InfiniBand and 10 Gb Ethernet virtualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant