WO2017173618A1 - Method, apparatus and device for compressing data


Info

Publication number
WO2017173618A1
Authority
WO
WIPO (PCT)
Prior art keywords
node, compressed, compression, data, computing
Prior art date
Application number
PCT/CN2016/078667
Other languages
English (en)
Chinese (zh)
Inventor
顾雄礼
方磊
刘鹏
钱斌海
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN201680057387.1A (CN108141471B)
Priority to PCT/CN2016/078667 (WO2017173618A1)
Publication of WO2017173618A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications

Definitions

  • The present invention relates to the field of information technology and, more particularly, to a method, an apparatus, and a device for compressing data.
  • MapReduce is a data processing technology in which a mapping (map) device (running a process for calculation processing) performs calculation processing to generate intermediate data, and a simplification (reduce) device (running a process for simplification) subjects the intermediate data to processing such as aggregation, simplification, and merging.
  • Mapping can also be called "map", which means that each element of a conceptual list composed of independent elements is subjected to a specified operation.
  • Reduce can also be called “reduction”, which refers to the appropriate merging of elements of a list.
  • Since the simplification device takes the output of the computing device as its input, the intermediate data must be transmitted between the computing device and the simplification device. The amount of intermediate data therefore directly affects the operating efficiency and processing performance of the system.
  • Compressing the intermediate data with data compression technology can effectively reduce the amount of data transmitted between the computing device and the simplification device, improving the operating efficiency and processing performance of the system.
  • In the prior art, the above compression is implemented by executing software in the computing device, and the compression process occupies a large amount of the computing device's processing resources (for example, the processor). When those processing resources are limited, not only is compression slow, but a large number of compression tasks can seriously interfere with the computing device's normal computing tasks, degrading operational efficiency and processing performance.
  • Embodiments of the present invention provide a method, an apparatus, and a device for compressing data, which can improve operation efficiency and processing performance.
  • According to a first aspect, a method for compressing data is provided, executed in a system including a computing node, a management node, and at least two compression nodes, where a compression node performs compression processing on data to be compressed generated by the computing node to generate compressed data. The method includes: the computing node sends a compression request message to the management node; the computing node acquires indication information of a target compression node, where the indication information indicates the target compression node determined by the management node from the at least two compression nodes upon receiving the compression request message, the current working state of the target compression node being an idle state, and the working state including an idle state and a busy state; the computing node determines the target compression node according to the indication information; and the computing node and the target compression node transmit first data to be compressed and first compressed data, where the first compressed data is generated by the target compression node by compressing the first data to be compressed.
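The first-aspect flow can be sketched as follows. This is a minimal illustration, not the patented implementation: the class names are invented, and `zlib` stands in for the hardware compression node.

```python
import zlib

class CompressionNode:
    """Illustrative stand-in for a compression node 130."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "idle"              # working state: "idle" or "busy"

    def compress(self, data):
        self.state = "busy"
        try:
            return zlib.compress(data)   # stand-in for the hardware compressor
        finally:
            self.state = "idle"

class ManagementNode:
    """Selects a target compression node whose current working state is idle."""
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_compression_request(self):
        for node in self.nodes:
            if node.state == "idle":
                return node.node_id      # indication information of the target node
        return None

class ComputeNode:
    def __init__(self, manager, nodes):
        self.manager = manager
        self.nodes = {n.node_id: n for n in nodes}

    def compress(self, to_be_compressed):
        target_id = self.manager.handle_compression_request()
        target = self.nodes[target_id]   # determine target from indication info
        return target.compress(to_be_compressed)

nodes = [CompressionNode(0), CompressionNode(1)]
compute = ComputeNode(ManagementNode(nodes), nodes)
compressed = compute.compress(b"intermediate data " * 100)
```

The compute node never picks a node itself; it only dereferences the indication information returned by the management node, mirroring the claim's division of roles.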
  • In the method for compressing data according to the embodiment of the present invention, when data needs to be compressed, the computing node can select a compression node in the idle state to provide a compression service, which reduces the burden on the computing node and improves operational efficiency and processing performance. Moreover, through the management node, the working state of each compression node can be grasped in real time, avoiding running errors of the compression nodes and improving the reliability of operation.
  • In a possible implementation, at least two computing processes that generate data to be compressed run in the computing node, and the first data to be compressed is generated by a first computing process of the at least two computing processes. The method further includes: the computing node prohibits transmitting, to the target compression node, second data to be compressed generated by a second computing process, where the second computing process is a computing process other than the first computing process among the at least two computing processes.
  • In this method for compressing data, by prohibiting processes other than the first computing process (which generates the first data to be compressed) from transmitting data to the target compression node, the compression node is prevented from returning data from other processes to the first computing process. This avoids data mistransmission and its adverse impact on the first computing process, further improving operational efficiency and processing performance.
  • In a possible implementation, before the computing node and the target compression node transmit the first data to be compressed and the first compressed data, the method further includes: the computing node determines a shared memory accessible by the at least two compression nodes, the shared memory including at least one sub-memory; the computing node determines a first sub-memory from the shared memory, the first sub-memory corresponding to the target compression node; and the computing node sends indication information of the first sub-memory to the target compression node, where the indication information includes an offset of the starting position of the first sub-memory relative to the starting position of the shared memory. Transmitting the first data to be compressed and the first compressed data includes: the computing node stores the first data to be compressed in the first sub-memory, and the computing node reads the first compressed data from the first sub-memory.
  • In this method of compressing data, by setting a shared memory addressable by both the computing node and the compression nodes, and having the computing node and the compression nodes store the data to be compressed and the compressed data in the shared memory, the data transfer process between the computing node and the compression nodes can be simplified, further improving operational efficiency and processing performance.
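The offset-based sub-memory addressing can be illustrated as below; the region size, fixed-size slices, and function names are assumptions for the sketch, since the patent only specifies that the indication information carries an offset from the start of the shared memory.

```python
shared = bytearray(4096)              # stands in for the shared memory region
SUB_SIZE = 1024                       # assumed fixed size of each sub-memory

def sub_memory_offset(index):
    """Offset of sub-memory `index` relative to the start of the shared memory."""
    return index * SUB_SIZE

def write_sub(index, data):
    # Compute node stores data to be compressed in a sub-memory.
    off = sub_memory_offset(index)
    shared[off:off + len(data)] = data

def read_sub(index, length):
    # Compression node locates the same bytes using only the offset.
    off = sub_memory_offset(index)
    return bytes(shared[off:off + length])

write_sub(1, b"to-be-compressed")
```

Passing only the offset (rather than a pointer) lets the two sides share the region even though each maps it at a different base address.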
  • In a possible implementation, at least two computing processes that generate data to be compressed run in the computing node, and the first data to be compressed is generated by the first computing process. The method further includes: the computing node prohibits storing second data to be compressed or second compressed data in the first sub-memory, where the second data to be compressed is generated by the second computing process (a computing process other than the first computing process among the at least two computing processes), and the second compressed data is generated by a second compression node (a compression node other than the target compression node among the at least two compression nodes); or the method further includes: the computing node prohibits storing the first data to be compressed or the first compressed data in a second sub-memory, where the second sub-memory is memory in the shared memory other than the first sub-memory.
  • In this method for compressing data, by prohibiting computing processes other than the first computing process and compression nodes other than the target compression node from accessing the first sub-memory (which stores the first data to be compressed and the first compressed data), other data is prevented from interfering with the work of the first computing process and the target compression node. In addition, by prohibiting the first data to be compressed or the first compressed data from being stored in memory outside the first sub-memory, the first data is prevented from interfering with the operations of the other compression nodes and computing processes, thereby further improving running efficiency and processing performance.
  • In a possible implementation, the shared memory includes at least two sub-memories, and the method further includes: the computing node determines a one-to-one mapping relationship between the at least two sub-memories and the at least two compression nodes. Storing the first data to be compressed in the first sub-memory includes: the computing node stores the first data to be compressed in the first sub-memory according to the one-to-one mapping relationship; and reading the first compressed data includes: the computing node reads the first compressed data from the first sub-memory according to the one-to-one mapping relationship.
  • In the method of compressing data according to an embodiment of the present invention, by predetermining and recording the one-to-one mapping relationship between each sub-memory and each compression node, the first sub-memory can be determined quickly, further improving operational efficiency and processing performance.
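The predetermined one-to-one mapping reduces the lookup at compression time to a table access; a sketch with invented node identifiers:

```python
# Assumed identifiers; the patent does not name the compression nodes.
sub_memory_of_node = {               # compression node id -> sub-memory index
    "fpga-0": 0,
    "fpga-1": 1,
}

def sub_memory_for(target_node_id):
    # With the mapping recorded in advance, the compute node determines the
    # first sub-memory by a single lookup, with no negotiation per request.
    return sub_memory_of_node[target_node_id]
```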
  • In a possible implementation, the acquiring, by the computing node, of the indication information of the target compression node includes: the computing node receives a first processing instruction message sent by the management node, where the first processing instruction message includes the indication information of the target compression node.
  • In a possible implementation, the acquiring, by the computing node, of the indication information of the target compression node includes: the computing node receives a compression response message sent by the target compression node, where the compression response message includes the indication information of the target compression node.
  • According to a second aspect, a method for compressing data is provided, executed in a system including a computing node, a management node, and at least two compression nodes, where a compression node performs compression processing on data to be compressed generated by the computing node to generate compressed data. The method includes: upon receiving a compression request sent by the computing node, the management node determines the current working state of each of the at least two compression nodes, the working state including an idle state and a busy state; the management node determines a target compression node from the at least two compression nodes according to the current working state of each compression node, the current working state of the target compression node being an idle state; and the management node sends a processing instruction message so that the target compression node compresses the data to be compressed from the computing node.
  • In the method for compressing data according to the embodiment of the present invention, when data needs to be compressed, the computing node can select a compression node in the idle state to provide a compression service, which reduces the burden on the computing node and improves operational efficiency and processing performance. Moreover, through the management node, the working state of each compression node can be grasped in real time, avoiding running errors of the compression nodes and improving the reliability of operation.
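The management node's selection step of the second aspect reduces to scanning the per-node working states; a hedged sketch with an assumed state table:

```python
def select_target(states):
    """states: dict mapping node id -> "idle" or "busy" (assumed encoding).

    Returns the indication information (here, simply the node id) of a
    compression node whose current working state is the idle state, or None
    when every compression node is busy."""
    for node_id, state in states.items():
        if state == "idle":
            return node_id
    return None
```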
  • In a possible implementation, the method further includes: the management node determines the location of each compression node and the location of the computing node; and determining the target compression node according to the current working state of each compression node includes: the management node determines the target compression node according to the current working state of each compression node, the location of the computing node, and the location of each compression node, such that the target compression node is the compression node closest to the computing node among the compression nodes whose current working state is the idle state.
  • By making the target compression node the compression node closest to the computing node among the compression nodes in the idle state, the data transmission distance can be reduced, further improving operational efficiency and processing performance.
  • In a possible implementation, the determining, by the management node, of the target compression node according to the current working state of each compression node, the location of the computing node, and the location of each compression node includes: the management node generates a candidate compression node list according to the current working state of each compression node, the location of the computing node, and the location of each compression node, the list recording identifiers of at least two candidate compression nodes, where a candidate compression node is a compression node whose current working state is the idle state, and the order of the identifiers in the list corresponds to the magnitude of the distance between each candidate compression node and the computing node; and the management node determines the target compression node according to the order of the identifiers in the candidate compression node list.
  • A candidate compression node list can thus be generated based on the current working state of each compression node, the location of the computing node, and the location of each compression node, so that the selected idle-state compression node is the one closest to the computing node, further improving running efficiency and processing performance.
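Building the candidate list can be sketched as filtering idle nodes and sorting by distance; the one-dimensional position and distance metric are assumptions for illustration, since the patent does not fix how locations are represented:

```python
def candidate_list(nodes, compute_pos):
    """nodes: list of (node_id, state, position); returns ids nearest-first.

    Only nodes whose current working state is idle become candidates, and the
    order of identifiers in the list follows the distance to the compute node."""
    idle = [(abs(pos - compute_pos), node_id)
            for node_id, state, pos in nodes if state == "idle"]
    return [node_id for _, node_id in sorted(idle)]

nodes = [("n0", "busy", 1), ("n1", "idle", 9), ("n2", "idle", 3)]
ordered = candidate_list(nodes, compute_pos=2)   # nearest idle node first
```

The management node can then pick the head of the list as the target compression node.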
  • In a possible implementation, the sending, by the management node, of the processing instruction message includes: the management node sends a first processing instruction message to the computing node, where the first processing instruction message includes the indication information of the target compression node, the indication information being used to indicate the target compression node, so that the computing node sends the data to be compressed to the target compression node according to the first processing instruction message.
  • In a possible implementation, the sending, by the management node, of the processing instruction message includes: the management node sends a second processing instruction message to the target compression node, where the second processing instruction message includes indication information of the computing node, used to indicate the computing node, so that the target compression node acquires the data to be compressed from the computing node according to the second processing instruction message.
  • According to a third aspect, a method for compressing data is provided, executed in a system including a computing node, a management node, and at least two compression nodes, where a compression node performs compression processing on data to be compressed generated by the computing node to generate compressed data. The method includes: a target compression node acquires first data to be compressed from the computing node, where the current working state of the target compression node is the idle state, the working state including an idle state and a busy state; the target compression node performs compression processing on the first data to be compressed to generate first compressed data; and the target compression node transmits the first compressed data to the computing node.
  • In the method for compressing data according to the embodiment of the present invention, when data needs to be compressed, the computing node can select a compression node in the idle state to provide a compression service, which reduces the burden on the computing node and improves operational efficiency and processing performance. Moreover, through the management node, the working state of each compression node can be grasped in real time, avoiding running errors of the compression nodes and improving the reliability of operation.
  • In a possible implementation, before the target compression node receives the first data to be compressed sent by the computing node, the method further includes: the target compression node receives the second processing instruction message sent by the management node, where the processing instruction message includes the indication information of the computing node; and the target compression node sends a compression response message to the computing node according to the second processing instruction message, where the compression response message includes the indication information of the target compression node.
  • In a possible implementation, a shared memory is configured in the computing node, accessible by the at least two compression nodes and including at least one sub-memory, and the method further includes: the target compression node receives the indication information of the first sub-memory sent by the computing node, where the indication information includes the offset of the starting position of the first sub-memory relative to the starting position of the shared memory; and the target compression node determines the first sub-memory according to the indication information. Acquiring the first data to be compressed includes: the target compression node reads the first data to be compressed from the first sub-memory; and transmitting the first compressed data includes: the target compression node stores the first compressed data in the first sub-memory.
  • In a possible implementation, the method further includes: the target compression node prohibits storing or reading data in a second sub-memory, where the second sub-memory is memory in the shared memory other than the first sub-memory.
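The target compression node's side of the exchange (read the data to be compressed at the indicated offset, compress, store the result back in the same sub-memory) can be sketched as below. The 4-byte length prefix and `zlib` codec are assumptions for the sketch; the patent does not specify the in-memory framing.

```python
import struct, zlib

shared = bytearray(4096)                 # stands in for the shared memory

def store(offset, payload):
    # Assumed framing: 4-byte little-endian length prefix, then the payload.
    shared[offset:offset + 4] = struct.pack("<I", len(payload))
    shared[offset + 4:offset + 4 + len(payload)] = payload

def load(offset):
    (length,) = struct.unpack("<I", shared[offset:offset + 4])
    return bytes(shared[offset + 4:offset + 4 + length])

def compression_node_step(offset):
    data = load(offset)                  # read the first data to be compressed
    store(offset, zlib.compress(data))   # store the first compressed data

store(1024, b"abc" * 200)                # compute node fills sub-memory 1
compression_node_step(1024)              # target compression node compresses in place
```

Reusing the same sub-memory for both directions is what lets the compute node retrieve the compressed data with nothing more than the offset it already holds.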
  • According to a fourth aspect, an apparatus for compressing data is provided, comprising means for performing the steps of the method of the first aspect and its implementations.
  • According to a fifth aspect, an apparatus for compressing data is provided, comprising means for performing the steps of the method of the second aspect and its implementations.
  • According to a sixth aspect, an apparatus for compressing data is provided, comprising means for performing the steps of the method of the third aspect and its implementations.
  • According to a seventh aspect, an apparatus for compressing data is provided, comprising a memory and a processor, the memory storing a computer program and the processor calling and running the computer program from the memory, such that the apparatus performs the method of the first aspect or any of its implementations.
  • According to an eighth aspect, an apparatus for compressing data is provided, comprising a memory and a processor, the memory storing a computer program and the processor calling and running the computer program from the memory, such that the apparatus performs the method of the second aspect or any of its implementations.
  • According to a ninth aspect, an apparatus for compressing data is provided, comprising a memory and a processor, the memory storing a computer program and the processor calling and running the computer program from the memory, such that the apparatus performs the method of the third aspect or any of its implementations.
  • FIG. 1 is a schematic diagram of a system in which a method of compressing data according to an embodiment of the present invention is applied.
  • FIG. 2 is an interaction diagram of a method of compressing data in accordance with an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of interactions between processes running in a computing node.
  • FIG. 4 is a diagram showing an example of a distribution of shared memory according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a process of compressing data according to an embodiment of the present invention.
  • FIG. 6 is a performance comparison between a method of compressing data according to an embodiment of the present invention and a prior-art method of compressing data.
  • FIG. 7 is another performance comparison between a method of compressing data according to an embodiment of the present invention and a prior-art method of compressing data.
  • FIG. 8 is a schematic block diagram of an example of an apparatus for compressing data according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of another example of an apparatus for compressing data according to an embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of still another example of an apparatus for compressing data according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of an example of an apparatus for compressing data according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of another example of an apparatus for compressing data according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of still another example of an apparatus for compressing data according to an embodiment of the present invention.
  • the method, apparatus and device for compressing data provided by the embodiments of the present invention can be applied to a computer, which includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer.
  • the hardware layer includes hardware such as a CPU (Central Processing Unit), a memory management unit (MMU), and a memory (also referred to as main memory).
  • the operating system may be any one or more computer operating systems that implement business processing through a process, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system.
  • the application layer includes applications such as browsers, contacts, word processing software, and instant messaging software.
  • The computer may be a handheld device such as a smartphone, or a terminal device such as a personal computer; the present invention is not particularly limited, as long as a program recording the code of the method for compressing data of the embodiment of the present invention can be run to process data according to that method. The execution body of the method may be a computer device, or a functional module of the computer device capable of calling and executing a program.
  • the term "article of manufacture” as used in this application encompasses a computer program accessible from any computer-readable device, carrier, or media.
  • The computer-readable medium may include, but is not limited to, magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tape), optical disks (e.g., CD (Compact Disc), DVD (Digital Versatile Disc), etc.), smart cards, and flash memory devices (e.g., EPROM (Erasable Programmable Read-Only Memory), cards, sticks, or key drives).
  • various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
  • the term "machine-readable medium” may include, without limitation, a wireless channel and various other mediums capable of storing, containing, and/or carrying instructions and/or data.
  • FIG. 1 is a schematic diagram of a system 100 to which a method of compressing data is applied in accordance with an embodiment of the present invention. As shown in Figure 1, the system includes:
  • at least one computing node 110;
  • a management node 120; and
  • at least two compression nodes 130.
  • The computing node 110 is communicatively connected to the management node 120, the management node 120 is communicatively connected to each of the at least two compression nodes 130, and the computing node 110 is communicatively connected to each of the at least two compression nodes 130.
  • the computing node 110 is configured to generate data to be compressed.
  • the compression node 130 is configured to perform compression processing on the data to be compressed to generate compressed data.
  • the management node 120 is configured to determine the working state of each compression node 130.
  • the management node 120 can periodically send a query request to query the operational status of each of the compressed nodes 130.
  • The working state of a compression node 130 includes an idle state and a busy state. If the working state of a compression node 130 is the idle state, the compression node 130 is currently able to perform compression processing on data to be compressed; if the working state is the busy state, the compression node 130 currently cannot compress the data. It should be noted that "idle state" and "busy state" can be understood in terms of whether sufficient resources (computation resources, storage resources, etc.) are available to perform compression processing.
  • When a compression node is in a working state (running some compression task) but still has enough resources, the compression node is considered able to compress the data. Determining whether enough resources remain can be done by the device that manages the compression node.
  • When data needs to be compressed, the computing node 110 may send a compression request to the management node 120. After receiving the compression request, the management node 120 may select, according to the working state of each compression node 130, a compression node in the idle state to provide the data compression service for the computing node 110. The specific process is described in detail below.
  • By way of example, the computing node is a server running a mapping (Map) process, and the data to be compressed is intermediate data generated by the mapping process.
  • Hadoop is a Java-based framework for distributed computing that supports data-intensive distributed applications. It mainly includes the Hadoop Distributed File System (HDFS) and the MapReduce parallel computing framework. Like a cluster operating system, it enables inexpensive general-purpose hardware to form a resource pool, building a powerful distributed cluster system. Users can develop distributed programs without knowing the underlying details of distribution, and can handle many big-data distributed applications.
  • During processing by the MapReduce parallel computing framework, the Map processes generate a large amount of intermediate data, which must be temporarily stored on local disk.
  • The Reduce process reads the intermediate data over the network, aggregates the intermediate data of multiple Map processes, and performs simplification (also called merge) processing.
  • A major bottleneck in Hadoop is input/output (I/O).
  • The computing node running a Map process and the node generating its input data may be the same physical device (for example, a server), or may be arranged in different physical devices, but the two are generally physically very close. The simplification node running a Reduce process, however, needs the output (that is, the intermediate result) of multiple Map processes as its input and is often far away from the computing nodes running the Map processes. The Reduce process therefore must wait for the intermediate results of the Map processes to be transmitted to the simplification node before processing can begin.
  • Moreover, the intermediate results of the Map processes are very large and need to be temporarily stored on local disk, which imposes high demands on local disk storage space; disk read/write I/O thus also has a great impact on the Hadoop cluster.
  • By compressing the intermediate result (that is, an example of the data to be compressed in the embodiment of the present invention), not only can the disk storage space and the amount of data read from and written to disk be reduced, but the amount of data transmitted over the network can also be reduced, shortening data transmission time and improving the processing performance of the job.
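A small illustration of why compressing the Map-side intermediate result shrinks disk and network volume; `zlib` stands in here for the hardware codec, and the repetitive key/value payload is an invented example of shuffle data:

```python
import zlib

# Repetitive intermediate data, typical of Map output before the shuffle.
intermediate = b"key1\tvalue\n" * 1000
compressed = zlib.compress(intermediate)

ratio = len(compressed) / len(intermediate)   # fraction of bytes actually moved
```

Every byte saved here is saved three times over: on the Map side's local disk write, on the network transfer to the Reduce node, and on the Reduce side's read.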
  • The present invention can be applied to compression processing of intermediate results generated by a computing node (specifically, a Map process running in the computing node) in the above HDFS. That is, in the embodiment of the present invention, the server running the Map process (hereinafter, for ease of understanding and distinction, referred to as the Map server) can serve as the computing node 110 of the embodiment of the present invention.
  • A Hadoop process can be run on the computing node 110 (i.e., a Hadoop server).
  • The Hadoop process is responsible for running specific computing tasks and executes multiple Map processes (i.e., instances of the computing process) and multiple Reduce processes.
  • A daemon process can also run on the computing node 110.
  • The daemon process can be used to implement signaling transmission and data transmission between the computing node 110 and the management node 120, and between the computing node 110 and the compression node 130.
  • the daemon process may initialize the software running environment of the compression node by calling an application programming interface (API) used by the compression node during the initialization process, and instantiate the compression node (
  • API application programming interface
  • it is set to be a hardware accelerator that executes a compression algorithm, that is, by operating a compression program as a hardware device as a compression node, enabling the hardware device to implement the function of the compression node 130.
  • the Hadoop process and management node 120 running in the compute node 110 (specifically, The functional software of the management node 120 can transmit information (or signals) via the daemon.
  • the daemon may receive a compression request initiated by the Hadoop process (more precisely, the Map process) and notify the management node 120 to perform hardware compression on the data to be compressed by the compression node 130 selected by the management node 120.
  • the compressed data is returned to the Hadoop process (for example, a Map process or a Reduce process).
  • the Hadoop process and the daemon process realize the cooperative work through the semaphore, and realize the data interaction through the shared memory, and then the process is described in detail.
  • the compression node is a Field Programmable Gate Array (FPGA).
  • FPGA Field Programmable Gate Array
  • the FPGA is, for example, Programmable Array Logic (PAL), General Array Logic (GAL, Generic Array). Logic), the product of further development of programmable devices such as Complex Programmable Logic Device (CPLD). It appears as a semi-custom circuit in the field of Application Specific Integrated Circuit (ASIC), which not only solves the shortcomings of the custom circuit, but also overcomes the shortcomings of the limited number of original programmable device gates.
  • ASIC Application Specific Integrated Circuit
  • the system designer can connect the logic blocks inside the FPGA through an editable connection as needed, just as a circuit board is placed on a chip.
  • the logic blocks and connections of a finished FPGA can be changed by the designer, so the FPGA can perform the required logic functions.
  • the FPGA uses a Logic Cell Array (LCA), which includes three parts: a Configurable Logic Block (CLB), an Input Output Block (IOB), and an Interconnect.
  • LCA Logic Cell Array
  • CLB Configurable Logic Block
  • IOB Input Output Block
  • Interconnect an Interconnect
  • FPGAs can have different structures compared to traditional logic circuits and gate arrays (such as PAL, GAL, and CPLD devices) through different programming methods.
  • the FPGA uses a small lookup table (16 ⁇ 1 RAM) to implement the combinational logic. Each lookup table is connected to the input of a D flip-flop, and the flip-flop drives other logic circuits or drives I/O, thereby forming a combination.
  • the logic function implements the basic logic unit modules of the sequential logic functions, which are interconnected or connected to the I/O modules by metal wires.
  • the logic of the FPGA is realized by loading programming data into the internal static storage unit. The value stored in the memory unit determines the logic function of the logic unit and the connection mode between modules or modules and I/O, and finally determines The FPGA allows for unlimited programming.
  • the FPGA is programmed (OpenCL, Open Computing Language) to enable the FPGA to implement the functions of the compression node 130 of the embodiment of the present invention.
  • OpenCL Open Computing Language
  • an interface provided by OpenCL can be used as an interface between the compression node 130 and the computing node 110 or the management node 120.
  • OpenCL is a programming language for parallel computing for heterogeneous systems.
  • the syntax of OpenCL is very simple. It is based on the C language and C++ language.
  • the extension defines some data types, data structures and functions.
  • OpenCL is more than just a programming language, it is a complete parallel programming framework.
  • tasks are called kernels, and kernel programs are created based on several kernel functions.
  • the kernel program is directed to one or more compatible OpenCL devices that are sent to one or more corresponding OpenCL devices by a host program (eg, a program running in compute node 110 or management node 120 in an embodiment of the invention) ( That is, the compression node 130) in the embodiment of the present invention runs, and the result is returned to the host program after the operation is completed.
  • a host program eg, a program running in compute node 110 or management node 120 in an embodiment of the invention
  • the host program manages all connected OpenCL devices through a container called a context. Each OpenCL device corresponds to a command queue.
  • the host program creates a kernel program and adds the kernel program to the command queue.
  • the kernel program enters the command queue, the associated OpenCL device executes the kernel program.
  • an FPGA chip can be used as a compression node 130 in the embodiment of the present invention.
  • the management node 120 may be an FPGA resource manager capable of communicating with each FPGA chip and capable of determining the working state of each FPGA chip.
  • the FPGA resource manager may be integrally configured in an FPGA chip, or the FPGA resource manager may be independently configured with each FPGA chip, or the FPGA resource manager may also be calculated.
  • the node 110 is integrated and configured, and the present invention is not particularly limited.
  • each compression node 130 may be configured in the same device (for example, a server).
  • a server for example, multiple FPGA chips may be independently configured, and the present invention is not particularly limited.
  • the FPGA resource manager and each compression node 130 when the FPGA resource manager and each compression node 130 are configured in the same device (for example, a server), the FPGA resource manager can perform compression with each bus (for example, a PCIE bus).
  • the node 130 is connected, i.e., signaling or data transfer between the FPGA resource manager and each of the compression nodes 130 can be implemented over the bus.
  • information or a signal transceiver can be configured in the FPGA resource manager and each compression node 130, and the FPGA resources are connected through the transmission cable.
  • the manager and the transceivers in each of the compression nodes 130 implement signaling or data transfer between the FPGA resource manager and each of the compression nodes 130.
  • the computing node 110 and the FPGA resource manager may also be the same device, or the computing node 110 and the FPGA resource manager may be configured on the same device.
  • the compute node 110 can be connected to the FPGA resource manager via a bus (eg, a PCIE bus), ie, Signaling or data transfer between the compute node 110 and the FPGA resource manager is accomplished over the bus.
  • a bus eg, a PCIE bus
  • the FPGA resource manager and compute node 110 when the FPGA resource manager and compute node 110 are configured in different devices, information or signal transceivers can be configured in the FPGA resource manager and compute node 110, and the FPGA resource manager and compute node 110 can be connected by a transmission cable.
  • the transceiver in the implementation to implement signaling or data transfer between the FPGA resource manager and the compute node 110. It should be noted that when the computing node 110 and the management node 120 are the same device, the management node 120 can directly obtain the computing process running from the management node 120 (for example, a Hadoop process, or more precisely, a Map process. ) the compression request.
  • the computing node 110 and each compression node 130 when the computing node 110 and each compression node 130 are configured in the same device (for example, a server), the computing node 110 may be connected to each compression node 130 through a bus (for example, a PCIE bus). That is, signaling or data transfer between the compute node 110 and each of the compressed nodes 130 can be accomplished over a bus.
  • a bus for example, a PCIE bus
  • information or a signal transceiver may be configured in the computing node 110 and each compression node 130, and the computing node 110 and each compression node 130 are connected through a transmission cable. The transceiver in the implementation to implement signaling or data transmission between the compute node 110 and each of the compression nodes 130.
  • FIG. 2 shows an interaction diagram of a method 200 of compressing data according to an embodiment of the present invention.
  • the method 200 is performed in a system including a compute node, a management node, and at least two compression nodes, the compression node being configured to perform compression processing on the data to be compressed generated by the compute node to generate compressed data.
  • the actions performed by the management node in the method 200 include:
  • the management node Upon receiving the compression request sent by the computing node, the management node determines a current working state of each of the at least two compressed nodes, where the working state includes an idle state and a busy state. state;
  • the management node sends a processing instruction message to cause the target compression node to perform compression processing on the data to be compressed from the computing node.
  • the number of the computing nodes may be one or more, and the method for compressing data in the embodiment of the present invention is similar to the processing procedure of each computing node.
  • the description will be made by taking the processing procedure for the calculation node #A as an example.
  • the computing node #A when the computing node #A generates data to be compressed that needs to be compressed (that is, an example of the first data to be compressed), the computing node #A may send the data to the management node. Compressing a request message, the compression request message is used to instruct the management node to allocate a target compression node for compressing the data to be compressed from the plurality of compression nodes.
  • the compression request comes from a distributed computing Hadoop process running on the computing node.
  • one or more computing processes ie, an instance of a Hadoop process, for example, a Map process
  • a daemon may be run in the computing node #A.
  • S210 in FIG. 2 may include the following process:
  • calculation process #A when a calculation process (hereinafter, for ease of understanding and differentiation, it is referred to as: calculation process #A), data to be compressed is generated (that is, another example of the first data to be compressed, below, in order to facilitate understanding and differentiation)
  • the calculation process #A can send a preset first signal to the daemon, informing the daemon that the data to be compressed #A needs to be compressed by the compression node.
  • the compression request message may be sent to the management node through the transmission link between the computing node #A and the management node.
  • the management node can determine the operational status of each compression node.
  • the working state may include a busy state and an idle state.
  • the busy state may be that the compression node has performed the compression task, or the load of the compression node is greater than or equal to the preset threshold and cannot complete the compression task within a specified time (for example, may be determined according to the processing delay set by the user).
  • the idle state may be that the compression node does not perform the compression task, or the load of the compression node is less than a preset threshold and the compression task can be completed within a prescribed time.
  • the management node may determine the working state of each compressed node by the following manner.
  • each compression node may periodically report the working status indication information to the management node, so that the management node may record the receiving time when the compression request message is received, and may The working state of the compression node is determined as the current working state of each compression node.
  • the management node when receiving the compression request message, may send a status reporting instruction to each compression node, so that each compression node may report the current working status to the management when receiving the status reporting instruction. node.
  • the management node may determine, from the compressed node, a target compression node whose current working state is an idle state according to the current working state of each compressed node.
  • the determined target compression node is referred to as: compression node #A, that is, the compression node #A is a compression node determined by the management node for performing compression processing on the compressed data #A.
  • the method further includes:
  • the management node determines the location of each compressed node and the location of the compute node
  • the management node determines the target compression node from the at least two compression nodes according to the current working state of each compression node, including:
  • the management node determines the target compression node according to the current working state of each compression node, the location of the computing node, and the location of each compression node, so that the target compression node is in a compression node whose current working state is an idle state.
  • the closest compressed node to the compute node is the closest compressed node to the compute node.
  • the management node may also consider the distance between the compression node and the computing node #A when determining the target compression node.
  • the management node may select a compression node in the idle state whose physical location is closest to the calculation node #A as the target compression node. . This can reduce the distance of data transmission between the compute node #A and the target compression node, thereby shortening the time of data transmission.
  • the computing node #A may send information indicating the physical location of the computing node to the management node based on the indication of the management node or autonomously; or the management node may also obtain the information indicating the calculation through an input of the administrator. Information about the physical location of node #A.
  • the compression node may send information indicating the physical location of the compressed node to the management node based on the indication of the management node or autonomously; or the management node may also obtain the information indicating the Information that compresses the physical location of a node.
  • the management node determines the target compression node according to a current working state of each compressed node, a location of the computing node, and a location of each compressed node, including:
  • the management node generates an alternate compressed node list according to the current working state of each compressed node, the location of the computing node, and the location of each compressed node, where the candidate compressed node list records the identifiers of at least two candidate compressed nodes.
  • the candidate compression node is a compressed node whose current working state is an idle state, wherein an order of the identifiers of the candidate compressed nodes in the candidate compressed node list and each of the candidate compressed nodes to the computing node The size of the distance corresponds to each other;
  • the management node determines the target compressed node from the candidate compressed node according to the order of the identifiers of the candidate compressed nodes in the candidate compressed node list.
  • the management node may assign an indication identifier to each compression node, where an indication identifier is used to uniquely indicate a compression node.
  • the management node can maintain two queues.
  • the queue is used to store the indication identifier of the compression node whose working state is the idle state.
  • the queue is recorded as an idle queue.
  • a busy queue is used to store the indication identifier of the compression node whose working state is busy.
  • the queue is recorded as: a busy queue.
  • the management node may take out a compression node from the idle queue as the target compression node (ie, compression node #A), and add the indication identifier of the compression node #A to the busy queue.
  • the idle queue can be designed as a priority queue, that is, the closer the distance between the idle node and the compute node #A, the higher the priority, and the closer to the head of the queue when entering the queue. Conversely, the farther away from the compute node, the lower the priority, the closer the queue is to the end of the queue.
  • the management node selects the target compression node, it only needs to select the compression node indicating that the identifier is located at the head of the idle queue.
  • the arrangement of the compression nodes enumerated above in the queue is merely exemplary.
  • the present invention is not limited to this, as long as the order of arrangement of the compression nodes in the queue and the order of the distance between the compression nodes and the calculation nodes can be sequentially matched.
  • the management node may send a first processing instruction message to the computing node #A, the first processing instruction message.
  • the indication information of the compression node #A is included to cause the calculation node #A to determine the compression processing by the compression node #A for the data to be compressed (for example, the data to be compressed #A) generated for the calculation node #A.
  • the management node sends the processing instruction message, optionally, the management node sends a first processing instruction message to the computing node, where the first processing instruction message includes indication information of the target compression node, where the target compression node
  • the indication information is used to indicate the target compression node, so that the computing node sends the to-be-compressed data to the target compression node based on the indication information of the target compression node according to the first processing instruction message.
  • the management node may send a second processing instruction message to the compression node #A, the second processing instruction, at S232.
  • the message includes the indication information of the calculation node #A to cause the compression node #A to determine that the data to be compressed (for example, the data to be compressed #A) generated by the calculation node #A needs to be compressed.
  • the compression node #A may send a compression response message to the computing node #A, the compression response message including the indication information of the compression node #A, such that the calculation node #A determines that the calculation node is performed by the compression node #A #A A compression process of the generated data to be compressed (for example, data to be compressed #A).
  • the management node sends a processing instruction message, including:
  • the management node sends a second processing instruction message to the target compression node, where the second processing instruction message includes indication information of the computing node, where the indication information of the computing node is used to indicate the computing node, so that the target compression node is configured according to the
  • the second processing instruction message acquires the data to be compressed from the computing node based on the indication information of the computing node.
  • the indication information of the computing node may be a device identifier of the computing node, and, in the embodiment of the present invention, a device identifier can uniquely indicate a computing device, and thus, The management node and the compression node can distinguish each computing node according to the device identifier.
  • the indication information of the compression node may be the device number of the compression node, and in the embodiment of the present invention, one device number can uniquely indicate a compression device, thereby managing the node and calculating The node can perform compression on each compressed node according to the device number. distinguish.
  • the calculation node #A transmits the data to be compressed #A to the compression node #A.
  • the computing node #A may record Generating a mapping relationship between the calculation process (ie, calculation process #A) of the data to be compressed #A and the compression node #A, and prohibiting the data to be compressed generated by the calculation process other than the calculation process #A from being sent to the compression Node #A, thereby ensuring that the compressed node #A processes only the data generated by the calculation process #A, and can avoid that the calculation node #A (specifically, the calculation process #A) does not correspond to the data to be compressed and the compressed data. A running error has occurred.
  • the calculation process #A for example, a Map process
  • the method for compressing data by prohibiting a process other than the first calculation process of generating the first data to be compressed from transmitting data to the target compression node, it is possible to prevent the compression node from returning data from other processes to the first The calculation process, thereby avoiding data mis-transmission, and avoiding the impact of the data mis-transmission on the operation of the first computing process, thereby further improving operational efficiency and processing performance.
  • the compressed node #A may perform compression processing on the data to be compressed #A to obtain compressed data (ie, first compression).
  • compressed data #A An example of the data, hereinafter, for the sake of easy understanding and distinction, is written as: compressed data #A).
  • the compressed node may run an OpenCL-based Host program to write the acquired data to be compressed into the memory of the compressed node through the PCIE bus, and then the compressed node is initialized.
  • the OpenCL Kernel begins to compress the data.
  • the Host program reads back the compressed data through the PCIE bus.
  • the Host program ends, the thread exits, and the compression process ends.
  • the compressed node #A transmits the compressed data #A to the computing node #A at S260.
  • the data transmission between the compressed node #A and the computing node #A can be implemented by reading and writing data in the same memory. The process is described in detail.
  • the computing node and each compression node can access the same memory (ie, shared memory), and in the embodiment of the present invention, the shared memory may belong to a storage device configured in the computing node, or The shared memory may also belong to a storage device independent of the computing node and the compression node, and the present invention is not particularly limited.
  • the computing node #A may determine, from the shared memory, a memory space in which the compressed data #A and the data to be compressed #A are stored (that is, the first sub-memory, below, in order to facilitate understanding and differentiation, Do: sub memory #A).
  • the sub-memory #A may include two parts, one part (hereinafter, for ease of understanding and explanation, it is recorded as: sub-memory #A1) for storing to be compressed.
  • Data #A another part (hereinafter, for ease of understanding and explanation, note: sub-memory #A2) is used to store compressed data #A.
  • the size of the sub-memory #A may be set by an administrator or may be set according to the compressed data #A and the data to be compressed #A, and the present invention is not particularly limited.
  • the sub-memory #A can be determined in the following manner.
  • the compute node #A (for example, the daemon of the compute node #A) may maintain a mapping entry for recording each compressed node (including the compressed node #A) and each child. a one-to-one mapping relationship between the memory (including the sub-memory #A), wherein each sub-memory is used to store the compressed data of the corresponding compressed node and the data to be compressed, or each sub-memory is corresponding to the compressed node. Used to read and write data.
  • mapping entry may be generated when the system is established, that is, after the computing node learns that a certain compressed node is set in the system and can be used for data compression, the compressed node is recorded in the mapping entry.
  • the mapping entry may also be after the computing node determines that a compression node is used to perform compression processing on the data generated by the computing node (for example, after receiving the first compression response message or the second compression response message) ), the compressed node is recorded in the mapping table entry.
  • the computing node may notify each compression node of the sub-memory corresponding to the compressed node.
  • each compression node performs a data storage operation (ie, reads the data to be compressed and writes the compressed data) in the corresponding sub-memory.
  • the first sub-memory can be quickly determined by predetermining and recording a one-to-one mapping relationship between each sub-memory and each compression node, thereby Can further improve operational efficiency and processing performance.
  • the shared memory includes N sub-memory spaces (that is, respectively, sub-memory #1 to sub-memory #N), where N represents the number of compressed nodes (for example, FPGAs), that is, N can be simultaneously
  • the compression node provides a data compression service for the computing node, and each sub-memory stores a Compressor Buffer Offset information, wherein the Compressor Buffer Offset information is used to indicate the sub-memory corresponding to each compression node.
  • the input (ValidIn) space stores the data to be compressed.
  • the output (ValidOut) space stores the compressed data.
  • the data to be compressed is stored in each sub-memory, and the data to be compressed is used to indicate the amount of data to be compressed in the sub-memory.
  • the data to be compressed is The information can be set by the Map process to which the sub-memory is allocated, that is, when the Map process puts the data to be compressed into the corresponding area of the shared memory (ie, the sub-memory), the data information to be compressed in the sub-memory is set.
  • the compressed data information is used to indicate the amount of data of the compressed data in the sub-memory in each sub-memory.
  • the compressed data information may be a compressed node to which the sub-memory is allocated. To set, that is, when the compressed node puts the compressed data into the corresponding area of the shared memory (ie, the sub-memory), the compressed data information in the sub-memory is set.
  • the compute node #A may position the sub-memory #A in the shared memory (or, sub-memory).
  • the compressed node #A and the computing node #A can know the start address of the shared memory in advance.
  • the compute node #A may send the offset of the start address of the sub-memory #A to the start address of the shared memory to the compressed node #A (for example, the Compressor Buffer Offset information of the sub-memory #A)
  • the compression node can determine the sub-memory #A based on the start address of the shared memory and the offset of the start address of the sub-memory #A with respect to the start address of the shared memory.
  • the manner of determining the sub-memory #A enumerated above is merely an exemplary description, and the present invention is not limited thereto.
  • the computing node #A may also send the size of the sub-memory #A to the compressed node #A. Instructions.
  • the calculation node #A can store the data to be compressed #A in the sub memory #A1, and read the compressed data #A in the sub memory #A2.
  • the compressed node #A can read the data to be compressed #A in the sub memory #A1 and the compressed data #A in the sub memory #A2. Thereby, the transmission between the data to be compressed #A and the compressed data #A between the calculation node #A and the compression node #A can be completed.
  • one or more computing processes for example, a Map process
  • a daemon process may be run in the computing node #A.
  • the memory #A1 can be determined by the daemon, and the daemon can also send a preset second signal to the calculation process #A, informing the calculation process #A that the compressed data and the data to be compressed need to be stored in the sub-memory #A. .
  • the compressed data and the data to be compressed may be stored in the sub memory #A.
  • the computing node #A in the case that the computing node #A can run multiple computing processes, the computing node #A (for example, the daemon running in the computing node #A) can prohibit the dividing process # Data generated by processes other than A is stored in the sub-memory #A, and data generated by a compression node other than the compressed node #A can be prohibited from being stored in the sub-memory #A.
  • the computing node #A for example, the daemon running in the computing node #A
  • the computing node #A can prohibit the dividing process # Data generated by processes other than A is stored in the sub-memory #A, and data generated by a compression node other than the compressed node #A can be prohibited from being stored in the sub-memory #A.
  • a method of compressing data by causing a first in a shared memory to store the first data to be compressed (eg, data to be compressed #A) and first compressed data (eg, compressed data #A)
  • the sub-memory (for example, sub-memory #A) is prohibited from being accessed by other computing processes other than the first computing process and other compressed nodes except the first compressed node, and other data can be avoided for the first computing process and the first compressed node. Work generates interference, which in turn can further improve operational efficiency and processing performance.
  • the calculation node #A (for example, the daemon running in the calculation node #A) may prohibit the calculation process #A
  • the generated data is stored in the sub memory other than the sub memory #A, and the data generated by the compressed node #A can be prohibited from being stored in the sub memory other than the sub memory #A.
  • the compressed node #A may prohibit the data generated by the compressed node #A from being stored in the sub memory other than the sub memory #A.
  • According to the method of compressing data in an embodiment of the present invention, by prohibiting the first data to be compressed (for example, the data to be compressed #A) or the first compressed data (for example, the compressed data #A) from being stored in memory other than the first sub-memory (for example, memory other than the sub-memory #A), interference by the first data to be compressed or the first compressed data with the work of other compression nodes and computing processes can be avoided.
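The access restrictions above can be sketched as follows. This is an illustrative model only: the class and method names are hypothetical, not from the embodiment. It shows a shared memory divided into sub-memories, where each sub-memory accepts reads and writes only from its bound computing process and compression node, and rejects all other writers.

```python
class SubMemory:
    """One partition of the shared memory, bound to exactly one computing
    process and one compression node (hypothetical names for illustration)."""

    def __init__(self, owner_process, owner_compressor, size):
        self.owner_process = owner_process        # e.g., computing process #A
        self.owner_compressor = owner_compressor  # e.g., compression node #A
        self.buf = bytearray(size)

    def write(self, writer, data, offset=0):
        # Prohibit any writer other than the bound process / compression node.
        if writer not in (self.owner_process, self.owner_compressor):
            raise PermissionError(f"{writer} may not access this sub-memory")
        self.buf[offset:offset + len(data)] = data

sub_a = SubMemory("process#A", "compressor#A", size=16)
sub_a.write("process#A", b"raw")        # allowed: bound computing process
sub_a.write("compressor#A", b"cmp", 8)  # allowed: bound compression node
try:
    sub_a.write("process#B", b"oops")   # prohibited: another computing process
except PermissionError as e:
    print("rejected:", e)
```

In this sketch the prohibition is enforced at the sub-memory itself; the embodiment also allows the daemon, the computing process, or the compression node to enforce the same restriction on its own writes.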
  • FIG. 5 is a schematic diagram of a process of compressing data according to an embodiment of the present invention.
  • The computing node (e.g., the daemon running in the computing node) may send a compression request to the management node after detecting the first signal.
  • Thereafter, the target compression node may be determined according to the compression response, and the data to be compressed and the compressed data are transmitted to and from the target compression node.
  • If the computing node (e.g., the daemon) determines that no compression node is currently in an idle state, the Map process sleeps, waiting for a semaphore to become available, that is, waiting for a compression node in an idle state.
  • The daemon may send a second signal to the Map process, where the value of the second signal may be used to indicate the first sub-memory in the shared memory, so that the Map process can, according to the value of the second signal, read and write data in the first sub-memory of the shared memory (for example, write the data to be compressed, and read the compressed data after compression).
  • The computing node (e.g., the Map process) can write the data to be compressed into the first sub-memory (specifically, into the storage space in the first sub-memory for storing the data to be compressed).
  • The computing node (e.g., the daemon) can query whether the first sub-memory (specifically, the storage space in the first sub-memory for storing the data to be compressed) has been filled; if the first sub-memory is not full, it determines that the first sub-memory can continue to store data to be compressed; otherwise, it determines that the first sub-memory has been filled.
  • If the first sub-memory has been filled, the computing node (e.g., the daemon) can instruct the target compression node (e.g., an FPGA) to read the data to be compressed and perform compression processing, and the Map process can sleep and wait for the compressed data to be written back.
  • After the target compression node completes the compression processing, the compressed data is transmitted to the daemon, and the daemon can write the compressed data into the first sub-memory (specifically, into the storage space in the first sub-memory for storing the compressed data) and wake up the Map process, so that the Map process can read the compressed data from the first sub-memory.
  • Before the compression task ends, the computing node (e.g., the daemon) may determine whether there is compressed data in the FPGA that has not yet been written to the shared memory, or data that has been compressed but not yet transmitted to the Map process. If so, the computing node can instruct the compression node to read the remaining uncompressed data and compress it, or the computing node (e.g., the daemon) can simply have the Map process read the remaining compressed data. Thereafter, the entire compression process is ended, and the semaphore used for interaction between the daemon and the Map process, as well as the first sub-memory, are released.
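The interaction of FIG. 5 can be sketched as follows, under the stated assumptions: threads stand in for the Map process and the daemon, Python semaphores stand in for the signals, a dictionary stands in for the first sub-memory, and zlib stands in for the target compression node (e.g., the FPGA). All names here are illustrative, not the embodiment's.

```python
import threading
import zlib

sub_memory = {"to_compress": None, "compressed": None}
data_ready = threading.Semaphore(0)    # released when the sub-memory is filled
result_ready = threading.Semaphore(0)  # released when compressed data is back

def map_process(payload):
    sub_memory["to_compress"] = payload  # write the data to be compressed
    data_ready.release()                 # notify the daemon
    result_ready.acquire()               # sleep until compressed data is back
    return sub_memory["compressed"]      # read the compressed data

def daemon():
    data_ready.acquire()                 # wait for the filled sub-memory
    raw = sub_memory["to_compress"]
    # Hand the data to the "target compression node" (zlib stands in here),
    # then write the compressed data back and wake the Map process.
    sub_memory["compressed"] = zlib.compress(raw)
    result_ready.release()

t = threading.Thread(target=daemon)
t.start()
compressed = map_process(b"intermediate data " * 100)
t.join()
assert zlib.decompress(compressed) == b"intermediate data " * 100
```

The two semaphores mirror the first and second signals: the Map process never busy-waits, and the daemon only acts once a sub-memory has actually been filled.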
  • FIG. 6 shows the time required for compression processing of data of different data amounts by a software compression scheme of the prior art and by a hardware compression scheme according to an embodiment of the present invention.
  • FIG. 7 shows the speed-up ratio of the data processing method of the present invention relative to compression processing performed by software.
  • The delay of the data processing method of the present invention (that is, the compression processing time T2 of the present invention) is much smaller than the delay of compression processing by software in the prior art (that is, the prior-art compression processing time T1).
  • According to the method of compressing data in an embodiment of the present invention, when data needs to be compressed, the computing node can select a compression node in an idle state to provide a compression service for the computing node, which can reduce the burden on the computing node and improve operational efficiency and processing performance. Moreover, through the management node, the working state of each compression node can be grasped in real time, running errors at the compression nodes are avoided, and the reliability of the operation is improved.
  • FIG. 8 is a schematic block diagram of an apparatus 300 for compressing data in accordance with an embodiment of the present invention.
  • the device 300 is configured in a system including a management node and at least two compression nodes, and the compression node is configured to perform compression processing on the data to be compressed generated by the device to generate compressed data.
  • the device 300 includes:
  • the sending unit 310 is configured to send a compression request message to the management node.
  • the receiving unit 320 is configured to receive indication information of the target compression node, where the indication information of the target compression node is used to indicate the target compression node, the target compression node is determined by the management node from the at least two compression nodes after receiving the compression request message, the current working state of the target compression node is an idle state, and the working state includes an idle state and a busy state;
  • a determining unit 330, configured to determine the target compression node according to the indication information of the target compression node;
  • the processing unit 340 is configured to transmit the first data to be compressed and the first compressed data to the target compression node, where the first compressed data is data generated by the target compression node after performing compression processing on the first data to be compressed.
  • At least two computing processes that generate the data to be compressed are generated in the device, where the first compressed data is generated by a first computing process of the at least two computing processes, and
  • the processing unit is further configured to disable transmission of the second data to be compressed generated by the second computing process with the target compression node, where the second computing process is a computing process other than the first computing process in the at least two computing processes.
  • the determining unit is further configured to determine shared memory, where the shared memory is accessible by the at least two compression nodes and includes at least one sub-memory, and to determine a first sub-memory from the shared memory, where the first sub-memory corresponds to the target compression node;
  • the sending unit is further configured to send the indication information of the first sub-memory to the target compression node, where the indication information of the first sub-memory includes a starting position of the first sub-memory relative to a starting position of the shared memory Offset;
  • the processing unit is configured to store the first data to be compressed in the first sub-memory, and to read the first compressed data from the first sub-memory, wherein the first compressed data is stored in the first sub-memory by the target compression node according to the indication information of the first sub-memory.
  • At least two computing processes for generating the data to be compressed run in the device, and the first compressed data is generated by a first computing process of the at least two computing processes; the processing unit is further configured to prohibit storing second to-be-compressed data or second compressed data in the first sub-memory, where the second to-be-compressed data is data generated by a second computing process, the second computing process is a computing process other than the first computing process among the at least two computing processes, the second compressed data is data generated by a second compression node, and the second compression node is a compression node other than the target compression node among the at least two compression nodes.
  • the processing unit is further configured to prohibit storing the first to-be-compressed data or the first compressed data in the second sub-memory, where the second sub-memory is memory other than the first sub-memory in the shared memory.
  • the shared memory includes at least two sub-memories, and
  • the determining unit is further configured to determine a one-to-one mapping relationship between the at least two sub-memories and the at least two compression nodes;
  • the processing unit is specifically configured to store the first to-be-compressed data in the first sub-memory according to the one-to-one mapping relationship between the at least two sub-memories and the at least two compression nodes, and to read the first compressed data from the first sub-memory according to that one-to-one mapping relationship.
  • the receiving unit is specifically configured to receive a first processing instruction message sent by the management node, where the first processing instruction message includes indication information of the target compression node.
  • the receiving unit is specifically configured to receive a compression response message sent by the target compression node, where the compression response message includes indication information of the target compression node.
  • Each unit or module in the apparatus 300 is configured to perform the operations and functions of the computing node in the method 200 above; the actions of the management node and the compression nodes are similar to their actions in the method 200, and detailed descriptions thereof are omitted herein to avoid redundancy.
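As an illustrative sketch only (the equal-size partition scheme and all names are assumptions, not the embodiment's layout), the one-to-one mapping between sub-memories and compression nodes, and the offset-based indication information described above, might look like:

```python
# Hypothetical layout: the shared memory is split into equal sub-memories,
# mapped one-to-one onto the compression nodes; the indication information
# sent to the target node is the offset of its sub-memory's starting position
# relative to the starting position of the shared memory.
SHARED_SIZE = 4096
NODES = ["node0", "node1", "node2", "node3"]  # at least two compression nodes
SUB_SIZE = SHARED_SIZE // len(NODES)

# one-to-one mapping: compression node -> (offset, size) of its sub-memory
sub_memory_of = {node: (i * SUB_SIZE, SUB_SIZE) for i, node in enumerate(NODES)}

def indication_for(target_node):
    """Indication information of the first sub-memory: the offset of its
    start relative to the start of the shared memory."""
    offset, _ = sub_memory_of[target_node]
    return offset

print(indication_for("node2"))  # → 2048
```

With such a mapping, knowing the target compression node is enough for the computing node to know where to store the data to be compressed and where to read the compressed data back.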
  • FIG. 9 is a schematic block diagram of an apparatus 400 for compressing data in accordance with an embodiment of the present invention.
  • the apparatus 400 is configured in a system including a computing node and at least two compression nodes, the compression nodes being configured to perform compression processing on the data to be compressed generated by the computing node to generate compressed data. As shown in FIG. 9, the apparatus 400 includes:
  • the receiving unit 410 is configured to receive a compression request sent by the computing node
  • a determining unit 420, configured to determine a current working state of each of the at least two compression nodes, where the working state includes an idle state and a busy state, and configured to determine a target compression node from the at least two compression nodes according to the current working state of each compression node, where the current working state of the target compression node is an idle state;
  • the sending unit 430 is configured to send a processing instruction message, so that the target compression node performs compression processing on the data to be compressed from the computing node.
  • the determining unit is specifically configured to determine the location of each compression node and the location of the computing node, and to determine the target compression node according to the current working state of each compression node, the location of the computing node, and the location of each compression node, such that the target compression node is, among the compression nodes whose current working state is the idle state, the compression node closest to the computing node.
  • the determining unit is specifically configured to generate a candidate compression node list according to the current working state of each compression node, the location of the computing node, and the location of each compression node, where the candidate compression node list records identifiers of at least two candidate compression nodes, a candidate compression node is a compression node whose current working state is an idle state, and the order of the identifiers of the candidate compression nodes in the list corresponds to the magnitude of the distance between each candidate compression node and the computing node;
  • the sending unit is configured to send, to the computing node, a first processing instruction message, where the first processing instruction message includes indication information of the target compression node, where the indication information of the target compression node is used to indicate the target compression node. So that the computing node sends the data to be compressed to the target compression node based on the indication information of the target compression node according to the first processing instruction message.
  • the sending unit is configured to send, to the target compression node, a second processing instruction message, where the second processing instruction message includes indication information of the computing node, where the indication information of the computing node is used to indicate the computing node, so that And the target compression node acquires the data to be compressed from the computing node according to the instruction information of the computing node according to the second processing instruction message.
  • Each unit or module in the apparatus 400 is configured to perform the actions and functions of the management node in the method 200 above; the actions of the computing node and the compression nodes are similar to their actions in the method 200, and detailed descriptions thereof are omitted herein to avoid redundancy.
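The management node's selection described above (pick, among the compression nodes whose current working state is idle, the one closest to the computing node, with the candidate list ordered by distance) can be sketched as follows. Positions are abstracted to points on a line, and all names are hypothetical:

```python
def choose_target(nodes, compute_pos):
    """nodes: list of (identifier, state, position) tuples.
    Returns (target identifier or None, candidate compression node list)."""
    idle = [(ident, pos) for ident, state, pos in nodes if state == "idle"]
    # Candidate list: identifiers of idle nodes, ordered by their distance
    # to the computing node (closest first).
    ordered = sorted(idle, key=lambda n: abs(n[1] - compute_pos))
    candidates = [ident for ident, _ in ordered]
    # Target compression node: the closest idle node, if any exists.
    return (candidates[0] if candidates else None), candidates

nodes = [("fpga0", "busy", 1), ("fpga1", "idle", 7), ("fpga2", "idle", 3)]
target, candidates = choose_target(nodes, compute_pos=2)
print(target, candidates)  # → fpga2 ['fpga2', 'fpga1']
```

If no candidate exists, the sketch returns None, corresponding to the case where the Map process must sleep until a compression node becomes idle.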
  • FIG. 10 is a schematic block diagram of an apparatus 500 for compressing data in accordance with an embodiment of the present invention. As shown in FIG. 10, the apparatus 500 includes:
  • the obtaining unit 510 is configured to acquire the first data to be compressed from the computing node, where the current working state of the device is an idle state, and the working state includes an idle state and a busy state;
  • the processing unit 520 is configured to perform compression processing on the first data to be compressed to generate first compressed data.
  • the transmitting unit 530 is configured to transmit the first compressed data to the computing node.
  • the device further includes:
  • the receiving unit 540 is configured to receive, from the management node, a second processing instruction message, where the second processing instruction message includes indication information of the computing node, and the indication information of the computing node is used to indicate the computing node;
  • the sending unit 550 is configured to send, according to the indication information of the computing node, a compression response message to the computing node, where the compression response message includes indication information of the device, where the indication information of the device is used to indicate the device.
  • a shared memory is provided in the computing node, the shared memory is accessible by the device, and the shared memory includes at least one sub-memory, and
  • the device also includes:
  • the receiving unit 540 is configured to receive indication information of the first sub-memory sent by the computing node, where the indication information of the first sub-memory includes a starting position of the first sub-memory relative to a starting position of the shared memory. Offset;
  • the determining unit 560 is configured to determine the first sub-memory according to the indication information of the first sub-memory
  • the processing unit is specifically configured to read the first data to be compressed in the first sub-memory
  • the processing unit is specifically configured to store the first compressed data in the first sub-memory.
  • the processing unit is further configured to prohibit storing or reading data in the second sub-memory, where the second sub-memory is memory other than the first sub-memory.
  • Each unit or module in the apparatus 500 is configured to perform the actions and functions of the compression node (specifically, the target compression node) in the above method 200; the actions of the computing node and the management node are similar to their actions in the method 200, and detailed descriptions thereof are omitted herein to avoid redundancy.
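The compression node's side can be sketched as follows, with zlib standing in for the hardware compressor and a bytearray standing in for the shared memory; the in-place layout (compressed data written back at the same offset) is an assumption for illustration, not the embodiment's mandated layout:

```python
import zlib

def compress_in_place(shared, offset, length):
    """Locate the first sub-memory from the received offset, read `length`
    bytes of data to be compressed, compress them, and store the compressed
    data back in the same sub-memory. Returns the compressed length."""
    raw = bytes(shared[offset:offset + length])
    compressed = zlib.compress(raw)          # "hardware" compression stand-in
    shared[offset:offset + len(compressed)] = compressed
    return len(compressed)

shared = bytearray(1024)
payload = b"to-be-compressed " * 20
shared[256:256 + len(payload)] = payload          # computing node fills sub-memory
n = compress_in_place(shared, 256, len(payload))  # compression node acts on it
assert zlib.decompress(bytes(shared[256:256 + n])) == payload
```

The offset (256 here) plays the role of the indication information of the first sub-memory: its start relative to the start of the shared memory is all the compression node needs to find the data.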
  • FIG. 11 is a schematic structural diagram of an apparatus 600 for compressing data according to an embodiment of the present invention.
  • the device 600 is configured in a system including a management node and at least two compression nodes, and the compression node is configured to perform compression processing on the data to be compressed generated by the device to generate compressed data.
  • the device 600 includes:
  • processor 630 connected to the bus
  • transceiver 640 connected to the bus
  • the processor is configured to call and execute a program in the memory via the bus, for controlling the transceiver to send a compression request message to the management node;
  • configured to control the transceiver to acquire indication information of a target compression node, where the indication information of the target compression node is used to indicate the target compression node, the target compression node is determined by the management node from the at least two compression nodes after receiving the compression request message, the current working state of the target compression node is an idle state, and the working state includes an idle state and a busy state;
  • At least two computing processes that generate the data to be compressed are generated in the device, where the first compressed data is generated by a first computing process of the at least two computing processes, and
  • the processor is further configured to disable transmission of the second to-be-compressed data generated by the second computing process with the target compression node, where the second computing process is a computing process other than the first computing process in the at least two computing processes.
  • the processor is further configured to determine shared memory, the shared memory is accessible by the at least two compressed nodes, the shared memory includes at least one sub-memory;
  • the processor is further configured to determine a first sub-memory from the shared memory, where the first sub-memory corresponds to the target compression node;
  • the processor is further configured to control the transceiver to send the indication information of the first sub-memory to the target compression node, where the indication information of the first sub-memory includes a starting position of the first sub-memory relative to the shared memory The offset of the starting position;
  • the processor is specifically configured to store the first to-be-compressed data in the first sub-memory
  • the processor is specifically configured to read the first compressed data from the first sub-memory, wherein the first compressed data is stored in the first sub-memory by the target compression node according to the indication information of the first sub-memory.
  • At least two computing processes for generating the data to be compressed run in the device, and the first compressed data is generated by a first computing process of the at least two computing processes; the processor is further configured to prohibit storing second to-be-compressed data or second compressed data in the first sub-memory, where the second to-be-compressed data is data generated by a second computing process, the second computing process is a computing process other than the first computing process among the at least two computing processes, the second compressed data is data generated by a second compression node, and the second compression node is a compression node other than the target compression node among the at least two compression nodes; or
  • the processor is further configured to prohibit storing the first to-be-compressed data or the first compressed data in the second sub-memory, where the second sub-memory is a memory other than the first sub-memory in the shared memory.
  • the shared memory includes at least two sub-memories, and
  • the processor is further configured to determine a one-to-one mapping relationship between the at least two sub-memories and the at least two compression nodes;
  • the processor is specifically configured to store the first to-be-compressed data in the first sub-memory according to the one-to-one mapping relationship between the at least two sub-memories and the at least two compression nodes, and to read the first compressed data from the first sub-memory according to that one-to-one mapping relationship.
  • the processor is specifically configured to control the transceiver to receive the first processing instruction message sent by the management node, where the first processing instruction message includes indication information of the target compression node.
  • the processor is specifically configured to control the transceiver to receive a compression response message sent by the target compression node, where the compression response message includes indication information of the target compression node.
  • The processor 630 may be a central processing unit (CPU), and the processor 630 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The memory 620 can include read-only memory and random access memory, and provides instructions and data to the processor 630. A portion of the memory 620 can also include a non-volatile random access memory. For example, the memory 620 can also store information of the device type.
  • the bus 610 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus 610 in the figure.
  • each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 630 or an instruction in a form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 620, and the processor 630 reads the information in the memory 620 and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • Each unit or module in the device 600 is configured to perform the actions and functions of the computing node in the method 200 above; the actions of the management node and the compression nodes are similar to their actions in the method 200, and detailed descriptions thereof are omitted herein to avoid redundancy.
  • FIG. 12 is a schematic structural diagram of an apparatus 700 for compressing data according to an embodiment of the present invention.
  • the device 700 is configured in a system including a computing node and at least two compression nodes, and the compression node is configured to perform compression processing on the data to be compressed generated by the computing node to generate compressed data.
  • the device 700 includes:
  • processor 730 connected to the bus
  • transceiver 740 connected to the bus
  • the processor is configured to call and execute a program in the memory via the bus, to control the transceiver to receive a compression request sent by the computing node;
  • the processor is specifically configured to determine a location of each compressed node and a location of the computing node;
  • the processor is specifically configured to generate a candidate compression node list according to the current working state of each compression node, the location of the computing node, and the location of each compression node, where the candidate compression node list records identifiers of at least two candidate compression nodes, a candidate compression node is a compression node whose current working state is an idle state, and the order of the identifiers of the candidate compression nodes in the list corresponds to the magnitude of the distance between each candidate compression node and the computing node;
  • the processor is specifically configured to control the transceiver to send a first processing instruction message to the computing node, where the first processing instruction message includes indication information of the target compression node, and the indication information of the target compression node is used to indicate The target compresses the node, so that the computing node sends the to-be-compressed data to the target compression node according to the indication information of the target compression node according to the first processing instruction message.
  • the processor is specifically configured to control the transceiver to send a second processing instruction message to the target compression node, where the second processing instruction message includes indication information of the computing node, where the indication information of the computing node is used to indicate the Computing the node, so that the target compression node acquires the data to be compressed from the computing node based on the indication information of the computing node according to the second processing instruction message.
  • The processor 730 may be a central processing unit (CPU), and the processor 730 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 720 can include read only memory and random access memory and provides instructions and data to the processor 730. A portion of the memory 720 can also include a non-volatile random access memory. For example, the memory 720 can also store information of the device type.
  • the bus 710 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus 710 in the figure.
  • each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 730 or an instruction in a form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in memory 720, and processor 730 reads the information in memory 720 and, in conjunction with its hardware, performs the steps of the above method. To avoid repetition, it will not be described in detail here.
  • Each unit or module in the device 700 is configured to perform the actions and functions of the management node in the method 200 above; the actions of the computing node and the compression nodes are similar to their actions in the method 200, and detailed descriptions thereof are omitted herein to avoid redundancy.
  • FIG. 13 is a schematic structural diagram of an apparatus 800 for compressing data according to an embodiment of the present invention. As shown in FIG. 13, the device 800 includes:
  • a memory 820 and a processor 830, both connected to a bus 810;
  • the processor is configured to invoke and execute a program in the memory via the bus, to control acquiring the first to-be-compressed data from the computing node, where the current working state of the device is an idle state, and the working states include an idle state and a busy state;
  • the device further includes: a transceiver 840 coupled to the bus;
  • the processor is further configured to control the transceiver to receive a second processing instruction message sent by the management node, where the second processing instruction message includes indication information of the computing node;
  • the processor is further configured to control, according to the indication information of the computing node, the transceiver to send a compression response message to the computing node, where the compression response message includes indication information of the device.
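The instruction/response exchange described in the two items above can be sketched as follows. This is only an illustrative model: the message classes, node identifiers, and the `FakeTransceiver` helper are hypothetical, not part of the specification.

```python
from dataclasses import dataclass

@dataclass
class ProcessingInstruction:
    computing_node: str   # indication information of the computing node

@dataclass
class CompressionResponse:
    compression_node: str  # indication information of the compression device itself

class CompressionDevice:
    def __init__(self, node_id, transceiver):
        self.node_id = node_id
        self.transceiver = transceiver

    def on_processing_instruction(self, msg):
        # On receiving the second processing instruction message, reply to the
        # computing node named in it with this device's own indication
        # information, so the computing node knows where to send its data.
        self.transceiver.send(msg.computing_node, CompressionResponse(self.node_id))

class FakeTransceiver:
    """Stand-in for the transceiver 840; records what it sends."""
    def __init__(self):
        self.sent = []

    def send(self, dest, payload):
        self.sent.append((dest, payload))

tx = FakeTransceiver()
dev = CompressionDevice("comp-7", tx)
dev.on_processing_instruction(ProcessingInstruction(computing_node="calc-3"))
assert tx.sent == [("calc-3", CompressionResponse("comp-7"))]
```

The transceiver is injected so the same device logic works whether the underlying link is wired or wireless, mirroring the later remark that the transmission medium is not limited.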
  • a shared memory is provided in the computing node, the shared memory is accessible by the device, the shared memory includes at least one sub-memory, and the device further includes: a transceiver connected to the bus;
  • the processor is further configured to control the transceiver to receive the indication information of the first sub-memory sent by the computing node, where the indication information of the first sub-memory includes an offset of the starting position of the first sub-memory relative to the starting position of the shared memory;
  • the processor is further configured to determine the first sub-memory according to the indication information of the first sub-memory
  • the processor is specifically configured to read the first to-be-compressed data in the first sub-memory
  • the processor is specifically configured to store the first compressed data in the first sub-memory.
  • the processor is further configured to prohibit storing or reading data in a second sub-memory, where the second sub-memory is any sub-memory in the shared memory other than the first sub-memory.
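A minimal illustration of the shared-memory access rule above, under stated assumptions: byte offsets, a fixed sub-memory size, and `zlib` standing in for whatever compression algorithm the compression node runs are all hypothetical choices, not taken from the specification. The device may read and write only within the first sub-memory indicated by the computing node; any access outside it is rejected.

```python
import zlib

class SharedMemory:
    """Toy model of the computing node's shared memory, split into sub-memories."""
    def __init__(self, size, sub_size):
        self.buf = bytearray(size)
        self.sub_size = sub_size  # assume every sub-memory has this fixed size

    def region(self, offset):
        # A sub-memory is identified by the offset of its start
        # relative to the start of the shared memory.
        return offset, offset + self.sub_size

class CompressionNode:
    def __init__(self, shared):
        self.shared = shared
        self.allowed = None  # (start, end) of the granted first sub-memory

    def grant(self, offset):
        # The computing node sends the offset of the first sub-memory.
        self.allowed = self.shared.region(offset)

    def _check(self, start, end):
        lo, hi = self.allowed
        if start < lo or end > hi:
            # Accessing a second sub-memory is prohibited.
            raise PermissionError("access outside the granted sub-memory")

    def read(self, start, length):
        self._check(start, start + length)
        return bytes(self.shared.buf[start:start + length])

    def write(self, start, data):
        self._check(start, start + len(data))
        self.shared.buf[start:start + len(data)] = data

# Usage: read the to-be-compressed data from the first sub-memory,
# compress it, and store the compressed data back in the same sub-memory.
shared = SharedMemory(size=4096, sub_size=1024)
shared.buf[0:30] = b"abc" * 10
node = CompressionNode(shared)
node.grant(offset=0)
compressed = zlib.compress(node.read(0, 30))
node.write(0, compressed)
assert zlib.decompress(compressed) == b"abc" * 10
```

Writing the result back into the same sub-memory matches the scheme in which the computing node later retrieves the compressed data from the region it indicated.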
  • the processor 830 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
  • the memory 820 can include read only memory and random access memory and provides instructions and data to the processor 830. A portion of the memory 820 may also include a non-volatile random access memory. For example, the memory 820 can also store information of the device type.
  • the bus 810 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus 810 in the figure.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 830, or by instructions in the form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 820, and the processor 830 reads the information in the memory 820 and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • Each unit or module in the device 800 is configured to perform the actions and functions of the target compression node in the foregoing method 200; the actions of the computing node and the management node are similar to those of the computing node and the management node in the method 200. To avoid redundancy, detailed descriptions are omitted here.
  • when data needs to be compressed, the computing node can select a compression node in an idle state to provide a compression service for it, which reduces the burden on the computing node and improves operational efficiency and processing performance.
  • moreover, the management node can grasp the working state of each compression node in real time, which avoids running errors at the compression nodes and improves the reliability of operation.
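The selection behavior summarized above can be sketched as follows. The state names, node identifiers, and message dictionary are illustrative assumptions, not the claimed protocol: on a compression request, the management node picks a compression node whose current working state is idle, marks it busy, and issues a processing instruction for it.

```python
# Working states the management node tracks for each compression node.
IDLE, BUSY = "idle", "busy"

def select_target(states):
    """Return the id of a compression node whose current state is idle, else None."""
    for node_id, state in states.items():
        if state == IDLE:
            return node_id
    return None

def handle_compression_request(states):
    target = select_target(states)
    if target is None:
        # No idle compression node is available; the computing node
        # could fall back to compressing the data itself.
        return None
    states[target] = BUSY  # mark busy before sending the processing instruction
    return {"type": "processing_instruction", "target": target}

states = {"comp-0": BUSY, "comp-1": IDLE, "comp-2": IDLE}
msg = handle_compression_request(states)
assert msg["target"] == "comp-1" and states["comp-1"] == BUSY
```

Marking the node busy before the instruction is sent is one way to keep the tracked state consistent and avoid dispatching two requests to the same node.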
  • it should be understood that the manner of transmitting data between the computing node and the compression node enumerated above is merely an illustrative description, and the present invention is not limited thereto; for example, a data transceiver may be provided in each of the computing node and the compression node, and data may be transmitted between the two data transceivers by wired communication or wireless communication.
  • the "storage" of data in the memory enumerated above includes: writing data into the memory, and/or reading data from the memory.
  • the shared memory may be provided in the computing node. The compression node may access the shared memory by remote reading and writing; alternatively, data may be transferred between the compression node and the computing node (for example, by a daemon process), with the computing node storing the data that the compression node needs to access in the shared memory.
  • it should be understood that the sequence numbers of the above processes do not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of the present invention.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative. The division of units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Abstract

Disclosed are a method, an apparatus, and a device for compressing data. The method is executed in a system comprising a computing node, a management node, and at least two compression nodes. The method comprises the following operations: upon receiving a compression request sent by the computing node, the management node determines the current working state of each of the at least two compression nodes, the working state comprising an idle state and a busy state; the management node determines, according to the current working state of each compression node, a target compression node among the at least two compression nodes, the current working state of the target compression node being the idle state; and the management node sends a processing instruction message, so that the target compression node performs compression processing on the data to be compressed from the computing node.
PCT/CN2016/078667 2016-04-07 2016-04-07 Method, apparatus and device for compressing data WO2017173618A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680057387.1A CN108141471B (zh) 2016-04-07 2016-04-07 Method, apparatus and device for compressing data
PCT/CN2016/078667 WO2017173618A1 (fr) 2016-04-07 2016-04-07 Method, apparatus and device for compressing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/078667 WO2017173618A1 (fr) 2016-04-07 2016-04-07 Method, apparatus and device for compressing data

Publications (1)

Publication Number Publication Date
WO2017173618A1 true WO2017173618A1 (fr) 2017-10-12

Family

ID=60000194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078667 WO2017173618A1 (fr) 2016-04-07 2016-04-07 Method, apparatus and device for compressing data

Country Status (2)

Country Link
CN (1) CN108141471B (fr)
WO (1) WO2017173618A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347758A (zh) * 2018-08-30 2019-02-15 Cernet Co., Ltd. Message compression method, device, system, and medium
CN109614043A (zh) * 2018-12-04 2019-04-12 Zhengzhou Yunhai Information Technology Co., Ltd. Data compression method, apparatus, system, and computer-readable storage medium
CN110955535A (zh) * 2019-11-07 2020-04-03 Inspur (Beijing) Electronic Information Industry Co., Ltd. Method for multiple service request processes to invoke an FPGA device, and related apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213737A (zh) * 2018-09-17 2019-01-15 Zhengzhou Yunhai Information Technology Co., Ltd. Data compression method and apparatus
CN115442260B (zh) * 2021-06-01 2023-09-05 China Mobile Group Design Institute Co., Ltd. Data transmission method, terminal device, and storage medium
CN115809221A (zh) * 2021-09-15 2023-03-17 Huawei Technologies Co., Ltd. Data compression method and apparatus
CN114064140B (zh) * 2021-10-15 2024-03-15 NR Electric Co., Ltd. Fault recording data storage and access method, apparatus, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101997897A (zh) * 2009-08-14 2011-03-30 Huawei Technologies Co., Ltd. Block storage method, device, and system
CN102932844A (zh) * 2012-11-28 2013-02-13 Beijing Autelan Technology Co., Ltd. Method for improving wireless network communication throughput, and network node apparatus
CN103020205A (zh) * 2012-12-05 2013-04-03 Beijing Puze Tianji Data Technology Co., Ltd. Hardware-acceleration-card-based compression and decompression method for a distributed file system
WO2013186327A1 (fr) * 2012-06-13 2013-12-19 Telefonaktiebolaget L M Ericsson (Publ) Data compression in a communications network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516821B (zh) * 2013-09-29 2017-12-19 MStar Semiconductor, Inc. Memory management method and memory management apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101997897A (zh) * 2009-08-14 2011-03-30 Huawei Technologies Co., Ltd. Block storage method, device, and system
WO2013186327A1 (fr) * 2012-06-13 2013-12-19 Telefonaktiebolaget L M Ericsson (Publ) Data compression in a communications network
CN102932844A (zh) * 2012-11-28 2013-02-13 Beijing Autelan Technology Co., Ltd. Method for improving wireless network communication throughput, and network node apparatus
CN103020205A (zh) * 2012-12-05 2013-04-03 Beijing Puze Tianji Data Technology Co., Ltd. Hardware-acceleration-card-based compression and decompression method for a distributed file system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347758A (zh) * 2018-08-30 2019-02-15 Cernet Co., Ltd. Message compression method, device, system, and medium
CN109614043A (zh) * 2018-12-04 2019-04-12 Zhengzhou Yunhai Information Technology Co., Ltd. Data compression method, apparatus, system, and computer-readable storage medium
CN110955535A (zh) * 2019-11-07 2020-04-03 Inspur (Beijing) Electronic Information Industry Co., Ltd. Method for multiple service request processes to invoke an FPGA device, and related apparatus
WO2021088419A1 (fr) * 2019-11-07 2021-05-14 Method for invoking an FPGA device by multiple service request processes, and related apparatus
CN110955535B (zh) * 2019-11-07 2022-03-22 Inspur (Beijing) Electronic Information Industry Co., Ltd. Method for multiple service request processes to invoke an FPGA device, and related apparatus

Also Published As

Publication number Publication date
CN108141471B (zh) 2020-06-26
CN108141471A (zh) 2018-06-08

Similar Documents

Publication Publication Date Title
WO2017173618A1 (fr) Method, apparatus and device for compressing data
US11169743B2 (en) Energy management method and apparatus for processing a request at a solid state drive cluster
US20180059939A1 (en) Method, Device, and System for Implementing Hardware Acceleration Processing
US9467512B2 (en) Techniques for remote client access to a storage medium coupled with a server
WO2017114283A1 (fr) Procédé et appareil pour traiter une requête de lecture/écriture dans un hôte physique
US10394604B2 (en) Method for using local BMC to allocate shared GPU resources inside NVMe over fabrics system
US20220263711A1 (en) Acceleration Resource Scheduling Method and Apparatus, and Acceleration System
CN111190854B (zh) 通信数据处理方法、装置、设备、系统和存储介质
CN116204456A (zh) 数据访问方法及计算设备
US20230152978A1 (en) Data Access Method and Related Device
US11372782B2 (en) Computing system for reducing latency between serially connected electronic devices
WO2016041191A1 (fr) Procédé et appareil pour lire et écrire des données, dispositif de stockage et système informatique
US20210294761A1 (en) Systems and methods for message tunneling
WO2023174146A1 (fr) Offload-card namespace management system and method, and input/output request processing system and method
AU2015402888A1 (en) Computer device and method for reading/writing data by computer device
US10318362B2 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
US20220283961A1 (en) Computing system for reducing latency between serially connected electronic devices
WO2018188416A1 (fr) Data search method and apparatus, and related devices
US20230393996A1 (en) Systems and methods for message tunneling
US11601515B2 (en) System and method to offload point to multipoint transmissions
US11422963B2 (en) System and method to handle uncompressible data with a compression accelerator
US11321254B2 (en) Computing system for transmitting completion early between serially connected electronic devices
WO2022141250A1 (fr) Data transmission method and related apparatus
WO2023134588A1 (fr) Computing system, method and apparatus, and acceleration device
CN117667827A (zh) 任务处理方法及异构计算系统

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16897552

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16897552

Country of ref document: EP

Kind code of ref document: A1