CN109491599A - Distributed storage system and heterogeneous acceleration method therefor - Google Patents
Distributed storage system and heterogeneous acceleration method therefor
- Publication number
- CN109491599A (application CN201811246355.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- accelerator module
- back end
- fpga
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 23
- 238000013144 data compression Methods 0.000 claims abstract description 28
- 230000006837 decompression Effects 0.000 claims description 9
- 230000003068 static effect Effects 0.000 claims description 8
- 238000004891 communication Methods 0.000 claims description 7
- 238000013500 data storage Methods 0.000 claims description 7
- 101100498818 Arabidopsis thaliana DDR4 gene Proteins 0.000 claims description 5
- 230000005540 biological transmission Effects 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 4
- 230000001133 acceleration Effects 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 238000007906 compression Methods 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000010076 replication Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1044—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices with specific ECC/EDC distribution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0658—Controller construction arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a heterogeneous acceleration method for a distributed storage system, comprising: a client splits the data to be stored into blocks and sends them to a data node; the data node invokes an FPGA heterogeneous acceleration unit to feed the data blocks into a data compression unit; and the data compression unit compresses the data blocks and stores them on a storage disk. The invention also discloses a distributed storage system. The method and system satisfy system performance requirements while improving storage space utilization.
Description
Technical field
The present invention relates to the technical field of distributed computer data storage, and more particularly to a distributed storage system and a heterogeneous acceleration method therefor.
Background technique
The Internet-of-Things era generates massive amounts of diverse, heterogeneous data, which places higher demands on the capacity and performance of storage systems. Because distributed storage systems offer advantages such as high scalability and high reliability, they have been widely adopted in industry.
To improve data reliability, storage systems generally adopt one of two techniques. The first is a replication strategy, which stores multiple copies of the data; it usually requires several times the storage space of the data itself, so storage utilization is low. The second is an erasure-coding strategy, which applies redundant encoding to the data and recovers lost data from the redundancy information; its storage utilization is high, but it consumes substantial computing resources and places heavy demands on system performance.
The prior art does not yet disclose a distributed storage system and heterogeneous acceleration method in which each data node is equipped with at least one heterogeneous accelerator card that erasure-codes the data and sends the data blocks to a data compression unit, so that the blocks are compressed before being stored on a storage disk. Such an arrangement would both satisfy system performance requirements and improve storage space utilization.
Summary of the invention
In view of this, an object of the embodiments of the present invention is to provide a distributed storage system and a heterogeneous acceleration method therefor, in which each data node is equipped with at least one FPGA heterogeneous acceleration unit. The FPGA unit erasure-codes the data and sends the data blocks to a data compression unit, which compresses them before they are stored on a storage disk. This both satisfies system performance requirements and improves storage space utilization.
In view of the above, one aspect of the embodiments of the present invention provides a heterogeneous acceleration method for a distributed storage system, comprising:
a client splits the data to be stored into blocks and sends them to a data node;
the data node invokes an FPGA heterogeneous acceleration unit to feed the data blocks into a data compression unit; and
the data compression unit compresses the data blocks and stores them on a storage disk.
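The three steps above can be sketched in host software. The following Python sketch illustrates only the data flow (split into blocks, erasure-code, compress, store); in the patent the erasure coding and compression are performed by the FPGA heterogeneous acceleration unit, and the single XOR parity block below is a simplified stand-in for a real erasure code such as Reed-Solomon. The block size and the dict-as-disk are assumptions for illustration.

```python
import gzip

CHUNK = 64 * 1024  # hypothetical block size


def split_blocks(data: bytes, size: int = CHUNK) -> list[bytes]:
    """Client side: split the data to be stored into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def xor_parity(blocks: list[bytes]) -> bytes:
    """Stand-in erasure code: one XOR parity block over the data blocks."""
    width = max(len(b) for b in blocks)
    parity = bytearray(width)
    for b in blocks:
        for i, byte in enumerate(b.ljust(width, b"\x00")):
            parity[i] ^= byte
    return bytes(parity)


def store(data: bytes) -> dict[int, bytes]:
    """Data-node side: erasure-code, compress each block, 'store' in a dict."""
    blocks = split_blocks(data)
    blocks.append(xor_parity(blocks))            # redundant parity block
    return {i: gzip.compress(b) for i, b in enumerate(blocks)}


disk = store(b"hello distributed storage " * 4096)
```

A real data node would write each compressed block to the storage disk array instead of a dict; the control flow, however, matches the three claimed steps.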
In some embodiments, the client splitting the data to be stored into blocks and sending them to a data node comprises: the client obtains data storage location information from a metadata node, selects a data node as the primary data node according to a preset rule, and sends the data blocks to the primary data node.
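The patent does not specify the preset rule for choosing the primary data node; a common choice in distributed storage is to hash an object identifier onto the node list. The sketch below is only an assumed example of such a rule, not the patent's rule.

```python
import hashlib


def pick_primary(object_id: str, nodes: list[str]) -> str:
    """Map an object deterministically onto one of the data nodes.

    This hash-modulo rule is a hypothetical stand-in for the patent's
    'preset rule'; real systems often use consistent hashing instead."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]


nodes = ["node-1", "node-2", "node-3"]
primary = pick_primary("volume7/object42", nodes)
```

Because the rule is a pure function of the object identifier, every client that knows the node list selects the same primary data node without extra coordination.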
In some embodiments, the data node invoking the FPGA heterogeneous acceleration unit to feed the data blocks into the data compression unit comprises:
determining whether the data needs to be erasure-coded;
if so, the data node invokes the FPGA heterogeneous acceleration unit to erasure-code the data blocks, generating redundant data blocks, and feeds the data blocks into the data compression unit;
if not, the data node invokes the FPGA heterogeneous acceleration unit to feed the data blocks directly into the data compression unit.
In some embodiments, the storage disk is configured as a storage disk array.
In some embodiments, when a data read operation is performed, the CPU invokes the FPGA heterogeneous acceleration unit to decompress the data, perform erasure decoding on the decompressed data, and transmit it to the client; the client reassembles the data blocks into the original data.
Another aspect of the embodiments of the present invention further provides a distributed storage system, comprising:
a client;
a metadata node cluster and a data node cluster in network communication with the client;
wherein each data node is configured with at least one FPGA heterogeneous acceleration unit, which is used at least to accelerate erasure encoding/decoding and data compression.
In some embodiments, each data node comprises at least a CPU, an FPGA heterogeneous acceleration unit, and a storage disk array.
In some embodiments, the FPGA heterogeneous acceleration unit communicates with the CPU via PCIe DMA.
In some embodiments, the FPGA heterogeneous acceleration unit comprises a static configuration region and a dynamically reconfigurable region that executes algorithms delivered by the CPU, wherein the static configuration region comprises a DDR4 controller and a PCIe-DMA module.
In some embodiments, the FPGA heterogeneous acceleration unit communicates with other FPGA heterogeneous computing units in the compute node through an SRIO x4 interface.
The present invention has the following advantageous effects: in the distributed storage system and heterogeneous acceleration method provided by embodiments of the present invention, each data node is equipped with at least one FPGA heterogeneous acceleration unit, which erasure-codes the data and sends the data blocks to a data compression unit to be compressed before storage on a storage disk. This reduces the amount of data stored and improves storage space utilization while preserving system performance. Programming with the OpenCL framework lowers the development difficulty of the FPGA heterogeneous acceleration unit and enables dynamic algorithm reconfiguration. The interconnect structure between FPGA heterogeneous computing units enables pipelined parallel processing of computing tasks.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the heterogeneous acceleration method for a distributed storage system provided by the present invention;
Fig. 2 is a schematic diagram of the distributed storage system provided by the present invention;
Fig. 3 is an enlarged view of a data node of the distributed storage system provided by the present invention;
Fig. 4 is a block diagram of the modules and heterogeneous computing of the CPU and FPGA heterogeneous acceleration unit of a data node provided by the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are further described below with reference to specific embodiments and the accompanying drawings.
It should be noted that, in the embodiments of the present invention, all expressions using "first" and "second" distinguish two entities or parameters with the same name that are not identical. "First" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments; subsequent embodiments do not explain this point again.
In view of the above, a first aspect of the embodiments of the present invention provides an embodiment of a heterogeneous acceleration method for a distributed storage system. Fig. 1 shows a flow diagram of the method. The method optionally comprises the following steps:
Step S101: a client splits the data to be stored into blocks and sends them to a data node;
Step S102: the data node invokes an FPGA heterogeneous acceleration unit to feed the data blocks into a data compression unit;
Step S103: the data compression unit compresses the data blocks and stores them on a storage disk.
As shown in Fig. 1, when storing data, the client first obtains data storage location information from a metadata node and then sends the data blocks to a data node. The primary data node responsible for storage invokes an FPGA (Field-Programmable Gate Array) heterogeneous acceleration unit to perform erasure-coding computation on the data to obtain coded data, and feeds the coded data blocks into the data compression unit for compression; after compression, the data is stored in a storage disk array.
In one embodiment, the client splitting the data to be stored into blocks and sending them to a data node comprises: the client obtains data storage location information from a metadata node, selects a data node as the primary data node according to a preset rule, and sends the data blocks to the primary data node.
In one embodiment, the data node invoking the FPGA heterogeneous acceleration unit to feed the data blocks into the data compression unit comprises:
determining whether the data needs to be erasure-coded;
if so, the data node invokes the FPGA heterogeneous acceleration unit to erasure-code the data blocks, generating redundant data blocks, and feeds the data blocks into the data compression unit;
if not, the data node invokes the FPGA heterogeneous acceleration unit to feed the data blocks directly into the data compression unit.
In one embodiment, the storage disk is configured as a storage disk array.
In one embodiment, when a data read operation is performed, the CPU invokes the FPGA heterogeneous acceleration unit to decompress the data, perform erasure decoding on the decompressed data, and transmit it to the client; the client reassembles the data blocks into the original data.
Specifically, during a read, the CPU first invokes the FPGA heterogeneous acceleration unit to decompress the data, then performs erasure-code decoding on the decompressed data and transmits the result to the client; the client combines the data blocks to recover the original data.
Preferably, the erasure-code algorithm is the Reed-Solomon algorithm.
Preferably, the compression algorithm is the Gzip algorithm.
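The Gzip preference can be illustrated directly with Python's standard library. The repetitive IoT-style payload below is an invented example; the ratio achieved depends entirely on the input data and says nothing about the FPGA implementation's throughput.

```python
import gzip

# Highly repetitive sensor-style data (hypothetical) compresses well.
payload = b"sensor_reading=23.5;" * 10_000
packed = gzip.compress(payload, compresslevel=6)

ratio = len(payload) / len(packed)  # storage-space saving factor
assert gzip.decompress(packed) == payload  # Gzip is lossless
```

Because the round trip is lossless, compressing blocks before they reach the storage disk trades compute (here offloaded to the FPGA) for storage space utilization.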
It can be seen from the above embodiments that, in the heterogeneous acceleration method for a distributed storage system provided by embodiments of the present invention, each data node is equipped with at least one FPGA heterogeneous acceleration unit, which erasure-codes the data and sends the data blocks to a data compression unit to be compressed before storage on a storage disk. This reduces the amount of data stored and improves storage space utilization while preserving system performance.
It is important to note that the steps in the embodiments of the above heterogeneous acceleration method can be interleaved, replaced, added, or deleted. Such reasonable permutations, combinations, and transformations of the method therefore also fall within the protection scope of the present invention, which should not be limited to the described embodiments.
The above are exemplary embodiments of the present disclosure; the order in which the embodiments are disclosed is for description only and does not indicate their relative merit. It should be noted that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the disclosure (including the claims) is limited to these examples; many modifications and changes are possible without departing from the scope defined by the claims. The functions, steps, and/or actions of the method claims of the embodiments disclosed herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated.
In view of the above, a second aspect of the embodiments of the present invention provides a distributed storage system, comprising: a client; a metadata node cluster and a data node cluster in network communication with the client; wherein each data node is configured with at least one FPGA heterogeneous acceleration unit, which is used at least to accelerate erasure encoding/decoding and data compression. Fig. 2 shows a schematic diagram of the distributed storage system provided by the present invention.
As shown in Fig. 2, the system comprises a client, a metadata node cluster, and a data node cluster. The metadata node cluster comprises multiple metadata nodes, such as metadata node 1 and metadata node 2; the data node cluster comprises data nodes 1 to N, where N is not fixed and is set according to system requirements. Each data node is equipped with at least one FPGA heterogeneous acceleration unit, which is responsible for accelerating compute-intensive tasks such as erasure encoding/decoding and data compression/decompression. Performing erasure coding and compression in the FPGA heterogeneous acceleration unit reduces the computational load on the CPU.
In one embodiment, each data node comprises at least a CPU, an FPGA heterogeneous acceleration unit, and a storage disk array. Fig. 3 shows an enlarged view of a data node of the distributed storage system provided by the present invention.
As shown in Fig. 3, the data node adopts a CPU + FPGA heterogeneous computing model; each data node is equipped with at least one heterogeneous acceleration device, namely the FPGA heterogeneous acceleration unit. The data node further comprises memory (MEM), a SAS controller, and the like.
In one embodiment, the FPGA heterogeneous acceleration unit communicates with the CPU via PCIe DMA. In one embodiment, the FPGA heterogeneous acceleration unit comprises a static configuration region and a dynamically reconfigurable region that executes algorithms delivered by the CPU, wherein the static configuration region comprises a DDR4 controller and a PCIe-DMA module. Fig. 4 shows a block diagram of the modules and heterogeneous computing of the CPU and FPGA heterogeneous acceleration unit of a data node provided by the present invention.
As shown in Fig. 4, the FPGA heterogeneous acceleration unit communicates with the CPU via PCIe 3.0 DMA. The CPU side comprises a main function module, a GCC compiler module, a host executable module, an OpenCL runtime support library module, a PCIe driver module, and a module that stores the FPGA executable files.
Each FPGA heterogeneous acceleration unit can deploy multiple algorithm IP cores and run identical or different task operations.
Optionally, the FPGA heterogeneous acceleration unit uses a Xilinx KU115 FPGA chip and is configured with DDR4 having 8 ECC check bits; it supports 2 SODIMMs, each supporting up to 8 GB DDR4 x72bit @ 1333 MHz / 2400 MT/s.
The FPGA is internally divided into a static configuration region and a dynamically reconfigurable region, enabling online reconfiguration of algorithms. The static configuration region comprises the DDR4 controller, the PCIe-DMA module, and other modules, and is loaded from Flash in active-load mode after power-up; the dynamically reconfigurable region executes the algorithms issued by the CPU and accelerates the computing tasks.
In one embodiment, the FPGA heterogeneous acceleration unit communicates with other FPGA heterogeneous computing units in the compute node through an SRIO x4 interface.
The FPGA heterogeneous acceleration unit also provides an SRIO x4 interface through which it can communicate with the other FPGA heterogeneous acceleration units in the compute node; through this interconnect, pipelined parallel processing of computing tasks can be realized between FPGA heterogeneous acceleration units.
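Pipelined parallel processing between units can be modeled in software as stages connected by queues, analogous to an erasure-coding stage feeding a compression stage over the interconnect. This is purely a conceptual host-side model under assumed stage roles; it does not reflect any FPGA or SRIO implementation detail from the patent.

```python
import gzip
import queue
import threading


def stage1_encode(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """First pipeline stage: stand-in for erasure coding (pass-through here;
    a real stage would also emit parity blocks)."""
    while (block := inbox.get()) is not None:
        outbox.put(block)
    outbox.put(None)  # propagate end-of-stream to the next stage


def stage2_compress(inbox: queue.Queue, results: list) -> None:
    """Second pipeline stage: compress blocks as they arrive."""
    while (block := inbox.get()) is not None:
        results.append(gzip.compress(block))


q1, q2, out = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=stage1_encode, args=(q1, q2)),
           threading.Thread(target=stage2_compress, args=(q2, out))]
for t in threads:
    t.start()
for block in (b"x" * 4096, b"y" * 4096):
    q1.put(block)   # blocks stream through both stages concurrently
q1.put(None)        # end-of-stream marker
for t in threads:
    t.join()
```

While one block is being compressed in stage 2, the next block can already be processed in stage 1, which is the essence of the pipelined parallelism the interconnect enables.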
To reduce the development difficulty of the heterogeneous acceleration unit and enable dynamic algorithm reconfiguration, the present invention uses the OpenCL programming framework.
The FPGA is developed with the OpenCL programming framework, and a library is generated for the CPU to call.
The OpenCL runtime support library provides a dynamic link library through which the CPU calls the FPGA; when a program needs FPGA acceleration, the OpenCL runtime support library handles the FPGA's function invocation, data loading, and scheduling.
It can be seen from the above embodiments that, in the distributed storage system provided by embodiments of the present invention, each data node is equipped with at least one FPGA heterogeneous acceleration unit, which erasure-codes the data and sends the data blocks to a data compression unit to be compressed before storage on a storage disk. This reduces the amount of data stored and improves storage space utilization while preserving system performance. Programming with the OpenCL framework lowers the development difficulty of the FPGA heterogeneous acceleration unit and enables dynamic algorithm reconfiguration. The interconnect structure between FPGA heterogeneous computing units enables pipelined parallel processing of computing tasks.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the disclosure (including the claims) is limited to these examples. Within the spirit of the embodiments of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, and many other variations of different aspects of the embodiments exist as described above; for brevity, they are not provided in detail. Therefore, any omission, modification, equivalent replacement, improvement, and the like made within the spirit and principles of the embodiments of the present invention shall be included within their protection scope.
Claims (10)
1. A heterogeneous acceleration method for a distributed storage system, characterized by comprising the following steps:
a client splits the data to be stored into blocks and sends them to a data node;
the data node invokes an FPGA heterogeneous acceleration unit to feed the data blocks into a data compression unit; and
the data compression unit compresses the data blocks and stores them on a storage disk.
2. The method according to claim 1, characterized in that the client splitting the data to be stored into blocks and sending them to a data node comprises: the client obtains data storage location information from a metadata node and sends the data blocks to the data node.
3. The method according to claim 1, characterized in that the data node invoking the FPGA heterogeneous acceleration unit to feed the data blocks into the data compression unit comprises:
determining whether the data needs to be erasure-coded;
if so, the data node invokes the FPGA heterogeneous acceleration unit to erasure-code the data blocks, generating redundant data blocks, and feeds the data blocks into the data compression unit;
if not, the data node invokes the FPGA heterogeneous acceleration unit to feed the data blocks directly into the data compression unit.
4. The method according to claim 1, characterized in that the storage disk is configured as a storage disk array.
5. The method according to claim 1, characterized by further comprising the following steps:
when a data read operation is performed, a CPU invokes the FPGA heterogeneous acceleration unit to decompress the data, perform erasure decoding on the decompressed data, and transmit it to the client;
the client reassembles the data blocks into the original data.
6. A distributed storage system, characterized by comprising:
a client;
a metadata node cluster and a data node cluster in network communication with the client;
wherein each data node is configured with at least one FPGA heterogeneous acceleration unit for accelerating erasure encoding/decoding and data compression.
7. The system according to claim 6, characterized in that the data node comprises at least a CPU, an FPGA heterogeneous acceleration unit, and a storage disk array.
8. The system according to claim 6, characterized in that the FPGA heterogeneous acceleration unit communicates with the CPU via PCIe DMA.
9. The system according to claim 8, characterized in that the FPGA heterogeneous acceleration unit comprises a static configuration region and a dynamically reconfigurable region that executes algorithms delivered by the CPU, wherein the static configuration region comprises a DDR4 controller and a PCIe-DMA module.
10. The system according to claim 6, characterized in that the FPGA heterogeneous acceleration unit communicates with other FPGA heterogeneous computing units in a compute node through an SRIO x4 interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811246355.3A CN109491599A (en) | 2018-10-24 | 2018-10-24 | Distributed storage system and heterogeneous acceleration method therefor
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811246355.3A CN109491599A (en) | 2018-10-24 | 2018-10-24 | Distributed storage system and heterogeneous acceleration method therefor
Publications (1)
Publication Number | Publication Date |
---|---|
CN109491599A true CN109491599A (en) | 2019-03-19 |
Family
ID=65691828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811246355.3A Pending CN109491599A (en) | Distributed storage system and heterogeneous acceleration method therefor
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109491599A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110244939A (en) * | 2019-05-20 | 2019-09-17 | 西安交通大学 | A kind of RS code decoding method based on OpenCL |
CN110795497A (en) * | 2018-08-02 | 2020-02-14 | 阿里巴巴集团控股有限公司 | Cooperative compression in distributed storage systems |
CN111722930A (en) * | 2020-06-23 | 2020-09-29 | 恒为科技(上海)股份有限公司 | Data preprocessing system |
CN112347721A (en) * | 2020-10-29 | 2021-02-09 | 北京长焜科技有限公司 | System for realizing data processing acceleration based on FPGA and acceleration method thereof |
CN113270120A (en) * | 2021-07-16 | 2021-08-17 | 北京金山云网络技术有限公司 | Data compression method and device |
US11144207B2 (en) | 2019-11-07 | 2021-10-12 | International Business Machines Corporation | Accelerating memory compression of a physically scattered buffer |
CN113672431A (en) * | 2021-07-29 | 2021-11-19 | 济南浪潮数据技术有限公司 | Optimization method and device for acceleration chip erasure code plug-in for realizing distributed storage |
WO2022016137A1 (en) * | 2020-07-17 | 2022-01-20 | Softiron Limited | Computing acceleration framework |
CN116954523A (en) * | 2023-09-20 | 2023-10-27 | 苏州元脑智能科技有限公司 | Storage system, data storage method, data reading method and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106201766A (en) * | 2016-07-25 | 2016-12-07 | 深圳市中博科创信息技术有限公司 | Data storage control method and data server |
CN106250349A (en) * | 2016-08-08 | 2016-12-21 | 浪潮(北京)电子信息产业有限公司 | A kind of high energy efficiency heterogeneous computing system |
CN106598738A (en) * | 2016-12-13 | 2017-04-26 | 郑州云海信息技术有限公司 | Computer cluster system and parallel computing method thereof |
CN106598499A (en) * | 2016-12-14 | 2017-04-26 | 深圳市中博睿存科技有限公司 | FPGA-based distributed file system architecture |
CN107273331A (en) * | 2017-06-30 | 2017-10-20 | 山东超越数控电子有限公司 | A kind of heterogeneous computing system and method based on CPU+GPU+FPGA frameworks |
CN107644030A (en) * | 2016-07-20 | 2018-01-30 | 华为技术有限公司 | Data synchronization method for distributed database, relevant apparatus and system |
- 2018
- 2018-10-24 CN CN201811246355.3A patent/CN109491599A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107644030A (en) * | 2016-07-20 | 2018-01-30 | 华为技术有限公司 | Data synchronization method for distributed database, relevant apparatus and system |
CN106201766A (en) * | 2016-07-25 | 2016-12-07 | 深圳市中博科创信息技术有限公司 | Data storage control method and data server |
CN106250349A (en) * | 2016-08-08 | 2016-12-21 | 浪潮(北京)电子信息产业有限公司 | A kind of high energy efficiency heterogeneous computing system |
CN106598738A (en) * | 2016-12-13 | 2017-04-26 | 郑州云海信息技术有限公司 | Computer cluster system and parallel computing method thereof |
CN106598499A (en) * | 2016-12-14 | 2017-04-26 | 深圳市中博睿存科技有限公司 | FPGA-based distributed file system architecture |
CN107273331A (en) * | 2017-06-30 | 2017-10-20 | 山东超越数控电子有限公司 | A heterogeneous computing system and method based on a CPU+GPU+FPGA architecture |
Non-Patent Citations (1)
Title |
---|
TIAN, Geng: "Using FPGAs for Accelerated Computing" (使用FPGA进行加速计算), CSDN * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110795497A (en) * | 2018-08-02 | 2020-02-14 | 阿里巴巴集团控股有限公司 | Cooperative compression in distributed storage systems |
CN110244939B (en) * | 2019-05-20 | 2021-02-09 | 西安交通大学 | RS code encoding and decoding method based on OpenCL |
CN110244939A (en) * | 2019-05-20 | 2019-09-17 | 西安交通大学 | An RS code encoding and decoding method based on OpenCL |
US11144207B2 (en) | 2019-11-07 | 2021-10-12 | International Business Machines Corporation | Accelerating memory compression of a physically scattered buffer |
CN111722930A (en) * | 2020-06-23 | 2020-09-29 | 恒为科技(上海)股份有限公司 | Data preprocessing system |
CN111722930B (en) * | 2020-06-23 | 2024-03-01 | 恒为科技(上海)股份有限公司 | Data preprocessing system |
WO2022016137A1 (en) * | 2020-07-17 | 2022-01-20 | Softiron Limited | Computing acceleration framework |
CN112347721A (en) * | 2020-10-29 | 2021-02-09 | 北京长焜科技有限公司 | System for FPGA-based data processing acceleration and acceleration method thereof |
CN112347721B (en) * | 2020-10-29 | 2023-05-26 | 北京长焜科技有限公司 | System for FPGA-based data processing acceleration and acceleration method thereof |
CN113270120A (en) * | 2021-07-16 | 2021-08-17 | 北京金山云网络技术有限公司 | Data compression method and device |
CN113672431A (en) * | 2021-07-29 | 2021-11-19 | 济南浪潮数据技术有限公司 | Optimization method and device for an erasure-code plug-in on acceleration chips in distributed storage |
CN113672431B (en) * | 2021-07-29 | 2023-12-22 | 济南浪潮数据技术有限公司 | Optimization method and device for an erasure-code plug-in on acceleration chips in distributed storage |
CN116954523A (en) * | 2023-09-20 | 2023-10-27 | 苏州元脑智能科技有限公司 | Storage system, data storage method, data reading method and storage medium |
CN116954523B (en) * | 2023-09-20 | 2024-01-26 | 苏州元脑智能科技有限公司 | Storage system, data storage method, data reading method and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109491599A (en) | A distributed storage system and its heterogeneous acceleration method | |
Li et al. | Coding for distributed fog computing | |
US10956276B2 (en) | System state recovery in a distributed, cloud-based storage system | |
Rashmi et al. | Having your cake and eating it too: Jointly optimal erasure codes for I/O, storage, and network-bandwidth |
US10003357B2 (en) | Systems and methods for verification of code resiliency for data storage | |
CN108681569B (en) | Automatic data analysis system and method thereof | |
CN110737398B (en) | Method, apparatus and computer program product for coordinating access operations | |
CN107544862A (en) | An erasure-code-based data storage reconstruction method and device, and storage node |
CN105933408B (en) | An implementation method and device for Redis universal middleware |
CN110233802B (en) | Method for constructing a blockchain structure with one main chain and multiple side chains |
US10268741B2 (en) | Multi-nodal compression techniques for an in-memory database | |
CN109769028A (en) | Redis cluster management method, apparatus, device and computer-readable storage medium |
CN104202423A (en) | System for extending caches by aid of software architectures | |
Aliasgari et al. | Coded computation against processing delays for virtualized cloud-based channel decoding | |
CN113541870A (en) | Recovery optimization method for single-node failures in erasure-coded storage |
US10152248B2 (en) | Erasure coding for elastic cloud storage | |
CN109982315A (en) | Log uploading method and related device |
Akash et al. | Rapid: A fast data update protocol in erasure coded storage systems for big data | |
Shi et al. | UMR-EC: A unified and multi-rail erasure coding library for high-performance distributed storage systems | |
JP6175785B2 (en) | Storage system, disk array device, and storage system control method | |
CN111670560A (en) | Electronic device, system and method | |
CN113934510A (en) | Mirror image processing method and device, electronic equipment and computer readable storage medium | |
Dong | Coop-u: a cooperative update scheme for erasure-coded storage systems | |
De Florio et al. | A system structure for adaptive mobile applications | |
Song et al. | Hv-snsp: A low-overhead data recovery method based on cross-checking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-03-19 |