CN109783032A - Distributed storage acceleration method and device based on heterogeneous computing - Google Patents

Distributed storage acceleration method and device based on heterogeneous computing

Info

Publication number
CN109783032A
Authority
CN
China
Prior art keywords
distributed storage
host
accelerator module
fpga accelerator
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910069303.1A
Other languages
Chinese (zh)
Inventor
赵瑞东
徐永强
刘毅枫
王则陆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Chaoyue CNC Electronics Co Ltd
Original Assignee
Shandong Chaoyue CNC Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Chaoyue CNC Electronics Co Ltd
Priority to CN201910069303.1A
Publication of CN109783032A
Pending legal-status Critical Current

Abstract

The invention discloses a distributed storage acceleration method based on heterogeneous computing, comprising: S1, configuring the running environment of the distributed storage software and an FPGA accelerator module; S2, the host receives the storage tasks sent by the client, sorts the storage tasks by priority according to computation amount through the distributed storage software, and sends the tasks whose computation amount exceeds a predetermined amount to the FPGA accelerator module for computation; and S3, after the FPGA accelerator module finishes computing a task, it returns the result to the host, so that the host reads and writes data according to the returned result. The invention also discloses a distributed storage acceleration device based on heterogeneous computing. The proposed distributed storage acceleration method and device based on heterogeneous computing can effectively improve storage speed.

Description

Distributed storage acceleration method and device based on heterogeneous computing
Technical field
The present invention relates to the field of storage, and more specifically to a distributed storage acceleration method and device based on heterogeneous computing.
Background art
Demand for data storage has grown explosively in recent years. Studies have shown that data grows at a rate of 40% to 60% per year, and the data volume of many companies doubles every year. IDC analysts estimated that the digital data held worldwide in 2000 amounted to 54.4 exabytes; by 2007 it had reached 295 exabytes; and by 2020 it is expected to reach 44 zettabytes. Traditional storage systems cannot cope with this growth rate, so scalable distributed storage systems such as Ceph are needed, and, most importantly, they are more economical.
Distributed storage systems have the advantages of low equipment prices, low maintenance costs, and low environmental requirements for the distributed deployment of low-capacity devices. But their shortcomings are also obvious. For example, backup is difficult: if users keep data in their respective systems rather than in a central system, it is hard to formulate an effective backup plan, and this situation may also lead to users working with different versions of the same file. Running the programs requires PCs with better performance and appropriate software; file data on different computers must be replicated; certain PCs must have sufficient storage capacity, which creates unnecessary carrying costs; management and maintenance are more complicated; and the equipment has to be compatible.
Summary of the invention
In view of this, the purpose of the embodiments of the present invention is to propose a distributed storage acceleration method and device based on heterogeneous computing that can reduce latency and increase the storage rate.
Based on the above purpose, one aspect of the embodiments of the present invention provides a distributed storage acceleration method based on heterogeneous computing, comprising: S1, configuring the running environment of the distributed storage software and the FPGA accelerator module; S2, the host receives the storage tasks sent by the client, sorts the storage tasks by priority according to computation amount through the distributed storage software, and sends the tasks whose computation amount exceeds a predetermined amount to the FPGA accelerator module for computation; and S3, after the FPGA accelerator module finishes computing a task, it returns the result to the host, so that the host reads and writes data according to the returned result.
In some embodiments, step S1 includes: S11, installing the distributed storage software and the driver of the FPGA accelerator module; S12, configuring the storage network and creating the distributed storage software cluster; S13, initializing the hard disks based on the distributed storage software cluster; and S14, adding and activating the FPGA accelerator module and testing the FPGA accelerator module.
In some embodiments, step S13 includes: initializing 15% of the storage space of the hard disk as a disk cache, and initializing the remaining storage space as a data disk.
In some embodiments, step S1 includes: setting up a first cache layer at the client.
In some embodiments, step S1 further includes: setting up a second cache layer between the client and the hard disk of the host.
Another aspect of the embodiments of the present invention also provides a distributed storage acceleration device based on heterogeneous computing, comprising: a host, for receiving the storage tasks sent by the client, where the host includes distributed storage software configured to sort the storage tasks by priority; and an FPGA accelerator module, which communicates with the host over a PCIe bus, wherein the distributed storage software is further configured to send high-computation tasks to the FPGA accelerator module for computation, and the FPGA accelerator module is further configured to compute the tasks and return the results to the host, so that the host reads and writes data according to the returned results.
In some embodiments, the host includes a CPU, which is connected to the client and the FPGA accelerator module respectively and is configured to receive storage tasks, split them, and dispatch them.
In some embodiments, the host includes a hard disk, and 15% of the hard disk storage space is used for caching.
In some embodiments, the client further includes a first cache layer configured to reduce latency.
In some embodiments, the device further includes a second cache layer, which is arranged between the client and the hard disk of the host and is configured to improve data stability.
The present invention has the following beneficial effects: it can reduce latency and increase the storage rate, and through multi-level caching consisting of local cache reads, distributed cache reads and writes, and back-end cache reads and writes, it can satisfy the demands of different businesses for high performance and stability. With multi-level caching on the IO (input/output) path, the IO of the whole distributed cluster is steadier, and IO fluctuations are reduced at the IO-processing level.
Brief description of the drawings
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; those of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic flow diagram of an embodiment of the distributed storage acceleration method based on heterogeneous computing provided by the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are further described below in conjunction with specific embodiments and with reference to the drawings.
It should be noted that, in the embodiments of the present invention, all expressions using "first" and "second" are intended to distinguish two entities with the same name that are not equal, or two parameters that are not equal. "First" and "second" are used only for convenience of expression and should not be understood as limiting the embodiments of the present invention; subsequent embodiments will not explain this point one by one.
A distributed storage system consists of tens or hundreds of servers and uses a replica mode. A single IO therefore travels over the network and is processed on multiple replica servers, and each replica runs a data consistency check algorithm; all of these operations increase IO latency. Reducing the latency of distributed storage is in fact a complicated system-level matter, in which a small change in one place affects the whole system.
Based on the above purpose, the first aspect of the embodiments of the present invention proposes an embodiment of a distributed storage acceleration method based on heterogeneous computing. Fig. 1 shows a schematic flow diagram of an embodiment of the distributed storage acceleration method based on heterogeneous computing provided by the present invention. As shown in Fig. 1, the embodiment of the present invention includes the following steps:
S1, configuring the running environment of the distributed storage software and the FPGA accelerator module;
S2, the host receives the storage tasks sent by the client, sorts the storage tasks by priority according to computation amount through the distributed storage software, and sends the tasks whose computation amount exceeds a predetermined amount to the FPGA accelerator module for computation; and
S3, after the FPGA accelerator module finishes computing a task, it returns the result to the host, so that the host reads and writes data according to the returned result.
Step S1 may specifically include:
S11. Under a Linux system, install and deploy the distributed storage software and the driver of the FPGA accelerator module; S12. configure the specific storage network and then create the distributed storage software cluster; S13. initialize the hard disks based on the management software of the distributed storage software cluster and create the storage pool and the cache pool; S14. based on the management software of the distributed storage software cluster, add and activate the FPGA accelerator module and run an FPGA acceleration test.
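A rough orchestration of steps S11-S14 might look like the following sketch. Every class and function here is a hypothetical stand-in defined in the snippet itself, not the patent's management software or any vendor's API.

```python
# Hypothetical orchestration sketch of S11-S14; all classes and methods below
# are illustrative stand-ins, not a real cluster-management API.
class Node:
    def __init__(self, name):
        self.name = name
        self.installed = []

    def install(self, package):
        self.installed.append(package)

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes
        self.pools = {}
        self.accelerators = []

    def create_pool(self, name, kind):
        self.pools[name] = kind

    def add_accelerator(self, name):
        self.accelerators.append(name)
        return True          # pretend activation and the acceleration test passed

def deploy(node_names):
    nodes = [Node(n) for n in node_names]
    for node in nodes:                                    # S11: software + FPGA driver
        node.install("distributed-storage-software")
        node.install("fpga-accelerator-driver")
    cluster = Cluster(nodes)                              # S12: storage network + cluster
    cluster.create_pool("cache-pool", kind="cache")       # S13: initialize disks,
    cluster.create_pool("storage-pool", kind="data")      #      cache pool + storage pool
    assert cluster.add_accelerator("fpga0")               # S14: add, activate, test
    return cluster

cluster = deploy(["node1", "node2", "node3"])
print(cluster.pools)   # {'cache-pool': 'cache', 'storage-pool': 'data'}
```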
In this embodiment, when the hard disk is initialized in S13, 15% of the hard disk storage space can automatically be initialized as disk cache, and the remaining 85% of the space becomes the data disk. The choice of 15% is made in view of the capacity redundancy of enterprise users: enterprises generally reserve 20% free space when planning data storage capacity, to ensure that no more than 80% is used for data. Of course, this is not a limitation on the size of the disk cache; in other embodiments, 20% of the storage space can also be initialized as disk cache.
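The split itself is simple arithmetic; a tiny illustration (not part of the patent text) using the default 15% ratio and the 20% alternative mentioned above:

```python
def partition_disk(capacity_gb: float, cache_ratio: float = 0.15):
    """Return (cache_gb, data_gb) for a disk split by the given cache ratio."""
    cache_gb = capacity_gb * cache_ratio
    return cache_gb, capacity_gb - cache_gb

print(partition_disk(4000))                    # default 15%: (600.0, 3400.0)
print(partition_disk(4000, cache_ratio=0.20))  # alternative 20%: (800.0, 3200.0)
```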
Computing units can be divided into general-purpose computing units (CPUs) and dedicated computing units (GPU, DSP, etc.). Briefly, a system built from one or more general-purpose computing units together with one or more dedicated computing units is a heterogeneous computing system, and having the two cooperate to execute general computing tasks is heterogeneous computing. Currently, the most common combination in computers is CPU+GPU. This embodiment uses a CPU+FPGA heterogeneous computing combination.
After the running environment is configured, when the client sends storage tasks, the host receives them and can sort them by priority according to the size of their computation amount, handing the tasks whose computation amount exceeds a predetermined value to the FPGA accelerator module for computation. This can significantly reduce the computing load of the CPU and release a large amount of CPU resources. Moreover, because the time-consuming data read/write position calculations are handed to the FPGA accelerator module for processing, latency can be significantly reduced. The predetermined value of the computation amount can be set according to the actual situation.
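The dispatching described here could look roughly like the following minimal sketch, under assumed task and accelerator interfaces (it is not the patent's implementation): tasks are sorted by computation amount and those above the threshold are offloaded to the FPGA, while the rest stay on the CPU.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StorageTask:
    task_id: int
    compute_cost: float  # estimated computation amount

def dispatch(tasks: List[StorageTask], threshold: float):
    """Sort tasks by computation amount (largest first) and split them
    between the FPGA accelerator and the host CPU by a threshold."""
    ordered = sorted(tasks, key=lambda t: t.compute_cost, reverse=True)
    to_fpga = [t for t in ordered if t.compute_cost > threshold]
    to_cpu = [t for t in ordered if t.compute_cost <= threshold]
    return to_fpga, to_cpu

tasks = [StorageTask(1, 5.0), StorageTask(2, 42.0), StorageTask(3, 17.5)]
fpga_tasks, cpu_tasks = dispatch(tasks, threshold=10.0)
print([t.task_id for t in fpga_tasks])  # [2, 3] -> sent to the FPGA accelerator
print([t.task_id for t in cpu_tasks])   # [1]    -> handled by the host CPU
```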
The FPGA accelerator module in this embodiment can be an FPGA accelerator card. The SCSI (Small Computer System Interface) protocol stack has three layers: the top layer holds the upper-layer protocol drivers, namely the disk and tape drivers; the second layer is the middle layer, which converts SCSI commands into a standardized form so that different hardware can be supported; the bottom layer is the HBA (host bus adapter) layer, which connects directly to the hardware. Normally a data transfer through the SCSI protocol stack has to start from the hardware at the bottom and pass through every layer before returning to the hardware, so the latency is high. In this embodiment, the dedicated FPGA accelerator card driver layer bypasses the SCSI protocol stack and interacts directly with the generic block layer, so as to reduce IO latency, and it also uses the multi-queue mechanism of the generic block layer to further improve performance.
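The latency argument can be illustrated with a toy model. The per-layer costs below are pure assumptions for illustration, not measurements from the patent or any kernel; the point is only that the bypass path traverses far fewer layers than the full SCSI stack.

```python
# Assumed, illustrative per-layer costs in microseconds.
SCSI_LAYERS = {"upper_protocol_driver": 8, "scsi_mid_layer": 12, "hba_layer": 6}
BLOCK_LAYER = 5
ACCEL_DRIVER = 4

def scsi_path_latency():
    # the request descends and the completion ascends through every SCSI layer
    return 2 * sum(SCSI_LAYERS.values()) + BLOCK_LAYER

def bypass_path_latency():
    # the accelerator card driver talks to the generic block layer directly
    return 2 * ACCEL_DRIVER + BLOCK_LAYER

print(scsi_path_latency(), bypass_path_latency())  # 57 vs 13 in this toy model
```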
In general, latency can be reduced by adding a data cache layer, that is, by configuring one SSD (solid-state disk) for several HDDs (hard disk drives) in a storage node and then using the open-source BCache caching scheme; this is a general, economical, and practical solution. But the performance improvement of this scheme is limited, mainly because the I/O path is still too long and the logic of the distributed storage core layer is overly complex.
Step S1 includes: setting up a first cache layer at the client. The first cache layer mentioned here does not mean a specific hard disk dedicated to caching, but a piece of storage space that acts as a cache. The first cache layer can decouple part of the functions of the storage core layer and thereby reduce latency.
Step S1 further includes: setting up a second cache layer between the client and the hard disk of the host. After the client sends tasks over the network, data can be written into this cache layer, realizing distributed cache reads and writes while guaranteeing data reliability.
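A multi-tier read path of this kind might be sketched as follows. This is an illustrative in-memory model with assumed class names, not the patent's code: a read first checks the client-side first cache layer, then the distributed second cache layer, and only falls back to the backend disk on a miss, populating both caches on the way back.

```python
class CacheTier:
    def __init__(self, name: str):
        self.name = name
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        self.store[key] = value

class Backend:
    """Stands in for the host's hard disk."""
    def __init__(self, data):
        self.data = data

    def read(self, key):
        return self.data[key]

def tiered_read(key, first_cache: CacheTier, second_cache: CacheTier, disk: Backend):
    for tier in (first_cache, second_cache):
        value = tier.get(key)
        if value is not None:
            return value, tier.name
    value = disk.read(key)            # miss in both tiers: go to the disk
    second_cache.put(key, value)      # populate the distributed cache
    first_cache.put(key, value)       # and the client-side cache
    return value, "disk"

disk = Backend({"obj-1": b"payload"})
l1, l2 = CacheTier("client-cache"), CacheTier("distributed-cache")
print(tiered_read("obj-1", l1, l2, disk))  # served from disk, caches filled
print(tiered_read("obj-1", l1, l2, disk))  # served from client-cache
```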
It should be particularly noted that the steps in each of the above embodiments of the distributed storage acceleration method based on heterogeneous computing can be interleaved, replaced, added, or deleted. Distributed storage acceleration methods based on heterogeneous computing obtained through such reasonable permutations, combinations, and transformations should therefore also fall within the protection scope of the present invention, and the protection scope of the present invention should not be confined to the embodiments.
Based on the above purpose, the second aspect of the embodiments of the present invention proposes an embodiment of a distributed storage acceleration device based on heterogeneous computing. The distributed storage acceleration device based on heterogeneous computing of the present invention comprises:
a host, for receiving the storage tasks sent by the client, where the host includes distributed storage software configured to sort the storage tasks by priority; and
an FPGA accelerator module, which communicates with the host over a PCIe bus,
wherein the distributed storage software is further configured to send high-computation tasks to the FPGA accelerator module for computation, and the FPGA accelerator module is further configured to compute the tasks and return the results to the host, so that the host reads and writes data according to the returned results.
The host may also be responsible for storage-related management and for scheduling the FPGA accelerator module. The FPGA, thanks to its powerful parallel computing capability, is responsible for processing the compute-intensive tasks issued by the host. When users read and write data intensively, the host splits the read/write tasks and issues them to the FPGA card for position calculation. After the FPGA card has computed a task, it returns the result to the host, and the host then quickly performs the data read/write operations according to the returned index data.
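The split-offload-return loop described in this paragraph might be modeled as below. The FPGA card is replaced by an assumed software stand-in (the real card would compute the placement in hardware, and the actual placement algorithm is not specified in the patent): the host splits a request into chunks, asks the accelerator for each chunk's placement index, and then performs the actual reads or writes itself.

```python
import zlib

class FpgaCardStub:
    """Software stand-in for the FPGA accelerator card: maps a chunk key
    to a placement index (here simply a CRC32 hash modulo the disk count)."""
    def __init__(self, num_disks: int):
        self.num_disks = num_disks

    def compute_index(self, chunk_key: str) -> int:
        return zlib.crc32(chunk_key.encode()) % self.num_disks

class Host:
    def __init__(self, card: FpgaCardStub, chunk_size: int = 4):
        self.card = card
        self.chunk_size = chunk_size
        self.disks = [dict() for _ in range(card.num_disks)]

    def write(self, obj: str, data: bytes):
        # split the task into chunks, offload index computation, then write
        for i in range(0, len(data), self.chunk_size):
            key = f"{obj}:{i // self.chunk_size}"
            idx = self.card.compute_index(key)       # offloaded placement
            self.disks[idx][key] = data[i:i + self.chunk_size]

    def read(self, obj: str, length: int) -> bytes:
        chunks = []
        for n in range((length + self.chunk_size - 1) // self.chunk_size):
            key = f"{obj}:{n}"
            idx = self.card.compute_index(key)       # offloaded lookup
            chunks.append(self.disks[idx][key])
        return b"".join(chunks)

host = Host(FpgaCardStub(num_disks=3))
host.write("obj-1", b"heterogeneous")
print(host.read("obj-1", len(b"heterogeneous")))     # b'heterogeneous'
```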
The host includes a CPU, which is connected to the client and to the FPGA accelerator module respectively and is configured to receive storage tasks, split them, and dispatch them. The disk drives are controlled according to the results processed by the FPGA to carry out the data reads and writes, while the time-consuming data read/write position calculations are handed over to the FPGA card for processing and returned to the CPU when finished.
The host also includes a hard disk, and 15% of the hard disk storage space can be used for caching.
The client further includes a first cache layer, which is configured to reduce latency.
A second cache layer is additionally provided between the client and the hard disk of the host, and is configured to improve data stability.
Finally, it should be noted that those of ordinary skill in the art can understand that all or part of the processes in the above embodiment methods can be implemented by instructing the relevant hardware with a computer program. The program of the distributed storage acceleration method based on heterogeneous computing can be stored in a computer-readable storage medium, and when the program is executed it may include the processes of the embodiments of the above methods. The storage medium of the program may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the above computer program can achieve effects that are the same as or similar to those of any of the corresponding aforementioned method embodiments.
In addition, the methods disclosed according to the embodiments of the present invention can also be implemented as a computer program executed by a processor, and the computer program may be stored in a computer-readable storage medium. When the computer program is executed by the processor, it performs the above functions defined in the methods disclosed in the embodiments of the present invention.
In addition, the above method steps and system units can also be realized by using a controller and a computer-readable storage medium that stores a computer program causing the controller to realize the above steps or unit functions.
In addition, it should be appreciated that the computer-readable storage medium (for example, a memory) herein can be a volatile memory or a nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can serve as an external cache. By way of example and not limitation, RAM is available in many forms, such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to include, but not be limited to, these and other suitable types of memory.
Those skilled in the art will also understand that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the functions of various illustrative components, blocks, modules, circuits, and steps have been described above in general terms. Whether such functions are implemented as software or as hardware depends upon the particular application and the design constraints imposed on the overall system. Those skilled in the art may implement the described functions in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure of the embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions described herein: a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The above are exemplary embodiments of the present disclosure, but it should be noted that various changes and modifications may be made without departing from the scope of the disclosure of the embodiments of the present invention as defined by the claims. The functions, steps, and/or actions of the method claims according to the disclosed embodiments described herein need not be performed in any particular order. In addition, although elements disclosed in the embodiments of the present invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular form "a" is intended to include the plural form as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein refers to any and all possible combinations of one or more of the associated listed items.
The serial numbers of the embodiments disclosed above are for description only and do not indicate the relative merits of the embodiments.
Those of ordinary skill in the art can understand that all or part of the steps for realizing the above embodiments can be completed by hardware, or by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is only exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the present invention (including the claims) is limited to these examples. Within the spirit of the embodiments of the present invention, the technical features in the above embodiments or in different embodiments can also be combined, and there are many other variations of the different aspects of the embodiments of the present invention as described above, which are not described in detail for the sake of brevity. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments of the present invention should be included in the protection scope of the embodiments of the present invention.

Claims (10)

1. A distributed storage acceleration method based on heterogeneous computing, characterized by comprising:
S1, configuring the running environment of the distributed storage software and the FPGA accelerator module;
S2, the host receives the storage tasks sent by the client, sorts the storage tasks by priority according to computation amount through the distributed storage software, and sends the tasks whose computation amount exceeds a predetermined amount to the FPGA accelerator module for computation; and
S3, after the FPGA accelerator module finishes computing a task, it returns the result to the host, so that the host reads and writes data according to the returned result.
2. The distributed storage acceleration method according to claim 1, characterized in that step S1 comprises:
S11, installing the distributed storage software and the driver of the FPGA accelerator module;
S12, configuring the storage network and creating the distributed storage software cluster;
S13, initializing the hard disks based on the distributed storage software cluster; and
S14, adding and activating the FPGA accelerator module, and testing the FPGA accelerator module.
3. The distributed storage acceleration method according to claim 2, characterized in that step S13 comprises: initializing 15% of the storage space of the hard disk as a disk cache, and initializing the remaining storage space as a data disk.
4. The distributed storage acceleration method according to claim 1, characterized in that step S1 comprises: setting up a first cache layer at the client.
5. The distributed storage acceleration method according to claim 1, characterized in that step S1 further comprises: setting up a second cache layer between the client and the hard disk of the host.
6. A distributed storage acceleration device based on heterogeneous computing, characterized by comprising:
a host, for receiving the storage tasks sent by the client, where the host includes distributed storage software configured to sort the storage tasks by priority; and
an FPGA accelerator module, which communicates with the host over a PCIe bus,
wherein the distributed storage software is further configured to send high-computation tasks to the FPGA accelerator module for computation, and the FPGA accelerator module is further configured to compute the tasks and return the results to the host, so that the host reads and writes data according to the returned results.
7. The distributed storage acceleration device according to claim 6, characterized in that the host includes a CPU, which is connected to the client and the FPGA accelerator module respectively and is configured to receive storage tasks, split them, and dispatch them.
8. The distributed storage acceleration device according to claim 6, characterized in that the host includes a hard disk, and 15% of the hard disk storage space is used for caching.
9. The distributed storage acceleration device according to claim 6, characterized in that the client further includes a first cache layer configured to reduce latency.
10. The distributed storage acceleration device according to claim 6, characterized in that it further includes a second cache layer, which is arranged between the client and the hard disk of the host and is configured to improve data stability.
CN201910069303.1A 2019-01-24 2019-01-24 Distributed storage acceleration method and device based on heterogeneous computing Pending CN109783032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910069303.1A CN109783032A (en) 2019-01-24 2019-01-24 Distributed storage acceleration method and device based on heterogeneous computing


Publications (1)

Publication Number Publication Date
CN109783032A true CN109783032A (en) 2019-05-21

Family

ID=66502241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910069303.1A Pending CN109783032A (en) 2019-01-24 2019-01-24 A kind of distributed storage accelerating method and device based on Heterogeneous Computing

Country Status (1)

Country Link
CN (1) CN109783032A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021047120A1 (en) * 2019-09-12 2021-03-18 苏州浪潮智能科技有限公司 Resource allocation method in fpga heterogeneous accelerator card cluster, device, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657330A (en) * 2015-03-05 2015-05-27 浪潮电子信息产业股份有限公司 High-performance heterogeneous computing platform based on x86 architecture processor and FPGA (Field Programmable Gate Array)
CN105426127A (en) * 2015-11-13 2016-03-23 浪潮(北京)电子信息产业有限公司 File storage method and apparatus for distributed cluster system
CN106354805A (en) * 2016-08-28 2017-01-25 航天恒星科技有限公司 Optimization method and system for searching and caching distribution storage system NoSQL
CN107046563A (en) * 2017-01-19 2017-08-15 无锡华云数据技术服务有限公司 A kind of implementation method, system and the cloud platform of distribution type high efficient cloud disk
CN107273331A (en) * 2017-06-30 2017-10-20 山东超越数控电子有限公司 A kind of heterogeneous computing system and method based on CPU+GPU+FPGA frameworks
CN108776649A (en) * 2018-06-11 2018-11-09 山东超越数控电子股份有限公司 One kind being based on CPU+FPGA heterogeneous computing systems and its accelerated method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190521)