CN109344109B - System and method for accelerating artificial intelligence calculation in big data based on solid state disk - Google Patents


Info

Publication number
CN109344109B
Authority
CN
China
Prior art keywords
artificial intelligence
solid state
state disk
hardware
gate array
Prior art date
Legal status
Active
Application number
CN201811236270.7A
Other languages
Chinese (zh)
Other versions
CN109344109A (en)
Inventor
洪振洲
李庭育
陈育鸣
魏智汎
Current Assignee
Jiangsu Huacun Electronic Technology Co Ltd
Original Assignee
Jiangsu Huacun Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Huacun Electronic Technology Co Ltd filed Critical Jiangsu Huacun Electronic Technology Co Ltd
Priority to CN201811236270.7A
Publication of CN109344109A
Application granted
Publication of CN109344109B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Power Sources (AREA)

Abstract

The invention provides a system for accelerating artificial intelligence computation on big data based on a solid state disk. The system comprises a solid state disk containing an integrated circuit chip, and its key feature is that the chip integrates a central microprocessor with a hardware algorithm module, the hardware algorithm module being implemented as an embedded programmable gate array module. By performing hardware-accelerated artificial intelligence computation inside the main control chip of the solid state disk through an embedded field programmable gate array, the system avoids the high power consumption of running artificial intelligence workloads on a graphics card, avoids the inflexibility of implementing the artificial intelligence algorithm in fixed-function hardware, and avoids the heavy bandwidth demand placed on the high-speed serial computer expansion bus when artificial intelligence computation is executed on the host computer.

Description

System and method for accelerating artificial intelligence calculation in big data based on solid state disk
Technical Field
The invention relates to the technical field of intelligent computing, and in particular to a system for accelerating artificial intelligence computation on big data based on a solid state disk, and to a method for accelerating artificial intelligence computation on big data within the solid state disk.
Background
Conventional artificial intelligence computation is commonly performed in one of two ways: (1) using a display adapter, or (2) using dedicated hardware acceleration. The main drawback of method 1 is excessive power consumption, because the display adapter is not designed specifically for artificial intelligence computation. The main drawback of method 2 is that implementing the algorithm in fixed hardware sacrifices flexibility, which limits the range of artificial intelligence applications. In addition, both methods share a further disadvantage: because large volumes of data must be accessed and processed, they occupy a very large share of the bandwidth of the high-speed serial computer expansion bus (PCIe), which greatly reduces overall system performance in cloud computing.
In the prior art, Chinese patent CN103413164B discloses a method for implementing encryption and decryption functions in a smart-card chip using an embedded programmable gate array. The disclosed device for implementing data encryption and decryption comprises a system bus, a channel, an embedded microcontroller, and a smart-card interface module, and further comprises a hardware encryption/decryption algorithm module composed of a decryption module and an encryption module; the embedded programmable gate array module serves as that hardware encryption algorithm module. That solution mainly addresses the low running speed of software implementations of data (encryption and decryption) algorithms, and does not mention how to accelerate the hardware algorithm itself.
A conventional embedded programmable gate array module is a gate array built from minimal units consisting of combinational logic cells and sequential logic cells. Because the solution in reference 1 relies mainly on combinational logic, it removes the largely idle sequential logic cells from the embedded programmable gate array module in order to reduce its area and free storage resources.
Chinese patent CN100369017C discloses an encryption device and encryption method for an SRAM-based programmable gate array chip. The encryption device includes a FLASH FPGA chip, a handshake circuit implemented in the FLASH FPGA and the SRAM FPGA, and a portion of low-speed logic in the FLASH FPGA chip, with the remaining logic used to implement the system function so as to further improve system security. Because that method encrypts a programmable logic gate chip based on static random access memory, it inevitably inherits the drawbacks of SRAM: when power is lost, the information stored in the SRAM is lost and must be reloaded after power-up, which adds time to the encryption process and makes the method unsuitable for high-speed data communication systems.
Disclosure of Invention
To overcome the prior-art problems of excessive power consumption and of limited artificial intelligence applications caused by the loss of flexibility when the algorithm is implemented in hardware, the invention provides a system and method for accelerating artificial intelligence computation on big data based on a solid state disk.
The technical solution adopted by the invention is as follows: a system for accelerating artificial intelligence computation on big data based on a solid state disk comprises a solid state disk in which an integrated circuit chip is arranged, and is characterized in that the integrated circuit chip integrates a central microprocessor with a hardware algorithm module, the hardware algorithm module being implemented as an embedded programmable gate array module.
In an embodiment of the invention, the system further includes a host computer, and the host computer generates the configuration file according to the resources of the embedded programmable gate array module.
In an embodiment of the invention, the integrated circuit chip further includes a data unit and a program unit, and the central microprocessor performs data read and write operations on the data unit and the program unit.
In an embodiment of the invention, the embedded programmable gate array module used by the hardware algorithm module executes the high-speed algorithm written into it, enabling a high-speed mode of complex data processing.
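The embodiment above states only that the host computer generates the configuration file according to the resources of the embedded programmable gate array module; no file format or API is disclosed. The following C sketch is therefore just an illustration of that idea: the efpga_resources and efpga_config_header structures, the field names, the magic value, and write_config_file() are all hypothetical placeholders, not part of the disclosed system.

```c
/*
 * Illustrative sketch only: every structure, field and constant below is a
 * hypothetical example of how a host could size a configuration file to the
 * eFPGA resources reported by the drive. The patent defines no such format.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical resource description of the eFPGA fabric inside the SSD chip. */
struct efpga_resources {
    uint32_t logic_cells;       /* programmable logic cells available        */
    uint32_t config_bytes_max;  /* largest bitstream the fabric accepts      */
};

/* Hypothetical header written in front of the raw bitstream payload. */
struct efpga_config_header {
    uint32_t magic;             /* marks the file as an eFPGA image          */
    uint32_t algorithm_id;      /* which AI algorithm the image implements   */
    uint32_t bitstream_bytes;   /* payload length following this header      */
};

/* Write a configuration file sized to the reported eFPGA resources. */
int write_config_file(const char *path, const struct efpga_resources *res,
                      const uint8_t *bitstream, uint32_t bitstream_bytes)
{
    if (bitstream_bytes > res->config_bytes_max)
        return -1;              /* image would not fit the eFPGA fabric      */

    struct efpga_config_header hdr = {
        .magic = 0x45465047u,   /* "EFPG" in ASCII, an arbitrary example     */
        .algorithm_id = 1u,     /* e.g. a feature-matching kernel            */
        .bitstream_bytes = bitstream_bytes,
    };

    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fwrite(&hdr, sizeof(hdr), 1, f);
    fwrite(bitstream, 1, bitstream_bytes, f);
    fclose(f);
    return 0;
}
```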
Another object of the invention is to provide a method of artificial intelligence computation using the system for accelerating artificial intelligence computation on big data based on a solid state disk. The method comprises the following steps (an illustrative host-side sketch follows the steps):
the artificial intelligence client software on the host computer transmits the feature data to be searched for to the solid state disk over the high-speed serial computer bus;
the integrated circuit chip inside the solid state disk performs the computation in place on the big data stored in the data unit and the program unit as it is accessed;
the matching feature data is then transmitted directly back to the client software on the host computer over the high-speed serial computer bus.
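The three steps above describe the host-side flow at the level of a protocol: feature data goes out over the high-speed serial bus, the drive searches in place, and only the matching records come back. The patent does not specify a command set, so the C sketch below stubs one out; ai_ssd_submit_search() and ai_ssd_read_matches() are hypothetical stand-ins for a vendor-specific PCIe/NVMe command interface, not a real driver API.

```c
/*
 * Host-side sketch only. The ai_ssd_* functions are hypothetical stubs that
 * stand in for a vendor-specific command interface over the high-speed
 * serial bus; a real driver would issue commands and DMA results instead.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MATCH_BUF_BYTES 4096u

/* Hypothetical stub: transfer the feature data to the drive. */
static int ai_ssd_submit_search(const uint8_t *feature, size_t len)
{
    (void)feature;                  /* a real driver would send these bytes */
    printf("submitting %zu bytes of feature data to the SSD\n", len);
    return 0;
}

/* Hypothetical stub: read back whatever records the drive matched. */
static long ai_ssd_read_matches(uint8_t *buf, size_t cap)
{
    memset(buf, 0, cap);            /* a real driver would DMA results here */
    return 0;                       /* number of matched bytes returned     */
}

int main(void)
{
    uint8_t feature[64] = { 0x42 }; /* example feature vector to search for */
    uint8_t matches[MATCH_BUF_BYTES];

    if (ai_ssd_submit_search(feature, sizeof(feature)) != 0)
        return 1;

    long n = ai_ssd_read_matches(matches, sizeof(matches));
    if (n < 0)
        return 1;

    /* Only the matched records cross the PCIe link, not the raw big data. */
    printf("drive returned %ld bytes of matching records\n", n);
    return 0;
}
```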
In an embodiment of the invention, the embedded programmable gate array module implements the hardware artificial intelligence computation, provides maximum flexibility for the artificial intelligence algorithm, and allows different algorithms and algorithm data to be written in according to different application requirements.
In an embodiment of the invention, the integrated circuit chip integrates a central microprocessor with a hardware algorithm module, and the hardware algorithm module is implemented as an embedded programmable gate array module.
Compared with the prior art, the invention has the following beneficial effects: hardware-accelerated artificial intelligence computation is implemented in the main control chip of the solid state disk (SSD) by means of an embedded field programmable gate array (eFPGA). This first solves the high power consumption caused by using a graphics card for artificial intelligence computation, second solves the low flexibility caused by implementing the artificial intelligence algorithm in pure hardware (a pure ASIC), and third solves the high bandwidth demand placed on the high-speed serial computer expansion bus (PCIe) when artificial intelligence computation is executed on the host computer.
Drawings
To describe the technical solutions in the embodiments of the invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an artificial intelligence algorithm as is common in the prior art;
FIG. 2 is a schematic diagram of an artificial intelligence algorithm of the present invention;
FIG. 3 is a schematic diagram of an integrated chip according to the present invention.
Detailed Description
The following describes the embodiments of the invention by way of specific examples; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. The invention may also be practiced or applied through other, different embodiments, and the details herein may be modified in various respects without departing from the spirit and scope of the invention. Where no conflict arises, the features of the following embodiments may be combined with one another to further describe the invention. The specific embodiments described here are merely illustrative of the invention and do not limit it.
Current artificial intelligence computation is commonly performed in one of two ways:
First, by means of a display adapter. The main disadvantage of this method is excessive power consumption, because the display adapter is not designed specifically for artificial intelligence computation.
Second, by means of dedicated hardware acceleration. The main disadvantage of this method is that implementing the algorithm in fixed hardware sacrifices flexibility, which limits the range of artificial intelligence applications.
In addition, both methods share a further disadvantage: because large volumes of data must be accessed and processed, they occupy a very large share of the bandwidth of the high-speed serial computer expansion bus (PCIe), greatly reducing overall system performance in cloud computing. Whether the computation is assisted by a display adapter or performed by specially designed hardware, large amounts of data must be read from the storage device holding the big data, consuming a large amount of PCIe bandwidth, while the solution remains inflexible and power-hungry (as shown in FIG. 1).
In view of this state of the art, the invention uses an embedded field programmable gate array (eFPGA) inside the main control chip of a solid state disk (SSD) to implement hardware-accelerated artificial intelligence computation. This first solves the high power consumption caused by using a graphics card for artificial intelligence computation, second solves the low flexibility caused by implementing the artificial intelligence algorithm in pure hardware (a pure ASIC), and third solves the high bandwidth demand placed on the high-speed serial computer expansion bus (PCIe) when artificial intelligence computation is executed on the host computer.
Specifically, the invention discloses a system for accelerating artificial intelligence computation on big data based on a solid state disk. The system comprises a solid state disk in which an integrated circuit chip is arranged, as shown in FIG. 3: the integrated circuit chip integrates a central microprocessor with a hardware algorithm module, and the hardware algorithm module is implemented as an embedded programmable gate array module. This chip architecture combines the high-speed operation of an application-specific integrated circuit (ASIC) with the high flexibility of an embedded programmable gate array (eFPGA). The ASIC part implements the high-speed solid state disk main controller, while the eFPGA part implements the hardware artificial intelligence computation, provides maximum flexibility for the artificial intelligence algorithm, and allows different algorithms and algorithm data to be written in according to different application requirements. Furthermore, the system also comprises a host computer, and the host computer generates a configuration file according to the resources of the embedded programmable gate array module.
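As a purely structural illustration of the architecture just described, the following C sketch models the integrated circuit chip as a fixed-function ASIC controller portion plus a reconfigurable eFPGA algorithm portion. The patent gives no register map or field definitions, so every type and field name here is a hypothetical placeholder used only to show the composition.

```c
/*
 * Structural sketch only: all fields are hypothetical placeholders; the
 * patent describes the composition (ASIC controller + eFPGA algorithm
 * module + central microprocessor) but no concrete layout.
 */
#include <stdbool.h>
#include <stdint.h>

/* Fixed-function (ASIC) portion: conventional SSD main-control logic. */
struct ssd_asic_ctrl {
    uint32_t pcie_lanes;          /* host link width                         */
    uint32_t nand_channels;       /* flash channels managed by the ASIC      */
};

/* Reconfigurable (eFPGA) portion: the hardware algorithm module. */
struct efpga_algo_module {
    uint32_t loaded_algorithm_id; /* which AI kernel is currently configured */
    bool     configured;          /* true once a bitstream has been loaded   */
};

/* The integrated circuit chip inside the solid state disk. */
struct ai_ssd_controller {
    struct ssd_asic_ctrl     asic;  /* high-speed, fixed controller path     */
    struct efpga_algo_module efpga; /* flexible, reprogrammable AI path      */
    /* The central microprocessor's firmware reads and writes the data unit
     * and the program unit and steers data between the two paths above.     */
};
```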
The integrated circuit chip may also include a data unit and a program unit, with the central microprocessor performing data read and write operations on both. The embedded programmable gate array module used by the hardware algorithm module executes the high-speed algorithm written into it, enabling a high-speed mode of complex data processing.
Another object of the invention is to provide a method of artificial intelligence computation using the system for accelerating artificial intelligence computation on big data based on a solid state disk. Specifically, as shown in FIG. 2, the method comprises the following steps (a controller-side sketch follows the steps):
the artificial intelligence client software on the host computer transmits the feature data to be searched for to the solid state disk over the high-speed serial computer bus;
the integrated circuit chip of the solid state disk performs the computation in place on the big data stored in the data unit and the program unit as it is accessed;
the matching feature data is then transmitted directly back to the client software on the host computer over the high-speed serial computer bus. The invention updates the artificial intelligence algorithm on demand, supports more artificial intelligence applications, saves a large amount of data transmission bandwidth and power consumption, and improves efficiency.
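The steps above place the computation next to the stored data: the controller scans the data unit in place and returns only the matching records over the bus. The C code below is a minimal sketch of that control flow, assuming a page-oriented data unit and a 64-byte feature record; the eFPGA match kernel is modelled by a plain software predicate and the flash read path by read_data_unit_page(), both hypothetical placeholders, since in the disclosed system the comparison runs inside the eFPGA fabric.

```c
/*
 * Controller-firmware sketch under stated assumptions: feature_matches()
 * stands in for the eFPGA hardware algorithm module and
 * read_data_unit_page() stands in for the NAND read path, so the control
 * flow compiles on its own.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_BYTES    4096u
#define FEATURE_BYTES   64u

/* Hypothetical placeholder for reading one page from the data unit.
 * Returns false when there are no more pages to scan.                      */
static bool read_data_unit_page(uint32_t page_idx, uint8_t page[PAGE_BYTES])
{
    memset(page, 0, PAGE_BYTES);   /* stand-in data; real firmware reads NAND */
    return page_idx < 8;           /* pretend the data unit holds 8 pages     */
}

/* Stand-in for the eFPGA match kernel: here a plain byte comparison. */
static bool feature_matches(const uint8_t *record, const uint8_t *feature)
{
    return memcmp(record, feature, FEATURE_BYTES) == 0;
}

/* Scan the data unit in place and copy only matching records to out[]. */
size_t scan_for_feature(const uint8_t feature[FEATURE_BYTES],
                        uint8_t *out, size_t out_cap)
{
    uint8_t page[PAGE_BYTES];
    size_t out_len = 0;

    for (uint32_t p = 0; read_data_unit_page(p, page); p++) {
        for (size_t off = 0; off + FEATURE_BYTES <= PAGE_BYTES;
             off += FEATURE_BYTES) {
            if (feature_matches(page + off, feature) &&
                out_len + FEATURE_BYTES <= out_cap) {
                memcpy(out + out_len, page + off, FEATURE_BYTES);
                out_len += FEATURE_BYTES;
            }
        }
    }
    /* Only out_len bytes, not the whole data set, return to the host. */
    return out_len;
}
```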
The embedded programmable gate array module implements the hardware artificial intelligence computation, provides maximum flexibility for the artificial intelligence algorithm, and allows different algorithms and algorithm data to be written in according to different application requirements. The integrated circuit chip integrates a central microprocessor with a hardware algorithm module, and the hardware algorithm module is implemented as an embedded programmable gate array module.
In summary, in the main control chip of the solid state disk (SSD), the invention uses an embedded field programmable gate array (eFPGA) to implement hardware-accelerated artificial intelligence computation, which first solves the high power consumption caused by using a graphics card for artificial intelligence computation, second solves the low flexibility caused by implementing the artificial intelligence algorithm in pure hardware (a pure ASIC), and third solves the high bandwidth demand placed on the high-speed serial computer expansion bus (PCIe) when artificial intelligence computation is executed on the host computer.
In the description of the invention, terms such as "coaxial", "bottom", "one end", "top", "middle", "other end", "upper", "one side", "inner", "front", "center", and "both ends" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description, do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore are not to be construed as limiting the invention.
In the invention, unless otherwise expressly specified or limited, terms such as "mounted", "disposed", "connected", "secured", and "screwed" are to be construed broadly: for example, as a fixed connection, a detachable connection, or an integral formation; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediary; or as internal communication between two elements or an interaction between two elements. The specific meanings of these terms in the invention will be understood by those skilled in the art according to the specific situation.
While the foregoing specification illustrates and describes preferred embodiments of the invention, it is to be understood that the invention is not limited to the precise forms disclosed herein, nor is it to be interpreted as excluding additional embodiments; embodiments modified within the spirit and scope of the invention as described herein are also intended to be encompassed by it. Modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (1)

1. A system for accelerating artificial intelligence computation on big data based on a solid state disk, comprising a solid state disk with an integrated circuit chip arranged in the main control chip of the solid state disk, characterized in that: the integrated circuit chip integrates a central microprocessor with a hardware algorithm module, and the hardware algorithm module is implemented as an embedded programmable gate array module;
the system further comprises a host computer, and the host computer generates a configuration file according to the resources of the embedded programmable gate array module;
the integrated circuit chip further comprises a data unit and a program unit, and the central microprocessor performs data read and write operations on the data unit and the program unit;
the embedded programmable gate array module used by the hardware algorithm module executes the high-speed algorithm written into it, enabling a high-speed mode of complex data processing;
the method for accelerating artificial intelligence computation with the system for accelerating artificial intelligence computation on big data based on a solid state disk comprises the following steps:
the artificial intelligence client software on the host computer transmits the feature data to be searched for to the solid state disk over the high-speed serial computer bus;
the integrated circuit chip of the solid state disk performs the computation in place on the big data stored in the data unit and the program unit as it is accessed;
the matching feature data is then transmitted directly back to the client software on the host computer over the high-speed serial computer bus;
the embedded programmable gate array module implements the hardware artificial intelligence computation, provides maximum flexibility for the artificial intelligence algorithm, and allows different algorithms and algorithm data to be written in according to different application requirements;
the integrated circuit chip integrates a central microprocessor with a hardware algorithm module, and the hardware algorithm module is implemented as an embedded programmable gate array module.
CN201811236270.7A 2018-10-23 2018-10-23 System and method for accelerating artificial intelligence calculation in big data based on solid state disk Active CN109344109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811236270.7A CN109344109B (en) 2018-10-23 2018-10-23 System and method for accelerating artificial intelligence calculation in big data based on solid state disk

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811236270.7A CN109344109B (en) 2018-10-23 2018-10-23 System and method for accelerating artificial intelligence calculation in big data based on solid state disk

Publications (2)

Publication Number Publication Date
CN109344109A CN109344109A (en) 2019-02-15
CN109344109B (en) 2022-07-26

Family

ID=65311207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811236270.7A Active CN109344109B (en) 2018-10-23 2018-10-23 System and method for accelerating artificial intelligence calculation in big data based on solid state disk

Country Status (1)

Country Link
CN (1) CN109344109B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069834A (en) * 2019-04-01 2019-07-30 京微齐力(北京)科技有限公司 A kind of system-in-a-package method of integrated fpga chip and artificial intelligence chip
CN109947694A (en) * 2019-04-04 2019-06-28 上海威固信息技术股份有限公司 A kind of Reconfigurable Computation storage fusion flash memory control system
CN110070187A (en) * 2019-04-18 2019-07-30 山东超越数控电子股份有限公司 A kind of design method of the portable computer towards artificial intelligence application
CN112580285A (en) * 2020-12-14 2021-03-30 深圳宏芯宇电子股份有限公司 Embedded server subsystem and configuration method thereof


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101634936A (en) * 2008-07-23 2010-01-27 深圳市中深瑞泰科技有限公司 Method for achieving interfaces between ARM architecture processor and hard disk through FPGA
CN101339492A (en) * 2008-08-11 2009-01-07 湖南源科创新科技股份有限公司 Native SATA solid-state hard disk controller
CN101373493A (en) * 2008-09-22 2009-02-25 浪潮电子信息产业股份有限公司 SOC chip logical verification method special for multimedia storage gateway
CN102117183A (en) * 2010-01-04 2011-07-06 翔晖科技股份有限公司 Computer device and method for using solid state disk in computer device
CN105843753A (en) * 2015-02-02 2016-08-10 Hgst荷兰公司 Logical block address mapping for hard disk drives
CN104834484A (en) * 2015-05-11 2015-08-12 上海新储集成电路有限公司 Data processing system and processing method based on embedded type programmable logic array
CN105589938A (en) * 2015-12-13 2016-05-18 公安部第三研究所 Image retrieval system and retrieval method based on FPGA
CN106228238A (en) * 2016-07-27 2016-12-14 中国科学技术大学苏州研究院 The method and system of degree of depth learning algorithm is accelerated on field programmable gate array platform
CN106598889A (en) * 2016-08-18 2017-04-26 湖南省瞬渺通信技术有限公司 SATA (Serial Advanced Technology Attachment) master controller based on FPGA (Field Programmable Gate Array) sandwich plate
CN107038134A (en) * 2016-11-11 2017-08-11 济南浪潮高新科技投资发展有限公司 A kind of SRIO interface solid hard disks system and its implementation based on FPGA
CN108090560A (en) * 2018-01-05 2018-05-29 中国科学技术大学苏州研究院 The design method of LSTM recurrent neural network hardware accelerators based on FPGA
CN207965873U (en) * 2018-03-20 2018-10-12 深圳市腾讯计算机系统有限公司 Artificial intelligence accelerator card and server

Also Published As

Publication number Publication date
CN109344109A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109344109B (en) System and method for accelerating artificial intelligence calculation in big data based on solid state disk
CN108351813B (en) Method and apparatus for enabling individual non-volatile memory express (NVMe) input/output (IO) queues on different network addresses of NVMe controller
US11709623B2 (en) NAND-based storage device with partitioned nonvolatile write buffer
US10739836B2 (en) System, apparatus and method for handshaking protocol for low power state transitions
CN110309088B (en) ZYNQ FPGA chip, data processing method thereof and storage medium
CN104536701A (en) Realizing method and system for NVME protocol multi-command queues
CN110941395B (en) Dynamic random access memory, memory management method, system and storage medium
MX2012005934A (en) Multi-interface solid state disk (ssd), processing method and system thereof.
CN109992203A (en) It is able to carry out the high-capacity storage of fine granularity reading and/or write operation
US20190155541A1 (en) Command processing method and storage controller using the same
CN111221759A (en) Data processing system and method based on DMA
CN112017700A (en) Dynamic power management network for memory devices
CN104239232A (en) Ping-Pong cache operation structure based on DPRAM (Dual Port Random Access Memory) in FPGA (Field Programmable Gate Array)
US20200264789A1 (en) Data storage device, system, and data writing method
CN103514140B (en) For realizing the reconfigurable controller of configuration information multi-emitting in reconfigurable system
CN106133838B (en) A kind of expansible configurable FPGA storage organization and FPGA device
CN110096456A (en) A kind of High rate and large capacity caching method and device
CN116010331A (en) Access to multiple timing domains
CN105630400A (en) High-speed massive data storage system
CN111177027B (en) Dynamic random access memory, memory management method, system and storage medium
CN202067260U (en) Buffer memory system for reducing data transmission
CN101813971B (en) Processor and internal memory thereof
US20210109674A1 (en) Memory command queue management
CN102110065B (en) Cache system for reducing data transmission
CN109542811B (en) Data communication processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190215

Assignee: Zhongguancun Technology Leasing Co.,Ltd.

Assignor: JIANGSU HUACUN ELECTRONIC TECHNOLOGY Co.,Ltd.

Contract record no.: X2022980017352

Denomination of invention: System and method for accelerating AI calculation in big data based on SSD

Granted publication date: 20220726

License type: Exclusive License

Record date: 20220930

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: System and method for accelerating AI calculation in big data based on SSD

Effective date of registration: 20221008

Granted publication date: 20220726

Pledgee: Zhongguancun Technology Leasing Co.,Ltd.

Pledgor: JIANGSU HUACUN ELECTRONIC TECHNOLOGY Co.,Ltd.

Registration number: Y2022980017514