WO2019120274A1 - Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal - Google Patents

Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal Download PDF

Info

Publication number
WO2019120274A1
WO2019120274A1 PCT/CN2018/122551
Authority
WO
WIPO (PCT)
Prior art keywords
data
buffer
buffers
read
latest
Prior art date
Application number
PCT/CN2018/122551
Other languages
English (en)
French (fr)
Inventor
皮紫威
刘兴伟
向少卿
李一帆
Original Assignee
上海禾赛光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海禾赛光电科技有限公司 filed Critical 上海禾赛光电科技有限公司
Publication of WO2019120274A1 publication Critical patent/WO2019120274A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Definitions

  • the present invention relates to the field of data processing technologies, and in particular, to a data loop buffering method and apparatus for a SOC FPGA, a storage medium, and a terminal.
  • because it integrates a processor with the Field Programmable Gate Array (FPGA) architecture, the System on Chip FPGA (SOCFPGA) device offers higher integration, lower power consumption, a smaller board area, and higher-bandwidth communication between the processor and the FPGA than traditional devices.
  • FPGA: Field Programmable Gate Array
  • ZYNQ: scalable processing platform
  • ARM: Advanced RISC Machines
  • in data acquisition on this platform, the FPGA is used for high-speed acquisition and high-speed processing of data, and its data processing speed can reach the nanosecond (ns) level;
  • the ARM is mainly responsible for data display and data analysis, and its data processing speed is generally at the millisecond (ms) level.
  • the communication speed between the FPGA and the ARM therefore largely determines how efficiently the SOCFPGA system processes data, and existing solutions cannot provide reasonable processing logic or effectively solve the problem that the data processing efficiency of systems based on low-configuration programmable devices (e.g., low-configuration ZYNQ) suffers from the mismatch between the processing speeds of the FPGA and the ARM.
  • the technical problem solved by the present invention is how to improve the data processing efficiency of the system.
  • an embodiment of the present invention provides a data loop buffering method for a SOC FPGA, including: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  • the data writing sequence is the same as the data reading order.
  • after the nth buffer has been written with data, writing continues from the first buffer.
  • after the data stored in the nth buffer has been read, reading continues from the first buffer.
  • determining the valid buffer according to the latest write location and the last read location includes: when the latest write location is equal to the last read location, the number of valid buffers is zero.
  • determining the valid buffers according to the latest write position and the last read position includes: taking the data write order as the positive direction, when the latest write position is greater than the last read position, the valid buffers are the buffers between the last read position and the latest write position.
  • determining the valid buffers according to the latest write position and the last read position includes: taking the data write order as the positive direction, when the latest write position is less than the last read position, the valid buffers are the buffers from the last read position to the nth buffer and from the first buffer to the latest write position.
  • the n buffers are arranged contiguously in the same buffer region; or the n buffers are distributed across multiple buffer regions and linked sequentially through an organizational structure.
  • An embodiment of the present invention further provides a data loop buffering apparatus for a SOC FPGA, including: a write module, configured to write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; a determining module, configured to determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and a read module, configured to, when reading data, read the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  • the embodiment of the invention further provides a storage medium on which computer instructions are stored, and the computer instructions execute the steps of the above method when running.
  • the embodiment of the invention further provides a terminal, comprising a memory and a processor, wherein the memory stores computer instructions executable on the processor, and the processor executes the steps of the method when the computer instruction is executed.
  • the technical solution of the embodiment of the present invention provides a data loop buffering method for a SOC FPGA, including: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  • the solution of the embodiment of the present invention sets up n buffers according to the buffer depth and writes data into them cyclically; by recording the last read position and the latest write position, the data reading end (such as the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read is effectively avoided.
  • the circular buffering method of the embodiment of the present invention enables an SOCFPGA system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (i.e., low-configuration SOCFPGA, such as low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly reducing system overhead.
  • the data write order is the same as the data read order, to guarantee data integrity and ensure that the earliest written data can be read as soon as possible.
  • FIG. 1 is a flowchart of a data buffering method of a first embodiment of the present invention
  • FIGS. 2 to 4 are schematic diagrams showing the principle of a typical application scenario of the first embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another exemplary application scenario of the first embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a data buffering device according to a second embodiment of the present invention.
  • ARM enhanced RISC Machines
  • the technical solution of the embodiments of the present invention provides a data loop buffering method for a SOC FPGA, including: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  • the solution of the embodiments of the present invention sets up n buffers according to the buffer depth and writes data into them cyclically; by recording the last read position and the latest write position, the data reading end (such as the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read is effectively avoided.
  • the circular buffering method of the embodiments of the present invention enables the system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (e.g., low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly saving system overhead.
  • low-configuration programmable devices (e.g., low-configuration ZYNQ)
  • FIG. 1 is a flow chart of a data loop buffering method for a SOC FPGA according to a first embodiment of the present invention.
  • this embodiment can be applied to programmable-device-based SOCFPGA systems, including systems based on low-configuration programmable devices (such as low-configuration ZYNQ), and can also be applied to other systems in which the data processing speeds of the data writing module and the data reading module differ.
  • the SOCFPGA refers to an FPGA with an SOC, that is, an FPGA with an integrated SOC.
  • the data loop buffering method for the SOC FPGA may include the following steps:
  • Step S101: write data sequentially into n buffers according to the data write order, where n is a positive integer determined according to the buffer depth.
  • Step S102: determine valid buffers according to the latest write position and the last read position, the valid buffers being those of the n buffers that store unread data.
  • Step S103: when reading data, read the data stored in the valid buffers sequentially in the data read order starting from the last read position until the latest write position is reached.
  • n buffers also referred to as buffers
  • the n buffers may be partitioned on a block of memory.
  • the buffering time needed for data processing at a given moment may be determined from the time jitter of the system's processing, and a suitable buffer depth is then derived from that buffering time, thereby determining the specific value of n. For example, when the ARM's Central Processing Unit (CPU) is busy, several milliseconds (i.e., several buffers) may be needed before the data is read away, so the number of buffers n must be at least greater than 2.
  • CPU: Central Processing Unit
  • the buffer depth is also related to the hardware performance of the ARM.
  • the buffer depth is also related to the data processing capability of other modules, where the other modules may be modules (or events) in the system that consume CPU resources in addition to the module that processes the data stored in the n buffers, so that the data reading module (such as the ARM) executing the solution of this embodiment can control when to read the data in the n buffers and when to yield CPU resources to other modules, so that the other modules have time to process their respective data.
  • the data reading module (such as the ARM)
  • based on the circular buffering of this embodiment, the instantaneous processing speed of data can be converted into an average efficiency, rather than simply reducing the data volume or raising the clock frequency of the ARM's CPU to solve the problem of buffered data being lost when the CPU is momentarily busy, which helps improve data integrity.
  • n may be a positive integer greater than two.
  • the n may be 3 or 4 to enable automatic buffering when the CPU is busy.
  • the data writing sequence and the data reading order may be in the same direction to ensure data integrity, and ensure that the earliest written data can be read as soon as possible.
  • the data writing order may be in the order from the first buffer to the nth buffer. Further, after the nth buffer is written with data, the data can be continuously written from the first buffer.
  • the data reading order may also be in the order from the first buffer to the nth buffer. Further, after the data stored in the nth buffer is read, the data can be continuously read from the first buffer.
  • after the write module (such as an FPGA) writes data into a buffer, it can inform the read module (such as the ARM) of the latest write position, and the read module can record its own last read position so as to smoothly read the data.
  • the valid buffers may be determined according to the latest write position and the last read position.
  • in the initial phase or when the CPU is idle, the latest write position may equal the last read position; at this time the number of valid buffers is zero, that is, data written by the write module can be read by the read module immediately.
  • in this scenario, the write module may still write data into the n buffers in sequence according to the data write order, and the read module likewise reads data from the n buffers in sequence according to the data read order, but at this time only one of the n buffers stores unread data, and that unread data can be read immediately.
  • as the CPU alternates between busy and idle, it may also happen that the latest write position is less than the last read position, or that the latest write position is greater than the last read position (taking the data write order as the positive direction); at this time, the set of valid buffers is not empty.
  • the write module can always write data into the n buffers in its data write order, and when the data write position reaches the nth buffer it can wrap around to the first buffer and continue writing. During this time, if the CPU happens to be in a busy phase, the read module may not have much time to process the data written into the n buffers, so the last read position may remain unchanged. Further, when the CPU is idle, the read module can quickly digest the data stored in the valid buffers until the last read position moves to coincide with the latest write position, thereby preventing data from piling up indefinitely.
  • the circular buffering adopted in this embodiment can provide a very flexible system operating mode.
  • when the CPU is busy, the data write position may move backward from the first buffer to the nth buffer and wrap back to the first buffer until a free position is found; in this process, because the read module has no time to process the data, the last read position does not move. When the CPU is idle, the read module can read and process the data stored in the valid buffers; meanwhile the latest write position moves to a free buffer and data is written there, and the last read position quickly moves to coincide with the latest write position, so that all buffered data is quickly processed.
  • the idle location may refer to a free buffer that has not been written data.
  • all the data stored in the buffer can be batch processed when the CPU is idle, and enough buffer is provided to the write module to buffer data when the CPU is busy, thereby effectively ensuring data integrity.
  • referring to FIG. 2 to FIG. 5, eight buffers may be set up (i.e., n=8), each numbered sequentially from 0 to 7 in accordance with the data write order (i.e., the data read order).
  • the FPGA is configured to sequentially write data in the 0 to 7 buffer according to the data writing order, and modify the latest write position.
  • the ARM is used to read and process data stored in the 8 buffers by the CPU, and save and record the last read position.
  • in a typical application scenario, the latest write position may become greater than the last read position, that is, the FPGA writes data faster than the ARM reads it.
  • at this time, the buffers between the last read position and the latest write position are the valid buffers, such as buffers No. 0 and No. 1 shown in FIG. 2, buffers No. 3 and No. 4 shown in FIG. 3, and buffers No. 6 and No. 7 shown in FIG. 4; the data stored in these valid buffers is valid data waiting to be read and processed by the ARM's CPU, and the buffers other than the valid buffers among the eight buffers are buffers to be written.
  • the FPGA can continue to write data into them sequentially in the data write order.
  • in another typical application scenario, as the FPGA writes data into the 8 buffers in sequence according to the data write order, the latest write position may also become less than the last read position, that is, the FPGA writes data faster than the ARM reads it, and the FPGA has already wrapped around to write data into buffers located before the last read position.
  • at this time, the buffers from the last read position to the 8th buffer, and from the first buffer to the latest write position, are the valid buffers, such as buffers No. 7 and No. 0 shown in FIG. 5.
  • the data stored in these valid buffers is valid data waiting to be read and processed by the ARM's CPU.
  • the buffers other than the valid buffers among the 8 buffers are buffers to be written, and the FPGA can continue to write data into them sequentially in the data write order.
  • the n buffers may be arranged contiguously in the same buffer region, so that the FPGA can write data into them sequentially.
  • alternatively, the n buffers can be distributed across multiple buffer regions and linked sequentially through an organizational structure, to make full use of all buffer regions and avoid wasting resources. Further, by adding descriptive information about the organization, more accurate judgments can be made, and the buffer depth can be increased to accommodate larger jitter, which helps guarantee data integrity and better solves the data loss caused by a busy CPU.
  • n buffers can be set up according to the buffer depth and data can be written into them cyclically; by recording the last read position and the latest write position, the data reading end (such as the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read is effectively avoided.
  • the circular buffering method of the embodiments of the present invention enables the SOCFPGA system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (such as low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly saving system overhead.
  • the circular buffering scheme proposed in this embodiment can ensure the integrity of data when the amount of data is large and the processing time is irregular.
  • the circular buffering provides a very flexible buffer: when the CPU is busy, data keeps being buffered; when the CPU is idle, all the buffered data can be processed quickly without being constrained by momentary CPU busyness, so no data is lost, the communication rate between the write module and the read module is improved, and the data processing efficiency of the system is effectively improved.
  • the system can use a ZYNQ 7020 write module, a 600 MHz dual-core ARM read module, and a DDR3 buffer to run the solution of this embodiment and the prior art respectively and compare the experimental results of this embodiment, where DDR is the abbreviation of Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM).
  • DDR SDRAM: Double Data Rate Synchronous Dynamic Random Access Memory
  • TCP: Transmission Control Protocol
  • UDP: User Datagram Protocol
  • during data buffering with the existing double-buffering-plus-interrupt scheme, the UDP packet loss rate is 5%; during data buffering with the circular buffering scheme of this embodiment, the UDP packet loss rate drops to 0.0007%.
  • the solution of this embodiment can fully exploit the performance of low-configuration programmable devices; on a relatively low-efficiency scalable platform it can still meet the defined product performance and improves stability.
  • FIG. 6 is a schematic structural diagram of a data loop buffer device for a SOC FPGA according to a second embodiment of the present invention.
  • the circular buffer device 8 for data in this embodiment (hereinafter simply referred to as the circular buffer device 8) can be used to implement the technical solution of the method described in the embodiment shown in FIG. 1.
  • the circular buffer device 8 may include: a write module 81, configured to write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; a determining module 82, configured to determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and a read module 83, configured to, when reading data, read the data stored in the valid buffers sequentially in the data read order starting from the last read position until the latest write position is reached.
  • the data writing order is the same as the data reading order.
  • after the nth buffer has been written with data, the write module 81 can continue writing data from the first buffer.
  • after the data stored in the nth buffer has been read, the read module 83 can continue reading data from the first buffer.
  • the determining module 82 can include a first determining sub-module 821: when the latest write position equals the last read position, the number of valid buffers is zero.
  • the determining module 82 may include a second determining sub-module 822: taking the data write order as the positive direction, when the latest write position is greater than the last read position, the valid buffers are the buffers between the last read position and the latest write position.
  • the determining module 82 may include a third determining sub-module 823: taking the data write order as the positive direction, when the latest write position is less than the last read position, the valid buffers are the buffers from the last read position to the nth buffer and from the first buffer to the latest write position.
  • the n buffers may be consecutively placed in the same buffer.
  • the n buffers can be distributed across multiple buffers and linked sequentially through the organizational structure.
  • the write module 81 can be integrated into the FPGA of the low-configuration programmable device (such as low-configuration ZYNQ); the determining module 82 and the read module 83 can be integrated into the ARM (such as the ARM's CPU).
  • the circular buffer device 8 of the present embodiment may also be independent of the FPGA and the ARM, and send the written and read data to the corresponding modules of the system for use.
  • the first determining sub-module 821, the second determining sub-module 822, and the third determining sub-module 823 may be the same module; alternatively, the three may be independent of each other, each performing the operation of determining the valid buffers in its corresponding scenario.
  • the circular buffer device 8 may be integrated in the system to optimize the data processing efficiency of the system by performing the scheme described in the embodiment.
  • the embodiment of the present invention further discloses a storage medium on which computer instructions are stored, and when the computer instructions are executed, the technical solution of the method described in the embodiment shown in FIG. 1 is executed.
  • the storage medium may comprise a computer readable storage medium.
  • the storage medium may include a ROM, a RAM, a magnetic disk, an optical disk, or the like.
  • an embodiment of the present invention further discloses a terminal, including a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor, when running the computer instructions, executes the technical solution of the method described in the embodiment shown in FIG. 1.
  • the terminal can be the system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Systems (AREA)

Abstract

A data loop buffering method and apparatus for SOCFPGA, a storage medium, and a terminal. The method comprises: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth (S101); determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data (S102); and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached (S103). The method can effectively improve the data processing efficiency of the system.

Description

Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal
This application claims priority to Chinese Patent Application No. 201711392556.X, filed with the Chinese Patent Office on December 21, 2017 and entitled "Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data processing technologies, and in particular to a data loop buffering method and apparatus for SOCFPGA, a storage medium, and a terminal.
Background
Because it integrates a processor with the Field Programmable Gate Array (FPGA) architecture, a System on Chip FPGA (SOCFPGA) device offers higher integration, lower power consumption, a smaller board area, and higher-bandwidth communication between the processor and the FPGA than traditional devices.
Take the scalable processing platform (ZYNQ) as an example. It is an FPGA solution with a system-on-chip Advanced RISC Machine (ARM). The ZYNQ both has the real-time and parallel characteristics of an FPGA and can boot a system on the ARM to provide rich functionality.
In data acquisition on this platform, the FPGA is used for high-speed acquisition and high-speed processing of data, and its data processing speed can reach the nanosecond (ns) level; the ARM is mainly responsible for data display and data analysis, and its data processing speed is generally at the millisecond (ms) level.
Therefore, the communication speed between the FPGA and the ARM largely determines how efficiently the SOCFPGA system processes data. Existing solutions cannot provide reasonable processing logic, and cannot effectively solve the problem that the data processing efficiency of systems based on low-configuration programmable devices (e.g., low-configuration ZYNQ) suffers from the mismatch between the data processing speeds of the FPGA and the ARM.
Summary of the Invention
The technical problem solved by the present invention is how to improve the data processing efficiency of the system.
To solve the above technical problem, an embodiment of the present invention provides a data loop buffering method for SOCFPGA, comprising: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
Optionally, the data write order and the data read order have the same direction.
Optionally, after data has been written into the nth buffer, writing continues from the first buffer.
Optionally, after the data stored in the nth buffer has been read, reading continues from the first buffer.
Optionally, determining the valid buffers according to the latest write position and the last read position includes: when the latest write position equals the last read position, the number of valid buffers is zero.
Optionally, determining the valid buffers according to the latest write position and the last read position includes: taking the data write order as the positive direction, when the latest write position is greater than the last read position, the valid buffers are the buffers between the last read position and the latest write position.
Optionally, determining the valid buffers according to the latest write position and the last read position includes: taking the data write order as the positive direction, when the latest write position is less than the last read position, the valid buffers are the buffers from the last read position to the nth buffer, and from the first buffer to the latest write position.
Optionally, the n buffers are arranged contiguously in the same buffer region; or the n buffers are distributed across multiple buffer regions and linked sequentially through an organizational structure.
An embodiment of the present invention further provides a data loop buffering apparatus for SOCFPGA, comprising: a write module, configured to write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; a determining module, configured to determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and a read module, configured to, when reading data, read the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
An embodiment of the present invention further provides a storage medium having computer instructions stored thereon, where the computer instructions, when run, execute the steps of the above method.
An embodiment of the present invention further provides a terminal, including a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor, when running the computer instructions, executes the steps of the above method.
Compared with the prior art, the technical solutions of the embodiments of the present invention have the following beneficial effects:
The technical solution of the embodiments of the present invention provides a data loop buffering method for SOCFPGA, comprising: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached. Compared with existing data buffering schemes for SOCFPGA, the scheme of the embodiments of the present invention sets up n buffers according to the buffer depth and writes data into them cyclically; by recording the last read position and the latest write position, the data reading end (e.g., the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read and processed is effectively avoided. Those skilled in the art will understand that the circular buffering method of the embodiments of the present invention enables an SOCFPGA system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (i.e., low-configuration SOCFPGA, such as low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly reducing system overhead.
Further, the data write order and the data read order have the same direction, so as to guarantee data integrity and ensure that the earliest written data can be read as soon as possible.
Brief Description of the Drawings
FIG. 1 is a flowchart of a circular buffering method for data according to a first embodiment of the present invention;
FIG. 2 to FIG. 4 are schematic diagrams of the principle of a typical application scenario of the first embodiment of the present invention;
FIG. 5 is a schematic diagram of the principle of another typical application scenario of the first embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a circular buffering apparatus for data according to a second embodiment of the present invention.
Detailed Description
Those skilled in the art will understand that, as stated in the Background, the data buffering logic of existing SOCFPGA systems still has limitations, which greatly affects the data processing efficiency of the system.
Through analysis, the inventors found that the above problem arises because existing SOCFPGA systems generally use interrupts plus double buffering to buffer data, in order to address the mismatch in communication speed between the Advanced RISC Machine (ARM) and the Field Programmable Gate Array (FPGA).
However, because there are only two buffers, when the data volume is large, the ARM's Central Processing Unit (CPU) is busy, and the times at which the FPGA and the ARM process data are irregular (for example, due to CPU scheduling, and because the FPGA is operating on the memory while the ARM is constrained by memory timing, the speed of reading data is also affected), it is very likely that new data must be written before the buffered data has been processed, causing data loss and compromising data integrity.
This phenomenon is especially evident in systems based on low-configuration programmable devices (i.e., low-configuration SOCFPGA, such as low-configuration ZYNQ), because the gap between the data processing speeds of the FPGA and the ARM is even more pronounced on such devices, so the data processing efficiency of systems based on them is greatly reduced.
To solve the above technical problem, the technical solution of the embodiments of the present invention provides a data loop buffering method for SOCFPGA, comprising: writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and, when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
Those skilled in the art will understand that the scheme of the embodiments of the present invention sets up n buffers according to the buffer depth and writes data into them cyclically; by recording the last read position and the latest write position, the data reading end (e.g., the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read and processed is effectively avoided.
Further, the circular buffering method of the embodiments of the present invention enables the system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (e.g., low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly saving system overhead.
To make the above objects, features and beneficial effects of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a data loop buffering method for SOCFPGA according to a first embodiment of the present invention. This embodiment can be applied to programmable-device-based SOCFPGA systems, including systems based on low-configuration programmable devices (such as low-configuration ZYNQ), and can also be applied to other systems in which the data processing speeds of the data writing module and the data reading module differ. The SOCFPGA refers to an FPGA with an SOC, that is, an FPGA with an integrated SOC.
Specifically, in this embodiment, the data loop buffering method for SOCFPGA may include the following steps:
Step S101: write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth.
Step S102: determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data.
Step S103: when reading data, read the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
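As an illustration only (not part of the original disclosure), the three steps can be sketched in C roughly as follows; the names ring_t, N_BUFFERS, BUF_SIZE, write_pos and read_pos are hypothetical, and the memcpy-based writer merely stands in for whatever mechanism (e.g., FPGA-side DMA) actually fills the buffers.

    #include <stdint.h>
    #include <string.h>

    #define N_BUFFERS 8        /* n, chosen from the buffer depth          */
    #define BUF_SIZE  4096     /* bytes per buffer (assumed)               */

    /* One shared ring: the n buffers plus the two recorded positions.     */
    typedef struct {
        uint8_t data[N_BUFFERS][BUF_SIZE];  /* the n buffers (step S101)   */
        volatile uint32_t write_pos;        /* latest write position       */
        volatile uint32_t read_pos;         /* last read position          */
    } ring_t;

    /* Step S101: the writer fills the next buffer in the data write order
     * and then publishes the new latest write position. Assumes len <=
     * BUF_SIZE and that n is large enough that the writer never laps the
     * reader, as the text relies on. */
    static void ring_write(ring_t *r, const uint8_t *block, size_t len)
    {
        uint32_t next = (r->write_pos + 1u) % N_BUFFERS; /* wrap after nth */
        memcpy(r->data[next], block, len);
        r->write_pos = next;                             /* inform reader  */
    }

    /* Steps S102 and S103: every buffer between the last read position and
     * the latest write position is a valid buffer; drain them in order.   */
    static void ring_read_all(ring_t *r, void (*process)(const uint8_t *, size_t))
    {
        while (r->read_pos != r->write_pos) {            /* valid buffers remain */
            uint32_t next = (r->read_pos + 1u) % N_BUFFERS;
            process(r->data[next], BUF_SIZE);
            r->read_pos = next;                          /* record last read pos */
        }
    }

In a real SOCFPGA deployment the write side would run in the FPGA fabric and only the two positions would be exchanged, but the index arithmetic is the same.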
More specifically, the n buffers (which may also be called buffers or caches) may be obtained by partitioning a block of memory.
As a non-limiting embodiment, the buffering time required for data processing at a given moment may be determined from the time jitter of the system's processing, a suitable buffer depth may then be derived from that buffering time, and the specific value of n is determined accordingly. For example, if several milliseconds (i.e., several buffers) are needed to read the data away while the ARM's Central Processing Unit (CPU) is busy, then the number of buffers n must be at least greater than 2.
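A purely illustrative sizing, with figures that are assumptions rather than values from the patent: if the FPGA fills one buffer about every millisecond and the CPU can stay busy for up to four milliseconds, n must cover that worst case with some margin.

    /* Illustrative sizing only; both figures below are assumptions.       */
    #define FILL_TIME_MS   1u   /* time for the FPGA to fill one buffer    */
    #define MAX_JITTER_MS  4u   /* worst-case time the CPU stays busy      */

    /* n must ride out the worst-case busy period plus one in-flight
     * buffer, so here n >= 5; rounding up to 8 leaves headroom.           */
    #define N_MIN  ((MAX_JITTER_MS + FILL_TIME_MS - 1u) / FILL_TIME_MS + 1u)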
Further, the buffer depth is also related to the hardware performance of the ARM.
Further, the buffer depth is also related to the data processing capability of other modules, where the other modules may be modules (or events) in the system that consume CPU resources in addition to the module used to process the data stored in the n buffers, so that the data reading module (such as the ARM) executing the solution of this embodiment can control when to read the data in the n buffers and when to yield CPU resources to other modules, so that the other modules have time to process their respective data. Those skilled in the art will understand that, based on the circular buffering of this embodiment, the instantaneous processing speed of data can be converted into an average efficiency, rather than simply reducing the data volume or raising the clock frequency of the ARM's CPU to solve the problem of buffered data being lost when the CPU is momentarily busy, which helps improve data integrity.
Further, n may be a positive integer greater than 2. Preferably, n may be 3 or 4, so that automatic buffering can be achieved when the CPU is busy.
Those skilled in the art will understand that by setting up the n buffers, the function of a queue is achieved without dynamically allocating memory.
Further, the data write order and the data read order may have the same direction, to guarantee data integrity and ensure that the earliest written data can be read as soon as possible.
As a non-limiting embodiment, the data write order may be from the first buffer to the nth buffer. Further, after data has been written into the nth buffer, writing may continue from the first buffer.
Correspondingly, the data read order may also be from the first buffer to the nth buffer. Further, after the data stored in the nth buffer has been read, reading may continue from the first buffer.
Those skilled in the art may adjust the data write order and the data read order according to actual needs, which is not described in detail here.
Further, after the write module (such as an FPGA) writes data into a buffer, it can inform the read module (such as the ARM) of the latest write position, and the read module can record its own last read position so as to smoothly read the data stored in the valid buffers. The valid buffers may be determined according to the latest write position and the last read position.
In a typical application scenario, in the initial phase or when the CPU is idle, the latest write position may equal the last read position; at this time, the number of valid buffers is zero, that is, data written by the write module can be read by the read module immediately.
Further, to guarantee data integrity, in this application scenario the write module may still write data into the n buffers in sequence according to the data write order, and the read module likewise reads data from the n buffers in sequence according to the data read order, but at this time only one of the n buffers stores unread data, and that unread data can be read immediately.
Further, as the CPU alternates between busy and idle, it may also happen that the latest write position is less than the last read position, or that the latest write position is greater than the last read position (taking the data write order as the positive direction); at this time, the set of valid buffers is not empty.
Specifically, the write module can always write data into the n buffers in its data write order, and when the data write position reaches the nth buffer it can wrap around to the first buffer and continue writing. During this time, if the CPU happens to be in a busy phase, the read module may not have much time to process the data written into the n buffers, so the last read position may remain unchanged. Further, when the CPU is idle, the read module can quickly digest the data stored in the valid buffers until the last read position moves to coincide with the latest write position, thereby preventing data from piling up indefinitely.
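A hypothetical ARM-side polling loop matching this behaviour is sketched below, building on the ring_read_all() helper above; yield_cpu() and process_block() are assumed placeholders for a scheduler primitive and the application handler, not real APIs.

    /* Drain the valid buffers whenever the CPU has time; otherwise yield
     * so that other modules can use the CPU. */
    static void reader_task(ring_t *r)
    {
        for (;;) {
            if (r->read_pos == r->write_pos)
                yield_cpu();                     /* no valid buffers: let others run */
            else
                ring_read_all(r, process_block); /* catch up to the latest write pos */
        }
    }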
Those skilled in the art will understand that the circular buffering adopted in this embodiment can provide a very flexible system operating mode. For example, when the CPU is busy, the data write position can move backward from the first buffer to the nth buffer and wrap back to the first buffer until a free position is found; in this process, because the read module has no time to process the data, the last read position does not move. When the CPU is idle, the read module can read and process the data stored in the valid buffers; at this time, the latest write position moves to a free buffer and data is written there, and the last read position quickly moves to coincide with the latest write position, so that all buffered data is quickly processed. Here, a free position may refer to a free buffer into which data has not yet been written.
Based on this principle, all the data stored in the buffers can be processed in batches when the CPU is idle, and enough buffers are available for the write module to buffer data when the CPU is busy, thereby effectively guaranteeing data integrity.
Referring to FIG. 2 to FIG. 5, eight buffers may be set up (i.e., n=8), each numbered sequentially from 0 to 7 according to the data write order (i.e., the data read order). The FPGA is configured to write data into buffers 0 to 7 in sequence according to the data write order and to update the latest write position. The ARM is configured to read and process, through its CPU, the data stored in the eight buffers, and to save and record the last read position.
Referring to FIG. 2 to FIG. 4, in a typical application scenario, the latest write position may become greater than the last read position, that is, the FPGA writes data faster than the ARM reads it. At this time, the buffers between the last read position and the latest write position are the valid buffers, such as buffers No. 0 and No. 1 shown in FIG. 2, buffers No. 3 and No. 4 shown in FIG. 3, and buffers No. 6 and No. 7 shown in FIG. 4. The data stored in these valid buffers is valid data waiting to be read and processed by the ARM's CPU; the buffers other than the valid buffers among the eight buffers are buffers to be written, and the FPGA can continue to write data into them sequentially in the data write order.
Referring to FIG. 5, in another typical application scenario, as the FPGA writes data into the eight buffers in sequence according to the data write order, the latest write position may also become less than the last read position, that is, the FPGA writes data faster than the ARM reads it and the FPGA has already wrapped around to write data into buffers located before the last read position. At this time, the buffers from the last read position to the 8th buffer, and from the first buffer to the latest write position, are the valid buffers, such as buffers No. 7 and No. 0 shown in FIG. 5. The data stored in these valid buffers is valid data waiting to be read and processed by the ARM's CPU; the buffers other than the valid buffers among the eight buffers are buffers to be written, and the FPGA can continue to write data into them sequentially in the data write order.
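The two scenarios differ only in whether the latest write position has wrapped past the last read position, so the number of valid buffers can be computed with a single branch. The helper below is a sketch under the same assumptions as the earlier code (positions are buffer indices in 0..N_BUFFERS-1).

    /* Number of valid (unread) buffers, given the latest write position and
     * the last read position; covers both cases described above. */
    static uint32_t valid_count(uint32_t write_pos, uint32_t read_pos)
    {
        if (write_pos >= read_pos)
            return write_pos - read_pos;             /* write ahead of read, no wrap   */
        return N_BUFFERS - read_pos + write_pos;     /* write has wrapped past the end */
    }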
As a non-limiting embodiment, the n buffers may be arranged contiguously in the same buffer region, so that the FPGA can write data into them sequentially.
As a variant, the n buffers may be distributed across multiple buffer regions and linked sequentially through an organizational structure, to make full use of all buffer regions and avoid wasting resources. Further, by adding descriptive information about the organization, more accurate judgments can be made, and the buffer depth can be increased to accommodate larger jitter, which helps guarantee data integrity and better solves the data loss caused by a busy CPU.
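One way to realize such an "organizational structure" is a small descriptor that chains the scattered buffers into a logical ring; this is a sketch under assumptions (reusing the includes from the earlier sketch), and the field names are hypothetical rather than taken from the patent.

    /* Descriptor chaining n buffers that live in different memory regions. */
    typedef struct buf_desc {
        uint8_t         *data;  /* points into whichever region holds this buffer */
        size_t           size;  /* capacity of this buffer                        */
        struct buf_desc *next;  /* sequential link; the last descriptor points
                                   back to the first, closing the ring            */
    } buf_desc_t;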
In summary, with the solution of this embodiment, n buffers can be set up according to the buffer depth and data can be written into them cyclically; by recording the last read position and the latest write position, the data reading end (such as the ARM's CPU) can adjust its own reading progress according to how busy it is, and because the number of buffers is sufficient (i.e., n), the situation in which new data must be written before the buffered data has been read and processed is effectively avoided.
Further, the circular buffering method of the embodiments of the present invention enables the SOCFPGA system to collect and forward large volumes of data without data loss; especially for systems based on low-configuration programmable devices (such as low-configuration ZYNQ), it can guarantee data integrity and greatly optimize the system's data processing efficiency while greatly saving system overhead.
Further, the circular buffering scheme proposed in this embodiment can guarantee data integrity when the data volume is large and the processing times are irregular. Specifically, circular buffering provides a very flexible buffer: when the CPU is busy, data keeps being buffered; when the CPU is idle, all the buffered data can be processed quickly without being constrained by momentary CPU busyness, so no data is lost, the communication rate between the write module and the read module is improved, and the data processing efficiency of the system is effectively improved.
For example, the system may use a ZYNQ 7020 write module, a 600 MHz dual-core ARM read module, and a DDR3 buffer to run the solution of this embodiment and the prior art respectively and compare the experimental results of this embodiment, where DDR is the abbreviation of Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM).
Specifically, during the experiment, 450 Mbps of data was transmitted using the Transmission Control Protocol (TCP) and 40 Mbps of data was transmitted using the User Datagram Protocol (UDP), so as to monitor the UDP packet loss rate.
During data buffering with the existing double-buffering-plus-interrupt scheme, the UDP packet loss rate was 5%; during data buffering with the circular buffering scheme of this embodiment, the UDP packet loss rate dropped to 0.0007%.
From the above, the solution of this embodiment can fully exploit the performance of low-configuration programmable devices; on a relatively low-efficiency scalable platform it can still meet the defined product performance and improves stability.
FIG. 6 is a schematic structural diagram of a data loop buffering apparatus for SOCFPGA according to a second embodiment of the present invention. Those skilled in the art will understand that the circular buffering apparatus 8 for data in this embodiment (hereinafter simply referred to as the circular buffering apparatus 8) can be used to implement the technical solution of the method described in the embodiment shown in FIG. 1.
Specifically, in this embodiment, the circular buffering apparatus 8 may include: a write module 81, configured to write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth; a determining module 82, configured to determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and a read module 83, configured to, when reading data, read the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
Further, the data write order and the data read order have the same direction.
Further, after data has been written into the nth buffer, the write module 81 can continue writing data from the first buffer.
Further, after the data stored in the nth buffer has been read, the read module 83 can continue reading data from the first buffer.
As a non-limiting embodiment, the determining module 82 may include a first determining sub-module 821: when the latest write position equals the last read position, the number of valid buffers is zero.
As another non-limiting embodiment, the determining module 82 may include a second determining sub-module 822: taking the data write order as the positive direction, when the latest write position is greater than the last read position, the valid buffers are the buffers between the last read position and the latest write position.
As yet another non-limiting embodiment, the determining module 82 may include a third determining sub-module 823: taking the data write order as the positive direction, when the latest write position is less than the last read position, the valid buffers are the buffers from the last read position to the nth buffer, and from the first buffer to the latest write position.
In a preferred example, the n buffers may be arranged contiguously in the same buffer region.
As a variant, the n buffers may be distributed across multiple buffer regions and linked sequentially through an organizational structure.
Further, the write module 81 may be integrated into the FPGA of the low-configuration programmable device (such as low-configuration ZYNQ); the determining module 82 and the read module 83 may be integrated into the ARM (such as the ARM's CPU). Alternatively, the circular buffering apparatus 8 of this embodiment may be independent of the FPGA and the ARM, and send the written and read data to the corresponding modules of the system for their use.
It should be noted that the first determining sub-module 821, the second determining sub-module 822, and the third determining sub-module 823 may be the same module; alternatively, the three may be independent of each other, each performing the operation of determining the valid buffers in its corresponding scenario.
Further, the circular buffering apparatus 8 may be integrated into the system, so as to optimize the data processing efficiency of the system by performing the solution described in this embodiment.
For more details on the working principle and working manner of the circular buffering apparatus 8, reference may be made to the related description of FIG. 1, which is not repeated here.
Further, an embodiment of the present invention further discloses a storage medium having computer instructions stored thereon, where the computer instructions, when run, execute the technical solution of the method described in the embodiment shown in FIG. 1. Preferably, the storage medium may include a computer-readable storage medium. The storage medium may include a ROM, a RAM, a magnetic disk, an optical disk, or the like.
Further, an embodiment of the present invention further discloses a terminal, including a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor, when running the computer instructions, executes the technical solution of the method described in the embodiment shown in FIG. 1. Preferably, the terminal may be the aforementioned system.
Although the present invention is disclosed as above, the present invention is not limited thereto. Any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (10)

  1. A data loop buffering method for SOCFPGA, characterized by comprising:
    writing data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth;
    determining valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and
    when reading data, reading the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  2. The data loop buffering method according to claim 1, wherein the data write order and the data read order have the same direction.
  3. The data loop buffering method according to claim 1, wherein, after data has been written into the nth buffer, writing continues from the first buffer; and after the data stored in the nth buffer has been read, reading continues from the first buffer.
  4. The data loop buffering method according to claim 1, wherein determining the valid buffers according to the latest write position and the last read position comprises:
    when the latest write position equals the last read position, the number of valid buffers is zero.
  5. The data loop buffering method according to claim 1, wherein determining the valid buffers according to the latest write position and the last read position comprises:
    taking the data write order as the positive direction, when the latest write position is greater than the last read position, the valid buffers are the buffers between the last read position and the latest write position.
  6. The data loop buffering method according to claim 1, wherein determining the valid buffers according to the latest write position and the last read position comprises:
    taking the data write order as the positive direction, when the latest write position is less than the last read position, the valid buffers are the buffers from the last read position to the nth buffer, and from the first buffer to the latest write position.
  7. The data loop buffering method according to claim 1, wherein the n buffers are arranged contiguously in the same buffer region; or the n buffers are distributed across multiple buffer regions and linked sequentially through an organizational structure.
  8. A data loop buffering apparatus for SOCFPGA, characterized by comprising:
    a write module, configured to write data sequentially into n buffers according to a data write order, where n is a positive integer determined according to a buffer depth;
    a determining module, configured to determine valid buffers according to a latest write position and a last read position, the valid buffers being those of the n buffers that store unread data; and
    a read module, configured to, when reading data, read the data stored in the valid buffers sequentially in a data read order starting from the last read position until the latest write position is reached.
  9. A storage medium having computer instructions stored thereon, wherein the computer instructions, when run, execute the steps of the method according to any one of claims 1 to 7.
  10. A terminal, comprising a memory and a processor, the memory storing computer instructions executable on the processor, wherein the processor, when running the computer instructions, executes the steps of the method according to any one of claims 1 to 7.
PCT/CN2018/122551 2017-12-21 2018-12-21 Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal WO2019120274A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711392556.X 2017-12-21
CN201711392556.XA CN108153490A (zh) 2017-12-21 2017-12-21 Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal

Publications (1)

Publication Number Publication Date
WO2019120274A1 true WO2019120274A1 (zh) 2019-06-27

Family

ID=62464908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122551 WO2019120274A1 (zh) 2017-12-21 Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal

Country Status (2)

Country Link
CN (1) CN108153490A (zh)
WO (1) WO2019120274A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108153490A (zh) * 2017-12-21 2018-06-12 上海禾赛光电科技有限公司 用于socfpga的数据循环缓冲方法及装置、存储介质、终端

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064679A (zh) * 2012-12-25 2013-04-24 北京航天测控技术有限公司 Software implementation method for buffer management of long-duration continuous DMA transfer
US20130138875A1 (en) * 2010-08-13 2013-05-30 Thomson Licensing Storing/reading several data streams into/from an array of memories
CN103744621A (zh) * 2013-12-31 2014-04-23 深圳英飞拓科技股份有限公司 Method and device for cyclic reading and writing of a buffer
CN107102818A (zh) * 2017-03-16 2017-08-29 山东大学 SD-card-based high-speed data storage method
CN107247561A (zh) * 2017-05-31 2017-10-13 成都华立达电力信息系统有限公司 Cyclic storage read/write method for a buffer pool
CN108153490A (zh) * 2017-12-21 2018-06-12 上海禾赛光电科技有限公司 Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1323529C (zh) * 2003-04-28 2007-06-27 华为技术有限公司 Method for internal data transmission of a digital signal processor
US7761772B2 (en) * 2006-10-18 2010-07-20 Trellisware Technologies, Inc. Using no-refresh DRAM in error correcting code encoder and decoder implementations
CN105872668A (zh) * 2016-03-31 2016-08-17 百度在线网络技术(北京)有限公司 Audio and video data processing method and apparatus, and vehicle-mounted terminal
CN107045424B (zh) * 2016-10-31 2020-11-20 航天东方红卫星有限公司 Time-division multiplexing management method for reading and writing files in a small-satellite solid-state memory
CN106603172A (zh) * 2016-11-24 2017-04-26 中国电子科技集团公司第四十研究所 Time-division read/write method for time-stamped data applied to a radio monitoring receiver

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130138875A1 (en) * 2010-08-13 2013-05-30 Thomson Licensing Storing/reading several data streams into/from an array of memories
CN103064679A (zh) * 2012-12-25 2013-04-24 北京航天测控技术有限公司 Software implementation method for buffer management of long-duration continuous DMA transfer
CN103744621A (zh) * 2013-12-31 2014-04-23 深圳英飞拓科技股份有限公司 Method and device for cyclic reading and writing of a buffer
CN107102818A (zh) * 2017-03-16 2017-08-29 山东大学 SD-card-based high-speed data storage method
CN107247561A (zh) * 2017-05-31 2017-10-13 成都华立达电力信息系统有限公司 Cyclic storage read/write method for a buffer pool
CN108153490A (zh) * 2017-12-21 2018-06-12 上海禾赛光电科技有限公司 Data loop buffering method and apparatus for SOCFPGA, storage medium, and terminal

Also Published As

Publication number Publication date
CN108153490A (zh) 2018-06-12

Similar Documents

Publication Publication Date Title
US9870327B2 (en) Message-based memory access apparatus and access method thereof
US7526593B2 (en) Packet combiner for a packetized bus with dynamic holdoff time
US8385148B2 (en) Scalable, dynamic power management scheme for switching architectures utilizing multiple banks
US20130212594A1 (en) Method of optimizing performance of hierarchical multi-core processor and multi-core processor system for performing the method
WO2014166404A1 (zh) 一种网络数据包处理方法和装置
US9529622B1 (en) Systems and methods for automatic generation of task-splitting code
JP2011505037A (ja) 読出しデータバッファリングのシステム及び方法
WO2013044829A1 (zh) 用于非一致性内存访问的数据预取方法和装置
WO2021209051A1 (zh) 片上缓存装置、片上缓存读写方法、计算机可读介质
JP6580307B2 (ja) マルチコア装置及びマルチコア装置のジョブスケジューリング方法
CN112084136A (zh) 队列缓存管理方法、系统、存储介质、计算机设备及应用
US20160253216A1 (en) Ordering schemes for network and storage i/o requests for minimizing workload idle time and inter-workload interference
WO2019120274A1 (zh) 用于socfpga的数据循环缓冲方法及装置、存储介质、终端
US10318362B2 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
US10127076B1 (en) Low latency thread context caching
US20060218313A1 (en) DMA circuit and computer system
EP4163795A1 (en) Techniques for core-specific metrics collection
US10581748B2 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
WO2018228493A1 (zh) 一种数据实时处理及存储装置
WO2012163019A1 (zh) 降低数据类芯片外挂ddr功耗的方法及数据类芯片系统
US10884477B2 (en) Coordinating accesses of shared resources by clients in a computing device
WO2022110681A1 (zh) 命令响应信息的返回方法、返回控制装置和电子设备
US20130346701A1 (en) Replacement method and apparatus for cache
EP3771164B1 (en) Technologies for providing adaptive polling of packet queues
KR20210061583A (ko) 적응형 딥러닝 가속 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18892288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18892288

Country of ref document: EP

Kind code of ref document: A1