CN115952117A - Fc-equipment-based multi-partition receiving direction dma communication system and method - Google Patents

Fc-equipment-based multi-partition receiving direction dma communication system and method

Info

Publication number
CN115952117A
Authority
CN
China
Prior art keywords
dma
data
partition
cache configuration
top module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310124951.9A
Other languages
Chinese (zh)
Inventor
马文林 (Ma Wenlin)
张国奇 (Zhang Guoqi)
李军 (Li Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Quanxin Cable Technology Co Ltd
Original Assignee
Shanghai Saizhi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2023-02-16
Publication date
2023-04-11
Application filed by Shanghai Saizhi Information Technology Co Ltd
Priority to CN202310124951.9A
Publication of CN115952117A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application relates to an FC-device-based multi-partition receive-direction DMA communication system and method. The CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA in turn; the dma_top module receives a data unit and splits it to generate a current cache configuration, where the current cache configuration matches the cache configuration data of each partition; the dma_top module then performs DMA communication with the FC device according to the current cache configuration. Slicing data inside the FPGA in this way improves buffer utilization; in addition, data can be transferred in either of two DMA modes, switchable by user configuration, to suit different transmission scenarios, thereby improving the efficiency with which the system processes data.

Description

Fc-equipment-based multi-partition receiving direction dma communication system and method
Technical Field
The application relates to the technical field of FPGA communication methods, and in particular to an FC-device-based multi-partition receive-direction DMA communication system and method.
Background
The FPGA is a further development of programmable devices such as programmable array logic and generic array logic. As a semi-custom circuit in the field of application-specific integrated circuits, it overcomes both the inflexibility of fully custom circuits and the limited number of gate circuits of earlier programmable devices.
At present, regarding DMA communication methods for FPGAs, the invention patent with publication number CN109656848A, for example, discloses an FPGA-based method for implementing cooperative operation of image upsampling and DMA. Compared with a conventional DMA implementation, this DMA implementation has a wait function, so that the DMA of the data transfer module between the CPU and the FPGA can work cooperatively with the FPGA-based image upsampling module, avoiding multiple clock domains in the FPGA design. In addition, in the design of the output buffer of the image upsampling module, the DMA read state machine starts to work and a new row of data is input when the amount of remaining output data equals the number of columns of the original image.
Although the above technique has certain advantages, for example a DMA read latency of only about three times the number of columns of the original image in clock cycles, it still suffers from inefficient DMA communication.
Disclosure of Invention
In view of the above, it is necessary to provide an FC-device-based multi-partition receive-direction DMA communication system and method capable of improving the efficiency of system data processing.
The technical solution of the invention is as follows:
a multi-partition receiving direction dma communication system based on FC equipment comprises an FPGA and FC equipment, wherein the FC equipment is provided with a receiving direction cache region and a transmitting direction cache region, and partitions are arranged in the receiving direction cache region and the transmitting direction cache region; cache configuration data are preset in each partition; the CPU in the FC equipment pushes each partition and corresponding cache configuration data to the FPGA for the FPGA to use;
the FPGA includes a dma _ top module configured to receive a data unit, perform data splitting on the data unit, and generate a current cache configuration, where the current cache configuration matches cache configuration data of each of the partitions, and further perform dma communication with the FC device according to the current cache configuration.
Specifically, the FPGA further comprises an rx_top module, a tx_top module, and an FC_MAC module; the rx_top module and the tx_top module are both connected with the dma_top module; the FC_MAC module is connected with both the rx_top module and the tx_top module; and the FC_MAC module is used for data interaction with an opposite end.
Specifically, the cache configuration data of one partition includes a plurality of buffers, and the capacities of the buffers in each partition are different.
Specifically, a method for FC-device-based multi-partition receive-direction DMA communication is performed based on the FC-device-based multi-partition receive-direction DMA communication system according to any one of claims 1 to 3, and comprises the following steps:
step S100: the CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA;
step S200: the dma_top module receives a data unit and splits it to generate a current cache configuration, where the current cache configuration matches the cache configuration data of each partition;
step S300: the dma_top module performs DMA communication with the FC device according to the current cache configuration.
Specifically, the current cache configuration includes a plurality of cache groups, and each cache group includes preset buffers of the same capacity; the preset buffers in each cache group are the same or different;
step S200, in which the dma_top module receives a data unit, splits it, and generates a current cache configuration matching the cache configuration data of each partition, specifically includes:
step S210: the dma_top module receives a data unit, determines the target partition, and obtains the preset buffers in the target partition;
step S220: the dma_top module splits the capacity of the data unit according to the preset buffers and generates a current cache configuration, where the current cache configuration includes multiple split units, the total capacity of the split units equals that of the data unit, and the capacity of each split unit is one of the preset buffer capacities in the target partition.
Specifically, the method further comprises:
when the sequential DMA mode is employed, data is processed in the order in which the buffers were received.
Specifically, when the chain DMA mode is adopted, a processing order set by the user is acquired, and data is processed according to that order.
Specifically, the method further comprises:
step S410: data is sent from the opposite end and enters the rx_top module of the FPGA, and the rx_top module completes frame parsing, receive-port rate control, data-cache control in the multicast case, and data de-duplication;
step S420: the dma_top module completes the DMA write and store to the destination partition cache in the receive direction.
A computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the FC-device-based multi-partition receive-direction DMA communication method described above.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, carries out the steps of the FC-device-based multi-partition receive-direction DMA communication method described above.
The invention has the following technical effects:
According to the FC-device-based multi-partition receive-direction DMA communication system and method, each partition and its corresponding cache configuration data are pushed to the FPGA in turn by the CPU in the FC device; the dma_top module receives a data unit and splits it to generate a current cache configuration that matches the cache configuration data of each partition; the dma_top module then performs DMA communication with the FC device according to the current cache configuration. Slicing data inside the FPGA in this way improves buffer utilization; in addition, data can be transferred in either of two DMA modes, switchable by user configuration, to suit different transmission scenarios, thereby improving the efficiency with which the system processes data.
Drawings
Fig. 1 is a block diagram of an FC-device-based multi-partition receive-direction DMA communication system according to an embodiment;
FIG. 2 is a diagram of the chain DMA structure of a complete IU according to an embodiment;
FIG. 3 is a diagram illustrating the order in which buffers are received in a partition according to an embodiment;
FIG. 4 is a diagram of data slicing in one embodiment;
FIG. 5 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
In one embodiment, as shown in Fig. 1, an FC-device-based multi-partition receive-direction DMA communication system is provided, comprising an FPGA and an FC device, where the FC device is provided with a receive-direction buffer region and a transmit-direction buffer region, and partitions are provided in both regions; cache configuration data is preset in each partition; the CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA for the FPGA to use;
the FPGA comprises a dma_top module, which is used to receive a data unit, split the data unit, and generate a current cache configuration, where the current cache configuration matches the cache configuration data of each partition, and then to perform DMA communication with the FC device according to the current cache configuration.
The FPGA further comprises an rx_top module, a tx_top module, and an FC_MAC module; the rx_top module and the tx_top module are connected with the dma_top module; the FC_MAC module is connected with both the rx_top module and the tx_top module; and the FC_MAC module is used for data interaction with the opposite end.
As shown in Fig. 1, the overall data flow in the sending direction is as follows: the dma_top module inside the FPGA is mainly used for DMA data interaction with each destination partition and completes the DMA read in the sending direction, the tx_top module completes frame composition in the sending direction, and the data is finally sent to the opposite end;
the overall data flow in the receive direction is as follows: data is sent from the opposite end and enters the rx_top module of the FPGA, which completes frame parsing, receive-port rate control, data-cache control in the multicast case, data de-duplication, and similar functions; the dma_top module then completes the write and store to the destination partition cache in the receive direction, together with the bd-related control signals.
In one embodiment, the cache configuration data of one partition includes a plurality of buffers, and the capacities of the buffers in each partition are different.
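To make this concrete, the following is a minimal C sketch of how the per-partition cache configuration pushed in step S100 might be represented on the CPU side; the structure and field names (fc_bd, fc_partition_cfg, push_to_fpga) are illustrative assumptions and are not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical buffer descriptor (bd): one preset buffer of a partition. */
struct fc_bd {
    uint32_t bd_id;        /* descriptor number, e.g. bd1, bd2, ...          */
    uint64_t phys_addr;    /* physical address of the buffer                 */
    uint32_t size_bytes;   /* buffer capacity, e.g. 64 kB, 32 kB, ..., 1 kB  */
};

/* Hypothetical per-partition cache configuration pushed by the CPU to the FPGA. */
struct fc_partition_cfg {
    uint32_t partition_id;
    uint32_t dma_mode;     /* 0 = sequential DMA, 1 = chain DMA              */
    size_t   bd_count;
    struct fc_bd *bds;     /* preset buffers of different capacities         */
};

/* Step S100 (sketch): push each partition's configuration to the FPGA
 * through some register/DMA interface supplied by the caller. */
void push_partition_cfgs(const struct fc_partition_cfg *cfgs, size_t n,
                         void (*push_to_fpga)(const struct fc_partition_cfg *))
{
    for (size_t i = 0; i < n; i++)
        push_to_fpga(&cfgs[i]);
}
```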
In one embodiment, the present invention further provides an FC-device-based multi-partition receive-direction DMA communication method, which is performed based on the FC-device-based multi-partition receive-direction DMA communication system and includes the following steps:
step S100: the CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA;
step S200: the dma_top module receives a data unit and splits it to generate a current cache configuration, where the current cache configuration matches the cache configuration data of each partition;
step S300: the dma_top module performs DMA communication with the FC device according to the current cache configuration.
According to the FC-device-based multi-partition receive-direction DMA communication system and method, each partition and its corresponding cache configuration data are pushed to the FPGA in turn by the CPU in the FC device; the dma_top module receives a data unit, splits it, and generates a current cache configuration that matches the cache configuration data of each partition; the dma_top module then performs DMA communication with the FC device according to the current cache configuration. Slicing data inside the FPGA in this way improves buffer utilization; in addition, data can be transferred in either of two DMA modes, switchable by user configuration, to suit different transmission scenarios, so that the data processing efficiency of the system can be improved.
In one embodiment, the current cache configuration includes a plurality of cache groups, and each cache group includes preset buffers of the same capacity; the preset buffers in each cache group are the same or different;
step S200, in which the dma_top module receives a data unit, splits it, and generates a current cache configuration matching the cache configuration data of each partition, specifically includes:
step S210: the dma_top module receives a data unit, determines the target partition, and obtains the preset buffers in the target partition;
step S220: the dma_top module splits the capacity of the data unit according to the preset buffers and generates a current cache configuration, where the current cache configuration includes multiple split units, the total capacity of the split units equals that of the data unit, and the capacity of each split unit is one of the preset buffer capacities in the target partition.
Further, the FPGA can internally decompose the received IU (information unit) by size and upload the data; the IU is the data unit referred to above.
Further, inside a system partition, the cache is decomposed into multiple blocks, for example several buffers of 64 kB, 32 kB, 16 kB, and so on.
For the case where a partition contains buffers of sizes 64 kB, 32 kB, 16 kB, ..., 1 kB, the FPGA preferentially decomposes the IU into 64 kB pieces; when the remaining data is smaller than 64 kB, the unsatisfied portion is decomposed into 32 kB pieces, and the FPGA continues this judgment by analogy, with 1 kB as the minimum unit (for example, but not limited thereto), until the data of one IU has been cut into pieces that cannot be cut any further. Once the IU has been cut into blocks, the buffers of the corresponding sizes are taken directly for use; for example, if a piece of the IU is cut to 1 kB, the FPGA takes a 1 kB buffer to store that data.
This method improves buffer utilization to the greatest extent compared with the conventional approach of always using a fixed 64 kB buffer. For example, if there are IUs on several different channels and each IU is 2 kB, the conventional approach lets each small 2 kB IU occupy a 64 kB buffer, so resource utilization is low; when all transmission paths carry small single-frame IUs, a large number of buffers are occupied and the data bandwidth drops sharply. With the present method, a 2 kB buffer is called directly for use.
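The greedy splitting described above can be sketched as follows; this is a minimal illustration in C that assumes a fixed ladder of buffer sizes from 64 kB down to 1 kB, and the function names are not from the patent.

```c
#include <stdio.h>
#include <stddef.h>

/* Assumed preset buffer capacities in a partition, largest first (bytes). */
static const size_t kBufSizes[] = {
    64 * 1024, 32 * 1024, 16 * 1024, 8 * 1024,
     4 * 1024,  2 * 1024,  1 * 1024
};
#define NUM_SIZES (sizeof(kBufSizes) / sizeof(kBufSizes[0]))

/* Greedily split an IU of iu_bytes into preset buffer capacities:
 * take 64 kB pieces first, then 32 kB for what remains, and so on
 * down to the 1 kB minimum unit. Writes the chosen piece sizes to out[]
 * and returns how many pieces were produced. */
static size_t split_iu(size_t iu_bytes, size_t *out, size_t max_pieces)
{
    size_t n = 0;
    for (size_t i = 0; i < NUM_SIZES && iu_bytes > 0; i++) {
        while (iu_bytes >= kBufSizes[i] && n < max_pieces) {
            out[n++] = kBufSizes[i];
            iu_bytes -= kBufSizes[i];
        }
    }
    /* Any tail smaller than 1 kB still needs a 1 kB buffer. */
    if (iu_bytes > 0 && n < max_pieces)
        out[n++] = kBufSizes[NUM_SIZES - 1];
    return n;
}

int main(void)
{
    size_t pieces[64];
    size_t n = split_iu(177 * 1024, pieces, 64); /* 177 kB example IU */
    for (size_t i = 0; i < n; i++)
        printf("piece %zu: %zu kB\n", i, pieces[i] / 1024);
    return 0;
}
```

With these assumed sizes, a 177 kB IU is split into 64 kB + 64 kB + 32 kB + 16 kB + 1 kB, which matches the chained-buffer example given below.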
In one embodiment, the method further comprises:
when the sequential DMA mode is employed, data is processed in the order in which the buffers were received.
In one embodiment, when the chain DMA mode is adopted, a processing order set by the user is acquired, and data is processed according to that order.
If the IU for which DMA is initiated is 177 kB, it is split into blocks and held by a chain of buffers of different sizes used together (for example 64 kB + 64 kB + 32 kB + 16 kB + 1 kB). In addition, this function supports both sequential DMA and chain DMA, and each partition can be configured to use either mode according to its actual situation.
Specifically, the bd referred to below is the number (descriptor) of a buffer.
In sequential DMA mode, the cache is decomposed into multiple blocks; for example, the buffers of the first IU consist of buffer1: 64k (bd1) + buffer2: 8k (bd2), and the buffers of the second IU consist of buffer3: 64k (bd3) + buffer4: 32k (bd4).
In sequential DMA mode, the data of the second IU is processed only after the data of the first IU has been processed in order, so the CPU processes the bds in the order bd1, bd2, bd3, bd4. Here buffer1: 64k (bd1) means that the bd corresponding to buffer1 is denoted bd1 and its size is 64 kB.
In chain DMA mode, the cache is likewise decomposed into multiple blocks; for example, the buffers of the first IU consist of buffer1: 64k (bd1) + buffer2: 8k (bd2) + buffer3: 16k (bd3), and the buffers of the second IU consist of buffer4: 64k (bd4) + buffer5: 32k (bd5).
As shown in Fig. 2, which depicts the chain DMA structure of a complete IU, the header of each buffer contains the address of the next bd to be processed by DMA. For example, when the first IU needs DMA, the FPGA first performs DMA according to the bd1 address and at the same time fills bd2 into the header of the buffer1 corresponding to bd1, so that after reading the first buffer the CPU can jump directly to the content of buffer2 and process it, which improves the efficiency with which the CPU system processes data.
In addition, an advantage of the chain DMA mode is that when the first IU has received part of its data but has not yet finished, while the second IU has already been received, the buffer chain of the second IU can be processed first and the buffer chain of the first IU afterwards, or the data to be processed can be selected autonomously according to the user's choice on the PC side.
Because there are multiple destination partitions, each partition is independent of the others and their performance differs; each partition therefore informs the FPGA of its capabilities through register configuration, and the FPGA selects the mode and the split lengths for that partition accordingly.
In another embodiment, Fig. 3 shows the order in which buffers are received in a partition. When 3 buffers have been received for each of iu_0 and iu_1 and chain DMA is used, the partition may process iu_1 first according to the user's choice, even though the data of iu_0 was received first.
When iu_0 has 4 buffers of data in total but only 3 have been received, while iu_1 has already received its 3 buffers (3 in total), the partition can preferentially choose to process the iu_1 data. In addition, when multiple IUs exist, the IUs can be processed according to the user's selection; if sequential DMA is used, the PC side simply processes the data in the order in which the buffers were received.
In one embodiment, the rx_top module receives the data sent by FC_MAC, parses the FC frame header and data, and sends the data to the dma_top module according to its channel; in dma_top, the IU of a channel is partitioned into blocks of different sizes, the corresponding bd value is obtained according to the block size, the physical address corresponding to the bd is read, and the data is reported by DMA;
when the channel data corresponds to unicast, that is, the data of a single channel is sent to only one destination partition, dma_top reports the data directly to that destination partition; when the channel data corresponds to multicast, the channel data needs to be sent to different destination partitions. During reporting, the DMA reporting mode is determined according to the chain DMA or sequential DMA mode configured by the CPU, and finally the data reporting in the receive direction is completed.
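The reporting decision described above can be sketched as follows; the enum, structures, and function names are illustrative assumptions, and the actual logic is implemented inside the FPGA rather than in software.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

enum dma_mode { DMA_SEQUENTIAL = 0, DMA_CHAIN = 1 };

/* Hypothetical destination partition handle with its configured DMA mode. */
struct dest_partition {
    uint32_t      id;
    enum dma_mode mode;
};

/* Stand-in for the actual DMA write; a real implementation would use the
 * partition's configured mode (sequential or chain) when reporting. */
static void dma_report(const struct dest_partition *p, const void *data, size_t len)
{
    printf("report %zu bytes to partition %u (%s DMA)\n",
           len, (unsigned)p->id, p->mode == DMA_CHAIN ? "chain" : "sequential");
    (void)data;
}

/* Report a channel's data block: one destination for unicast,
 * every destination in the list for multicast. */
static void report_channel_data(const void *data, size_t len,
                                const struct dest_partition *dests,
                                size_t ndests, int is_multicast)
{
    if (!is_multicast) {
        dma_report(&dests[0], data, len);      /* unicast: single destination */
    } else {
        for (size_t i = 0; i < ndests; i++)    /* multicast: all destinations */
            dma_report(&dests[i], data, len);
    }
}

int main(void)
{
    struct dest_partition parts[] = {
        { .id = 1, .mode = DMA_CHAIN },
        { .id = 2, .mode = DMA_SEQUENTIAL },
    };
    uint8_t frame[2048] = {0};
    report_channel_data(frame, sizeof frame, parts, 2, 1 /* multicast */);
    return 0;
}
```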
In one embodiment, the method further comprises:
step S410: data is sent from the opposite end and enters the rx_top module of the FPGA, and the rx_top module completes frame parsing, receive-port rate control, data-cache control in the multicast case, and data de-duplication;
step S420: the dma_top module completes the DMA write and store to the destination partition cache in the receive direction.
As shown in Fig. 4, after power-on initialization, each destination partition pushes BDs of different sizes (corresponding to buffers of different sizes) to the FPGA according to its own performance and characteristics; for example, partition 1 may push several BDs of different sizes, such as 64 kB and 32 kB, to the FPGA, its largest buffer being 64 kB.
When partition 1 and partition 2 both need to receive a 129 kB IU_0, the dma_top module inside the FPGA can take two 64 kB BDs and one 1 kB BD of partition 1 for partition 1, so that the 129 kB of data can be completely split and sent to destination partition 1. In a special case, after the last 1 kB of data has been split off, if no 1 kB BD is available in the BD configuration of partition 1, the next larger BD size is searched, and so on upward until an available BD is found (for example, if no available BD is found at 2 kB but a 4 kB BD is found, the 1 kB of data is put into the buffer corresponding to the 4 kB BD, provided that 2 kB and 4 kB entries exist in the BD configuration of partition 1). Although a 4 kB buffer is then used for only 1 kB of data, the data is not blocked and subsequent data transmission is not affected.
Since the largest bd configured in partition 2 is 32 kB, IU_0 is cut into four 32 kB blocks and one 1 kB block and filled into the buffers corresponding to those bds, thereby completing the transmission of IU_0 to partition 2.
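A minimal sketch of this upward fallback search for a usable BD size follows; the per-size pool structure and function names are illustrative assumptions rather than the patent's implementation.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical per-size BD pool of one partition: how many free BDs remain
 * for each configured buffer size, ordered from smallest to largest. */
struct bd_pool {
    size_t size_bytes;   /* e.g. 1 kB, 2 kB, 4 kB, ..., 64 kB */
    size_t free_count;
};

/* Pick a BD for a piece of `need` bytes: start at the smallest size that fits
 * and, if that size class has no free BD, keep searching upward until an
 * available size is found. Returns the index of the chosen pool, or -1. */
static int pick_bd(struct bd_pool *pools, size_t npools, size_t need)
{
    for (size_t i = 0; i < npools; i++) {
        if (pools[i].size_bytes < need)
            continue;                 /* too small for this piece */
        if (pools[i].free_count > 0) {
            pools[i].free_count--;    /* take one BD of this size */
            return (int)i;
        }
        /* size fits but none free: fall through to the next larger size */
    }
    return -1;                        /* no BD available at all */
}

int main(void)
{
    /* Partition 1 example: no free 1 kB or 2 kB BDs, so a 1 kB piece
     * falls back to a 4 kB BD. */
    struct bd_pool p1[] = {
        { 1 * 1024, 0 }, { 2 * 1024, 0 }, { 4 * 1024, 3 },
        { 32 * 1024, 2 }, { 64 * 1024, 2 }
    };
    int idx = pick_bd(p1, sizeof p1 / sizeof p1[0], 1 * 1024);
    if (idx >= 0)
        printf("1 kB piece placed in a %zu kB buffer\n", p1[idx].size_bytes / 1024);
    return 0;
}
```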
In an embodiment, as shown in Fig. 5, a computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the FC-device-based multi-partition receive-direction DMA communication method described above.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, carries out the steps of the FC-device-based multi-partition receive-direction DMA communication method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A multi-partition receive-direction DMA communication system based on an FC device, comprising an FPGA and an FC device, wherein the FC device is provided with a receive-direction cache region and a transmit-direction cache region, and partitions are arranged in both the receive-direction cache region and the transmit-direction cache region; characterized in that cache configuration data is preset in each partition; the CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA for the FPGA to use;
the FPGA includes a dma_top module configured to receive a data unit, split the data unit, and generate a current cache configuration, where the current cache configuration matches the cache configuration data of each of the partitions, and to then perform DMA communication with the FC device according to the current cache configuration.
2. The FC-device-based multi-partition receive-direction DMA communication system according to claim 1, wherein the FPGA further comprises an rx_top module, a tx_top module, and an FC_MAC module; the rx_top module and the tx_top module are both connected with the dma_top module; the FC_MAC module is connected with both the rx_top module and the tx_top module; and the FC_MAC module is used for data interaction with an opposite end.
3. The FC-device-based multi-partition receive-direction DMA communication system according to claim 1, wherein the cache configuration data of one of the partitions includes a plurality of buffers, and the capacities of the buffers in each partition are different.
4. A method for FC-device-based multi-partition receive-direction DMA communication, performed based on a system according to any one of claims 1 to 3, comprising the following steps:
step S100: the CPU in the FC device pushes each partition and its corresponding cache configuration data to the FPGA;
step S200: the dma_top module receives a data unit and splits it to generate a current cache configuration, where the current cache configuration matches the cache configuration data of each partition;
step S300: the dma_top module performs DMA communication with the FC device according to the current cache configuration.
5. The FC-device-based multi-partition receive-direction DMA communication method according to claim 4, wherein the current cache configuration comprises a plurality of cache groups, and each cache group includes preset buffers of the same capacity; the preset buffers in each cache group are the same or different;
step S200, in which the dma_top module receives a data unit, splits it, and generates a current cache configuration matching the cache configuration data of each partition, specifically includes:
step S210: the dma_top module receives a data unit, determines the target partition, and obtains the preset buffers in the target partition;
step S220: the dma_top module splits the capacity of the data unit according to the preset buffers and generates a current cache configuration, where the current cache configuration includes multiple split units, the total capacity of the split units equals that of the data unit, and the capacity of each split unit is one of the preset buffer capacities in the target partition.
6. The FC-device-based multi-partition receive-direction DMA communication method according to claim 5, characterized in that it further comprises:
when the sequential DMA mode is employed, data is processed in the order in which the buffers were received.
7. The FC-device-based multi-partition receive-direction DMA communication method according to claim 5, wherein when the chain DMA mode is adopted, a processing order set by a user is acquired, and data is processed according to that order.
8. The FC-device-based multi-partition receive-direction DMA communication method according to claim 5, further comprising:
step S410: data is sent from the opposite end and enters the rx_top module of the FPGA, and the rx_top module completes frame parsing, receive-port rate control, data-cache control in the multicast case, and data de-duplication;
step S420: the dma_top module completes the DMA write and store to the destination partition cache in the receive direction.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202310124951.9A 2023-02-16 2023-02-16 Fc-equipment-based multi-partition receiving direction dma communication system and method Pending CN115952117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310124951.9A CN115952117A (en) 2023-02-16 2023-02-16 Fc-equipment-based multi-partition receiving direction dma communication system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310124951.9A CN115952117A (en) 2023-02-16 2023-02-16 Fc-equipment-based multi-partition receiving direction dma communication system and method

Publications (1)

Publication Number Publication Date
CN115952117A true CN115952117A (en) 2023-04-11

Family

ID=87297000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310124951.9A Pending CN115952117A (en) 2023-02-16 2023-02-16 Fc-equipment-based multi-partition receiving direction dma communication system and method

Country Status (1)

Country Link
CN (1) CN115952117A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240202

Address after: 12th Floor, Building 5, No. 18 Qingjiang South Road, Gulou District, Nanjing City, Jiangsu Province, 210036

Applicant after: NANJING QUANXIN CABLE TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 201103 room 110, building 3, 1128 Wuzhong Road, Minhang District, Shanghai

Applicant before: SHANGHAI SAIZHI INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right