CN105141546A - Method for reducing FIFO (first input first output) overhead in data forwarding process - Google Patents

Method for reducing FIFO (first input first output) overhead in data forwarding process

Info

Publication number
CN105141546A
CN105141546A (application number CN201510389392.XA)
Authority
CN
China
Prior art keywords
fifo
data
message
data forwarding
forwarding process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510389392.XA
Other languages
Chinese (zh)
Inventor
毕研山
于治楼
姜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Group Co Ltd
Original Assignee
Inspur Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Group Co Ltd
Priority to CN201510389392.XA
Publication of CN105141546A

Abstract

The invention relates to the technical field of FPGA (field-programmable gate array) based data forwarding, and in particular to a method for reducing FIFO overhead in the data forwarding process. In the invention, after the FPGA receives packet data, the packet payload is cached and the packet header is passed to the subsequent modules; only packet header data is transferred between the modules, and once the header has been processed the packet payload is read out of the cache to complete the data forwarding. The method for reducing FIFO overhead in the data forwarding process can reduce the FIFO overhead as much as possible on the premise that functional and performance requirements are met, thereby significantly reducing the hardware cost of the FPGA board.

Description

Method for reducing FIFO overhead in the data forwarding process
Technical field
The present invention relates to the technical field of FPGA-based data forwarding, and in particular to a method for reducing FIFO overhead in the data forwarding process.
Background technology
FIFO is the abbreviation of First Input First Output (first in, first out). It is a conventional in-order execution scheme: the entry that enters first is completed and retired first, and only then is the next entry processed.
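Purely as an illustration of this behaviour (the snippet below is not part of the original text), a FIFO can be modelled in software as a queue from which elements are read in exactly the order in which they were written:

```python
from collections import deque

fifo = deque()
for word in ("first", "second", "third"):
    fifo.append(word)      # write side: elements enter in arrival order
print(fifo.popleft())      # read side: prints "first"
print(fifo.popleft())      # then "second" - arrival order is preserved
```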
In FPGA (Field-Programmable Gate Array) based data forwarding designs, FIFOs are indispensable components of modules such as data buffering, packet shaping, flow control, policy lookup, and channel selection. The FIFO instantiated by each module must be large enough to hold the largest packet that flows through that module; the more modules there are, the more FIFO capacity is required. Because the RAM resources inside the FPGA are limited, the FIFO overhead directly determines how much RAM remains available inside the FPGA.
Prior-art data forwarding methods contain no design for reducing FIFO overhead.
Summary of the invention
To solve the problems of the prior art, the invention provides a method for reducing FIFO overhead in the data forwarding process. On the premise that functional and performance requirements are met, the FIFO overhead can be reduced as much as possible, thereby significantly reducing the hardware cost of the FPGA board.
The technical solution adopted by the present invention is as follows:
A method for reducing FIFO overhead in the data forwarding process comprises the following steps:
A. After the FPGA receives packet data, the packet payload is cached and the packet header is sent to the subsequent modules;
B. After the packet header has been processed, the packet payload is read out of the cache and spliced back onto the packet header to form a complete packet, which is then forwarded, completing the data forwarding.
In step A, the packet payload, which does not take part in any computation, is cached only in the FIFO of the ingress port.
In step A, each subsequent module that processes the data stores and processes only the packet header data and no longer caches the packet payload.
The beneficial effects brought by the technical solution provided by the invention are as follows:
1. The packet payload, which does not take part in any computation, is cached only in the FIFO of the ingress port; each module that processes the data stores and processes only the packet header data and no longer caches the packet payload.
2. After the packet header data has been processed, the cached packet payload is read out and spliced back onto the header to form a complete packet, which is then forwarded, achieving the forwarding of the packet.
3. Because no module other than the ingress module caches the packet payload separately, the FIFO usage is reduced to 1/N, where N is the number of modules in the forwarding design.
In summary, with the method of the present invention, the FIFO usage on the data forwarding plane can be reduced to 1/N of its original amount (N is the number of modules in the forwarding design). The method is suitable for applications where the FPGA's internal RAM resources are tight and the data traffic requirements are not high.
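As an illustrative calculation (the figures below are assumptions for illustration only, not part of the original disclosure): suppose the forwarding design contains N = 4 modules and the largest packet flowing through it is 1500 bytes. A conventional design instantiates a full-packet FIFO in every module, roughly 4 × 1500 = 6000 bytes of buffering, whereas with the present method only the ingress FIFO must hold the 1500-byte payload and the remaining modules buffer only the much smaller header, so the payload buffering drops to roughly 1/4 of the original.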
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a working-principle diagram of the method for reducing FIFO overhead in the data forwarding process according to the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawing.
Embodiment 1
As shown in Fig. 1, the method for reducing FIFO overhead in the data forwarding process of this embodiment comprises the following steps:
After the FPGA receives packet data, the packet payload is cached and the packet header is sent to the subsequent modules; only packet header data is passed between the modules. After header processing finishes, the packet payload is read out of the cache, spliced back onto the header, and the complete packet is forwarded, which completes the data forwarding. The packet payload, which does not take part in any computation, is cached only in the FIFO of the ingress port; each module that processes the data stores and processes only the packet header data and no longer caches the packet payload.
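The following is a minimal behavioural sketch of this flow, written in Python purely as a software illustration; it is not the FPGA implementation and is not part of the original disclosure. The fixed header length, the function names, and the placeholder header-processing modules are assumptions made only for this example.

```python
from collections import deque

HEADER_LEN = 16  # assumed fixed header length in bytes (illustrative only)

def forward(packets, header_modules):
    """Behavioural model: cache payloads in a single ingress FIFO, pass only
    headers through the processing modules, then splice and forward."""
    payload_fifo = deque()          # the single ingress FIFO holding payloads
    forwarded = []
    for pkt in packets:
        header, payload = pkt[:HEADER_LEN], pkt[HEADER_LEN:]
        payload_fifo.append(payload)        # step A: payload cached once
        for module in header_modules:       # step A: only the header travels
            header = module(header)
        payload = payload_fifo.popleft()    # step B: read payload back out
        forwarded.append(header + payload)  # step B: splice and forward
    return forwarded

# Example use: two trivial header-processing modules (placeholders for
# shaping, policy lookup, channel selection, etc.)
if __name__ == "__main__":
    modules = [lambda h: h, lambda h: h]    # identity processing for the demo
    out = forward([b"H" * 16 + b"payload-bytes"], modules)
    print(out)
```

Because the payload FIFO is read in the same order in which it is written, each processed header is spliced back onto its own payload, which is exactly the ordering property the ingress FIFO provides in the hardware design.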
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (3)

1. A method for reducing FIFO overhead in the data forwarding process, comprising the following steps:
A. after the FPGA receives packet data, caching the packet payload and sending the packet header to the subsequent modules;
B. after the packet header has been processed, reading the packet payload out of the cache, splicing it back onto the packet header to form a complete packet, and forwarding the packet, thereby completing the data forwarding.
2. The method for reducing FIFO overhead in the data forwarding process according to claim 1, characterized in that in said step A, the packet payload, which does not take part in any computation, is cached only in the FIFO of the ingress port.
3. The method for reducing FIFO overhead in the data forwarding process according to claim 1, characterized in that in said step A, each subsequent module that processes the data stores and processes only the packet header data and no longer caches the packet payload.
CN201510389392.XA 2015-07-06 2015-07-06 Method for reducing FIFO (first input first output) overhead in data forwarding process Pending CN105141546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510389392.XA CN105141546A (en) 2015-07-06 2015-07-06 Method for reducing FIFO (first input first output) overhead in data forwarding process

Publications (1)

Publication Number Publication Date
CN105141546A 2015-12-09

Family

ID=54726759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510389392.XA Pending CN105141546A (en) 2015-07-06 2015-07-06 Method for reducing FIFO (first input first output) overhead in data forwarding process

Country Status (1)

Country Link
CN (1) CN105141546A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257457A (en) * 2008-03-31 2008-09-03 华为技术有限公司 Method for network processor to copy packet and network processor
CN102932262A (en) * 2011-08-11 2013-02-13 中兴通讯股份有限公司 Network processor and image realizing method thereof
CN102739525A (en) * 2012-06-08 2012-10-17 中兴通讯股份有限公司 Message copying method and device
US20140282560A1 (en) * 2013-03-14 2014-09-18 Altera Corporation Mapping Network Applications to a Hybrid Programmable Many-Core Device

Similar Documents

Publication Publication Date Title
CN102104544B (en) Order preserving method for fragmented message flow in IP (Internet Protocol) tunnel of multi-nuclear processor with accelerated hardware
CN106788975B (en) encryption and decryption device based on SM4 cryptographic algorithm
US20130219148A1 (en) Network on chip processor with multiple cores and routing method thereof
CN110995598B (en) Variable-length message data processing method and scheduling device
CN103701710A (en) Data transmission method, core forwarding equipment and endpoint forwarding equipment
CN104636300A (en) Serial transceiver based on SOC FPGA and data receiving and sending method
CN105243399A (en) Method of realizing image convolution and device, and method of realizing caching and device
CN110287023A (en) Message treatment method, device, computer equipment and readable storage medium storing program for executing
CN105635000A (en) Message storing and forwarding method, circuit and device
CN103916316A (en) Linear speed capturing method of network data packages
CN102831091A (en) Serial port-based ship radar echo data collecting method
CN105446699A (en) Data frame queue management method
CN105141546A (en) Method for reducing FIFO (first input first output) overhead in data forwarding process
CN103338156A (en) Thread pool based named pipe server concurrent communication method
CN105471770A (en) Multi-core-processor-based message processing method and apparatus
CN102780620B (en) A kind of network processes device and message processing method
US10805233B2 (en) Fractal-tree communication structure and method, control apparatus and intelligent chip
CN110633233A (en) DMA data transmission processing method based on assembly line
CN107196879A (en) Processing method, device and the forwarded device of UDP messages
CN107783926B (en) FPGA and PC communication method based on PowerPC and internet access
CN115967589A (en) ARM and FPGA-based high-speed buffer type CAN bus communication system and method
CN104618083A (en) Multi-channel message transmitting method
CN111090611A (en) Small heterogeneous distributed computing system based on FPGA
CN105912400B (en) A kind of resource regulating method based on Zynq platform
CN104035913A (en) High-performance BW100 chip based SAR (Synthetic Aperture Radar) parallel processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151209