CN118101608A - Gigabit single optical port server adapter with low data overflow - Google Patents

Gigabit single optical port server adapter with low data overflow

Info

Publication number
CN118101608A
Authority
CN
China
Prior art keywords
module
data
transmission
queue
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310658501.8A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request (请求不公布姓名)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guang Runtong Technology Development Co ltd
Original Assignee
Beijing Guang Runtong Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guang Runtong Technology Development Co ltd filed Critical Beijing Guang Runtong Technology Development Co ltd
Priority to CN202310658501.8A priority Critical patent/CN118101608A/en
Publication of CN118101608A publication Critical patent/CN118101608A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/901: Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/351: Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04L 49/352: Gigabit ethernet switching [GBPS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/356: Switches specially adapted for specific applications for storage area networks
    • H04L 49/357: Fibre channel switches
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/40: Constructional details, e.g. power supply, mechanical construction or backplane
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9031: Wraparound memory, e.g. overrun or underrun detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a gigabit single optical port server adapter with low data overflow, wherein a transmit buffer module of the adapter comprises: a second transmit queue configured as a ring queue for buffering the write pointer and the corresponding frame sequence numbers; a third transmit queue configured as a ring queue for buffering the read pointer and the corresponding frame sequence numbers; and a sending state judging sub-module. The beneficial effects of the invention include: the host reads the empty/full identifier in the first transmit data sub-module and judges whether to continue sending data frames, which further reduces data overflow and significantly alleviates the metastability problem introduced by handshake mechanisms.

Description

Gigabit single optical port server adapter with low data overflow
This application is a divisional application of the application with application number 202010415160.8, filed on 2020.05.15.
Technical Field
The invention belongs to the technical field of adapters, and particularly relates to a gigabit single-optical-port server adapter.
Background
With the progress of technology, the demand for information keeps growing, and the transmission medium for network data has gradually shifted from electrical cabling to optical fiber. Optical fiber transmission offers large capacity, good confidentiality, high speed and convenience. Optical fiber server adapters are used for data transmission in high-end equipment such as servers and desktop computers; they use an Ethernet controller to connect different ports so as to convert and transmit data.
The Ethernet controller includes an Ethernet media access controller (MAC layer controller) and a physical layer interface chip (PHY layer controller); the MAC layer controller is the key to implementing flow control. A structural framework of an Ethernet MAC controller IP core disclosed in the prior art is shown in fig. 1. In a conventional Ethernet controller, when the port rates of the connected receiving device and sending host are inconsistent, data overflow occurs. To prevent data overflow, the receiving device sends a pause frame to the sending host; the host separates the control frame according to the content of the pause frame and submits it to the flow control module, which parses the control frame, extracts the control parameters in the frame, and determines the pause duration accordingly. When the receiving device is congested, the host port typically receives multiple pause frames in succession; as long as the congestion at the receiving device is not relieved, the relevant port keeps sending pause frames, which lowers the network rate and can reduce a gigabit link to below 100 Mbps.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a gigabit single optical port server adapter.
One of the technical solutions of the present invention provides a gigabit single optical port server adapter, the server adapter includes:
A PHY layer controller,
A MAC layer controller,
A PCIE interface connected to the MAC layer controller; and
A gigabit fiber port configured to be connected to the PHY layer controller;
The MAC layer controller comprises a bus interface, a receiving buffer module, a sending buffer module, a flow control module, a receiving module, a first sending module, a second sending module and a PHY interface;
The bus interface is configured to connect with a PCIE interface;
the receiving buffer module is connected with the bus interface and is configured to manage the buffer of the received data frames;
The sending buffer module is connected with the bus interface and is configured to manage the sent data frame buffer;
The sending buffer module includes:
a first transmit data sub-module configured to buffer data frames to be transmitted;
The second transmit data sub-module is configured to buffer the data frames to be sent when the flow control module determines that the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold;
The first sending module is connected with the PHY interface and is configured to send the data frame to be sent, which is cached in the first sending data sub-module, to the PHY layer controller through the PHY interface;
The second transmitting module is connected with the PHY interface and is configured to transmit the data frame to be transmitted, which is cached by the second transmitting data sub-module, to the PHY layer controller through the PHY interface;
the receiving module is connected with the PHY interface and is configured to analyze the data frames and buffer legal frames to the receiving buffer module;
the flow control module is further configured to monitor in real time the number of data frames buffered within the first transmit data sub-module and compare it with a data frame threshold, and is further configured to control the second transmitting module to read and transmit the data frames in the second transmit data sub-module after the preset time.
In a further development of the invention,
When the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, the flow control module changes the buffered flow control field in the flow status register to 1 and continues to detect; when the number of buffered data frames is not greater than the data frame threshold, the flow control field is changed back to 0.
In a further improved scheme, the sending buffer module further includes:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
the transmission buffer control sub-module is configured to generate a transmission buffer descriptor corresponding to a data frame to be transmitted;
And a transmission buffer status queue configured to, when the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold, read the most recent frame sequence number in the transmission completion queue whose transmission state indicates a successful transmission, and to record all frame sequence numbers after that frame sequence number.
In a further improved scheme, the second transmit data sub-module is configured to keep an updated buffer, within the preset time, of the data frames from the nth data frame sent by the first transmit data sub-module up to the most recently sent data frame, and when the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold, the second transmit data sub-module is further configured to buffer the data frames to be sent;
the sending buffer module further includes:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
The sending buffer control sub-module is configured to generate a sending buffer descriptor corresponding to a data frame to be sent.
In a further improved scheme, the sending buffer module further includes:
the second transmission queue is configured as a ring queue and is used for caching the write pointer and the corresponding frame sequence number;
a third transmit queue configured as a ring queue for buffering a read pointer and a corresponding frame number;
The first transmission queue and the first transmission data sub-module are ring queues, and the depths of the data structures of the first transmission data sub-module, the first transmission queue, the second transmission queue and the third transmission queue are equal; in a cycle time period t, N-1=N1+N2, where N is the number of frame sequence numbers stored in the first transmission queue, N1 is the number of frame sequence numbers stored in the second transmission queue, and N2 is the number of frame sequence numbers stored in the third transmission queue;
A transmission state determination sub-module configured to record the storage state of the data frames in the first transmission data sub-module, the identifier bit being 1 when the storage state is near full and 0 when the storage state is near empty; the sub-module is further configured to compare N1 and N2: when N1 is greater than N2, the storage state of the first transmission data sub-module is near empty, and when N1 is less than N2, the storage state of the first transmission data sub-module is near full, and the corresponding identifier is marked in the first transmission data sub-module.
In a further improved scheme, the data frame threshold value x is calculated according to the following formula:
x = (N1 + N2 - 1) / t.
In a further improved scheme, the receiving buffer module includes:
the receiving data sub-module is configured as a ring queue and is used for buffering received data frames for the host to read;
A first receive queue configured to buffer receive buffer control symbols;
The second receiving queue is configured as a ring queue carrying empty and full marks, and is further configured to buffer, within a cycle time period t, the frame sequence numbers corresponding to each read and each write of the read pointer and the write pointer of the receiving data sub-module;
the capacity and the depth of the received data submodule, the first receiving queue and the second receiving queue are equal;
The receiving empty-full judging sub-module is configured to read the frame sequence numbers corresponding to the read pointer and the write pointer in the second receiving queue; when the position corresponding to the read pointer is empty and no write-pointer frame number exists at the adjacent position, the second receiving queue is judged to be empty; when the position corresponding to the write pointer is full and no read-pointer frame number exists at the adjacent position, the ring queue is judged to be full;
and a receive buffer control sub-module configured to generate a receive buffer control symbol for a data frame to be received.
In a further improved scheme, the flow control module is further configured to monitor an application program in use, monitor the number of data frames transmitted by the application program, and determine an application program for performing a binary backoff algorithm according to the number of data frames transmitted by the application program when the number of data frames buffered in the first transmission data sub-module is detected to be greater than a data frame threshold.
In a further improved scheme, the receiving module comprises an address filtering sub-module, and the address filtering sub-module is configured to judge whether a filtering rule table corresponding to the destination IP of the data frame exists locally or not, and find a corresponding filtering rule according to the filtering rule table; the filtering rule table comprises fields including a source IP address, a source port, a destination port, a transport layer protocol, a first hash value and a second hash value; wherein the first HASH value is a HASH code generated from the IP address according to a fixed number of bits; the second HASH value is a HASH code generated from the IP address according to a random number.
In a further improved scheme, a priority sending queue is arranged in the flow control module, and the priority sending queue is used for caching the Pause frame.
The gigabit single-optical-port server adapter provided by the invention supports a host-side interface and an optical fiber port, and can realize communication between a server and high-end equipment such as desktops; by providing the first transmit data sub-module, the second transmit data sub-module, the first sending module and the second sending module, it solves the problem of reduced transmission rate caused by continuously sending pause frames.
Drawings
Fig. 1 is a block diagram of an ethernet MAC controller IP core disclosed in the prior art;
FIG. 2 is a block diagram illustrating a gigabit single optical port server adapter in accordance with some embodiments of the present invention;
FIG. 3 is a block diagram illustrating a transmit buffer module according to some embodiments of the invention;
FIG. 4 is a block diagram illustrating a sending buffer module according to other embodiments of the present invention;
FIG. 5 is a block diagram illustrating a transmission buffer module according to a third embodiment of the present invention;
fig. 6 is a block diagram illustrating a structure of a receive buffer module according to some embodiments of the present invention.
Detailed Description
The above examples are merely illustrative of the preferred embodiments of the present invention and are not intended to limit the scope of the present invention, and various modifications and improvements made by those skilled in the art to which the present invention pertains should fall within the scope of the present invention as defined in the appended claims without departing from the spirit of the present invention.
Fig. 1 shows a structural framework of an IP core of an ethernet MAC controller disclosed in the prior art, which specifically includes:
PHY interface module: according to the working mode of the PHY, converts between the different data bit widths of the MII and GMII interfaces, thereby providing a unified data bit width to the upper-layer sending module and receiving module;
and a sending module: the main functions are to complete channel access control according to CSMA/CD mechanism, and to encapsulate the data to be sent of upper layer into the format of Ethernet frame, add the preamble, frame start delimiter, PAD and CRC check field for it and send out;
And a receiving module: filtering unicast/multicast/broadcast frames, performing CRC (cyclic redundancy check), filtering out frame fragments, transmitting legal frames to an upper layer, and reporting the frame receiving state to the upper layer after the receiving is finished;
and a flow control module: completing the flow control function under full duplex;
transmit buffer/receive buffer: management of transmission/reception frame buffering is realized;
AHB bus interface: an external bus interface for completing communication with the ARM core and other AHB interface units;
MII management module: completes the monitoring and setting of the PHY working mode;
Clock management module: generates the working clocks and clock enable signals of all modules in different working modes;
Register and interrupt module: is responsible for system mode configuration and interrupt management.
The MAC layer controller in the gigabit single optical port server adapter disclosed by the application further improves the functions of some of the modules of the MAC layer controller shown in fig. 1. See in particular fig. 2.
As shown in fig. 2, some embodiments of the invention provide a gigabit single optical port server adapter comprising:
A PHY layer controller,
The PHY layer controller is a conventional physical layer chip, and may be, for example, an 802.3 PHY. The data transmit/receive flow is as follows: taking transmission as an example, when the PHY layer controller sends data, it receives the data delivered by the MAC layer controller, adds a 1-bit error detection code for every 4 bits, converts the parallel data into a serial stream, encodes the data according to the encoding rule of the physical layer, and then converts it into an analog signal for transmission. The flow is reversed when receiving data.
A MAC layer controller,
A PCIE interface connected to the MAC layer controller;
the PCIE interface is used for connecting with a host computer to realize communication between the host computer and equipment at the other end through the server adapter.
A gigabit fiber port configured to be connected to the PHY layer controller;
The MAC layer controller comprises a bus interface, a receiving buffer module, a sending buffer module, a flow control module, a receiving module, a first sending module, a second sending module and a PHY interface;
The bus interface is configured to connect with a PCIE interface;
Wherein, the bus interface can be an AHB bus interface; the bus interface is an external interface and is used for completing communication with the ARM core and other interface units;
the receiving buffer module is connected with the bus interface and is configured to manage the buffer of the received data frames;
the receiving buffer module stores the valid data frames and records their storage information and reception state information;
The sending buffer module is connected with the bus interface and is configured to manage the sent data frame buffer;
The transmission buffer module temporarily stores the data frames to be transmitted and records the transmission state information; the sending buffer module includes:
a first transmit data sub-module configured to buffer data frames to be transmitted;
The second transmit data sub-module is configured to buffer the data frames to be sent when the flow control module determines that the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold;
The first sending data sub-module and the second sending data sub-module are asynchronous FIFO (first in first out) which can be loaded by a read pointer, the bit width is 32 bits, the depth is 1024, and the working mode is set to be 0;
The first sending module is connected with the PHY interface and is configured to send the data frame to be sent, which is cached in the first sending data sub-module, to the PHY layer controller through the PHY interface;
The second transmitting module is connected with the PHY interface and is configured to transmit the data frame to be transmitted, which is cached by the second transmitting data sub-module, to the PHY layer controller through the PHY interface;
The PHY interface is mainly used to unify the 4-bit data width of the MII interface and the 8-bit data width of the GMII interface into an 8-bit width; it is also responsible for forwarding the rx_er, col and crs signals to the sending and receiving modules and forwarding the mtx_er signal to the PHY layer controller.
The first sending module and the second sending module are further used for completing channel access control according to a CSMA/CD mechanism, packaging the data frame to be sent into an Ethernet format, adding a preamble, a frame start delimiter, a PAD and a CRC check field to the data frame, and sending the data frame.
The receiving module is connected with the PHY interface and is configured to analyze the data frames and buffer legal frames to the receiving buffer module;
the receiving module specifically completes the following tasks:
1) Identifying a preamble and a frame start delimiter, and detecting a frame boundary;
2) Unicast/multicast/broadcast address filtering;
3) Performing CRC on the data frame;
4) Performing length check on the data frame;
5) Removing the preamble, the frame start delimiter, the PAD and the CRC field from the legal frame and then delivering the legal frame to an upper layer;
6) And reporting the frame receiving state to an upper layer after the receiving is finished.
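As an illustrative aid (not part of the patent text), the following C sketch shows how steps 3) and 4) above could be checked in software: it validates the Ethernet frame length and verifies the FCS with a CRC-32 computation. The function names and the assumption that the preamble and frame start delimiter have already been stripped are ours; in the adapter the receiving module performs these checks in hardware.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Bitwise CRC-32 (IEEE 802.3, reflected form, polynomial 0xEDB88320). */
static uint32_t crc32_ieee(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* A frame is "legal" if its length is within Ethernet bounds and its FCS
 * matches the CRC computed over the rest of the frame.  `frame` is assumed
 * to start at the destination MAC address, i.e. the preamble and the frame
 * start delimiter have already been removed by the PHY interface. */
static bool frame_is_legal(const uint8_t *frame, size_t len)
{
    if (len < 64 || len > 1518)            /* length check for an untagged frame */
        return false;
    uint32_t fcs = (uint32_t)frame[len - 4]
                 | ((uint32_t)frame[len - 3] << 8)
                 | ((uint32_t)frame[len - 2] << 16)
                 | ((uint32_t)frame[len - 1] << 24);   /* FCS is sent least significant byte first */
    return crc32_ieee(frame, len - 4) == fcs;
}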
The flow control module is further configured to monitor in real time the number of data frames buffered within the first transmit data sub-module and compare it with a data frame threshold, and is further configured to control the second transmitting module to read and transmit the data frames in the second transmit data sub-module after the preset time.
The preset time may be a manually specified time, for example 30 ms, 1 s or 2 s, or it may be the pause duration obtained by parsing a pause frame as in the prior art.
In addition, the flow control module is configured to perform the function of flow control in full duplex/half duplex.
It should be noted that the gigabit single optical port server adapter provided by the present application may also be configured with the MII management module, clock management module, and register and interrupt module disclosed in the prior art, which are not specifically limited herein.
The process of receiving data and sending data by the gigabit single optical port server adapter provided by the application is as follows:
The flow of transmitting data: the host temporarily stores the data frame to be transmitted into the transmission buffer module through the PCIE interface, and the first transmission module reads the data from the transmission buffer module and transmits the data frame to be transmitted to the PHY layer controller through the PHY interface.
The flow of receiving data is as follows: the device connected with the PHY layer controller sends the data to be sent to the receiving module through the PHY interface, the receiving module analyzes and processes the received data frames, legal frames are cached in the receiving cache module, and the host reads the data frames stored in the receiving cache module.
The flow of data transmission by the second sending module provided by the invention is as follows: when the flow control module detects that the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, it controls the second sending module to read and send the data frames buffered in the second transmit data sub-module after the preset time.
In some preferred embodiments, the flow control module changes the buffered flow control field in the flow status register to 1 when the number of buffered data frames in the first transmit data sub-module is greater than the data frame threshold, continues to detect, and changes the flow control field to 0 when the number of buffered data frames is not greater than the data frame threshold.
The flow of data buffering by the host is as follows: the host reads the flow control field in the flow status register (stored in the receive buffer module). When the field is 1, it indicates that the first transmit data sub-module is close to full (the number of buffered data frames exceeds the threshold), and the host buffers the data to be sent into the second transmit data sub-module; when the flow control field becomes 0, the host resumes buffering the data to be sent into the first transmit data sub-module. This arrangement avoids the reduction in transmission rate caused by continuously sending pause frames.
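For illustration only, the following minimal C model shows the buffering decision just described: a flow control field that the flow control module keeps in step with the occupancy of the first transmit data sub-module, and a host that diverts frames to the second transmit data sub-module while the field is 1. The register layout, the bit position and the threshold value of 8 are assumptions, not taken from the patent.

#include <stdint.h>
#include <stdio.h>

/* Simplified software model; names and the single-bit field position are illustrative. */
static uint32_t flow_status_reg;                 /* bit 0: buffered flow control field */
static unsigned first_q_frames, second_q_frames; /* frames buffered in each sub-module */
static const unsigned FRAME_THRESHOLD = 8;

/* Flow control module: keeps the field in step with the first queue's depth. */
static void flow_control_update(void)
{
    if (first_q_frames > FRAME_THRESHOLD)
        flow_status_reg |= 1u;
    else
        flow_status_reg &= ~1u;
}

/* Host side: read the field and pick the buffer accordingly. */
static void host_buffer_frame(void)
{
    if (flow_status_reg & 1u)
        second_q_frames++;   /* divert to the second transmit data sub-module */
    else
        first_q_frames++;    /* normal path: first transmit data sub-module */
}

int main(void)
{
    for (int i = 0; i < 20; i++) {
        flow_control_update();
        host_buffer_frame();
    }
    printf("first=%u second=%u\n", first_q_frames, second_q_frames);
    return 0;
}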
As shown in fig. 3, in this embodiment, on the basis of embodiment 1, the sending buffer module further includes:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
The first transmit queue and the transmission completion queue are asynchronous FIFOs.
The transmit buffer descriptors describe the information that the frames store in the transmit data FIFO, each transmit buffer descriptor being 32 bits in length and comprising 4 fields: frame number (8 bits), frame length (12 bits), frame store header address (10 bits), and reserved field (2 bits).
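A small C sketch of this 32-bit descriptor layout follows. The field widths (8/12/10/2 bits) come from the text above; the exact bit positions within the word are an assumption made for illustration.

#include <stdint.h>

/* 32-bit transmit buffer descriptor (assumed bit positions):
 *   bits 31..24  frame sequence number      (8 bits)
 *   bits 23..12  frame length               (12 bits)
 *   bits 11..2   frame storage head address (10 bits)
 *   bits  1..0   reserved                   (2 bits) */
static inline uint32_t tx_desc_pack(uint8_t frame_no, uint16_t frame_len, uint16_t head_addr)
{
    return ((uint32_t)frame_no << 24)
         | (((uint32_t)frame_len & 0xFFFu) << 12)
         | (((uint32_t)head_addr & 0x3FFu) << 2);
}

static inline uint8_t  tx_desc_frame_no(uint32_t d)  { return (uint8_t)(d >> 24); }
static inline uint16_t tx_desc_frame_len(uint32_t d) { return (uint16_t)((d >> 12) & 0xFFFu); }
static inline uint16_t tx_desc_head_addr(uint32_t d) { return (uint16_t)((d >> 2) & 0x3FFu); }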
The transmission buffer control sub-module is configured to generate a transmission buffer descriptor corresponding to a data frame to be transmitted;
The sending buffer control sub-module is also used for controlling and managing the reading of data among the modules.
And a transmission buffer status queue configured to, when the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold, read the most recent frame sequence number in the transmission completion queue whose transmission state indicates a successful transmission, and to record all frame sequence numbers after that frame sequence number.
The flow of sending the buffer is as follows:
When the host has data to send, it first reads the unused storage size in the first transmit data sub-module through the bus interface. If the remaining transmit storage is sufficient, the host then reads and saves the current write pointer wptr of the first transmit data sub-module and writes the data to be sent into the first transmit data sub-module. Finally, the host writes the frame sequence number (generated by the host), the frame length, and the saved frame storage head pointer into the first transmit queue. The transmit buffer module immediately starts sending the data frame, and writes the frame sequence number and the send state information into the transmission completion queue after the transmission is completed. The host can learn the send state of each frame by reading the transmission completion queue.
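The following self-contained C sketch walks through this host-side flow against a toy in-memory model of the first transmit data sub-module and the first transmit queue. All sizes, names and the descriptor bit layout are illustrative assumptions (consistent with the descriptor sketch above), not the adapter's actual interface.

#include <stdint.h>
#include <stdio.h>

#define FIFO_WORDS 1024                        /* depth of the first transmit data sub-module */

static uint32_t tx_fifo[FIFO_WORDS];
static uint32_t wptr;                          /* current write pointer */
static uint32_t used_words;                    /* occupancy, to derive the unused storage size */
static uint32_t first_tx_queue[256];           /* transmit buffer descriptors */
static unsigned first_tx_queue_len;

/* Step 1: check unused storage; step 2: save wptr as the frame storage head address;
 * step 3: write the frame words; step 4: push the descriptor into the first transmit queue. */
static int host_send(const uint32_t *frame, uint16_t nwords, uint8_t frame_no)
{
    if (FIFO_WORDS - used_words < nwords || first_tx_queue_len >= 256)
        return -1;                             /* not enough remaining transmit storage */
    uint32_t head = wptr;                      /* saved frame storage head pointer */
    for (uint16_t i = 0; i < nwords; i++)
        tx_fifo[(head + i) % FIFO_WORDS] = frame[i];
    wptr = (head + nwords) % FIFO_WORDS;
    used_words += nwords;
    first_tx_queue[first_tx_queue_len++] =
        ((uint32_t)frame_no << 24) | (((uint32_t)(nwords * 4u) & 0xFFFu) << 12) | ((head & 0x3FFu) << 2);
    return 0;
}

int main(void)
{
    uint32_t frame[16] = {0};
    int rc = host_send(frame, 16, 1);
    printf("send ok: %d, descriptors queued: %u\n", rc, first_tx_queue_len);
    return 0;
}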
When the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, the transmission buffer status queue reads the most recent frame sequence number in the transmission completion queue whose transmission state indicates a successful transmission and records all frame sequence numbers after it. When the host reads that the transmission buffer status queue is not empty, it reads the frame sequence numbers buffered there and, starting from the first of those frames, buffers the corresponding data frames into the second transmit data sub-module.
According to the invention, by providing the first transmit data sub-module, the second transmit data sub-module and the transmission buffer status queue, when the number of data frames buffered in the first transmit data sub-module is judged to be greater than the data frame threshold, the data frames that have not been successfully sent are buffered directly into the second transmit data sub-module and are sent by the second sending module after the preset time, thereby reducing packet loss caused by data overflow.
As shown in fig. 4, the second transmit data sub-module is configured to keep an updated buffer, within the preset time, of the data frames from the nth data frame sent by the first transmit data sub-module up to the most recently sent data frame; when the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, the second transmit data sub-module is further configured to buffer the data frames to be sent;
the first sending data sub-module is an asynchronous FIFO which can be loaded by a read pointer, the bit width is 32 bits, the depth is 1024, and the working mode is set to be 0;
Updating the buffer means that the second transmit data sub-module buffers the data frames sent by the first transmit data sub-module: once the first transmit data sub-module has sent n frames, each time a further data frame is sent, the second transmit data sub-module replaces the oldest buffered frame so that the buffer covers frames from the (n+1)th frame onward, and so on. The capacity reserved in the second transmit data sub-module for buffering frames sent by the first transmit data sub-module generally corresponds to 1-5 ms of transmission.
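A minimal C sketch of such an updating buffer is given below, assuming a fixed capacity of K frames (standing in for the 1-5 ms worth of frames mentioned above); the structure and names are illustrative only.

#include <stdint.h>
#include <string.h>

#define K 64            /* assumed number of recently sent frames kept */
#define MAX_FRAME 1518

typedef struct {
    uint8_t  data[K][MAX_FRAME];
    uint16_t len[K];
    unsigned next;      /* slot holding the oldest entry, overwritten next */
    unsigned count;     /* how many slots currently hold a frame */
} update_buffer;

/* Called each time the first transmit data sub-module sends a frame:
 * the oldest buffered frame is replaced by the one just sent. */
static void update_buffer_push(update_buffer *b, const uint8_t *frame, uint16_t len)
{
    if (len > MAX_FRAME)
        len = MAX_FRAME;
    memcpy(b->data[b->next], frame, len);
    b->len[b->next] = len;
    b->next = (b->next + 1) % K;
    if (b->count < K)
        b->count++;
}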
In some preferred embodiments, the second transmit data sub-module is provided with two buffers: one of fixed capacity for buffering the data frames sent by the first transmit data sub-module, and the other for buffering the data frames delivered by the host.
The sending buffer module further includes:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
The first transmit queue and the transmission completion queue are asynchronous FIFOs.
The transmit buffer descriptors describe the information that the frames store in the transmit data FIFO, each transmit buffer descriptor being 32 bits in length and comprising 4 fields: frame number (8 bits), frame length (12 bits), frame store header address (10 bits), and reserved field (2 bits).
The transmission buffer control sub-module is configured to generate a transmission buffer descriptor corresponding to a data frame to be transmitted;
The sending buffer control sub-module is also used for controlling and managing the reading of data among the modules.
The flow of sending the buffer is as follows:
When the host has data to send, it first reads the unused storage size in the first transmit data sub-module through the bus interface. If the remaining transmit storage is sufficient, the host then reads and saves the current write pointer wptr of the first transmit data sub-module and writes the data to be sent into the first transmit data sub-module. Finally, the host writes the frame sequence number (generated by the host), the frame length, and the saved frame storage head pointer into the first transmit queue. The transmit buffer module immediately starts sending the data frames; once transmission has reached the nth frame, the first transmit data sub-module buffers the sent frames into the second transmit data sub-module starting from the nth frame; and when the host detects that the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold (the condition monitored by the flow control module is read via the receive buffer module), the remaining data frames to be sent are buffered into the second transmit data sub-module.
According to the invention, by providing the first transmit data sub-module and the second transmit data sub-module, when the number of data frames buffered in the first transmit data sub-module is judged to be greater than the data frame threshold, the data frames that have not been successfully sent are buffered directly into the second transmit data sub-module and are sent by the second sending module after the preset time, which improves data transmission efficiency and reduces packet loss caused by data overflow. Because the second transmit data sub-module is updated in advance with the frames from the nth frame onward, packet loss of frames that were not successfully sent during data overflow is effectively avoided.
As shown in fig. 5, in some preferred embodiments, the sending buffer module further includes:
the second transmission queue is configured as a ring queue and is used for caching the write pointer and the corresponding frame sequence number;
a third transmit queue configured as a ring queue for buffering a read pointer and a corresponding frame number;
The first transmission queue and the first transmission data sub-module are ring queues, and the depths of the data structures of the first transmission data sub-module, the first transmission queue, the second transmission queue and the third transmission queue are equal; in a cycle time period t, N-1=N1+N2, where N is the number of frame sequence numbers stored in the first transmission queue, N1 is the number of frame sequence numbers stored in the second transmission queue, and N2 is the number of frame sequence numbers stored in the third transmission queue;
The cycle time period is the time taken for the read pointer or the write pointer to traverse the ring once; it may be set manually, for example to 1 ms or 2 ms.
A transmission state determination sub-module configured to record the storage state of the data frames in the first transmission data sub-module, the identifier bit being 1 when the storage state is near full and 0 when the storage state is near empty; the sub-module is further configured to compare N1 and N2: when N1 is greater than N2, the storage state of the first transmission data sub-module is near empty, and when N1 is less than N2, the storage state of the first transmission data sub-module is near full, and the corresponding identifier is marked in the first transmission data sub-module.
The host reads the empty and full identifiers in the first sending data sub-module, judges whether to continue sending the data frame, further reduces the overflow phenomenon of the data, and remarkably reduces the metastable state problem generated by the handshake mechanism.
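For illustration, a small C helper shows how the identifier could be derived from the two counts, following the rule stated above (N1 greater than N2 means near empty, N1 less than N2 means near full); the handling of the N1 = N2 case is our assumption, since the text does not specify it.

/* Returns the identifier bit recorded in the first transmit data sub-module:
 * 1 when the storage state is near full, 0 when it is near empty.
 * n1 = frame numbers recorded in the second transmit queue (write-pointer side),
 * n2 = frame numbers recorded in the third transmit queue (read-pointer side). */
static int tx_state_identifier(unsigned n1, unsigned n2, int previous)
{
    if (n1 > n2) return 0;   /* near empty */
    if (n1 < n2) return 1;   /* near full  */
    return previous;         /* N1 == N2: keep the previous identifier (assumption) */
}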
In some preferred embodiments, when the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, the flow control module changes the buffered flow control field in the flow status register to 1 and continues to detect; when the number of buffered data frames is not greater than the data frame threshold, the flow control field is changed back to 0.
The data frame threshold value x is calculated according to the following method:
x = (N1 + N2 - 1) / t.
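A literal transcription of this formula into C is shown below; the unit of t and the rounding behaviour are assumptions, since the text does not specify them.

/* Data frame threshold x = (N1 + N2 - 1) / t, truncated toward zero.
 * t is the cycle time period; its unit is not fixed by the text. */
static unsigned data_frame_threshold(unsigned n1, unsigned n2, double t)
{
    return (unsigned)((n1 + n2 - 1) / t);
}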
According to the application, the number of data frames buffered in the first transmit data sub-module is monitored, the buffering of the data to be sent is switched according to the state of the frames buffered in the first transmit data sub-module, and the data frame threshold is bounded; this avoids the reduction in transmission rate caused by continuously sending pause frames and significantly improves the transmission rate.
As shown in fig. 6, in some preferred embodiments, the receiving buffer module includes:
the receiving data sub-module is configured as a ring queue and is used for buffering received data frames for the host to read;
A first receive queue configured to buffer receive buffer control symbols;
The second receiving queue is configured as a ring queue carrying empty and full marks, and is further configured to buffer, within a cycle time period t, the frame sequence numbers corresponding to each read and each write of the read pointer and the write pointer of the receiving data sub-module;
the capacity and the depth of the received data submodule, the first receiving queue and the second receiving queue are equal;
The receiving empty-full judging sub-module is configured to read the frame sequence numbers corresponding to the read pointer and the write pointer in the second receiving queue; when the position corresponding to the read pointer is empty and no write-pointer frame number exists at the adjacent position, the second receiving queue is judged to be empty; when the position corresponding to the write pointer is full and no read-pointer frame number exists at the adjacent position, the ring queue is judged to be full;
and a receive buffer control sub-module configured to generate a receive buffer control symbol for a data frame to be received, and further configured to manage and control data reading among the modules.
The receive buffer descriptor (receive buffer control symbol) contains three fields: the received frame storage head address (16 bits), the received frame length (16 bits), and the frame reception status (16 bits). Each receive buffer descriptor occupies two 32-bit storage units.
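An illustrative C definition of such a two-word descriptor follows; which half-word holds which field is an assumption, since the text only gives the field widths and the total size.

#include <stdint.h>

/* Receive buffer descriptor: three 16-bit fields packed into two 32-bit
 * storage units (assumed placement). */
typedef struct {
    uint32_t word0;   /* [31:16] received frame storage head address, [15:0] received frame length */
    uint32_t word1;   /* [15:0] frame reception status; upper half unused in this sketch */
} rx_buffer_descriptor;

static inline rx_buffer_descriptor rx_desc_make(uint16_t head_addr, uint16_t len, uint16_t status)
{
    rx_buffer_descriptor d;
    d.word0 = ((uint32_t)head_addr << 16) | len;
    d.word1 = status;
    return d;
}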
The method by which the host reads a received data frame is as follows: first, a receive interrupt is generated when the first receive queue is not empty, and in response to the interrupt the host reads the receive buffer descriptor of the first frame from the first receive queue. If the frame has no error, the host loads the frame storage head pointer into the receiving data sub-module and then starts reading the frame data; if the frame has a reception error, the host continues to read the next receive buffer descriptor from the first receive queue and directly loads the storage head pointer of the second frame into the receiving data sub-module, without reading out the data of the erroneous first frame.
The receive data sub-modules disclosed in the prior art are asynchronous FIFOs with loadable pointers, for which empty and full flags are difficult to generate; solving the empty/full flag problem with a handshake mechanism in turn introduces metastability. To overcome these problems, the receiving data sub-module is configured as a ring queue, a second receiving queue and a receiving empty-full judging sub-module are added, and emptiness and fullness are judged from the positions of the read pointer and the write pointer within one cycle period. This improves the accuracy of the judgment, overcomes the metastability problem of the handshake mechanism, and avoids the problems caused by data overflow and by reading an empty buffer.
In some preferred embodiments, the flow control module is further configured to monitor an application in use, monitor the number of data frames transmitted by the application, and, when it detects that the number of data frames buffered in the first transmit data sub-module is greater than the data frame threshold, determine which application performs a binary backoff algorithm based on the number of data frames transmitted by that application. For example, an application whose number of transmitted data frames is lower than 70% of the number of data frames generated for it by the host performs the binary backoff algorithm.
By monitoring each application and performing the binary backoff algorithm only on some applications, selected according to the number of data frames they send, the problems of low short-term throughput and unstable long-term delay variance caused by applying the binary backoff algorithm are alleviated.
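The sketch below illustrates the idea in C: applications whose transmitted-frame count falls below 70% of the frames generated for them are selected, and each selected application then waits a standard binary exponential backoff delay. The bookkeeping structure, the cap of 10 on the exponent and the random source are assumptions, not details taken from the patent.

#include <stddef.h>
#include <stdbool.h>
#include <stdlib.h>

/* Per-application bookkeeping assumed for illustration. */
typedef struct {
    unsigned frames_sent;       /* data frames actually transmitted */
    unsigned frames_generated;  /* data frames produced by the host for this application */
    bool     backoff;           /* selected for binary backoff */
} app_stats;

/* Select applications for binary backoff using the 70% example from the text. */
static void select_backoff_apps(app_stats *apps, size_t n)
{
    for (size_t i = 0; i < n; i++)
        apps[i].backoff = apps[i].frames_generated > 0 &&
                          apps[i].frames_sent * 10 < apps[i].frames_generated * 7;
}

/* Standard binary exponential backoff: wait a random number of slot times
 * in [0, 2^k - 1], with the exponent capped at 10 as in classic Ethernet. */
static unsigned backoff_slots(unsigned k)
{
    if (k > 10)
        k = 10;
    return (unsigned)(rand() % (1u << k));
}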
In some preferred embodiments, the receiving module includes an address filtering sub-module, where the address filtering sub-module is configured to determine whether a filtering rule table corresponding to a destination IP of the data frame exists locally, and find a corresponding filtering rule according to the filtering rule table; the filtering rule table comprises fields including a source IP address, a source port, a destination port, a transport layer protocol, a first hash value and a second hash value; wherein the first HASH value is a HASH code generated from the IP address according to a fixed number of bits; the second HASH value is a HASH code generated from the IP address according to a random number.
Different HASH codes are generated using a fixed number of bits and a random number of bits, so that the IP address is filtered and the filtering effect and accuracy are improved.
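As a hedged illustration of the two HASH values, the C sketch below derives the first code from a fixed number of low-order bits of the IPv4 address and the second from a randomly chosen number of bits; the bit counts and the multiplicative mixing function are assumptions, since the patent does not specify the hash function.

#include <stdint.h>

/* First HASH value: derived from a fixed number of bits of the IPv4 address.
 * The 16-bit selection and the mixing constant are illustrative choices. */
static uint32_t hash_fixed_bits(uint32_t ip)
{
    uint32_t sel = ip & 0xFFFFu;              /* fixed 16-bit selection */
    return (sel * 2654435761u) >> 16;         /* multiplicative mixing (Knuth constant) */
}

/* Second HASH value: same idea, but the number of selected bits is random,
 * drawn once (e.g. when the filtering rule table is created). */
static uint32_t hash_random_bits(uint32_t ip, unsigned rand_bits /* 1..32 */)
{
    uint32_t mask = rand_bits >= 32 ? 0xFFFFFFFFu : ((1u << rand_bits) - 1u);
    return ((ip & mask) * 2654435761u) >> 16;
}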
In some preferred embodiments, a priority transmit queue is disposed within the flow control module, the priority transmit queue configured to buffer the Pause frames.
When the flow control module receives a Pause frame, the Pause frame is buffered into the priority send queue. By providing the priority send queue in the flow control module, the invention solves the data overflow problem caused in the prior art by having to wait to send the Pause frame.

Claims (10)

1. A gigabit single optical port server adapter with low data overflow, characterized in that a sending buffer module of the gigabit single optical port server adapter comprises:
the second transmission queue is configured as a ring queue and is used for caching the write pointer and the corresponding frame sequence number;
a third transmit queue configured as a ring queue for buffering a read pointer and a corresponding frame number;
The first transmission queue and the first transmission data sub-module are ring queues, and the depths of the data structures of the first transmission data sub-module, the first transmission queue, the second transmission queue and the third transmission queue are equal; preferably, in a cycle time period t, N-1=N1+N2, where N is the number of frame sequence numbers stored in the first transmission queue, N1 is the number of frame sequence numbers stored in the second transmission queue, and N2 is the number of frame sequence numbers stored in the third transmission queue;
The sending state judging sub-module is configured to record the storage state of the data frames in the first transmission data sub-module; preferably, the identifier bit is 1 when the storage state is near full and 0 when the storage state is near empty; the sub-module is further configured to compare N1 and N2: when N1 is greater than N2, the storage state of the first transmission data sub-module is near empty, and when N1 is less than N2, the storage state of the first transmission data sub-module is near full, and the corresponding identifier is marked in the first transmission data sub-module.
2. The gigabit single optical port server adapter of claim 1,
The adapter includes:
A PHY layer controller that is configured to control the PHY layer,
A controller of the MAC layer is provided with a control element,
A PCIE interface connected to the MAC layer controller; and
A gigabit fiber port configured to be connected to the PHY layer controller;
The MAC layer controller comprises a bus interface, a receiving buffer module, a sending buffer module, a flow control module, a receiving module, a first sending module, a second sending module and a PHY interface;
The bus interface is configured to connect with a PCIE interface;
the receiving buffer module is connected with the bus interface and is configured to manage the buffer of the received data frames;
The sending buffer module is connected with the bus interface and is configured to manage the sent data frame buffer;
The sending buffer module includes:
a first transmit data sub-module configured to buffer data frames to be transmitted;
The second data transmission sub-module is configured to buffer the data frames to be transmitted when the flow control module compares that the number of the data frames buffered in the first data transmission sub-module is greater than a data frame threshold;
The first sending module is connected with the PHY interface and is configured to send the data frame to be sent, which is cached in the first sending data sub-module, to the PHY layer controller through the PHY interface;
The second transmitting module is connected with the PHY interface and is configured to transmit the data frame to be transmitted, which is cached by the second transmitting data sub-module, to the PHY layer controller through the PHY interface;
the receiving module is connected with the PHY interface and is configured to analyze the data frames and buffer legal frames to the receiving buffer module;
the flow control module is further configured to monitor in real time the number of data frames buffered within the first transmit data sub-module and compare it with a data frame threshold, and is further configured to control the second transmitting module to read and transmit the data frames in the second transmit data sub-module after the preset time.
3. The gigabit single optical port server adapter of any of claims 1-2, wherein the flow control module changes the buffered flow control field in the flow status register to 1 when the number of buffered data frames in the first transmit data sub-module is greater than the data frame threshold and continues to detect and changes the flow control field to 0 when the number of buffered data frames is not greater than the data frame threshold.
4. A gigabit single optical port server adapter as claimed in any of claims 1 to 3, wherein said transmit buffer module further comprises:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
the transmission buffer control sub-module is configured to generate a transmission buffer descriptor corresponding to a data frame to be transmitted;
And the transmission buffer state queue is configured to read the frame sequence number of which the last transmission state in the transmission completion queue is successful in transmission when the number of the data frames buffered in the first transmission data sub-module is greater than a data frame threshold value, and record all the frame sequence numbers after the frame sequence number.
5. The gigabit single-optical port server adapter of any of claims 1-4, wherein the second transmit data sub-module is configured to update, within a preset time, the nth data frame transmitted by the first transmit data sub-module to the last data frame, and wherein the second transmit data sub-module is further configured to buffer the data frames to be transmitted when the number of data frames buffered within the first transmit data sub-module is greater than a data frame threshold;
The sending buffer module further includes:
a first transmit queue configured to cache transmit cache descriptors;
a transmission completion queue configured to buffer a frame number and a transmission state after completion of transmission;
and the sending buffer control sub-module is configured to generate a sending buffer descriptor corresponding to the data frame to be sent.
6. A gigabit single optical port server adapter as claimed in any of claims 1 to 5 wherein said data frame threshold x is calculated according to the following formula:
x=(N1+N2-1)/t。
7. The gigabit single optical port server adapter of any of claims 1-6, wherein the receive buffer module comprises:
the data receiving sub-module is configured into a ring queue and is used for caching received data frames for reading by a host;
A first receive queue configured to buffer receive buffer control symbols;
The second receiving queue is configured as a ring queue, empty and full marks exist on the ring queue, and is further configured to buffer frame numbers corresponding to the reading and writing of each reading pointer and each writing pointer of the receiving data submodule in a cycle time period t;
the capacity and the depth of the received data submodule, the first receiving queue and the second receiving queue are equal;
The receiving empty-full judging sub-module is used for reading the frame numbers corresponding to the read pointer and the write pointer in the second receiving queue, judging the empty state in the second receiving queue when the position corresponding to the read pointer is empty and the frame number of the write pointer does not exist at the position adjacent to the position corresponding to the read pointer, and judging the annular queue to be in a full state when the position corresponding to the write pointer is full and the frame number of the read pointer does not exist at the position adjacent to the position corresponding to the write pointer;
and a receive buffer control sub-module configured to generate a receive buffer control symbol for a data frame to be received.
8. The gigabit single-optical port server adapter of any of claims 1-7, wherein the flow control module is further configured to monitor an application in use and monitor a number of data frames transmitted by the application, and when detecting that the number of data frames buffered in the first transmit data sub-module is greater than a data frame threshold, determine an application performing a binary backoff algorithm based on the number of data frames transmitted by the application.
9. The gigabit single-optical port server adapter of any of claims 1-8, wherein the receiving module includes an address filtering sub-module configured to determine whether a filtering rule table corresponding to a destination IP of the data frame is locally stored, and to find a corresponding filtering rule according to the filtering rule table; the filtering rule table comprises fields including a source IP address, a source port, a destination port, a transport layer protocol, a first hash value and a second hash value; wherein the first HASH value is a HASH code generated from the IP address according to a fixed number of bits; the second HASH value is a HASH code generated from the IP address according to a random number.
10. A gigabit single optical port server adapter as claimed in any of claims 1 to 9 wherein a priority send queue is provided in the flow control module, the priority send queue being for buffering Pause frames.
CN202310658501.8A 2020-05-15 2020-05-15 Gigabit single optical port server adapter with low data overflow Pending CN118101608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310658501.8A CN118101608A (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter with low data overflow

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310658501.8A CN118101608A (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter with low data overflow
CN202010415160.8A CN111600809B (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010415160.8A Division CN111600809B (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter

Publications (1)

Publication Number Publication Date
CN118101608A true CN118101608A (en) 2024-05-28

Family

ID=72191475

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310658501.8A Pending CN118101608A (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter with low data overflow
CN202010415160.8A Active CN111600809B (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010415160.8A Active CN111600809B (en) 2020-05-15 2020-05-15 Gigabit single optical port server adapter

Country Status (1)

Country Link
CN (2) CN118101608A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500393B (en) * 2021-12-31 2024-03-15 伟乐视讯科技股份有限公司 Communication method and communication equipment for MAC (media access control) to multiple PHY (physical layer) modules
CN114915604A (en) * 2022-05-23 2022-08-16 北京计算机技术及应用研究所 System and method for reducing network link layer congestion based on FPGA

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002176464A (en) * 2000-12-07 2002-06-21 Fuji Xerox Co Ltd Network interface device
CN104735077B (en) * 2015-04-01 2017-11-24 积成电子股份有限公司 It is a kind of to realize the efficiently concurrent methods of UDP using Circular buffer and circle queue
CN105406998A (en) * 2015-11-06 2016-03-16 天津津航计算技术研究所 Dual-redundancy gigabit ethernet media access controller IP core based on FPGA
CN110602166B (en) * 2019-08-08 2022-03-08 百富计算机技术(深圳)有限公司 Method, terminal device and storage medium for solving problem of repeated data transmission

Also Published As

Publication number Publication date
CN111600809A (en) 2020-08-28
CN111600809B (en) 2023-11-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination