CN115955447A - Data transmission method, switch and switch system - Google Patents


Info

Publication number
CN115955447A
CN115955447A
Authority
CN
China
Prior art keywords
queue
data
data packets
data packet
priority
Prior art date
Legal status
Granted
Application number
CN202310231807.5A
Other languages
Chinese (zh)
Other versions
CN115955447B (en)
Inventor
陈涛
赵玉军
王志波
管海涛
韩明利
Current Assignee
Microgrid Union Technology Chengdu Co ltd
Original Assignee
Microgrid Union Technology Chengdu Co ltd
Priority date
Filing date
Publication date
Application filed by Microgrid Union Technology Chengdu Co ltd
Priority claimed from CN202310231807.5A
Publication of CN115955447A
Application granted
Publication of CN115955447B
Active legal status
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The application relates to a data transmission method, a switch, and a switch system. The method comprises: in response to acquiring a data packet, parsing the data packet to obtain its address; constructing a plurality of queuing queues according to the addresses, where the data packets in each queuing queue share the same priority; sending the data packets on the queues according to priority, and adjusting the bandwidth of each queue according to the amount of buffered data packets on it; and configuring a first dynamic cache pool for the low-priority queues, with packets released from the first dynamic cache pool returned to their corresponding queue by sequential, alternate queue insertion. By managing received data uniformly and sending it according to priority level, high-priority data is processed quickly, and the dynamic cache pool stores temporarily held data, so that high-priority data can circulate rapidly through the network.

Description

Data transmission method, switch and switch system
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data transmission method, a switch, and a switch system.
Background
A switch has three transmission modes: cut-through, store-and-forward, and fragment-free forwarding. In cut-through mode, a frame is forwarded as soon as its destination address has been received; the delay is small, but damaged frames are forwarded all the same. In store-and-forward mode, the switch receives the complete data packet and checks it for errors, discarding bad packets and requesting retransmission; this mode is reliable but has a longer delay. In fragment-free mode, the switch forwards a packet once it is larger than 64 bytes and discards anything smaller; its behavior lies between the other two modes.
These approaches support large data throughput, but they are ill-suited to prioritized usage scenarios, because users expect different responsiveness from different applications: for voice and video, the user expects the network to react quickly, while for downloads there is no such speed expectation. Given the limited resources of a switch, how to use bandwidth reasonably while meeting user requirements needs further study.
Disclosure of Invention
The application provides a data transmission method, a switch, and a switch system, which achieve rapid processing of high-priority data by uniformly managing received data and sending it according to priority level, with a dynamic cache pool cooperating to store temporarily held data, so that high-priority data can circulate rapidly through the network.
The above object of the present application is achieved by the following technical solutions:
in a first aspect, the present application provides a data transmission method, including:
in response to acquiring a data packet, analyzing the data packet to acquire an address of the data packet, wherein the address comprises a MAC (media access control) address and a public network address;
constructing a plurality of queuing queues according to the addresses, wherein the priority of the data packets in each queuing queue is the same;
sending the data packets on the queue according to the priority, and adjusting the bandwidth of the queue according to the buffer storage of the data packets on the queue; and
configuring a first dynamic cache pool to the low-priority queue, and returning the data packets sent by the first dynamic cache pool to the corresponding queue in a sequential alternate queue insertion manner;
when the number of data packets on the queue of a certain priority is less than a set number per unit time, that queue is merged into the adjacent higher- or lower-priority queue.
In a possible implementation manner of the first aspect, the obtained data packets are screened, and data packets with a length of less than 64 bytes are removed.
In a possible implementation manner of the first aspect, for the data packets in the high-priority queuing queue, the sending time of each data packet is the same, and the higher-level queuing queue sequentially occupies the bandwidth of the lower-level queuing queue.
In a possible implementation manner of the first aspect, when bandwidth is insufficient, a second dynamic cache pool is allocated to a queuing queue left without bandwidth, where the second dynamic cache pool is used to store the squeezed-out data packets;
and after the bandwidth is recovered, returning the data packets in the second dynamic cache pool to the queue in a sequential alternate queue-inserting mode.
In a possible implementation manner of the first aspect, the method further includes:
extracting the data packets on the queue and copying the data packets to a checking buffer pool;
carrying out integrity check on the data packet in the check cache pool; and
the integrity check result is implanted into an unsent data packet on the queue;
wherein a correction data packet is requested to be sent for any data packet that fails the integrity check.
In a possible implementation manner of the first aspect, the data packet containing the integrity check result is not copied to the check cache pool.
In a possible implementation manner of the first aspect, the correction data packets enter the corresponding queue in a queue-insertion manner;
or a correction data packet enters a queue-insertion buffer pool and is then inserted at the first position of the queue.
In a second aspect, the present application provides a switch, comprising:
the analysis unit is used for responding to the acquired data packet and analyzing the data packet to acquire an address of the data packet, wherein the address comprises an MAC address and a public network address;
the queue unit is used for constructing a plurality of queuing queues according to the addresses, and the priority of the data packets in each queuing queue is the same;
the sending unit is used for sending the data packets on the queue according to the priority and adjusting the bandwidth of the queue according to the buffer storage amount of the data packets on the queue;
the configuration unit is used for configuring a first dynamic cache pool to the low-priority queue, and sending the data packets of the first dynamic cache pool to return to the corresponding queue in a sequential alternate queue insertion mode; and
and the queue adjusting unit is used for merging a queuing queue into a higher- or lower-priority queue when the number of data packets on the queue of a certain priority is less than the set number per unit time.
In a third aspect, the present application provides a switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to invoke and execute the instructions from the memory to perform the method according to the first aspect and any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium comprising:
a program for performing a method as described in the first aspect and any possible implementation manner of the first aspect when the program is run by a processor.
In a fifth aspect, the present application provides a computer program product comprising program instructions for executing the method according to the first aspect and any possible implementation manner of the first aspect when the program instructions are executed by a computing device.
In a sixth aspect, the present application provides a system on a chip comprising a processor configured to perform the functions recited in the above aspects, such as generating, receiving, sending, or processing data and/or information recited in the above methods.
The chip system may be formed by a chip, or may include a chip and other discrete devices.
In one possible design, the system-on-chip further includes a memory for storing necessary program instructions and data. The processor and the memory may be decoupled, disposed on different devices, connected in a wired or wireless manner, or coupled on the same device.
The beneficial effects of this application include at least:
the processing method adopts the processing strategy that the sending time of each data packet is the same, and the data packets on the high-priority queue can be received and sent at a stable speed. Compared with the transmission in an equal bandwidth mode, the constant speed transmission does not cause the retention problem;
in the application, integrity checking of data packets is shared between the switch and the user terminal: when the switch's load is light, the switch checks most of the data packets; when its load is heavy, the switch checks only a small portion of them.
Drawings
Fig. 1 is a schematic block diagram illustrating a flow of steps of a data transmission method provided in the present application.
Fig. 2 is a schematic diagram of a process for queue alignment of data packets according to the present application.
Fig. 3 is a schematic diagram of a process for encapsulating a data packet according to the present application.
Fig. 4 is a schematic diagram of a packet structure provided in the present application.
Fig. 5 is a schematic diagram illustrating a process of entering a first dynamic cache pool by a packet according to the present application.
Fig. 6 is a schematic diagram illustrating a process of entering a second dynamic cache pool by a packet according to the present application.
Fig. 7 is a block diagram illustrating a flow of steps for inspecting a packet according to the present application.
Fig. 8 is a schematic diagram of checking a packet and sending a result of the check provided in the present application.
Detailed Description
The working principle of data transmission in a switch is as follows: after any node of the switch receives a data-sending instruction, the switch quickly searches the address table stored in memory to determine which port the destination MAC address is attached to, and then sends the data to that node. If the position is found in the address table, the data is sent there; otherwise, the switch records the address to speed up the next lookup. In general, the switch only needs to transmit a frame to the corresponding port, rather than to all nodes as a hub does, which saves resources and time and increases the data transmission rate.
A hub, by contrast, transmits data by sharing and cannot guarantee communication speed. The hub-sharing method, also called a shared network, uses the hub as the connection device with only one data flow at a time, so the efficiency of a shared network is very low.
In contrast, the switch can identify each computer connected to it, storing and recognizing the physical address of each computer's network card (commonly called the MAC address). On this premise, the stored MAC address for the corresponding port can be found directly without a broadcast search, and data transmission between the two nodes is completed over a temporary dedicated channel free of outside interference. Since the switch also supports full-duplex transmission, it can establish temporary dedicated channels between several pairs of nodes simultaneously, forming a parallel data transmission structure.
The advantages and disadvantages of the three transmission methods mentioned in the foregoing are as follows:
Cut-Through: when a data packet is detected at an input port, the switch examines the packet header and directs the packet to the corresponding output port based on the destination address in the packet.
The advantages are that: the method can start forwarding without waiting for the completion of the data packet reception, and has high switching speed and very small delay.
The disadvantages are as follows: error detection services are not provided and it is possible to forward out erroneous packets. And a buffer is not provided, ports with different rates cannot be directly connected, and packet loss is easy.
Store-and-Forward: the data packet is received completely and, if the CRC check shows no error, forwarded according to its address.
Advantages: error detection is provided, improving network performance. Forwarding between ports of different rates is supported, so high-speed and low-speed ports can work together.
Disadvantages: the transmission delay is large, and a large buffer capacity is required.
Fragment-Free: the switch checks whether the data packet has reached a length of 64 bytes; anything shorter is a collision fragment and is discarded, while packets of 64 bytes or more are forwarded.
This ensures that collision fragments are not propagated through the network, improving network efficiency; its processing speed lies between the cut-through and store-and-forward modes.
Low-end switch products generally have only one switching mode, and some high-end switch products have two switching modes, and the switching modes can be automatically selected according to network environments.
The technical solution of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a data transmission method disclosed in the present application includes the following steps:
s101, responding to the acquired data packet, analyzing the data packet, and acquiring an address of the data packet, wherein the address comprises an MAC address and a public network address;
s102, constructing a plurality of queuing queues according to addresses, wherein the priority of data packets in each queuing queue is the same;
s103, sending the data packets on the queue according to the priority, and adjusting the bandwidth of the queue according to the buffer storage of the data packets on the queue; and
s104, configuring a first dynamic cache pool to the low-priority queue, and returning the data packets sent by the first dynamic cache pool to the corresponding queue in a sequential alternate queue insertion manner;
when the number of data packets in the queue of a certain priority is less than the set number in unit time, the queue is merged into the queue of the higher or lower level.
Specifically, in step S101 the switch receives a data packet and, in response, parses it to obtain its address, which comprises a MAC address and a public network address. Both are addresses of the terminal the packet is to be sent to, but while the switch recognizes MAC addresses, it does not know public network addresses directly.
It should be understood that a MAC address table is stored in the switch; this table records the MAC addresses of all connected devices together with their corresponding ports, i.e., for each device's MAC address, the switch port into which the device is plugged.
Then, according to the receiver's MAC address in the transmitted data packet, the corresponding row is found in the table, which tells the switch from which port the data packet should be forwarded. A public network address is not recorded in the MAC address table and therefore must be handled separately.
After the public network address is broadcast on the network to obtain the corresponding MAC address, the switch stores that MAC address in its own MAC address table and then processes such data packets in the manner of step S101.
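The table lookup and learning behavior described above can be sketched as a small address-to-port map. This is a minimal illustration; the `MacTable` class and its method names are assumptions for this sketch, not terms from the patent.

```python
# Minimal sketch of the forwarding decision: the table maps a MAC
# address to the switch port it was learned on.

class MacTable:
    def __init__(self):
        self._entries = {}  # MAC address -> port number

    def learn(self, mac, port):
        # Record (or refresh) which port a MAC address was seen on.
        self._entries[mac] = port

    def lookup(self, mac):
        # Known address: forward out that one port. Unknown address:
        # caller must resolve it (e.g. by broadcast), then learn it.
        return self._entries.get(mac)

table = MacTable()
table.learn("aa:bb:cc:dd:ee:01", 3)
print(table.lookup("aa:bb:cc:dd:ee:01"))  # port 3
print(table.lookup("aa:bb:cc:dd:ee:02"))  # None: must broadcast first
```

An unknown address returning `None` corresponds to the separate handling of public network addresses above: resolve first, then learn.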
Referring to fig. 2, in step S102 a plurality of queuing queues are constructed according to the addresses, with the data packets in each queuing queue having the same priority; when there are multiple addresses, there are multiple groups of queues, and each group may contain multiple queuing queues.
The queue is used to classify the priority of the data packet, and for example, the following method can be used: priorities 6 and 7 are generally reserved for network control data usage; priority 5 recommends voice data usage; priority 4 is used by video conferencing and streaming; priority 3 for voice control data; priorities 1 and 2 are used for data services; priority 0 is the default flag value.
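The priority scheme listed above can be expressed as a simple lookup table. The sketch below is illustrative only; the `classify` helper and dictionary name are assumptions, though the value-to-class assignments follow the text.

```python
# Priority values mapped to traffic classes, following the scheme
# described in the text (an 802.1p-style convention).
PRIORITY_CLASSES = {
    7: "network control",
    6: "network control",
    5: "voice",
    4: "video conferencing / streaming",
    3: "voice control",
    2: "data services",
    1: "data services",
    0: "best effort (default)",
}

def classify(priority):
    # Unknown values fall back to the default flag value (0).
    return PRIORITY_CLASSES.get(priority, "best effort (default)")

print(classify(5))  # voice
print(classify(4))  # video conferencing / streaming
```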
Referring to fig. 3 and 4, taking IP as the example of a packet header: IP header length (IHL): 4 bits long. This field describes the length of the IP header, which is needed because the header contains a variable-length options part. The field counts in units of 32 bits (4 bytes), i.e., field value = header length in bytes / 4. The maximum value, "1111", therefore corresponds to a header of 15 × 4 = 60 bytes; the minimum IP header length is 20 bytes.
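The IHL arithmetic above can be checked with a two-line helper; the function name is an illustrative assumption.

```python
# The 4-bit IHL field counts 32-bit words, so value * 4 gives the
# header length in bytes; legal values are 5 ("0101") to 15 ("1111").
def header_length_bytes(ihl_field):
    if not 5 <= ihl_field <= 15:
        raise ValueError("IHL must be 5..15 (20..60 bytes)")
    return ihl_field * 4

print(header_length_bytes(5))   # 20: minimum header, no options
print(header_length_bytes(15))  # 60: maximum header, "1111"
```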
Service Type (Type of Service): 8 bits long, originally laid out as a 3-bit precedence field (PPP) followed by the D, T, R and C flag bits and a reserved 0 bit.
However, the TOS field has been redefined as part of the Differentiated Services (DiffServ) architecture: its first 6 bits form the Differentiated Services Code Point (DSCP), with which different classes of service can be defined, and the remaining 2 bits carry ECN (Explicit Congestion Notification).
IP packet Total Length (Total Length): 16 bits long. The length of the IP packet (header plus data), counted in bytes, so the maximum length of an IP packet is 65535 bytes.
Identifier (datagram ID): 16 bits long. This field is used together with the Flags and Fragment Offset fields to fragment large upper-layer packets. When a router splits a packet, all fragments are marked with the same identifier so that the destination device can tell which fragments belong to the same original packet.
Flags (Flags): the length is 3 bits. The first bit of this field is unused. The second bit is the DF (Don't Fragment) bit, which when set to 1 indicates that the router cannot segment the upper layer packet. If an upper layer packet cannot be forwarded without segmentation, the router discards the upper layer packet and returns an error message. The third bit is the MF (More Fragments) bit, which is set to 1 in the header of the IP packet except for the last fragment when the router Fragments an upper layer packet.
Fragment Offset: 13 bits long. It indicates the position of this fragment within the original packet; the receiving end uses it to reassemble the restored IP packet.
Time To Live (TTL): 8 bits long. When an IP packet is sent, this field is assigned a specific value; each router along the way decrements the TTL by 1, and if it reaches 0 the packet is discarded. This field prevents IP packets from being forwarded endlessly due to routing loops.
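The fields walked through above (IHL, TOS/DSCP, total length, identifier, flags, fragment offset, TTL) can all be extracted from the first bytes of an IPv4 header with a few bit operations. The sketch below is illustrative parsing code, not part of the patent, and the sample header bytes are made up.

```python
import struct

# Parse the IPv4 fields discussed in the text from the first 12 bytes
# of a header (field extraction only; no checksum or options handling).
def parse_ipv4_header(data):
    v_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum = \
        struct.unpack("!BBHHHBBH", data[:12])
    return {
        "version": v_ihl >> 4,
        "ihl_bytes": (v_ihl & 0x0F) * 4,        # IHL in 32-bit words * 4
        "dscp": tos >> 2,                        # first 6 bits of old TOS
        "ecn": tos & 0x03,                       # last 2 bits
        "total_length": total_len,
        "identifier": ident,
        "dont_fragment": bool(flags_frag & 0x4000),
        "more_fragments": bool(flags_frag & 0x2000),
        "fragment_offset": flags_frag & 0x1FFF,  # in 8-byte units
        "ttl": ttl,
    }

# Made-up 20-byte header: version 4, IHL 5, DSCP 46, DF set, TTL 64.
hdr = bytes([0x45, 0xB8, 0x00, 0x54, 0x1C, 0x46, 0x40, 0x00,
             0x40, 0x06, 0x00, 0x00,
             192, 168, 0, 1, 192, 168, 0, 2])
info = parse_ipv4_header(hdr)
print(info["version"], info["ihl_bytes"], info["dscp"], info["ttl"])
```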
The data packet is encapsulated once at each layer it passes through, and the added information includes path-related data. It should be understood that a router can be seen as a transfer station with multiple input channels and multiple output channels; data optimization means making the packets flowing on those channels move in the best, or most appropriate, way.
Centralized processing of the packets on the input channels aims to distinguish which packets must be handled first and which can tolerate delay; the packets are then queued uniformly and sent after queuing. During uniform queuing, the queue addresses are also analyzed, i.e., packets in the same group of queues must be sent to the same address.
In step S103, the data packets on the queues are sent according to priority, and queue bandwidth is adjusted according to the amount of buffered packets on each queue. Packets are sent by the priorities described above: high-priority packets are kept continuously in a sending state, and the bandwidth of high-priority queues is guaranteed first, while low-priority packets are sent intermittently or occupy only a small share of bandwidth.
In addition, since the processing speeds of sending and receiving may differ, queue bandwidth is also adjusted according to the amount of buffered packets on the queue. Specifically, when the buffered amount is large, bandwidth is preferentially allocated to that queue so its packets can be sent quickly, avoiding packet loss, because the total buffering capacity is also limited.
When bandwidth is allocated, it is likewise assigned by priority level, i.e., during allocation a queuing queue may only take over bandwidth from queues of lower priority. When no bandwidth margin remains, subsequent data packets are discarded and their retransmission is requested.
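The allocation rule just described, serve higher priorities first and let them take bandwidth that lower priorities would otherwise get, can be sketched as below. The exact policy details (full demand granted in strict priority order) are assumptions for illustration, not a verbatim restatement of the patent.

```python
# Sketch of strict priority-ordered bandwidth allocation: each queue
# gets up to its demand, highest priority first, until none remains.
def allocate_bandwidth(demands, total):
    """demands: {priority: requested bandwidth}, higher number = higher
    priority. Returns {priority: granted bandwidth}."""
    granted = {}
    remaining = total
    for prio in sorted(demands, reverse=True):
        grant = min(demands[prio], remaining)
        granted[prio] = grant
        remaining -= grant
    return granted

# Priorities 5 and 3 are fully served; priority 1 is squeezed down,
# and its excess packets would be discarded or buffered.
print(allocate_bandwidth({5: 60, 3: 30, 1: 40}, total=100))
```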
Referring to fig. 5, in step S104 a first dynamic buffer pool is configured for the low-priority queues and is used to hold data packets that temporarily cannot be sent. When sending resumes, the packets in the first dynamic buffer pool are returned to the corresponding queue by sequential, alternate queue insertion; that is, packets from the first dynamic buffer pool and packets already on the queue are sent alternately.
The purpose of alternation is to preserve queue order as far as possible; if the pooled packets were instead sent in one concentrated burst, new packets would inevitably enter the first dynamic buffer pool during that time. Alternating keeps both the packets on the queue and those in the first dynamic buffer pool flowing, and makes full use of the bandwidth.
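The sequential, alternate queue-insertion described above can be sketched as an interleaved drain of the two sources; the function name is an illustrative assumption.

```python
from collections import deque

# One packet from the dynamic cache pool, then one from the live
# queue, until both are empty: both stay in motion, and relative
# order is preserved within each source.
def drain_alternately(queue, cache_pool):
    sent = []
    q, pool = deque(queue), deque(cache_pool)
    while q or pool:
        if pool:
            sent.append(pool.popleft())  # packet from the cache pool
        if q:
            sent.append(q.popleft())     # packet from the queue
    return sent

print(drain_alternately(["q1", "q2", "q3"], ["c1", "c2"]))
```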
In the foregoing process, the number of packets on a queue of some priority may become too small; this case is handled as follows: when the number of packets on a queue of a certain priority falls below a set number per unit time, that queue is merged into a higher- or lower-priority queue.
Preferably, the queued queues are merged into a higher ranked queued queue.
The purpose of this processing is to reduce the number of queuing queues, because managing them also consumes processor resources; after the number of queues is optimized, those resources can be released for receiving and sending data packets.
Before data packets enter the queues, the obtained packets are screened and those shorter than 64 bytes are removed; packets shorter than 64 bytes are all invalid, and keeping them out of the queues improves queue utilization.
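The screening step is a one-line filter; this is a minimal sketch, with the 64-byte minimum taken from the text and the helper name assumed.

```python
# Drop runt packets shorter than 64 bytes before they are queued.
MIN_FRAME = 64

def screen(packets):
    return [p for p in packets if len(p) >= MIN_FRAME]

good = b"\x00" * 64   # exactly the minimum: kept
runt = b"\x00" * 60   # collision fragment: dropped
print(len(screen([good, runt, good])))  # 2
```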
For the invalid data packet, the sending end needs to be applied for retransmission, and the newly transmitted data packet is processed in the manner from step S101 to step S104.
For the data packets on the high-priority queue, the processing strategy gives every packet the same sending time, so packets on this queue can be received and sent at a stable rate. Compared with transmission in an equal-bandwidth mode, constant-rate transmission does not cause a backlog. This requires dedicated bandwidth, and a higher-ranked queue takes over the bandwidth of lower-ranked queues in order.
When bandwidth is insufficient, this approach can leave part of the queuing queues with no bandwidth at all, which is handled as follows:
referring to fig. 6, when the bandwidth is insufficient, a second dynamic buffer pool is allocated to the queuing queue without bandwidth, where the second dynamic buffer pool is used to store the extruded data packet; and after the bandwidth is recovered, returning the data packets in the second dynamic cache pool to the queue in a sequential alternate queue-inserting mode.
Referring to fig. 7, for the integrity of the data packet, the present application processes the following:
s201, extracting the data packets on the queue and copying the data packets to a checking buffer pool;
s202, carrying out integrity check on the data packets in the check cache pool; and
s203, implanting the integrity check result into an unsent data packet on the queue;
wherein a correction data packet is requested to be sent for any data packet that fails the integrity check.
Specifically, the data packets to be checked are copied to a checking cache pool, and integrity checking is then performed on the packets in that pool. The check may use a hash algorithm: since a hash is an irreversible mapping, a hash value can be computed from the data, but the original data cannot be recovered from the hash value.
In general, different data yield different hash values; collisions are possible but so rare that the probability is ignored here. Hash algorithms commonly used for network data integrity checking include MD5 and SHA.
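The check can be sketched with Python's standard `hashlib`, which provides the MD5 and SHA families the text names. The digest-alongside-payload framing below is an assumption for illustration; the patent does not specify how the expected digest travels.

```python
import hashlib

# Recompute the digest over the payload and compare it with the
# expected value; a mismatch means the packet failed the check and a
# correction packet should be requested.
def integrity_ok(payload, expected_digest, algo="sha256"):
    h = hashlib.new(algo)
    h.update(payload)
    return h.hexdigest() == expected_digest

payload = b"switch payload"
digest = hashlib.sha256(payload).hexdigest()
print(integrity_ok(payload, digest))       # True: packet intact
print(integrity_ok(b"corrupted", digest))  # False: request correction
```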
Referring to fig. 8, packets that pass the integrity check are deleted from the checking cache pool, the check results are embedded into an unsent data packet on the queue, and that packet is sent to the user terminal. By parsing it, the terminal learns which packets are usable and which are not, limited of course to the packets that were checked in the cache pool.
For packets that were not integrity-checked in the checking cache pool, the user terminal must perform the check itself and then request a new packet from the sending terminal.
The advantage of this scheme is that integrity checking is shared between the switch and the user terminal: when the switch's load is light, the switch checks most of the data packets; when its load is heavy, the switch checks only a small portion of them.
That is, when the switch's load is heavy, the capacity of the checking cache pool is reallocated to the first and second dynamic cache pools. This dynamic allocation gives the switch more buffer space to store packets on the queues that temporarily cannot be sent when the load is large.
In some possible implementations, a data packet containing an integrity check result is not itself copied to the checking cache pool. Even if such a packet turns out to be damaged, the scheme of this application still functions, and the limited computing resources can be allocated to other processes rather than spent on checking it.
Correction data packets requested by the switch or the terminal enter the corresponding queue by queue insertion; alternatively, a correction packet enters a queue-insertion buffer pool and is then inserted at the first position of the queue. The two approaches differ in when the terminal receives the packet: the first delivers it later than the second.
The present application further provides a switch, including:
the analysis unit is used for, in response to acquiring a data packet, parsing the data packet to obtain its address, wherein the address includes a MAC address and a public network address;
the queue unit is used for constructing a plurality of queuing queues according to the addresses, wherein the data packets in each queuing queue have the same priority;
the sending unit is used for sending the data packets in the queuing queues according to priority, and for adjusting the bandwidth of each queuing queue according to the amount of data packets buffered in the queue;
the configuration unit is used for configuring a first dynamic cache pool for the low-priority queuing queues, and for returning the data packets in the first dynamic cache pool to the corresponding queues by sequential alternate queue insertion; and
and the queue adjusting unit is used for merging a queuing queue into the adjacent higher- or lower-priority queue when the number of data packets in that queue per unit time is less than a set number.
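The queue adjusting unit's merge behaviour can be sketched as follows. The threshold value, the function name, and the convention that a smaller index means higher priority are all assumptions; the patent only states that a sparse queue is merged into a higher- or lower-priority one.

```python
from collections import deque

def merge_sparse_queue(queues, priority, packets_last_window, threshold=10):
    """Sketch of the queue adjusting unit (threshold and names assumed).

    If fewer than `threshold` packets arrived on the queue of the given
    priority in the last unit of time, merge that queue into the adjacent
    higher-priority queue if one exists, otherwise into the lower one.
    Convention (assumed): a smaller integer index means higher priority.
    """
    if packets_last_window >= threshold or priority not in queues:
        return
    # Prefer the adjacent higher-priority queue; fall back to the lower one.
    target = priority - 1 if priority - 1 in queues else priority + 1
    if target in queues:
        queues[target].extend(queues.pop(priority))
```

Merging sparse queues reduces the number of queues the scheduler must visit, at the cost of blending their packets into a neighbouring priority level.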
Further, the acquired data packets are screened, and data packets shorter than 64 bytes are removed.
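The screening step above amounts to dropping runt frames, since 64 bytes is the minimum valid Ethernet frame length. A minimal sketch (function name assumed):

```python
MIN_FRAME_LEN = 64  # minimum Ethernet frame length in bytes

def screen_packets(packets):
    """Drop packets shorter than 64 bytes before queuing, as described above."""
    return [p for p in packets if len(p) >= MIN_FRAME_LEN]
```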
Further, for the data packets in the high-priority queuing queues, the sending duration of each data packet is the same, and a higher-level queuing queue successively occupies the bandwidth of the lower-level queuing queues.
Further, when the bandwidth is insufficient, a second dynamic cache pool is allocated to the queuing queue left without bandwidth, and the second dynamic cache pool is used for storing the data packets squeezed out of the queue;
and after the bandwidth is restored, the data packets in the second dynamic cache pool are returned to the queue by sequential alternate queue insertion.
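The "sequential alternate queue insertion" return can be read as interleaving cached packets one-for-one with the packets already waiting, rather than dumping them all at the head at once. A sketch under that interpretation (the interleaving pattern and names are assumptions):

```python
from collections import deque

def return_from_cache_pool(queue, cache_pool):
    """Sketch: return cached packets to the queue by alternating them
    one-for-one with the packets already waiting (interpretation assumed)."""
    merged = deque()
    while queue or cache_pool:
        if cache_pool:
            merged.append(cache_pool.popleft())  # one cached packet ...
        if queue:
            merged.append(queue.popleft())       # ... then one queued packet
    queue.extend(merged)
```

This keeps the queue's existing packets from being starved while the cached backlog drains.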
Further, the switch also includes:
the copying unit is used for extracting data packets from the queuing queues and copying them to the check cache pool;
the checking unit is used for performing an integrity check on the data packets in the check cache pool;
the embedding unit is used for embedding the integrity check result into an unsent data packet in the queuing queue; and
and the request unit is used for requesting a correction data packet to be sent when a data packet that fails the integrity check is found.
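The checking and embedding units can be sketched as below. CRC32 is an assumption — the patent does not name a specific checksum — and the dictionary-based packet representation is purely illustrative.

```python
import zlib

def check_integrity(payload: bytes, expected_crc: int) -> bool:
    """Sketch of the checking unit: compare a CRC32 over the copied packet
    against the checksum carried with it (CRC32 is an assumed choice)."""
    return zlib.crc32(payload) & 0xFFFFFFFF == expected_crc

def embed_result(packet: dict, ok: bool) -> dict:
    """Sketch of the embedding unit: attach the check result to an unsent
    packet so the using terminal can see which packets were verified."""
    packet["integrity_checked"] = ok
    return packet
```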
Further, a data packet containing an integrity check result is not copied to the check cache pool.
Further, the correction data packet enters the corresponding queue by queue insertion;
or the correction packet enters the queue-insertion buffer pool and is then inserted at the first position of the queue.
In one example, the units in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, for example one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
For another example, when a unit in the apparatus is implemented by a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling programs. As another example, these units may be integrated together and implemented in the form of a system-on-chip (SoC).
Various objects, such as messages, information, devices, network elements, systems, apparatuses, actions, operations, procedures, and concepts, may be named in the present application. It should be understood that these specific names do not limit the related objects; the names may vary with circumstances, context, or usage habits, and the technical meaning of a term in the present application should be determined mainly from the function it performs and the technical effect it achieves in the technical solution.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that, in the various embodiments of the present application, "first", "second", and the like are used merely to distinguish different objects. For example, a first time window and a second time window merely denote different time windows; the terms imply nothing about the time windows themselves, and "first", "second", and the like impose no limitation on the embodiments of the present application.
It should also be understood that, unless otherwise stated or a logical conflict exists, the terms and descriptions of the various embodiments herein are mutually consistent and may be cited by one another, and the technical features of different embodiments may be combined, according to their inherent logical relationships, to form new embodiments.
The functions, if implemented in the form of software functional units and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, the product including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned computer-readable storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or various other media capable of storing program code.
The present application further provides a switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to retrieve and execute the instructions from the memory to perform the methods described above.
The present application also provides a computer program product comprising instructions that, when executed, cause the switch and the switch system to perform operations of the switch and the switch system corresponding to the above-described methods.
The present application further provides a system on a chip comprising a processor configured to perform the functions recited above, such as generating, receiving, transmitting, or processing data and/or information recited in the above-described methods.
The chip system may be formed by a chip, or may include a chip and other discrete devices.
The processor mentioned in any of the above may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the method for transmitting feedback information.
In one possible design, the system-on-chip further includes a memory for storing necessary program instructions and data. The processor and the memory may be decoupled, respectively disposed on different devices, and connected in a wired or wireless manner to support the chip system to implement various functions in the above embodiments. Alternatively, the processor and the memory may be coupled to the same device.
Optionally, the computer instructions are stored in a memory.
Alternatively, the memory is a storage unit in the chip, such as a register, a cache, and the like, and the memory may also be a storage unit outside the chip in the terminal, such as a ROM or other types of static storage devices that can store static information and instructions, a RAM, and the like.
It will be appreciated that the memory herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The non-volatile memory may be a ROM, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
Volatile memory can be RAM, which acts as an external cache. Many types of RAM are available, for example static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The embodiments described above are preferred embodiments of the present application and do not limit its scope of protection; accordingly, all equivalent changes made to the structure, shape, and principle of the present application shall fall within its scope of protection.

Claims (10)

1. A method of data transmission, comprising:
responding to the acquired data packet, analyzing the data packet, and acquiring an address of the data packet, wherein the address comprises an MAC (media access control) address and a public network address;
constructing a plurality of queuing queues according to the addresses, wherein the priority of the data packets in each queuing queue is the same;
sending the data packets on the queue according to the priority, and adjusting the bandwidth of the queue according to the buffer storage of the data packets on the queue; and
configuring a first dynamic cache pool to the low-priority queue, and returning the data packets sent by the first dynamic cache pool to the corresponding queue in a sequential alternate queue insertion manner;
when the number of data packets in the queue of a certain priority is less than the set number in unit time, the queue is merged into the queue of the higher or lower level.
2. The data transmission method according to claim 1, wherein the obtained data packets are screened to remove data packets with a length less than 64 bytes.
3. The data transmission method according to claim 1, wherein for the data packets in the high priority queue, the transmission time of each data packet is the same, and the higher queue sequentially occupies the bandwidth of the lower queue.
4. The data transmission method according to claim 3, wherein when the bandwidth is insufficient, a second dynamic buffer pool is allocated to the queuing queue left without bandwidth, and the second dynamic buffer pool is used for storing the data packets squeezed out of the queue;
and after the bandwidth is recovered, returning the data packets in the second dynamic cache pool to the queue in a sequential alternate queue-inserting mode.
5. The data transmission method according to any one of claims 1 to 4, further comprising:
extracting the data packets on the queue and copying the data packets to a checking buffer pool;
carrying out integrity check on the data packet in the check cache pool; and
the integrity check result is implanted into an unsent data packet on the queue;
wherein, for a data packet that does not pass the integrity check, a correction data packet is requested to be sent.
6. The data transmission method of claim 5, wherein the data packets containing the integrity check result are not copied to the check buffer pool.
7. The method according to claim 5, wherein the correction data packet enters the corresponding queue by queue insertion;
or the correction packet enters the queue-insertion buffer pool and is then inserted at the first position of the queue.
8. A switch, comprising:
the analysis unit is used for, in response to acquiring a data packet, parsing the data packet to obtain its address, wherein the address includes a MAC address and a public network address;
the queue unit is used for constructing a plurality of queuing queues according to the addresses, wherein the data packets in each queuing queue have the same priority;
the sending unit is used for sending the data packets in the queuing queues according to priority, and for adjusting the bandwidth of each queuing queue according to the amount of data packets buffered in the queue;
the configuration unit is used for configuring a first dynamic cache pool for the low-priority queuing queues, and for returning the data packets in the first dynamic cache pool to the corresponding queues by sequential alternate queue insertion; and
and the queue adjusting unit is used for merging a queuing queue into the adjacent higher- or lower-priority queue when the number of data packets in that queue per unit time is less than a set number.
9. A switch system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to retrieve and execute the instructions from the memory to perform the data transfer method of any of claims 1 to 7.
10. A computer-readable storage medium, the computer-readable storage medium comprising:
program for performing a data transmission method as claimed in any one of claims 1 to 7 when said program is run by a processor.
CN202310231807.5A 2023-03-13 2023-03-13 Data transmission method, switch and switch system Active CN115955447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310231807.5A CN115955447B (en) 2023-03-13 2023-03-13 Data transmission method, switch and switch system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310231807.5A CN115955447B (en) 2023-03-13 2023-03-13 Data transmission method, switch and switch system

Publications (2)

Publication Number Publication Date
CN115955447A true CN115955447A (en) 2023-04-11
CN115955447B CN115955447B (en) 2023-06-27

Family

ID=85896313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310231807.5A Active CN115955447B (en) 2023-03-13 2023-03-13 Data transmission method, switch and switch system

Country Status (1)

Country Link
CN (1) CN115955447B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116996450A * 2023-08-05 2023-11-03 Harbin University of Commerce Management data processing method, device and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030112802A1 (en) * 2001-11-16 2003-06-19 Nec Corporation Packet transfer method and apparatus
CN1426666A * 2000-04-28 2003-06-25 Switchcore AB Method and apparatus for managing packet queues in switches
CN1798092A * 2004-12-29 2006-07-05 ZTE Corporation Fast weighted polling dispatching method, and fast weighted polling dispatcher and device
CN106685853A * 2016-11-23 2017-05-17 Taikang Insurance Group Co., Ltd. Data processing method and apparatus
CN109246031A * 2018-11-01 2019-01-18 Zhengzhou Yunhai Information Technology Co., Ltd. Switch port queue communication method and apparatus
CN110099000A * 2019-03-27 2019-08-06 Huawei Technologies Co., Ltd. Packet forwarding method and network device
CN112134813A * 2020-09-22 2020-12-25 Shanghai Sunmi Technology Group Co., Ltd. Bandwidth allocation method based on application process priority and electronic equipment
CN112272933A * 2018-06-05 2021-01-26 Huawei Technologies Co., Ltd. Queue control method, device and storage medium
CN114489952A * 2022-01-28 2022-05-13 Shenzhen Yunbao Intelligent Co., Ltd. Queue distribution method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
B. Briscoe, Ed. (Independent); K. De Schepper (Nokia Bell Labs); M. Bagnulo Braun (Universidad Carlos III de Madrid); G. White (CableLabs): "Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture", draft-ietf-tsvwg-l4s-arch-05, IETF *
Ge Changwei: "Design and Implementation of a Queue Management Method in a WLAN System", China Masters' Theses Full-text Database *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116996450A * 2023-08-05 2023-11-03 Harbin University of Commerce Management data processing method, device and system
CN116996450B * 2023-08-05 2024-03-22 Harbin University of Commerce Management data processing method, device and system

Also Published As

Publication number Publication date
CN115955447B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
US7355971B2 (en) Determining packet size in networking
US20130114443A1 (en) Layered Multicast and Fair Bandwidth Allocation and Packet Prioritization
US20050276230A1 (en) Communication statistic information collection apparatus
US11968111B2 (en) Packet scheduling method, scheduler, network device, and network system
JP2002541732A5 (en)
CN107231269B (en) Accurate cluster speed limiting method and device
US11695710B2 (en) Buffer management method and apparatus
CN115955447B (en) Data transmission method, switch and switch system
EP4175333A1 (en) Information collection method and apparatus, storage medium, and electronic apparatus
EP1554644A2 (en) Method and system for tcp/ip using generic buffers for non-posting tcp applications
EP3188419A2 (en) Packet storing and forwarding method and circuit, and device
EP3952233A1 (en) Tcp congestion control method, apparatus, terminal, and readable storage medium
CN112313911A (en) Method and computer program for transmitting data packets, method and computer program for receiving data packets, communication unit and motor vehicle having a communication unit
US10999210B2 (en) Load sharing method and network device
CN112838992A (en) Message scheduling method and network equipment
CN115914130A (en) Data traffic processing method and device of intelligent network card
CN117499351A (en) Message forwarding device and method, communication chip and network equipment
US20030091067A1 (en) Computing system and method to select data packet
US10938772B2 (en) Access device for analysis of physical links and method thereof
CN112398735B (en) Method and device for batch processing of messages
CN107483334B (en) Message forwarding method and device
CN115086001B (en) Sampling data caching method, device and storage medium
CN114095760B (en) Data transmission method and data transmission device thereof
WO2024045599A1 (en) Message matching method, computer device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant