Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the overall scheme of a cross-layer congestion control method based on MAC-layer link quality for unmanned aerial vehicle networking in an embodiment of the present invention; as shown in Fig. 1, the flow of the cross-layer congestion control method includes:
S1, the MAC layer implements an SPMA mechanism;
In the embodiment of the invention, a Tactical Targeting Network Technology (TTNT) data link is first constructed according to the transmission success rate and communication delay requirements of the data packets; an SPMA mechanism is then built on the MAC layer of this TTNT data link, and priority-queue transmission is realized between the nodes.
First, a priority field needs to be added to the packet attributes in the TTNT data link. Second, a priority queue is implemented at the MAC layer to realize SPMA, DSR source routing is implemented at the network layer, and the TCP protocol is implemented at the transport layer. A system performance test is then carried out on the whole TTNT data link; once the test passes, this data link serves as the baseline TTNT data link on which the improvements of the invention are implemented.
The system performance test indexes mainly include the packet transmission success rate and the communication delay. In the embodiment of the invention, the transmission success rate must be at least 99% and the communication delay below 2 ms; once both indexes are met, the TTNT data link can undergo the subsequent cross-layer and congestion-control improvements.
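The two acceptance indexes above amount to a simple gating check. The sketch below illustrates it in Python; the function name and the exact comparison operators (strict versus non-strict) are assumptions, not specified by the embodiment:

```python
def meets_test_indexes(success_rate, delay_ms,
                       min_success=0.99, max_delay_ms=2.0):
    """True when the data link passes both system-performance indexes:
    transmission success rate above 99% and communication delay under 2 ms."""
    return success_rate > min_success and delay_ms < max_delay_ms
```

Only a data link that meets both indexes proceeds to the cross-layer and congestion-control improvements.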
After the TTNT data link is constructed, when a data packet generated by the application layer passes through the network layer and reaches the MAC layer, the MAC layer inserts it into the corresponding queue according to its priority and transmits it from that queue;
wherein the TTNT data link can be constructed on the NS3 network simulator.
In the embodiment of the invention, the MAC layer implements the SPMA mechanism, so the invention maintains priority queues at the MAC layer: when a network-layer packet reaches the MAC layer, it is inserted into the corresponding queue according to its service priority, and transmission always starts from the higher-priority queues.
Assume the number of queues is set to 3, representing packets of three priorities. When a packet constructed by the application layer reaches the MAC layer from the network layer, the MAC layer determines its priority from the packet attributes and inserts it into the corresponding priority queue in priority order. Taking service priority as an example, priorities are ordered from high to low according to packet size, and packets are placed into the preset queues in that priority order.
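The queue insertion and highest-priority-first removal described above can be sketched as follows. Class and field names are hypothetical, and priority 0 is assumed to be the highest:

```python
class SpmaPriorityQueues:
    """Per-priority FIFO queues at the MAC layer (0 = highest priority)."""

    def __init__(self, levels=3):
        self.queues = [[] for _ in range(levels)]

    def enqueue(self, packet):
        # the MAC layer reads the priority field from the packet attributes
        # and appends the packet to the matching FIFO queue
        self.queues[packet["priority"]].append(packet)

    def dequeue_head(self):
        # transmission starts from the highest-priority non-empty queue
        for queue in self.queues:
            if queue:
                return queue.pop(0)
        return None  # all queues empty
```

For example, enqueueing a priority-2 packet and then a priority-0 packet yields the priority-0 packet first on dequeue.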
After queuing, the head-of-line packet in each queue is sent according to a CSMA/CA-based protocol and its priority:
when the channel is idle, the head-of-line packet is taken from the highest-priority queue and sent;
when the channel is busy, the current channel load is measured and compared with the load threshold of the current priority: if the load is below the threshold, the node accesses the channel and sends the head-of-line packet; otherwise it backs off and waits, continuing load detection during the backoff wait, and sends the head-of-line packet once the sending condition is met.
In the improved SPMA protocol of the invention, the packet priority classes are determined at the application layer and set manually in actual applications. In this embodiment, the MAC layer first checks whether a packet exists in a high-priority queue and marks the higher-priority packet as pending, then compares the measured channel occupancy with the threshold of the pending packet. If the occupancy is below the threshold, the packet is removed from the queue and transmitted; if it is above the threshold, the node sets a backoff time according to the packet priority and the occupancy value, waits, re-detects the channel occupancy when the backoff time expires, and repeats this process.
Fig. 2 is a flowchart of an SPMA access control procedure adopted by the MAC layer in the embodiment of the present invention, and as shown in fig. 2, the access control procedure includes:
the current channel load is compared with the priority threshold: if the load is below the threshold the message is sent; otherwise the backoff algorithm is invoked to compute the backoff time, channel load detection continues during the backoff period until the message is sent, and the packet is destroyed if the backoff time is exceeded.
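A minimal sketch of this access decision follows. The fixed slot budget and the caller-supplied load sensor are assumptions; the embodiment computes the backoff time from the packet priority and the occupancy value:

```python
def spma_access(channel_load, load_threshold, sense=None, max_backoff_slots=8):
    """One SPMA access attempt for a head-of-line packet: send immediately
    when the channel load is under the priority's load threshold; otherwise
    back off while re-sensing the load each slot, and destroy the packet
    when the backoff budget is exhausted."""
    if channel_load < load_threshold:
        return "send"
    # load exceeds the priority threshold: back off, re-sensing each slot
    for _ in range(max_backoff_slots):
        load = sense() if sense is not None else channel_load
        if load < load_threshold:
            return "send"
    return "drop"  # backoff time exceeded: the packet is destroyed
```

For instance, if the sensed load falls below the threshold during backoff, the packet is sent; if the channel stays loaded for the whole budget, the packet is dropped.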
The channel load is expressed as the channel occupancy, and the threshold of a pending packet is its priority threshold (priority load threshold).
In the priority queues, the priority is the service priority: a priority field added to the packet attributes indicates the priority, and a load threshold is set for each priority from high to low, i.e. each priority corresponds to one load threshold, wherein the priority of each packet is determined from the size or type of the character string in its priority field.
S2, the network layer implements DSR routing
In the embodiment of the invention, the DSR routing protocol is adopted at the network layer to construct routes for transmitting packets between nodes in the data link, so that a node can properly build a route when sending data; after construction is completed, a system performance test can be run on the data link model.
Fig. 3 is a flowchart of the DSR route construction adopted by the network layer in an embodiment of the present invention; as shown in Fig. 3, the flow is as follows. When a node S has a packet to send to a destination node D but currently has no route to D in its route cache, S stores the packet in its send buffer and starts a route discovery process to find a route. To prevent packets from being buffered indefinitely, a packet is discarded if it waits in the send buffer longer than MaxSendBufferTime (30 seconds by default). For route discovery, S transmits a route request packet as a local broadcast message specifying the destination address and a unique request identifier. A node receiving a route request packet checks the identifier and destination address in the request header: if the same request was received before, it is recognized as a duplicate and silently discarded; otherwise the node appends its own address to the list in the route request header and rebroadcasts it. When the route request reaches its destination, the target node sends a route reply packet back to the initiator of the request, including a copy of the node-address list accumulated in the request header. When the route reply reaches the initiator, node S caches the new route in its route cache and sends the data packet to node D using the source route carried in the reply; every intermediate node that receives the route reply packet also extracts its own route to the destination and stores it in its own route cache.
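The route-request flooding with duplicate suppression and address-list accumulation can be modelled as a breadth-first search over the network graph. This is a simplified illustration only (no send buffer, timers, route caches, or reply propagation), with a hypothetical adjacency-dict topology:

```python
def dsr_route_discovery(topology, src, dst):
    """Flood a route request over `topology` (node -> neighbor list) and
    return the first node-address list accumulated from src to dst,
    mimicking the RREQ rebroadcast with duplicate suppression."""
    seen = {src}          # nodes that already processed this request id
    frontier = [[src]]    # partial address lists carried in RREQ headers
    while frontier:
        path = frontier.pop(0)
        node = path[-1]
        if node == dst:
            return path   # the target would return this list in a route reply
        for neigh in topology.get(node, []):
            if neigh not in seen:   # duplicates are silently discarded
                seen.add(neigh)
                frontier.append(path + [neigh])
    return None           # no route found: a route error would be reported
```

On a topology where S reaches D through A, discovery returns the address list S, A, D, which the initiator would then use as the source route.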
With the DSR routing protocol implemented at the network layer, a node can properly construct routes when sending data and issues a route error message when a route fails; after implementation, a system performance test is carried out on the model.
S3, the MAC layer delivers the link bit error rate and the ARQ retransmission count in real time
While ensuring that the model's performance conforms to the TTNT working mechanism, the adjustment mechanism of the TCP sending window is modified at the transport layer, and a private member of the network layer is added to the class encapsulating the transport layer so that the transport layer can access network-layer data, realizing a communication pipeline and thereby the cross-layer information interaction between the network layer and the transport layer. The communication pipeline in this embodiment is an independently established inter-layer pipeline, used mainly to deliver the MAC layer's link bit error rate and ARQ retransmission count.
S4, the transport layer detects packet loss and judges whether the current link bit error rate and ARQ retransmission count exceed the preset values
Fig. 4 is a flowchart of the cross-layer optimization algorithm employed by the transport layer in an embodiment of the present invention. In a traditional IP-based TTNT network, the high bit error rate caused by the complexity of battlefield wireless links leads to packet loss after repeated retransmissions, while node movement causes "route failures" that produce packet loss, reordering, and frequent route switching; directly applying the TCP congestion control mechanism in a battlefield data link therefore often degrades network performance severely. The invention accordingly adds a link-quality detection step to the transport-layer TCP congestion control scheme. When TCP detects a high packet loss rate, it first performs cause detection: if the MAC layer reports a high bit error rate or many ARQ retransmissions, the loss is attributed to link errors or data retransmission rather than network congestion, and the sending window need not be reduced; otherwise the loss is attributed to congestion and the sending window is reduced, achieving the purpose of flow control.
In the embodiment of the invention, whether the network is congested is judged from the link bit error rate and the ARQ (automatic repeat request) retransmission count. ARQ is an automatic retransmission mechanism: the sender starts a timer when it sends data; if the receiver's ACK arrives before the timer expires, the next frame is transmitted; otherwise the transmission is considered timed out, the data frame is retransmitted, and the retransmission count is incremented by 1; when the count reaches the maximum number of retransmissions, the whole packet is discarded.
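The stop-and-wait behaviour described above can be modelled as follows. Here `transmit` abstracts "send the frame and wait for an ACK before the timer expires", and the default maximum of 3 retransmissions is only a placeholder:

```python
def arq_send(frame, transmit, max_retries=3):
    """Send `frame` with ARQ: on each timeout, retransmit and increment the
    retransmission count; when the count reaches `max_retries` the whole
    packet is discarded (None), otherwise the count used is returned."""
    retries = 0
    while True:
        if transmit(frame):   # ACK received before the timer expired
            return retries
        retries += 1          # timeout: retransmit and count it
        if retries >= max_retries:
            return None       # maximum reached: discard the whole packet
```

A frame that succeeds on its third attempt reports 2 retransmissions; a frame that never gets an ACK is discarded after the maximum count.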
The application-layer service information type is introduced into the MAC layer's traditional ARQ mechanism, and the retransmission count is adjusted dynamically per service to reduce delay. When the service type received by the MAC layer is ordinary information, the sender retransmits the last data frame stored in the buffer on each timeout and discards the frame after repeated timeouts; when the service type is time-sensitive, i.e. a service with strict delay requirements during transmission, reducing the number of retransmissions advances the transmission of the next data frame and reduces the overall packet delay.
Fig. 5 is a flowchart illustrating the MAC layer dynamically adjusting the ARQ retransmission count in an embodiment of the present invention; as shown in Fig. 5, the dynamic adjustment process includes:
after receiving a data packet sent by the network layer, the MAC layer judges the service type of the packet from its attributes;
the retransmission-timer time is obtained according to the service type;
whether the current service type is time-sensitive is judged: if so, the retransmission-timer time is adjusted; otherwise the default retransmission-timer time is kept.
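The steps above reduce to a small dispatch on the service type. The retry limits and type labels below are illustrative assumptions; in the NS3 model they would be applied through the station manager rather than plain constants:

```python
DEFAULT_MAX_RETX = 7          # default retransmission limit (assumed value)
TIME_SENSITIVE_MAX_RETX = 2   # reduced limit for delay-critical traffic (assumed)

def max_retx_for(service_type):
    """Choose the ARQ retransmission limit from the packet's service type:
    time-sensitive services get fewer retransmissions to cut delay, while
    every other service keeps the default limit."""
    if service_type == "time-sensitive":
        return TIME_SENSITIVE_MAX_RETX
    return DEFAULT_MAX_RETX
```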
In the embodiment of the present invention, since the NS3 model is adopted, the GetRemoteStationManager() function may be called to obtain the remote station manager, and its SetMaxSlrc() function may be called to adjust the maximum retransmission count.
In a preferred embodiment of the invention, the communication pipeline between the transport layer and the MAC layer is built through a private member: a private member of the MAC layer is added to the class encapsulating the transport layer, so that the transport layer can access MAC-layer data and the cross-layer information interaction between the layers is realized. Under this cross-layer interaction, the invention delivers both the link bit error rate (obtained directly) and the current ARQ retransmission count to TCP; during congestion control, when (1) the bit error rate exceeds 3e-6 or (2) the retransmission count exceeds 3, the size of the sending window is not changed.
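The window decision can be condensed into a small rule. The two thresholds come from the embodiment; treating the conditions as a logical OR and halving the window on congestion are assumptions (the text only states that the window is reduced):

```python
BER_THRESHOLD = 3e-6   # preset bit-error-rate value from the embodiment
RETX_THRESHOLD = 3     # preset ARQ retransmission count from the embodiment

def adjust_cwnd(cwnd, loss_detected, link_ber, arq_retx):
    """Cross-layer congestion decision: on packet loss, shrink the TCP
    sending window only when the MAC layer does NOT report a bad link;
    otherwise the loss is attributed to link errors or retransmission
    and the window is left unchanged."""
    if not loss_detected:
        return cwnd
    if link_ber > BER_THRESHOLD or arq_retx > RETX_THRESHOLD:
        return cwnd              # link-quality loss: keep the window size
    return max(1, cwnd // 2)     # genuine congestion: reduce the window
```

With a loss event and a reported bit error rate of 5e-6 the window stays fixed, while the same loss on a clean link halves it.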
Although the above embodiment gives specific preset values (rate 3e-6 and count 3), these parameters are preferred results obtained by trial and error on the current model. While this preferred result also applies to other similar models, to improve the utility of the invention in other data link system models, the preset rate value can be adjusted as follows:
An ideal preset rate value should yield a link bit-error-rate threshold that works as well under different data link models as it does in the system of this embodiment, so the invention adopts an adaptive threshold method. Specifically, the average link bit error rate in the current data link system is calculated, covering both the percentage of erroneous information in the total information of a single transmission and the percentage of erroneous information in the total information of the whole data link system. Because the single-transmission average and the system-wide average are unbalanced (for example, a single transmission may show 0.1 or 0.5 while the system average is 0.3), the commonly obtained system-wide average cannot fully reflect the link bit error rate of each transmission. The bit error rate during transmission is affected by many factors, including interference, transmitter power, modulation mode, and bandwidth, each with a different influence; some of these factors are constant and some vary with the external environment, and they cannot be described by a simple function. The invention therefore uses machine learning to learn the hidden features corresponding to these factors, improves the precision of the hidden features through repeated training and testing, and finally inputs the influence factors of the current data link system into the trained model to output the preset value corresponding to that system.
In other preferred embodiments, different data link systems that meet the performance test indexes (packet transmission success rate, communication delay, etc.) are built. These systems are designed to differ from the system of this embodiment only in certain influence parameters, such as bandwidth or modulation mode, and a controlled-variable method is used to obtain each system's average single-transmission bit error rate, i.e. any two systems differ in at least one influence factor. The influence factors are initialized, the different data link systems are taken as a population, and the average bit error rate of each system serves as an initial individual; each data link system is then evolved through the process of a genetic algorithm, with the population updated by crossover, mutation, and similar operators, so as to improve the bit-error-rate detection accuracy of the data link system.
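The evolutionary adaptation sketched above can be illustrated with a minimal genetic-algorithm loop. The truncation selection, arithmetic crossover, and Gaussian mutation below are generic textbook choices, not the exact operators of the embodiment:

```python
import random

def evolve_thresholds(initial_pop, fitness, generations=30, seed=0):
    """Minimal genetic-algorithm loop over candidate bit-error-rate values:
    keep the better half of the population, breed children by averaging two
    parents (crossover) plus Gaussian noise (mutation), and return the best
    individual found. Lower fitness is better."""
    rng = random.Random(seed)
    pop = list(initial_pop)
    for _ in range(generations):
        pop.sort(key=fitness)                    # rank individuals
        parents = pop[: max(2, len(pop) // 2)]   # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                              # crossover
            child += rng.gauss(0, 0.05 * abs(child) + 1e-9)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

With a fitness measuring distance to an unknown target rate, the population converges toward that target over the generations.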
A throughput test is performed on the TTNT data link model by simulation; the result is calculated after the network layer receives the packet and removes the header overhead. In the simulation, the network bandwidth is 10 M and the packet sending rate is 100 packets/s, with three service types (service type A, service type B, and service type C): the packet size of A is 500 Bytes, of B 490 Bytes, and of C 480 Bytes, with priority decreasing in that order. As a result, with 6 nodes the throughput of A is about 485.55 Kbps, of B about 484.74 Kbps, and of C about 485.04 Kbps. With 30 nodes the channel is still unsaturated, and the throughput of A rises to about 2425.74 Kbps, of B to about 2425.05 Kbps, and of C to about 2425.19 Kbps. When the number of nodes reaches 54, the channel is saturated and, per the SPMA working principle, the sending success rate of high-priority packets is guaranteed; the throughput of A is then about 2065.12 Kbps, of B about 4365.98 Kbps, and of C about 4363.87 Kbps, consistent with the SPMA working principle.
A delay test is performed on the TTNT data link model by simulation; the delay is calculated as the current receiving time minus the sending time when the network layer receives the packet. As a result, with 6 nodes the delay of A is about 5.718 ms, of B about 5.158 ms, and of C about 4.926 ms. With 30 nodes the channel is still unsaturated; the delay of A is about 5.564 ms, of B about 5.311 ms, and of C about 4.986 ms. When the number of nodes reaches 54, the channel is saturated; per the SPMA working principle, the success rate of high-priority packets is guaranteed, i.e. low-priority services are sent only after the high-priority services have been sent. The delay of A is then about 34.834 ms, of B about 5.338 ms, and of C about 4.919 ms, consistent with the SPMA working principle.
The simulation also tests the TTNT data link throughput when the link is disturbed: the inter-node distance is increased, node mobility causes repeated route reconstruction, and TCP detects the data loss and reduces its sending window. As a result, with 6 nodes the throughput of A is about 392.44 Kbps, of B about 392.51 Kbps, and of C about 392.22 Kbps. With 30 nodes the channel is unsaturated, and the throughput of A rises to about 1960.01 Kbps, of B to about 1960.1 Kbps, and of C to about 1959.33 Kbps. When the number of nodes reaches 54, the channel is saturated and, per the SPMA working principle, the success rate of high-priority packets is guaranteed; the throughput of A is about 1742.93 Kbps, of B about 3528.16 Kbps, and of C about 3528.91 Kbps, clearly lower than before the interference, with the low-priority services reduced most noticeably.
The simulation also tests the TTNT data link delay when the link is disturbed. As a result, with 6 nodes the delay of A is about 14.212 ms, of B about 12.4124 ms, and of C about 11.494 ms. With 30 nodes the channel is unsaturated; the delay of A is about 13.961 ms, of B about 12.613 ms, and of C about 11.718 ms. When the number of nodes reaches 54, the channel is saturated and, per the SPMA working principle, the success rate of high-priority packets is guaranteed, i.e. low-priority services wait until the high-priority services have been sent. The delay of A is then about 85.997 ms, of B about 12.934 ms, and of C about 11.572 ms, clearly higher than before the interference, with the low-priority services increasing most noticeably.
Fig. 6 compares the throughput of the TTNT data link system before and after optimization. As shown in Fig. 6, with 6 nodes the effect of optimization is not obvious, because with few nodes the throughput of a single service stream is low and the channel is unsaturated. The effect becomes more and more pronounced as the number of nodes grows, because after optimization the sending window stays fixed until a routing failure is detected, rather than being reduced immediately. As the number of nodes keeps increasing and the channel saturates, the system throughput before and after optimization each stabilize, and the optimized throughput is clearly much higher than before optimization, showing that the congestion control method is effective.
Fig. 7 compares the TTNT data link system delay before and after optimization. As shown in Fig. 7, as the nodes increase, before optimization each routing failure causes packet loss, the TCP sending window shrinks, and sending packets therefore takes much more time. Before the channel saturates, the average delay of the service flows remains almost stable; once the channel saturates, the high-priority success rate must be guaranteed, so low-priority packets must wait and delay begins to grow. Because the sending window is smaller before optimization, delay grows further; after optimization the window remains unchanged and relatively large, so delay is lower. Comparing the system delays before and after optimization, the optimized delay is clearly lower than before optimization, showing that the congestion control method is effective.
Based on cross-layer optimization of the TTNT data link network, the invention guarantees the throughput and transmission success rate of higher-priority services in the network and, in networks with poor link quality, reduces transmission delay by about 6 ms compared with before optimization, thereby achieving the purpose of improving system performance.
In the description of the present invention, it is to be understood that the terms "coaxial", "bottom", "one end", "top", "middle", "other end", "upper", "one side", "top", "inner", "outer", "front", "center", "both ends", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "disposed," "connected," "fixed," "rotated," and the like are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; the terms may be directly connected or indirectly connected through an intermediate, and may be communication between two elements or interaction relationship between two elements, unless otherwise specifically limited, and the specific meaning of the terms in the present invention will be understood by those skilled in the art according to specific situations.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.