Detailed Description
The following description of embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Fig. 1 is a general flow diagram of the cross-layer congestion control method based on MAC-layer link quality for unmanned aerial vehicle networking in an embodiment of the invention; as shown in Fig. 1, the flow of the cross-layer congestion control method includes:
S1, the MAC layer implements the SPMA mechanism;
In the embodiment of the invention, a Tactical Targeting Network Technology (TTNT) data link is first constructed according to the packet transmission success rate and communication delay requirements; the SPMA mechanism is then built at the MAC layer of this data link, and priority queue transmission is realized among the nodes.
Specifically, the TTNT data link adds a priority field to the packet attributes. A priority queue is developed in the MAC layer to realize SPMA, DSR source routing is realized in the network layer, and the TCP protocol is realized in the transport layer. A system performance test is then carried out on the whole TTNT data link; if no error is found, it serves as the basic TTNT data link, on which the subsequent improvements of the invention are implemented.
The indexes of the system performance test are mainly the packet transmission success rate and the communication delay. In the embodiment of the invention, the transmission success rate must be at least 99% and the communication delay less than 2 ms; once both test indexes are met, the TTNT data link of the invention can undergo the subsequent cross-layer and congestion control improvements.
After the TTNT data link is constructed, a data packet generated by the application layer passes through the network layer and, on reaching the MAC layer, is inserted into the queue corresponding to its priority; the queues are then transmitted.
the construction process of the tactical targeting network technology data link can be constructed based on an NS3 network simulator.
In the embodiment of the invention, the MAC layer realizes the SPMA mechanism: a priority queue is developed in the MAC layer, and when a network-layer data packet arrives at the MAC layer, it is inserted into the corresponding queue according to its service priority; transmission then starts from the queue with the highest priority.
Assuming the number of queues is set to 3, packets of three priorities are supported. When a packet constructed by the application layer reaches the MAC layer from the network layer, the MAC layer judges its priority from the packet attributes and inserts it into the corresponding priority queue in priority order. Taking service priority as an example, the priorities in this embodiment are ordered from high to low by packet size, and packets are placed into the preset queues in that order.
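The enqueue and head-of-line behaviour described above can be sketched as follows. This is an illustrative Python model, not the NS3 implementation; the queue count and the packet fields are assumptions for illustration.

```python
from collections import deque

NUM_PRIORITIES = 3   # three priority levels, as in the example above

class PriorityMac:
    """A MAC layer with one FIFO queue per priority level (0 is highest)."""

    def __init__(self, num_priorities=NUM_PRIORITIES):
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, packet):
        # Insert a packet arriving from the network layer into the queue
        # matching the priority carried in its attributes.
        self.queues[packet["priority"]].append(packet)

    def next_packet(self):
        # Return (and remove) the head-of-line packet of the highest
        # non-empty priority queue, or None if all queues are empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

mac = PriorityMac()
mac.enqueue({"priority": 2, "size": 480})
mac.enqueue({"priority": 0, "size": 500})
mac.enqueue({"priority": 1, "size": 490})
# The priority-0 (highest) packet is served first regardless of arrival order.
```

Dequeuing always scans from the highest-priority queue downwards, which is what realizes "transmission starts from the queue with the highest priority".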
After queuing, the head-of-line packets of the queues are sent in priority order according to a CSMA/CA-based protocol.
When the channel is idle, the head-of-line packet is taken from the highest-priority non-empty queue and sent;
when the channel is busy, the current channel load is obtained and compared with the load threshold of the current priority. If the current load is below the threshold, the head-of-line packet accesses the channel and is sent; otherwise the packet backs off and waits. Load detection continues during back-off, and the head-of-line packet is sent once the sending condition is met.
In the SPMA protocol as modified by the present invention, packet priority is assigned at the application layer and is decided manually in practical applications. In this embodiment, the MAC layer first checks whether a packet exists in a high-priority queue and moves the highest-priority packet into the to-be-sent state; it then compares the currently measured channel occupancy with the threshold of that packet. If the channel occupancy is below the threshold, the packet is removed from the queue and transmitted; if it is above the threshold, the node sets a back-off time according to the packet priority and the channel occupancy, waits, detects the channel occupancy again when the back-off time expires, and repeats the above flow.
Fig. 2 shows the SPMA access control flow chart adopted by the MAC layer in the embodiment of the present invention; as shown in Fig. 2, the access control flow includes:
comparing the current channel load with the priority threshold: if the channel load is below the priority threshold, the message is sent; otherwise a back-off algorithm is invoked to calculate the back-off time. Channel load detection continues during back-off until the message is sent; if the message still cannot be sent when the back-off time is exceeded, the data packet is destroyed.
The channel load is embodied by the channel occupancy, and the threshold of the packet to be sent is its priority threshold (the load threshold of that priority).
In the priority queue, the priority is the service priority: a new priority field added to the packet attributes represents the priority, and a corresponding load threshold is set for each priority from high to low, i.e. each priority corresponds to one load threshold. The priority of each packet is judged according to the size or type of the character string in the priority field.
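The access decision in Fig. 2, with one load threshold per priority, can be sketched as follows. The threshold values and the fixed retry budget are illustrative assumptions; in the actual scheme the back-off time is computed from the priority and the channel occupancy.

```python
# Per-priority load thresholds (assumed values): higher-priority traffic
# tolerates a more heavily loaded channel before backing off.
LOAD_THRESHOLDS = {0: 0.9, 1: 0.6, 2: 0.4}

def spma_access(priority, channel_occupancy, max_backoffs=5):
    """Return 'send' if the measured channel occupancy is below the load
    threshold of this priority; otherwise back off and re-test, destroying
    the packet once the back-off budget is exhausted.

    channel_occupancy is a callable so the load can be re-measured on
    every attempt, as in the flow of Fig. 2.
    """
    for _ in range(max_backoffs):
        if channel_occupancy() < LOAD_THRESHOLDS[priority]:
            return "send"
        # In a real MAC a timed back-off scaled by priority and occupancy
        # happens here; this sketch simply retries.
    return "destroy"

# On a channel at 70% occupancy, a priority-0 packet gets through
# (threshold 0.9) while a priority-2 packet does not (threshold 0.4).
print(spma_access(0, lambda: 0.7))
print(spma_access(2, lambda: 0.7))
```

The key point the sketch captures is that one measured load value yields different access decisions for different priorities, which is what lets SPMA protect high-priority traffic under load.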
S2, the network layer realizes DSR routing
In the embodiment of the invention, the DSR routing protocol is adopted at the network layer to construct routes for transmitting packets between nodes in the data link. With DSR realized in the network layer, a route can be constructed whenever a node sends data; after construction is complete, the system performance test can be performed on the data link model.
FIG. 3 is a flow chart of DSR route construction employed by the network layer in an embodiment of the present invention; as shown in Fig. 3, the implementation flow is as follows. When a node S has a packet to send to a destination node D but currently has no route to that node in its route cache, node S saves the packet in its send buffer and initiates route discovery. To prevent packets from being buffered indefinitely, a packet is discarded if it waits in the send buffer for more than MaxSendBuffTime (30 seconds by default). For route discovery, S transmits a route request packet as a local broadcast message specifying the destination address and a unique request identifier. A node receiving the route request checks the identifier and destination address in the request header; if the same packet was received before, it is recognized as a duplicate and silently discarded; otherwise the node appends its own address to the list in the route request header and rebroadcasts it. When the route request reaches its destination, the target node sends a route reply back to the originator, including a copy of the node address list accumulated in the reply header. When the route reply arrives, node S caches the new route in its route cache and sends the data packet to D using the source route carried in the reply; every intermediate node that receives the route reply extracts its own route to the destination and stores it in its own route cache.
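The route discovery flow above can be sketched over a static topology as a breadth-first flood. This is a simplified model, not the DSR protocol itself: the adjacency map stands in for radio connectivity, and the visited set stands in for DSR's duplicate detection by (originator, request identifier).

```python
from collections import deque

def dsr_route_discovery(topology, source, destination):
    """Flood a route request from source; each hop appends its own address
    before rebroadcasting, so the request reaching the destination carries
    the full source route, which the route reply returns to the originator."""
    seen = {source}                    # duplicate requests are silently dropped
    frontier = deque([[source]])       # each entry: the accumulated address list
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == destination:
            return path                # the route reply carries this list back
        for neighbour in topology[node]:
            if neighbour not in seen:  # first copy only; duplicates discarded
                seen.add(neighbour)
                frontier.append(path + [neighbour])
    return None                        # no route: a route error would be raised

topology = {"S": ["A", "B"], "A": ["D"], "B": ["A"], "D": []}
print(dsr_route_discovery(topology, "S", "D"))   # ['S', 'A', 'D']
```

The returned list is exactly the source route a DSR header would carry, which is why intermediate nodes can extract and cache their own sub-routes from the reply.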
With the DSR routing protocol realized in the network layer, routes are constructed when nodes send data and a route error message is sent when a route fails; after completion, the system performance test is performed on the model.
S3, the MAC layer transmits the link bit error rate and ARQ retransmission count in real time
Provided that the performance of the model accords with the TTNT working mechanism, the adjustment mechanism of the TCP send window is modified at the transport layer, and a private member of the network layer is added to the transport-layer encapsulation class so that the transport layer can access network-layer data, realizing a communication pipeline and thus a cross-layer information interaction function between the network layer and the transport layer. The communication pipeline in the embodiment of the invention is an independently established inter-layer pipeline, mainly used for transmitting the MAC-layer link bit error rate and ARQ retransmission count.
S4, the transport layer detects packet loss and judges whether the current link bit error rate and ARQ retransmission count exceed the preset values
FIG. 4 is a flowchart of the cross-layer optimization algorithm used by the transport layer in an embodiment of the present invention. In a traditional IP-based TTNT network system, the high bit error rate caused by the complexity of the battlefield wireless link leads to repeated retransmission and loss of data packets, and route failures caused by node movement bring packet loss, disorder and frequent route switching; applying the TCP congestion control mechanism directly to the battlefield data link therefore degrades network performance seriously. The invention thus adds a link-quality detection process to the transport-layer TCP congestion control scheme. When TCP detects a high network packet loss rate, the cause is examined first: if the MAC layer detects a very high bit error rate or a very large ARQ retransmission count, the loss is caused by link errors or data retransmission rather than by congestion, and the send window need not be reduced; otherwise the network is congested and the send window is reduced to achieve flow control.
In the embodiment of the invention, the link bit error rate and the ARQ retransmission count are used to judge whether the network is congested. ARQ is an automatic repeat request mechanism: a timer is set when the sender transmits data; if the receiver's ACK is received before the timer expires, the next frame is sent; otherwise the transmission is considered timed out, the data frame is retransmitted, and the retransmission count is incremented by 1. When the count reaches the maximum number of retransmissions, the whole data packet is discarded.
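The ARQ behaviour just described can be sketched as follows. The timer is abstracted away: the callable returning True stands in for "the ACK arrived before the timer expired", and the retransmission budget is an illustrative value.

```python
def arq_send(send_frame, max_retransmissions=3):
    """One original transmission plus up to max_retransmissions retries.

    Returns ('delivered', n) if an ACK arrives after n retransmissions,
    or ('discarded', max_retransmissions) once the budget is exhausted.
    """
    for retransmissions in range(max_retransmissions + 1):
        if send_frame():               # True: ACK received before timeout
            return ("delivered", retransmissions)
        # timeout: retransmit the frame and increment the count
    return ("discarded", max_retransmissions)

# A link whose ACK arrives on the third attempt:
attempts = iter([False, False, True])
print(arq_send(lambda: next(attempts)))    # ('delivered', 2)
# A dead link exhausts the budget and the packet is discarded:
print(arq_send(lambda: False))             # ('discarded', 3)
```

It is this retransmission count, alongside the bit error rate, that the MAC layer later reports to the transport layer through the communication pipeline.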
An application-layer service type is introduced on top of the conventional MAC-layer automatic repeat request (ARQ) mechanism, and the retransmission count is adjusted dynamically according to the service so as to reduce delay. When the service class received by the MAC layer is ordinary information, the sender resends the last data frame stored in the buffer on timeout and discards the frame after repeated timeouts; when the received service class is time-sensitive, i.e. a service with stricter delay requirements during transmission, the retransmission count is reduced so that the next data frame is sent sooner and the transmission delay of the whole packet decreases.
Fig. 5 shows a flowchart of the MAC layer dynamically adjusting the ARQ retransmission count according to an embodiment of the present invention, where the dynamic adjustment process includes:
after receiving a data packet sent by the network layer, the MAC layer judges the service type of the packet according to its attributes;
acquiring the time of a retransmission timer according to the service type;
judging whether the current service type is time-sensitive: if so, the retransmission timer time is adjusted; otherwise the default retransmission timer time is kept.
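The decision in Fig. 5 can be sketched as a per-service retransmission budget. The service class names and both numeric budgets are illustrative assumptions, not values from the embodiment; the embodiment adjusts the retransmission timer/maximum retransmission count via the NS3 model.

```python
# Assumed budgets: time-sensitive traffic trades reliability for delay
# by allowing fewer retransmissions before the frame is given up.
DEFAULT_MAX_RETRANSMISSIONS = 7
TIME_SENSITIVE_MAX_RETRANSMISSIONS = 2

def max_retransmissions_for(service_type):
    """Map the service type carried in the packet attributes to the
    maximum ARQ retransmission count the MAC layer will use."""
    if service_type == "time-sensitive":
        return TIME_SENSITIVE_MAX_RETRANSMISSIONS
    return DEFAULT_MAX_RETRANSMISSIONS       # ordinary information: default

print(max_retransmissions_for("ordinary"))        # 7
print(max_retransmissions_for("time-sensitive"))  # 2
```

A smaller budget means a failing frame blocks the queue for less time, which is how the scheme shortens the delay of time-sensitive packets.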
In the embodiment of the invention, since the NS3 model is adopted, the GetWifiRemoteStationManager() function can be called to obtain the remote station manager, and similarly its SetMaxSlrc() function can be called to adjust the maximum retransmission count.
In the preferred embodiment of the invention, the communication pipeline between the transport layer and the MAC layer is built by means of private members: a private member of the MAC layer is added in the transport-layer encapsulation class so that the transport layer can access MAC-layer data, realizing the cross-layer information interaction function. Under cross-layer interaction, the invention simultaneously transmits the link bit error rate (directly available) and the current ARQ retransmission count to TCP; during congestion control, when (1) the bit error rate exceeds 3e-6 or (2) the retransmission count exceeds 3, the size of the send window is not changed.
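The cross-layer decision of step S4 can be sketched as follows, using the preset values from this embodiment (bit error rate threshold 3e-6, retransmission threshold 3). The sketch treats the two conditions as alternatives and halves the window on congestion; the exact window reduction rule is an assumption for illustration.

```python
BER_THRESHOLD = 3e-6            # preset bit error rate threshold (embodiment)
RETRANSMISSION_THRESHOLD = 3    # preset ARQ retransmission threshold (embodiment)

def adjust_send_window(window, bit_error_rate, retransmissions):
    """On detected packet loss: if the MAC layer reports a high bit error
    rate or a large ARQ retransmission count, attribute the loss to the
    link and keep the TCP send window; otherwise treat it as congestion."""
    if bit_error_rate > BER_THRESHOLD or retransmissions > RETRANSMISSION_THRESHOLD:
        return window              # link error or retransmission: window unchanged
    return max(1, window // 2)     # congestion: shrink the window for flow control

print(adjust_send_window(16, 5e-6, 1))   # 16: high BER, loss blamed on the link
print(adjust_send_window(16, 1e-7, 5))   # 16: many retransmissions, link again
print(adjust_send_window(16, 1e-7, 1))   # 8: neither condition holds, congestion
```

Keeping the window whenever either lower-layer indicator fires is precisely what prevents link errors from being mistaken for congestion in the simulations that follow.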
In the above embodiment, specific preset values (a bit error rate of 3e-6 and a retransmission count of 3) are given. These parameters are preferred results obtained by trial and error on the current model; although they also apply to other similar models, in order to promote the practicability of the invention in other models, the preset values can be adjusted for other data link system models in the following way:
An ideal preset value should, under different data link models, make the preset threshold for the link bit error rate perform as well as the optimal threshold does in the present data link system; the invention therefore adopts an adaptive threshold method. Specifically, the average link bit error rate in the current data link system is calculated, including the percentage of erroneous information in the total information of one data transmission and the percentage over the whole data link system. Because the average error rate of a single transmission and the system-wide average are unbalanced (for example, the single-transmission average may be 0.1 or 0.5 while the system average is 0.3), the overall system average alone cannot fully reflect the link error rate of each transmission.
In other preferred embodiments, data link systems meeting the performance test indexes (packet transmission success rate and communication delay) are built, designed to differ from the system built in the embodiment of the invention only in certain influencing parameters, such as bandwidth and modulation mode. The average single-transmission error rate of the different systems is acquired by a controlled-variable method, i.e. any two systems are guaranteed to differ in at least one influencing factor. The influencing factors are initialized, the different data link systems serve as the population set, and the average error rate of each system as the initial population; each data link system acts as an individual, is evolved through the evolution process of a genetic algorithm, and the population is updated by crossover, mutation and similar operations, thereby improving the error rate detection of the data link system.
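A highly simplified sketch of this adaptive-threshold idea follows. The encoding, the fitness function and all numeric settings are illustrative assumptions rather than details from the embodiment: each individual is a candidate bit error rate threshold, fitness rewards thresholds close to the average single-transmission error rate sampled from the comparable systems, and the population is updated by elitist selection, arithmetic crossover and Gaussian mutation.

```python
import random

def evolve_threshold(sample_error_rates, generations=50, pop_size=20):
    """Evolve a bit error rate threshold toward the mean of the sampled
    per-system average error rates using a minimal genetic algorithm."""
    target = sum(sample_error_rates) / len(sample_error_rates)
    fitness = lambda t: -abs(t - target)        # closer to the mean is fitter
    population = [random.uniform(0.0, 1e-5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # elitist selection: keep top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                 # arithmetic crossover
            child += random.gauss(0.0, 1e-7)    # Gaussian mutation
            children.append(max(child, 0.0))    # thresholds stay non-negative
        population = parents + children
    return max(population, key=fitness)

random.seed(0)
samples = [2.5e-6, 3.1e-6, 2.9e-6, 3.4e-6]   # per-system average error rates
print(evolve_threshold(samples))             # converges toward the sample mean
```

In practice the fitness function would score a candidate threshold by the data link performance it produces rather than by distance to a known mean; the sketch only illustrates the population-update machinery.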
A throughput test is performed on the TTNT data link model by simulation; the result is calculated after the header overhead of the packets received by the network layer is removed. In the simulation, the network bandwidth is 10 M, the packet sending rate is 100 packets/s, and three service types (A, B and C) are distinguished, where the packet size of A is 500 Bytes, of B 490 Bytes, and of C 480 Bytes, with priority decreasing in that order. The results show that when the number of nodes is 6, the throughput of A is about 485.55 Kbps, of B about 484.74 Kbps, and of C about 485.04 Kbps. When the number of nodes is 30, the channel is not yet saturated: the throughput of A rises to about 2425.74 Kbps, of B to about 2425.05 Kbps, and of C to about 2425.19 Kbps. When the number of nodes reaches 54, the channel is saturated and, according to the working principle of SPMA, the transmission success rate of high-priority packets is guaranteed; at this point the throughput of A is about 2065.12 Kbps, of B about 4365.98 Kbps, and of C about 4363.87 Kbps, consistent with the working principle of SPMA.
A delay test is performed on the TTNT data link model by simulation; the result is calculated, when the network layer receives a packet, as the current receiving time minus the sending time. When the number of nodes is 6, the delay of A is about 5.188 ms, of B about 5.158 ms, and of C about 4.926 ms. When the number of nodes is 30, the channel is not yet saturated: the delay of A is about 5.564 ms, of B about 5.311 ms, and of C about 4.986 ms. When the number of nodes reaches 54, the channel is saturated; according to the working principle of SPMA, the transmission success rate of high-priority packets is guaranteed, i.e. low-priority services must wait until the high-priority services have been transmitted. At this point the delay of A is about 34.284 ms, of B about 5.338 ms, and of C about 4.919 ms, consistent with the working principle of SPMA.
The throughput of the TTNT data link is also tested by simulation when the link is interfered with: the distance between nodes is increased, node mobility forces the route to be reconstructed several times, TCP detects data loss, and the send window is reduced. When the number of nodes is 6, the throughput of A is about 392.44 Kbps, of B about 392.51 Kbps, and of C about 392.22 Kbps. When the number of nodes is 30, the channel is not saturated: the throughput of A rises to about 1960.01 Kbps, of B to about 1960.1 Kbps, and of C to about 1959.33 Kbps. When the number of nodes reaches 54, the channel is saturated and, according to the working principle of SPMA, the transmission success rate of high-priority packets is guaranteed; at this point the throughput of A is about 1742.93 Kbps, of B about 3528.16 Kbps, and of C about 3528.91 Kbps. Throughput is clearly lower than before interference, with low-priority services reduced most noticeably.
Simulation tests of the TTNT data link delay under link interference show that when the number of nodes is 6, the delay of A is about 14.212 ms, of B about 12.4124 ms, and of C about 11.494 ms. When the number of nodes is 30, the channel is not saturated: the delay of A is about 13.961 ms, of B about 12.313 ms, and of C about 11.718 ms. When the number of nodes reaches 54, the channel is saturated; according to the working principle of SPMA, the transmission success rate of high-priority packets is guaranteed, i.e. low-priority services must wait until the high-priority services have been transmitted. At this point the delay of A is about 85.997 ms, of B about 12.934 ms, and of C about 11.572 ms. Delay is clearly higher than before interference, and the increase is more obvious for low-priority services.
Fig. 6 compares the throughput of the TTNT data link system before and after optimization. As can be seen from Fig. 6, when the number of nodes is 6, the throughput of a single service flow is low and the channel is not saturated, so the effect before and after optimization is not obvious. As the number of nodes increases, the effect becomes increasingly pronounced, since the optimized send window stays fixed when a route failure is detected rather than being reduced immediately. As the number of nodes continues to increase and the channel reaches saturation, the system throughput before and after optimization gradually stabilizes, with the optimized throughput clearly much higher than before optimization, which proves that the congestion control method is effective.
Fig. 7 compares the delay of the TTNT data link system before and after optimization. As shown in Fig. 7, as the number of nodes increases, before optimization each route failure loses packets and shrinks the TCP send window, prolonging the time needed to finish transmitting the packets; after optimization, by contrast, the window is fixed at its pre-failure size, so the send window is larger and the delay smaller than before optimization. The average delay of the service flows remains almost constant until the channel saturates; once saturated, the success rate of high-priority traffic must be guaranteed, so low-priority packets must wait and the delay begins to increase. The smaller pre-optimization send window increases the delay further, while the optimized window remains unchanged and relatively large, keeping the delay lower. Comparing the system delay before and after optimization, the optimized delay is clearly lower, showing that the congestion control method is effective.
Based on cross-layer optimization of the TTNT data link network, the invention guarantees the throughput and transmission success rate of higher-priority services in the network, and in networks with poor link quality it effectively reduces the transmission delay by about 6 ms compared with before optimization, thereby improving system performance.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.