CN105163354B - A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities - Google Patents

A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities

Info

Publication number
CN105163354B
CN105163354B (application CN201510460723.4A)
Authority
CN
China
Prior art keywords
data
packet
coding
data packet
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510460723.4A
Other languages
Chinese (zh)
Other versions
CN105163354A (en)
Inventor
陈贵海 (Chen Guihai)
茅娅菲 (Mao Yafei)
董超 (Dong Chao)
吴小兵 (Wu Xiaobing)
戴海鹏 (Dai Haipeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201510460723.4A priority Critical patent/CN105163354B/en
Publication of CN105163354A publication Critical patent/CN105163354A/en
Application granted granted Critical
Publication of CN105163354B publication Critical patent/CN105163354B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18Negotiating wireless communication parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a data-stream delay guarantee strategy that exploits inter-flow network coding opportunities, comprising the following steps. Packet buffering: an intermediate-layer protocol is implemented between the IP layer and the MAC layer; it buffers packets arriving from the IP layer, establishes a virtual queue for every data stream, and discovers pairwise coding opportunities between data streams. Queue statistics: at the start of each scheduling interval, the number of packets in each queue, their weights, their delays, and the pairwise coding relationships between data streams are collected. Packet scheduling: the optimal transmission order and number of packets are computed with an integer linear programming method. Coded transmission: the packets that need to be coded are encoded, a coding header is added, and the packets are handed to the MAC-layer interface for transmission. End of round: when the scheduling interval of the current round expires, the procedure returns to step 1 and continues scheduling the packets that arrived during the current round.

Description

A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities
Technical field
The present invention relates to a data-stream delay guarantee strategy that exploits inter-flow network coding opportunities, and belongs to the field of protocol design in communications. When inter-flow coding opportunities exist between data streams, the method improves network throughput as much as possible while guaranteeing the differentiated delay constraints of the data streams.
Technical background
Because wireless mesh network (WMN) technology requires no fixed infrastructure and can provide inexpensive, wide-coverage networks, it is increasingly used in indoor and outdoor scenarios such as schools, markets, and shopping malls. However, the poor interference resistance and unstable links of wireless networks make it difficult for them to support applications with high quality-of-service requirements. Multimedia applications are becoming ever more tightly coupled with wireless networks; they place high demands on wireless communication quality, typically with differentiated delay and priority requirements. Network coding can exploit the natural broadcast characteristic of the wireless medium: it uses the large amount of redundant information in the network to combine several data packets into one coded transmission. The technique has been shown to effectively reduce the number of forwarded transmissions and improve network throughput, and it is an effective solution for guaranteeing the communication quality of multimedia applications in wireless networks. Among its variants, inter-flow network coding has the advantages of small coding overhead and short decoding delay. This work focuses on inter-flow network coding and studies scheduling strategies that effectively reduce data-stream delay. The basic principle of inter-flow coding is that an intermediate node encodes several packets from different data streams together and forwards them in a single transmission; the coding opportunities depend on the topological relationship between the data streams.
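As an illustration of this principle, the following minimal Python sketch (with hypothetical payloads) shows a relay XORing one packet from each of two crossing flows and broadcasting the combination once; each receiver recovers the packet intended for it by XORing out the packet it already holds or has overheard.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length payloads byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two packets buffered at the relay, one from each of two crossing flows.
p1 = b"payload of flow f1"  # heading toward the node that already knows p2
p2 = b"payload of flow f2"  # heading toward the node that already knows p1

# Pad to a common length before coding (a real header would record the lengths).
n = max(len(p1), len(p2))
coded = xor_bytes(p1.ljust(n, b"\0"), p2.ljust(n, b"\0"))  # one broadcast instead of two unicasts

# The receiver of flow f1 overheard p2 earlier, so it can decode:
recovered_p1 = xor_bytes(coded, p2.ljust(n, b"\0")).rstrip(b"\0")
assert recovered_p1 == p1
```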
The present invention addresses the scheduling of data streams in a wireless network: when multiple traffic flows have different delay requirements, how can throughput be improved while those delay requirements are guaranteed? Existing work provides heuristic algorithms, studies the end-to-end delay of coded data streams, or studies only pairwise coding, without accounting for the differentiated delay requirements of the data streams. The present work considers more general data-stream conditions and a more complete coding structure, and provides a more complete solution.
Summary of the invention
The purpose of the present invention is to address the scheduling of data streams in a wireless network: when multiple traffic flows have different delay requirements, to improve throughput while guaranteeing those delay requirements.
To achieve the above goal, the technical scheme of the invention is as follows:
A data-stream delay scheduling method exploiting inter-flow network coding opportunities, characterized by comprising the following steps:
1) Build traffic queues and buffer packets arriving from the IP layer. An intermediate-layer protocol is implemented between the IP layer and the MAC layer; it buffers the packets arriving from the IP layer, establishes a virtual queue for every data stream, and discovers pairwise coding opportunities between data streams.
2) Collect queue statistics. At the start of each scheduling interval, collect per-flow information, including the number of packets in each queue, their weights, their delays, and the pairwise coding relationships between data streams.
3) Packet scheduling. Compute the optimal transmission order of the packets with an integer linear programming method.
4) Coded transmission. Encode the packets that need to be coded, add a coding header, and hand the packets in order to the MAC-layer interface for transmission.
5) End the current scheduling round. When the scheduling interval of the current round expires, return to step 1) and continue scheduling the packets buffered during the current round.
In step 1), pairwise coding opportunities are found using the two-hop neighbor information of a node, discovering more coding opportunities at a small computation cost.
In step 3), the problem of maximizing the total weight of data packets transmitted before their delay bounds is formulated as an integer linear program, and it is proved that the optimal transmission order can be found.
In step 5), the length of the scheduling interval is determined from the average network delay.
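As a reading aid, the following is a minimal sketch (in Python) of one such scheduling round. The greedy deadline-ordered scheduler inside step 3) is only a stand-in for the integer linear program described below (which also uses the packet weights), and the data shapes are illustrative rather than part of the invention.

```python
import time
from collections import deque

def ptcs_round(flow_queues, deadlines, slots_per_round=10, slot_time=0.001):
    """One scheduling round covering steps 1)-5); step 3) uses a greedy stand-in."""
    # Step 2) collect queue statistics at the start of the interval.
    stats = {flow: len(q) for flow, q in flow_queues.items()}

    # Step 3) stand-in scheduler: serve flows in order of increasing deadline,
    # never taking more packets from a flow than it has buffered.
    order = []
    for flow in sorted(stats, key=lambda f: deadlines[f]):
        take = min(stats[flow], slots_per_round - len(order))
        order.extend(flow_queues[flow].popleft() for _ in range(take))

    # Step 4) each packet taken here would be coded if needed and handed to the MAC layer.
    for packet in order:
        pass  # encode + send via the MAC-layer interface

    # Step 5) wait out the rest of the round; packets that arrived meanwhile were
    # buffered into fresh queues (step 1) and are scheduled in the next round.
    time.sleep(slots_per_round * slot_time)
    return order

# Example round with two flows whose deadlines differ.
queues = {"f1": deque([b"a", b"b"]), "f2": deque([b"c"])}
print(ptcs_round(queues, deadlines={"f1": 0.05, "f2": 0.02}))
```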
The data-stream delay scheduling method is divided into three stages: coding opportunity discovery, packet scheduling, and packet encoding.
Stage one: coding opportunity discovery strategy. The primary module of PTCS is the coding opportunity discovery mechanism, which uses a node's one-hop and two-hop neighbor information to discover the pairwise coding relationships between data streams. With a proactive routing protocol, each node can easily learn its one-hop and two-hop neighbors; with a reactive routing protocol, each node broadcasts its one-hop neighbor information to its neighbors in hello packets. After receiving a neighbor's hello packet, a node updates its own one-hop and two-hop neighbor tables.
The upstream and downstream node information of the data streams, together with the one-hop and two-hop neighbor information of the intermediate node, is used to decide whether a pair of data streams has a coding relationship.
Packets received at intermediate node R are buffered in the queue established for each data stream. Each queue records, for the corresponding data stream, the source node, destination node, next-hop address, weight, delay requirement, and number of packets, and it also holds a pointer to the packet list. When a packet enters node R, the node first checks whether a queue for the packet's data stream exists. If it does, the packet is appended to the queue's packet list and the packet count in the queue record is incremented by one. If it does not, a queue is created for the data stream, and the queue information together with the coding conditions is used to check whether the new data stream has a coding relationship with the data streams already enqueued.
Stage two: packet scheduling. Given the coding relationships, weights, delays, and queue lengths of the data streams, this stage produces an optimal packet transmission order. A scheduling interval is first defined as T transmission slots, i.e., T packets can be sent during the interval. At the start of each scheduling interval, the per-flow information recorded in the queues (coding relationships, weights, delays, queue lengths) is collected, and the problem of maximizing the total weight of packets transmitted before their delay bounds is formalized.
The data streams passing through the intermediate node are divided into two sets: the data streams in set P_S have no coding relationship and are single flows, while set P_C contains the coding pairs that do have a coding relationship. For a single flow f_k, the number of packets forwarded successfully is S_k. For a coding ordered pair (i, j) with a coding relationship and i < j, the number of coded packets forwarded successfully is defined as D_i; the packets of f_i and f_j that are forwarded without being coded are called remaining packets, and their numbers are defined as R_i and R_j respectively. The property R_i · R_j = 0 holds, i.e., at least one of the two is 0; otherwise the two data streams could still be coded further.
The number of packets to send for each data stream is given by the linear programming method. In order of increasing delay, single packets or coded packets are moved into the transmit queue of intermediate node R, where they wait for a MAC-layer transmission opportunity; at the same time, the original buffering queues are cleared to await new packets. Whenever the intermediate node obtains a transmission opportunity, it takes one packet from the front of the transmit queue and sends it. Packets that reach the node during the scheduling interval are buffered in new queues after IP-layer routing and participate in the next round of scheduling after the current round finishes.
Stage three: packet encoding. The transmission order output by the PTCS packet scheduling module is the number of packets scheduled for a series of data streams, each of which is either a single flow or a coding flow. Following this transmission order, the intermediate node takes the corresponding number of packets from the front of each data-stream queue and sends them. When a coded data stream is encountered, a packet is taken from each flow of the coding pair and the two are encoded together. One of the two packets is called the primary coded packet and the other the secondary coded packet. The IP header stores the source node, destination node, delay, weight, and sending time of the primary coded packet. A coding header, called CodeHead, is added to each packet to record the information of the secondary coded packet; it is placed between the IP header and the MAC header. In the CodeHead structure, the coding flag indicates whether the packet is a coded packet, followed by the source node, destination node, next-hop address, delay, weight, and sending time of the secondary coded packet. The coded packets are appended, together with the other single packets, in order at the node's sending port, and the MAC layer transmits them. To receive coded packets, a node must enable promiscuous mode: the wireless node exploits the broadcast characteristic of the channel and receives every packet it can overhear, including packets not addressed to itself. When the MAC layer receives a unicast packet, it passes the packet to the PTCS layer; the PTCS layer determines whether the packet is coded, decodes coded packets, and finally submits the decoded single packets to the IP layer for routing or delivery.
Beneficial effects of the present invention: the invention addresses the scheduling of data streams in a wireless network and improves throughput while guaranteeing the delay requirements of multiple traffic flows, even when those flows have different delay requirements. In particular, it considers more general data-stream conditions and a more complete coding structure, and provides a more complete solution.
The prior art provides heuristic algorithms, studies the end-to-end delay of coded data streams, or studies only single-hop networks, without accounting for the differentiated delay requirements of data streams. The present invention considers more general data-stream conditions and flexible network topologies, and provides a more complete solution. By exploiting the large number of inter-flow coding opportunities existing in the network, it effectively reduces the delay of multiple unicast data streams and improves network throughput. It can provide efficient and guaranteed support for applications with higher quality-of-service requirements in mobile ad hoc networks.
Brief description of the drawings
Fig. 1: hello packet format;
Fig. 2: coding header format;
Fig. 3: 21-node topology;
Fig. 4: throughput curves. Fig. 4(a) shows, for the three protocols as the total network traffic varies, the throughput of all packets received by the destination nodes. Fig. 4(b) shows, for the three protocols as the total network traffic varies, the throughput of all data received in time by the destination nodes.
Fig. 5 shows the utility performance curves of the three protocols. Fig. 5(a) shows, as the total network traffic varies, the sum of the weights of all packets received by the destination nodes. Fig. 5(b) shows, as the total network traffic varies, the sum of the weights of all packets received in time by the destination nodes.
Fig. 6 shows the delay performance curves of the three protocols. Fig. 6(a) shows, as the total network traffic varies, the average delay of all packets received by the destination nodes. Fig. 6(b) shows, as the total network traffic varies, the average delay of the overdue packets received by the destination nodes.
Detailed description of the embodiments
The present invention is an intermediate-layer protocol between IP and MAC, named PTCS (Pairwise-coding Time Constraint Scheduling). PTCS is divided into three stages: coding opportunity discovery, packet scheduling, and packet encoding. We use an efficient coding route discovery strategy that obtains coding opportunities from the two-hop routing information of the data streams while keeping the computation overhead low. We build a two-hop routing table to discover coding opportunities and store the results in a coding relationship table. For every data stream buffered at the intermediate node we establish a queue; each queue records information about the packets of its data stream, including packet weights, packet arrival times, the data stream's delay constraint, and the queue length. The coding relationship table and the queue information are the inputs of the scheduling strategy in the protocol; the scheduling strategy outputs the transmission order of the currently buffered packets, and finally the intermediate node sends the packets one by one in that order. If a packet needs to be coded, it is encoded by the coding module before being sent.
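For concreteness, the following is a minimal sketch of the per-flow queue record and the coding relationship table described above (a Python dataclass; the field names are illustrative, not prescribed by the invention):

```python
from dataclasses import dataclass, field
from collections import deque
from typing import Deque

@dataclass
class FlowQueue:
    """Per-data-stream record kept at the intermediate node (field names illustrative)."""
    source: str                 # source node of the flow
    destination: str            # destination node of the flow
    next_hop: str               # next-hop address at this intermediate node
    weight: float               # weight of the flow's packets
    delay_constraint: float     # delay bound of the flow (seconds)
    packet_count: int = 0       # number of buffered packets
    packets: Deque[bytes] = field(default_factory=deque)  # the buffered packet list

# The coding relationship table maps an unordered pair of flow ids to True/False.
coding_table: dict[frozenset, bool] = {}
```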
Stage one: coding opportunity discovery strategy
The primary module of PTCS is the coding opportunity discovery mechanism, which uses a node's one-hop and two-hop neighbor information to discover pairwise coding relationships between data streams. With a proactive routing protocol, such as DSDV or OLSR, the neighbor tables are already known, so each node can easily learn its one-hop and two-hop neighbors. With a reactive routing protocol, such as AODV, we use the method proposed by Dong Chao et al., in which each node broadcasts its one-hop neighbor information to its neighbors in hello packets. The advantage of this method is that local neighbor and topology information does not diffuse through the whole network: each node obtains two-hop neighbor information without adding excessive computation or network overhead. The hello packet design is shown in Fig. 1: the number of one-hop neighbors of the sending node is stored in the grey reserved field, and the address of each one-hop neighbor, together with the destination node addresses reachable through that neighbor, is appended at the end of the hello packet. Each node maintains a one-hop neighbor table in which each one-hop neighbor has a pointer to that neighbor's own neighbor list, i.e., the node's two-hop neighbor table. After receiving a neighbor's hello packet, the node updates its own one-hop and two-hop neighbor tables, as shown in Fig. 1.
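A minimal sketch of the neighbor bookkeeping driven by hello packets; it assumes a hello simply carries the sender's one-hop neighbor list, and it omits the reachable-destination addresses that Fig. 1 also includes.

```python
class NeighborTables:
    """One-hop table where each entry points at that neighbor's own neighbor list;
    together these lists form the two-hop neighbor table (a sketch)."""

    def __init__(self, me: str):
        self.me = me
        self.one_hop: dict[str, set[str]] = {}   # neighbor -> its advertised one-hop neighbors

    def on_hello(self, sender: str, sender_neighbors) -> None:
        """Update the tables when a hello packet from a one-hop neighbor is heard."""
        self.one_hop[sender] = set(sender_neighbors)

    def two_hop(self) -> set:
        """Nodes reachable in exactly two hops (excluding self and direct neighbors)."""
        reach = set().union(*self.one_hop.values()) if self.one_hop else set()
        return reach - set(self.one_hop) - {self.me}

# Example: after hearing hellos from A and B, node R learns that C and D are two hops away.
t = NeighborTables("R")
t.on_hello("A", ["R", "C"])
t.on_hello("B", ["R", "D"])
print(t.two_hop())  # {'C', 'D'}
```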
The upstream and downstream node information of the data streams, together with the one-hop and two-hop neighbor information of the intermediate node, is used to decide whether a pair of data streams has a coding relationship. For two data streams f_1 and f_2 at intermediate node R, the coding relationship is determined as follows. Let PH(1,1) denote the previous-hop node of f_1 with respect to R, PH(1,2) the node two hops upstream of R on f_1, and NH(1,1) the next-hop node of f_1 with respect to R. Correspondingly, PH(2,1) denotes the previous-hop node of f_2 with respect to R, PH(2,2) the node two hops upstream of R on f_2, and NH(2,1) the next-hop node of f_2 with respect to R. We also let N(node) denote the set of neighbors of a node; node_1 ∈ N(node_2) means that node_1 is a neighbor of node_2.
We abstract the coding-relationship decision into two conditions:
Condition one: check whether the upstream path of f_1 and the downstream path of f_2 intersect. Three relations are tested:
PH(1,1) = NH(2,1); PH(1,1) ∈ N(NH(2,1)); PH(1,2) ∈ N(NH(2,1)).
If any one of the three relations holds, condition one is considered satisfied.
Condition two:
Check whether the upstream path of f_2 and the downstream path of f_1 intersect. Again three relations are tested:
PH(2,1) = NH(1,1); PH(2,1) ∈ N(NH(1,1)); PH(2,2) ∈ N(NH(1,1)).
If any one of the three relations holds, condition two is considered satisfied.
When both conditions are satisfied, data streams f_1 and f_2 are considered to have a coding relationship.
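The two conditions translate directly into code. Below is a minimal sketch, assuming each flow is described by its previous-hop and previous-two-hop nodes and its next hop with respect to R, and that neighbors(x) returns the set N(x); the dictionary keys and the toy topology are illustrative only.

```python
def crosses(ph1, ph2, nh_other, neighbors):
    """One condition: does this flow's upstream path cross the other flow's downstream path?
    ph1, ph2: previous-hop and previous-two-hop nodes of this flow w.r.t. R.
    nh_other: next-hop node of the other flow w.r.t. R.
    neighbors: function returning the neighbor set N(node)."""
    return (ph1 == nh_other
            or ph1 in neighbors(nh_other)
            or ph2 in neighbors(nh_other))

def coding_relation(f1, f2, neighbors):
    """f1 and f2 are dicts with keys 'ph1', 'ph2', 'nh1' (hops relative to R).
    The pair has a coding relationship iff both conditions hold."""
    cond1 = crosses(f1["ph1"], f1["ph2"], f2["nh1"], neighbors)   # condition one
    cond2 = crosses(f2["ph1"], f2["ph2"], f1["nh1"], neighbors)   # condition two
    return cond1 and cond2

# Example: two flows crossing at R in opposite directions (A -> R -> B and B -> R -> A).
nbrs = {"A": {"R"}, "B": {"R"}, "R": {"A", "B"}}
f1 = {"ph1": "A", "ph2": None, "nh1": "B"}
f2 = {"ph1": "B", "ph2": None, "nh1": "A"}
print(coding_relation(f1, f2, lambda n: nbrs.get(n, set())))  # True
```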
Packets received at intermediate node R are buffered in the queue we establish for each data stream. Each queue records, for the corresponding data stream, information such as the source node, destination node, next-hop address, weight, delay requirement, and number of packets, and it also holds a pointer to the packet list. When a packet enters node R, we first check whether a queue for its data stream exists. If it does, the packet is appended to the queue's packet list and the packet count in the queue record is incremented by one. If it does not, a queue is created for the data stream, and the queue information together with the coding conditions is used to check whether the new data stream has a coding relationship with the data streams already enqueued.
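A minimal sketch of this enqueue path; the data shapes and the coding-relation predicate passed in are illustrative stand-ins for the structures above.

```python
from collections import deque

def on_packet_arrival(queues, coding_pairs, flow_id, packet, has_coding_relation):
    """Buffer an arriving packet at node R (a sketch).

    queues: flow_id -> deque of buffered packets (the packet count is len(queue)).
    coding_pairs: set of frozensets of flow ids already known to be codable together.
    has_coding_relation: predicate (flow_a, flow_b) -> bool, e.g. the two-condition
    check of the coding opportunity discovery stage."""
    if flow_id in queues:
        queues[flow_id].append(packet)            # existing queue: just append
        return
    queues[flow_id] = deque([packet])             # new flow: create its queue ...
    for other in queues:                          # ... and probe coding relations
        if other != flow_id and has_coding_relation(flow_id, other):
            coding_pairs.add(frozenset((flow_id, other)))

# Usage with a toy predicate that declares f1/f2 codable.
queues, pairs = {}, set()
on_packet_arrival(queues, pairs, "f1", b"p1", lambda a, b: {a, b} == {"f1", "f2"})
on_packet_arrival(queues, pairs, "f2", b"p2", lambda a, b: {a, b} == {"f1", "f2"})
print(pairs)  # {frozenset({'f1', 'f2'})}
```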
Stage two: packet scheduling
The second main module of PTCS is packet scheduling, which produces an optimal packet transmission order from information such as the coding relationships, weights, delays, and queue lengths of the data streams. We define a scheduling interval as T transmission slots, i.e., we can send T packets during the interval. At the start of each scheduling interval, we collect the per-flow information recorded in the queues, such as the coding relationships, weights, delays, and queue lengths, and formalize the problem of maximizing the total weight of packets transmitted before their delay bounds.
We divide the data streams passing through the intermediate node into two sets: the data streams in set P_S have no coding relationship and are single flows, while set P_C contains the coding pairs that do have a coding relationship. Taking the model in Fig. 2 as an example, P_S = {1, 6, 7} and P_C = {(2, 3), (4, 5)}. For a single flow f_k, the packets that can be forwarded successfully are single packets, and their number is S_k. For a coding ordered pair (i, j) with a coding relationship and i < j, the number of coded packets forwarded successfully is defined as D_i; the packets of f_i and f_j that are forwarded without being coded are called remaining packets, and their numbers are defined as R_i and R_j respectively. In our problem model, the property R_i · R_j = 0 holds, i.e., at least one of the two is 0; otherwise the two data streams could still be coded further. To formalize the problem more easily, we first define the standard solution form of the problem.
Definition one: the standard solution form of the problem is the queue in which the packets of the interval are sorted in increasing order of their delays. Packets belonging to the same data stream are adjacent in the queue. Such a queue can also be viewed as an arrangement of contiguous D, R, and S data blocks, where the delay of a coded packet block D is taken to be the delay of the data stream with the smaller index in the coding pair.
Theorem one: any feasible solution in a given scheduling interval can be transformed into an equivalent standard solution form.
Proof: for any feasible schedule, first consider two adjacent packets in it. If they do not belong to the same data stream, the packet with the smaller delay can always be moved forward while guaranteeing that neither packet's delay constraint is violated. Repeating this exchange turns any feasible solution into the standard solution form. This completes the proof.
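The exchange argument can be phrased as repeated adjacent swaps, sketched below; entries are (flow, deadline) pairs, and packets of the same flow share their flow's deadline and therefore end up adjacent, matching Definition one.

```python
def to_standard_form(schedule):
    """Transform a feasible schedule into standard solution form by repeated
    adjacent swaps (the exchange of Theorem one). Each entry is (flow_id, deadline)."""
    order = list(schedule)
    changed = True
    while changed:
        changed = False
        for i in range(len(order) - 1):
            a, b = order[i], order[i + 1]
            # Swap only packets of different flows when the later one is more urgent;
            # moving the shorter-deadline packet forward cannot violate either deadline.
            if a[0] != b[0] and b[1] < a[1]:
                order[i], order[i + 1] = b, a
                changed = True
    return order

# Example: a feasible schedule for flows with deadlines f1=5, f2=3, f3=9 (in slots).
print(to_standard_form([("f1", 5), ("f2", 3), ("f1", 5), ("f3", 9), ("f2", 3)]))
# [('f2', 3), ('f2', 3), ('f1', 5), ('f1', 5), ('f3', 9)]
```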
Using the definitions of D, R, and S above, the problem can be formalized and reduced to an integer linear program. The objective is to maximize the total weight of the data packets sent before their delay bounds, subject to:
(1) the number of packets sent by a single flow cannot exceed the number of packets cached in its queue;
(2) the length constraint on the coded packets, i.e., the coded and remaining packets taken from a coding pair cannot exceed the packets cached for either flow of the pair;
(3) every packet is transmitted no later than its delay constraint allows;
(4) D_i, R_i, S_i ∈ {0, 1, 2, 3, ...}, i.e., the numbers of packets sent are non-negative integers.
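A minimal sketch of one round's program, written with the open-source PuLP solver. The patent only specifies an integer linear programming method, so the solver choice, the algebraic forms chosen here for constraints (2) and (3), and the toy data are assumptions of this sketch rather than the invention's exact formulation; a coded transmission is assumed to consume one slot and deliver one packet of each flow in the pair, while each single or remaining packet consumes one slot.

```python
import pulp

# Toy data: two single flows and one coding pair, with queue lengths Q, weights w,
# and per-flow deadlines dl measured in transmission slots.
T = 8                                  # slots in the scheduling interval
singles = {"f6": dict(Q=3, w=1.0, dl=6), "f7": dict(Q=2, w=2.0, dl=8)}
pairs = {("f2", "f3"): dict(Q_i=2, Q_j=3, w_i=3.0, w_j=1.5, dl=4)}  # dl of the smaller-index flow

prob = pulp.LpProblem("ptcs_round", pulp.LpMaximize)
S = {k: pulp.LpVariable(f"S_{k}", lowBound=0, cat="Integer") for k in singles}
D = {p: pulp.LpVariable(f"D_{p[0]}", lowBound=0, cat="Integer") for p in pairs}
Ri = {p: pulp.LpVariable(f"R_{p[0]}", lowBound=0, cat="Integer") for p in pairs}
Rj = {p: pulp.LpVariable(f"R_{p[1]}", lowBound=0, cat="Integer") for p in pairs}

# Objective: total weight of packets delivered before their deadlines.
prob += (pulp.lpSum(singles[k]["w"] * S[k] for k in singles)
         + pulp.lpSum((pairs[p]["w_i"] + pairs[p]["w_j"]) * D[p]
                      + pairs[p]["w_i"] * Ri[p] + pairs[p]["w_j"] * Rj[p] for p in pairs))

for k in singles:                                   # (1) cannot send more than buffered
    prob += S[k] <= singles[k]["Q"]
for p in pairs:                                     # (2) coded + remaining bounded per flow
    prob += D[p] + Ri[p] <= pairs[p]["Q_i"]
    prob += D[p] + Rj[p] <= pairs[p]["Q_j"]

# (3) deadline constraints (assumed form): with blocks sent in deadline order, the slots
# used by all blocks with deadline <= dl must fit within dl, and everything fits within T.
slots = {**{k: S[k] for k in singles},
         **{p: D[p] + Ri[p] + Rj[p] for p in pairs}}
dls = {**{k: singles[k]["dl"] for k in singles},
       **{p: pairs[p]["dl"] for p in pairs}}
for f, dl in dls.items():
    prob += pulp.lpSum(slots[g] for g in dls if dls[g] <= dl) <= dl
prob += pulp.lpSum(slots.values()) <= T

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: int(v.value()) for v in prob.variables()})
```

In this sketch the R_i · R_j = 0 property is not imposed as an explicit constraint: replacing one remaining packet of each flow in a pair with one additional coded packet never lowers the objective or violates a constraint, so an optimal solution satisfying the property always exists.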
The number of packets to send for each data stream can be obtained by the linear programming method. In order of increasing delay, we move single packets or coded packets into the transmit queue of intermediate node R, where the packets wait for a MAC-layer transmission opportunity. At the same time, we clear the original buffering queues to await new packets. Whenever the intermediate node obtains a transmission opportunity, it takes one packet from the front of the transmit queue and sends it. Packets that reach the node during the scheduling interval are buffered in new queues after IP-layer routing, and participate in the next round of scheduling after the current round finishes.
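A minimal sketch of turning the per-flow send counts into the delay-ordered transmit queue; the entry format of the send plan is an assumption of this sketch.

```python
from collections import deque

def build_transmit_queue(queues, send_plan, deadlines):
    """send_plan entries: ('single', flow, count) or ('coded', flow_i, flow_j, count).
    Blocks are emitted in increasing order of their deadline, as in the standard form."""
    def block_deadline(entry):
        return deadlines[entry[1]]        # single flow, or smaller-index flow of the pair
    txq = deque()
    for entry in sorted(send_plan, key=block_deadline):
        if entry[0] == "single":
            _, f, n = entry
            txq.extend(("single", queues[f].popleft()) for _ in range(n))
        else:
            _, fi, fj, n = entry
            txq.extend(("coded", queues[fi].popleft(), queues[fj].popleft()) for _ in range(n))
    return txq

# Usage: one coded (f2, f3) packet goes first (deadline 4), then two single f6 packets (deadline 6).
queues = {"f2": deque([b"a"]), "f3": deque([b"b"]), "f6": deque([b"c", b"d"])}
plan = [("single", "f6", 2), ("coded", "f2", "f3", 1)]
print(list(build_transmit_queue(queues, plan, {"f2": 4, "f6": 6})))
```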
Our protocol is highly portable. It can be used with slot-based MAC protocols in which each node has fixed transmission times, such as TDMA, and it can also be used with contention-based 802.11 protocols. Although 802.11 introduces additional channel contention delay and does not guarantee that a packet is sent in its designated slot, we do not need to know the exact sending time of each packet in order to produce the optimal transmission order; even in the presence of channel contention delay, we believe that the adjusted packet transmission order still improves network performance.
Stage three: packet encoding
The transmission order output by the PTCS packet scheduling module is the number of packets scheduled for a series of data streams; these data streams may be single flows or coding flows. Following this transmission order, the intermediate node takes the corresponding number of packets from the front of each data-stream queue and sends them. When a coded data stream is encountered, a packet is taken from each flow of the coding pair and the two are encoded together. We call one of the two packets the primary coded packet and the other the secondary coded packet. The IP header stores information such as the source node, destination node, delay, weight, and sending time of the primary coded packet. We also need to add a coding header to each packet to record the information of the secondary coded packet; we call it CodeHead and place it between the IP header and the MAC header. The structure of CodeHead is shown in Fig. 2: the coding flag indicates whether the packet is a coded packet, followed by information such as the source node, destination node, next-hop address, delay, weight, and sending time of the secondary coded packet. The coded packets are appended, like the other single packets, in order at the node's sending port, and the MAC layer transmits them. To receive coded packets, a node must enable promiscuous mode, i.e., the wireless node exploits the broadcast characteristic of the channel and receives every packet it can overhear, including packets not addressed to itself. When the MAC layer receives a unicast packet, it passes the packet to the PTCS layer; the PTCS layer determines whether the packet is coded, decodes coded packets, and finally submits the decoded single packets to the IP layer for routing or delivery.
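A minimal sketch of the coding and decoding steps; the CodeHead fields follow the list above, but the concrete representation (a Python dataclass rather than the byte layout of Fig. 2) and the helper names are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class CodeHead:
    """Coding header placed between the IP header and the MAC header (a sketch)."""
    coded: bool
    src: str = ""          # source node of the secondary coded packet
    dst: str = ""          # destination node of the secondary coded packet
    next_hop: str = ""
    delay: float = 0.0
    weight: float = 0.0
    send_time: float = 0.0

def encode_pair(primary: bytes, secondary: bytes, meta: CodeHead):
    """XOR the primary and secondary packets into one payload. The IP header (not shown)
    keeps the primary packet's metadata; the CodeHead carries the secondary's."""
    meta.coded = True
    n = max(len(primary), len(secondary))
    payload = bytes(a ^ b for a, b in zip(primary.ljust(n, b"\0"), secondary.ljust(n, b"\0")))
    return meta, payload

def decode(codehead: CodeHead, payload: bytes, overheard: dict):
    """At a promiscuous receiver: if the packet is coded and the secondary packet was
    overheard earlier, XOR it out to recover the primary packet; otherwise deliver as-is."""
    if not codehead.coded:
        return payload
    known = overheard[(codehead.src, codehead.dst)]
    return bytes(a ^ b for a, b in zip(payload, known.ljust(len(payload), b"\0"))).rstrip(b"\0")

# Usage: the relay codes p1 (primary) with p2 (secondary); the receiver overheard p2 earlier.
p1, p2 = b"to node A", b"to node B"
head, coded = encode_pair(p1, p2, CodeHead(coded=False, src="S2", dst="B"))
assert decode(head, coded, overheard={("S2", "B"): p2}) == p1
```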
Comparing the PTCS protocol experimentally with two well-known protocols, 802.11 and COPE, we find that PTCS not only improves throughput by exploiting network coding, but also reduces the average data-stream delay through its efficient scheduling strategy and increases the total weight of packets that meet their delay requirements. It adapts well to relatively large-scale, high-traffic wireless ad hoc networks.
The experimental platform of the invention is the QualNet simulator. QualNet is a fast and accurate development and simulation system for wireless, wired, and hybrid dynamic networks. Protocols implemented in QualNet are highly portable: a simulated protocol is similar to the protocol in real equipment, needs only minor modification to be downloaded to a device for use, and is CPU-independent; it can even act as part of a live network and participate in network tests. We implemented our intermediate-layer protocol on top of the 802.11 protocol and named it 802.11-pcs (Pairwise Coding Scheduling). We also implemented the coding protocol COPE on 802.11, which can combine packets into coded transmissions and schedules packets in first-in-first-out order; we named this protocol 802.11-pc (Pairwise Coding). We also compare with plain 802.11 without coding, to examine the performance gain brought by pairwise coding.
As shown in Fig. 3, 21 nodes are placed within a 1000 m × 600 m area; the communication range of a node is 250 m, and the available links are marked in the figure. The source and destination nodes of the data streams are chosen at random; there are 16 CBR data streams in the network in total, and the packet size is 1000 bytes. We vary the total offered load in the network, gradually raising it from 3 Mbps to 7 Mbps, and observe the throughput and delay performance of the three protocols.
Fig. 4 shows the throughput performance of the three protocols. Fig. 4(a) shows, as the total network traffic varies, the throughput of all packets received by the destination nodes. The throughputs of 802.11-pcs and 802.11-pc are close and both much higher than that of 802.11, which shows that network coding effectively improves network throughput. As network traffic increases, 802.11-pcs and 802.11-pc grow almost linearly, whereas the performance of 802.11 is sometimes unstable, probably because of the routes of the data streams in the network: if packets are heavily congested at some nodes, a large number of packets are dropped and throughput falls sharply. In Fig. 4(a), at a load of 5 Mbps, the throughput of 802.11 reaches its minimum, while 802.11-pcs and 802.11-pc decline only slightly. Fig. 4(b) shows, as the total network traffic varies, the throughput of all data received in time by the destination nodes. The timely throughput of 802.11-pcs is consistently better than that of 802.11-pc and 802.11. Thanks to the coding gain, the effective throughput of 802.11-pcs and 802.11-pc rises steadily as network traffic increases, because coding reduces the hop count of the network and the queuing time of the packets. In contrast, 802.11 is more sensitive to the traffic increase and to the routing structure of the data streams; its effective throughput grows little and its performance can drop very low. Because 802.11-pcs has the data-stream scheduling function, its timely throughput is up to 18.2% higher than that of 802.11-pc.
Fig. 5 shows the utility performance of the three protocols. Fig. 5(a) shows, as the total network traffic varies, the sum of the weights of all packets received by the destination nodes. Fig. 5(b) shows, as the total network traffic varies, the sum of the weights of all packets received in time by the destination nodes. The utility trends of the three protocols are similar to their throughput trends and strongly correlated with them. The total utility curve of 802.11-pcs essentially coincides with that of 802.11-pc, but its timely utility is up to 18.8% higher than that of 802.11-pc, and the utilities of both are much higher than that of 802.11. This set of data again verifies that our scheduling strategy further increases the real gain of the network on top of packet coding.
Fig. 6 shows the delay performance of the three protocols. Fig. 6(a) shows, as the total network traffic varies, the average delay of all packets received by the destination nodes. The average delay of 802.11-pcs is always the lowest, and it does not rise excessively even as network traffic increases, showing that 802.11-pcs adapts well to large, high-traffic networks. The delay performance of 802.11-pc and 802.11 fluctuates somewhat; the delay of 802.11-pc is about 22.3% higher than that of 802.11-pcs, and the delay of 802.11 is about 24.6% higher than that of 802.11-pcs. Fig. 6(b) shows, as the total network traffic varies, the average delay of the overdue packets received by the destination nodes. This set of data fluctuates more and shows greater randomness, because an overdue packet has a long sending time, indicating that it is strongly affected by network traffic and node congestion. The fluctuation of 802.11 is the largest, and the curves of 802.11-pcs and 802.11-pc are closer to each other.

Claims (1)

1. A data-stream delay scheduling method exploiting inter-flow network coding opportunities, characterized by comprising the following steps:
Step 1) build traffic queues and buffer packets arriving from the IP layer: an intermediate-layer protocol is implemented between the IP layer and the MAC layer; it buffers the packets arriving from the IP layer, establishes a virtual queue for every data stream, and discovers pairwise coding opportunities between data streams;
Step 2) collect queue statistics: at the start of each scheduling interval, collect per-flow information, including the number of packets in each queue, their weights, their delays, and the pairwise coding relationships between data streams;
Step 3) packet scheduling: compute the optimal transmission order of the packets with an integer linear programming method;
Step 4) coded transmission: encode the packets that need to be coded, add a coding header, and hand the packets in order to the MAC-layer interface for transmission;
Step 5) end the current scheduling round: when the scheduling interval of the current round expires, return to step 1) and continue scheduling the packets buffered during the current round;
In step 1), pairwise coding opportunities are found using the two-hop neighbor information of a node, discovering more coding opportunities at a small computation cost;
In step 3), the problem of maximizing the total weight of data packets transmitted before their delay bounds is formulated as an integer linear program, and it is proved that the optimal transmission order can be found;
In step 5), the length of the scheduling interval is determined by the average network delay;
The data-stream delay scheduling method is divided into three stages: coding opportunity discovery, packet scheduling, and packet encoding;
Stage one: coding opportunity discovery strategy; the coding opportunity discovery strategy uses a node's one-hop and two-hop neighbor information to discover the pairwise coding relationships between data streams; with a proactive routing protocol, each node can easily learn its one-hop and two-hop neighbors; with a reactive routing protocol, each node broadcasts its one-hop neighbor information to its neighbors in hello packets; after receiving a neighbor's hello packet, a node updates its own one-hop and two-hop neighbor tables;
The upstream and downstream node information of the data streams, together with the one-hop and two-hop neighbor information of the intermediate node, is used to decide whether a pair of data streams has a coding relationship;
Packets received at intermediate node R are buffered in the queue established for each data stream; each queue records, for the corresponding data stream, the source node, destination node, next-hop address, weight, delay requirement, and number of packets, and also holds a pointer to the packet list; when a packet enters node R, the node first checks whether a queue for the packet's data stream exists; if it does, the packet is appended to the queue's packet list and the packet count in the queue record is incremented by one; if it does not, a queue is created for the data stream, and the queue information together with the coding conditions is used to check whether the new data stream has a coding relationship with the data streams already enqueued;
Stage two: packet scheduling; an optimal packet transmission order is produced from the coding relationships, weights, delays, and queue lengths of the data streams; a scheduling interval is first defined as T transmission slots, i.e., T packets can be sent during the interval; at the start of each scheduling interval, the per-flow information recorded in the queues, namely the coding relationships, weights, delays, and queue lengths, is collected, and the problem of maximizing the total weight of packets transmitted before their delay bounds is formalized;
The data streams passing through the intermediate node are divided into two sets: the data streams in set P_S have no coding relationship and are single flows, while set P_C contains the coding pairs that have a coding relationship; for a single flow f_k, the packets that can be forwarded successfully are single packets, and their number is S_k; for a coding ordered pair (i, j) with a coding relationship and i < j, the number of coded packets forwarded successfully is defined as D_i; the packets of f_i and f_j that are forwarded without being coded are called remaining packets, and their numbers are defined as R_i and R_j respectively; in the present solution, the property R_i · R_j = 0 holds, i.e., at least one of the two is 0, otherwise the two data streams could still be coded further;
The number of packets to send for each data stream is given by the linear programming method; in order of increasing delay, single packets or coded packets are moved into the transmit queue of intermediate node R, where they wait for a MAC-layer transmission opportunity; at the same time, the original buffering queues are cleared to await new packets; whenever the intermediate node obtains a transmission opportunity, it takes one packet from the front of the transmit queue and sends it; packets that reach the node during the scheduling interval are buffered in new queues after IP-layer routing and participate in the next round of scheduling after the current round finishes;
Stage three: packet encoding; the transmission order output by the PTCS packet scheduling module is the number of packets scheduled for a series of data streams, each of which is either a single flow or a coding flow, PTCS being the intermediate-layer protocol between IP and MAC; following this transmission order, the intermediate node takes the corresponding number of packets from the front of each data-stream queue and sends them; when a coded data stream is encountered, a packet is taken from each flow of the coding pair and the two are encoded together; one of the two packets is called the primary coded packet and the other the secondary coded packet; the IP header stores the source node, destination node, delay, weight, and sending time of the primary coded packet; a coding header, called CodeHead, is added to each packet to record the information of the secondary coded packet, and is placed between the IP header and the MAC header; in the CodeHead structure, the coding flag indicates whether the packet is a coded packet, followed by the source node, destination node, next-hop address, delay, weight, and sending time of the secondary coded packet; the coded packets are appended, together with the other single packets, in order at the node's sending port, and the MAC layer transmits them; to receive coded packets, a node must enable promiscuous mode, i.e., the wireless node exploits the broadcast characteristic of the channel and receives every packet it can overhear, including packets not addressed to itself; when the MAC layer receives a unicast packet, it passes the packet to the PTCS layer; the PTCS layer determines whether the packet is coded, decodes coded packets, and finally submits the decoded single packets to the IP layer for routing or delivery.
CN201510460723.4A 2015-07-30 2015-07-30 A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities Active CN105163354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510460723.4A CN105163354B (en) 2015-07-30 2015-07-30 A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510460723.4A CN105163354B (en) 2015-07-30 2015-07-30 A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities

Publications (2)

Publication Number Publication Date
CN105163354A CN105163354A (en) 2015-12-16
CN105163354B true CN105163354B (en) 2019-03-08

Family

ID=54804087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510460723.4A Active CN105163354B (en) 2015-07-30 2015-07-30 A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities

Country Status (1)

Country Link
CN (1) CN105163354B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201711125D0 (en) * 2017-07-11 2017-08-23 Nchain Holdings Ltd Computer-implemented system and method
CN108768888B (en) * 2018-04-20 2021-10-15 北京中电普华信息技术有限公司 Queue scheduling method for quantum encryption service of power system
CN111698722A (en) * 2019-03-13 2020-09-22 电子科技大学中山学院 Delay mechanism for improving wireless opportunity network coding gain in real-time data stream
CN110691379B (en) * 2019-10-12 2023-05-02 湖南智领通信科技有限公司 Active route communication method suitable for wireless ad hoc network
CN112954719B (en) * 2021-03-25 2023-10-13 盐城工学院 Traffic matching method for network coding perception route in wireless multi-hop network
CN113347086B (en) * 2021-05-19 2022-11-22 北京安信智通科技有限公司 Method, device and storage medium for transmitting data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883394A (en) * 2010-05-10 2010-11-10 南京大学 Method for supporting coding opportunity discovery of wireless ad hoc network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE0303576D0 (en) * 2003-12-23 2003-12-23 Ericsson Telefon Ab L M Cost determination in a multihop network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883394A (en) * 2010-05-10 2010-11-10 南京大学 Method for supporting coding opportunity discovery of wireless ad hoc network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Optimal Scheduling with Pairwise Coding under Heterogeneous Delay Constraints; Yafei Mao et al.; IEEE; 2014-12-31; Sections I-IV
XORs in the Air: Practical Wireless Network Coding; Sachin Katti et al.; IEEE; 2008-06-30; Vol. 16, No. 3; full text
Research on inter-flow network coding opportunity discovery methods in wireless ad hoc networks (无线自组织网络中流间网络编码机会发现方法的研究); Dong Chao et al.; Journal on Communications (通信学报); 2011-10-31; Vol. 32, No. 10; full text

Also Published As

Publication number Publication date
CN105163354A (en) 2015-12-16

Similar Documents

Publication Publication Date Title
CN105163354B (en) A data-stream delay guarantee strategy exploiting inter-flow network coding opportunities
CN101483934A (en) Segmented adaptive multi-path routing mechanism having topology sensing capability
Yazdinejad et al. Increasing the performance of reactive routing protocol using the load balancing and congestion control mechanism in manet
CN106941447A (en) Free space optical network routing method based on Ants model
Omiwade et al. Butteries in the mesh: Lightweight localized wireless network coding
CN104486040A (en) Efficient coding-aware routing method based on cache management
Zeng et al. Dynamic segmented network coding for reliable data dissemination in delay tolerant networks
Yang et al. Improving ad hoc network performance using cross-layer information
Jain et al. A study of congestion aware adaptive routing protocols in MANET
CN102075864A (en) MCDS (minimum connected dominating set)-based method for constructing delay limit multicast forwarding structure
Lin et al. BGCA: Bandwidth guarded channel adaptive routing for ad hoc networks
Jeong et al. A network coding-aware routing mechanism for time-sensitive data delivery in multi-hop wireless networks
Okamura et al. Opportunistic routing for heterogeneous IoT networks
Bian et al. Relative link quality assessment and hybrid routing scheme for wireless mesh networks
Guidoum et al. Enhancing performance AODV routing protocol to avoid congestion
Chen et al. A load-based queue scheduling algorithm for MANET
Jing et al. On-demand multipath routing protocol with preferential path selection probabilities for MANET
Moeller et al. Backpressure routing made practical
Hu et al. A new multipath AODV routing protocol for VANET based on expected link lifetime
Yang et al. Effects of cross-layer processing on wireless ad hoc network performance
Dinesh et al. Ultimate Video Spreading With Qos over Wireless Network Using Selective Repeat Algorithm
Kavitha et al. Improving throughput and controlling congestion collapse in mobile ad hoc networks
Lin et al. Multiple path routing using tree-based multiple portal association for wireless mesh networks
Jiao et al. A Balanced Load Matching Rate Code Aware Routing Protocol based on network coding
Liyan et al. Analysis and optimization of multipath routing protocols based on SMR

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant