CN110099410B - DTN distributed caching method and device for near-space vehicle-ground network - Google Patents

DTN distributed caching method and device for near-space vehicle-ground network

Info

Publication number
CN110099410B
CN110099410B (application CN201910451298.0A)
Authority
CN
China
Prior art keywords
node
message
forwarding
information
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910451298.0A
Other languages
Chinese (zh)
Other versions
CN110099410A (en)
Inventor
张涛
张咏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201910451298.0A
Publication of CN110099410A
Application granted
Publication of CN110099410B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L 47/625: Traffic control in data switching networks; queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 49/90: Packet switching elements; buffering arrangements
    • H04L 49/9084: Buffering arrangements; reactions to storage capacity overflow
    • H04L 67/568: Network services; provisioning of proxy services; storing data temporarily at an intermediate stage, e.g. caching
    • H04W 28/0284: Network traffic management; traffic management, e.g. flow control or congestion control; detecting congestion or overload during communication
    • H04W 28/0289: Congestion control

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the invention provides a DTN distributed caching method and device for a near-space vehicle-ground network. The method includes: detecting whether information congestion occurs at a first node; if so, searching the average contact frequency table of the first node, according to a preset rule, for a second node that is in contact with the first node at the current moment; and selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node, so that it is delivered to the target node through the second node. The embodiment can relieve congestion at network nodes and restore normal data transmission; it makes full use of reachable neighbor nodes to disperse the load of congested nodes and improve network utilization; and it raises the success rate of message transmission while reducing the average transmission delay of messages.

Description

DTN distributed caching method and device for near-space vehicle-ground network
Technical Field
The embodiments of the invention relate to the technical field of data caching, and in particular to a DTN distributed caching method and device for a near-space vehicle-ground network.
Background
A near-space network whose nodes are near-space aircraft (aerostats), unmanned aerial vehicles (UAVs), and ground mobile equipment (such as automobiles and trains) is a novel dynamic network with advantages such as regional coverage and flexible deployment, and has become a research hotspot in recent years. In a near-space network, because of the mobility of the aerostat-UAV, UAV-UAV, and aerostat(UAV)-ground links and the complexity of the communication channels, the topology is highly variable, links are frequently interrupted, and the network may even be partitioned for short periods, which makes networking communication difficult. Link interruptions arise mainly at highly dynamic nodes such as UAV nodes and user nodes. Before a node encounters its next-hop node, messages to be forwarded are stored in the node's cache; but cache resources are limited, and when the node cannot meet a next-hop node for a long time the stored messages exceed the cache, producing congestion and blocking the reception of subsequent messages.
In the prior art, when message congestion occurs at a node, the node can only discard messages according to a drop policy formulated from local knowledge or network knowledge, so as to relieve the congestion.
However, although such a policy alleviates node congestion to some extent, dropping messages severely reduces the message transmission success rate and degrades communication quality.
Disclosure of Invention
The embodiments of the invention provide a DTN distributed caching method and device for a near-space vehicle-ground network, which improve the success rate of message transmission and the communication quality.
In a first aspect, an embodiment of the present invention provides a DTN distributed caching method for a near-space vehicle-ground network, including:
detecting whether information congestion occurs at the first node;
if so, searching the average contact frequency table of the first node, according to a preset rule, for a second node in contact with the first node at the current moment;
and selecting forwarding information according to the message forwarding strength of each message in the first node, and sending the forwarding information to the second node, so that it is delivered to a target node through the second node.
In a possible design, before searching, according to a preset rule, a second node in the average contact frequency table of the first node, which communicates with the first node at the current time, the method further includes:
creating an average contact frequency table of the first node; the contact frequency table comprises cache capacity information of the first node;
detecting whether a third node is in contact with the first node;
if so, acquiring the historical number of contacts between the first node and the third node at the current moment;
calculating the average contact frequency between the first node and the third node according to the historical number of contacts, and updating the contact frequency table with the average contact frequency.
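The table creation and update steps above can be sketched as follows (a minimal illustration; the class name, field names, and byte-based capacity are assumptions for illustration, not part of the patent):

```python
class ACFTable:
    """Per-node average contact frequency (ACF) table, updated every T seconds."""

    def __init__(self, node_id, cache_capacity, period_T):
        self.node_id = node_id
        self.cache_capacity = cache_capacity  # cache capacity info kept in the table
        self.T = period_T
        self.history = {}  # peer -> cumulative historical contact count n_t(a, b)
        self.prev = {}     # peer -> count at the end of the previous period
        self.acf = {}      # peer -> average contact frequency f_T^(k)(a, b)

    def record_contact(self, peer):
        # Called whenever a third node comes into contact with this node.
        self.history[peer] = self.history.get(peer, 0) + 1

    def end_of_period(self):
        # After every interval T, recompute the per-period contact count
        # n_dT(a, b) = n_{t0+kT}(a, b) - n_{t0+(k-1)T}(a, b) and the ACF n_dT / T.
        for peer, n_now in self.history.items():
            n_dT = n_now - self.prev.get(peer, 0)
            self.acf[peer] = n_dT / self.T
            self.prev[peer] = n_now
```

A node would call `record_contact` from its contact-detection hook and `end_of_period` from a timer firing every T; peers it has never met simply have no entry, which matches the initial value 0 in the text.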
In one possible design, the finding a second node that is in contact with the first node at the current time in the average contact frequency table of the first node according to a preset rule includes:
arranging the average contact frequencies in the average contact frequency table of the first node in a descending order to obtain a first list;
sequentially detecting whether each node is in contact with the first node at the current moment or not according to the sequence of the first list until a preset number of nodes in contact with the first node are obtained;
and taking the preset number of nodes as the second nodes.
In one possible design, the selecting forwarding information according to the message forwarding strength of each message in the first node includes:
acquiring the message forwarding strength of each message of the first node at the current moment;
arranging the message forwarding strength of each message according to a descending order;
summing the capacities occupied by the messages from the first ranking to the N-th ranking to obtain a total capacity;
judging whether the total capacity is greater than or equal to the preset amount of memory to be released of the first node;
if not, summing the capacities occupied by the messages from the first ranking to the (N+1)-th ranking to obtain a new total capacity;
repeating the judging and summing steps until the total capacity is greater than or equal to the preset amount of memory to be released of the first node;
and if so, taking the messages from the first ranking up to the current ranking as forwarding messages.
In a possible design, before selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node, the method further includes:
judging whether the cache capacity information of the second node meets a preset condition or not;
the selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node includes:
and if so, selecting forwarding information according to a second preset rule, and sending the forwarding information to the second node.
In a possible design, the determining whether the cache capacity information of the second node meets a preset condition includes:
judging whether the ratio is larger than a preset threshold value or not;
and if so, the cache capacity information of the second node meets a preset condition.
In a possible design, after selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node, the method further includes:
after the information congestion of the first node is relieved, judging whether the first node is in contact with the second node or not;
if yes, sending a retrieval control signal to the second node to enable the second node to return the forwarding information to the first node;
and the first node sends the forwarding information to a target node.
In a second aspect, an embodiment of the present invention provides a DTN distributed cache device for a near-space vehicle-ground network, including:
the detection module is used for detecting whether the first node generates information congestion;
the searching module is used for searching a second node which is in contact with the first node at the current moment in the average contact frequency table of the first node according to a preset rule when the first node is congested;
and the sending module is used for selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node so as to send the forwarding information to a target node through the second node.
In a third aspect, an embodiment of the present invention provides a DTN distributed cache device for a near-space vehicle-ground network, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method according to the first aspect and various possible designs of the first aspect are implemented.
According to the DTN distributed caching method and device for the near-space vehicle-ground network provided by the embodiments of the invention, when information congestion occurs at the first node, part of the information cached at the first node is selected as forwarding information and sent to the second node, which is found from the average contact frequency table and lies within the communication coverage of the first node at the current moment, so that the forwarding information is delivered to the target node through the second node. The DTN distributed caching method for the near-space vehicle-ground network can relieve network-node congestion and restore normal data transmission; it makes full use of reachable neighbor nodes to disperse the load of congested nodes and improve network utilization; and it can improve the message transmission success rate while reducing the average transmission delay of messages.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a near-space network according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a DTN distributed caching method for a near-space vehicle-ground network according to another embodiment of the present invention;
fig. 3 is a schematic flowchart of a DTN distributed caching method for a near-space vehicle-ground network according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a DTN distributed cache device for a near-space vehicle-ground network according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of a DTN distributed cache device for a near-space vehicle-ground network according to another embodiment of the present invention;
fig. 6 is a schematic hardware structure diagram of a DTN distributed cache device for a near-space vehicle-ground network according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a near-space network according to an embodiment of the present invention. As shown in fig. 1, the near-space network includes airship nodes, UAV nodes, ground station nodes, and ground mobile equipment nodes; an airship node may be an aerostat, and a ground mobile equipment node may be transportation equipment such as an automobile or a train. During actual communication, two nodes in contact can communicate with each other. The positions of the airship nodes and ground station nodes are relatively fixed, so relatively stable communication can be maintained between them most of the time, while UAV nodes and ground mobile equipment nodes are highly dynamic. Most messages transmitted by ground mobile equipment nodes are voice, short video, messaging and the like, so the traffic is large and bursty. Before each node meets its next-hop node, received messages awaiting transmission are stored in the node's cache; but cache resources are limited, and when the node cannot meet a next-hop node for a long time the stored messages exceed the cache, producing congestion and blocking the reception of subsequent messages. Efficient cache management methods are therefore needed to alleviate congestion.
In the prior art there has been much research on cache management strategies, which fall mainly into two categories: strategies based on local knowledge and strategies based on network knowledge. The former, when the node cache is filled with messages, requires no network-wide knowledge and relies only on local knowledge carried by the messages in the cache, such as arrival time, time-to-live (TTL), and message size, to decide which cached message to discard and make room for a new one. The latter considers not only the local knowledge of the message itself but also partial or complete network-wide knowledge when determining the drop priority of cached messages, so as to make drop decisions that meet some optimal performance index.
Specifically, the local-knowledge-based policies include:
① DL (Drop Last): when the node cache is filled with messages, simply discard the last-arriving message. Under a FIFO (First In First Out) scheduling policy, DL ensures that data that entered the cache queue first gets more forwarding opportunities.
② DF (Drop Front): when the node cache overflows, discard the message at the front of the cache queue; under FIFO scheduling this is the message that reached the cache first. In most cases the message at the head of the queue is the most likely to already have copies forwarded to other nodes, so to increase the overall forwarding ratio of cached messages, the DF policy discards the head message.
③ DO (Drop Oldest): when the node cache is filled with messages, discard the earliest-generated, i.e. oldest, message. Since the oldest message has survived in the network the longest, many copies of it have been generated and stored at other nodes; its probability of reaching the destination is therefore the greatest, and the risk that dropping it prevents successful delivery is the smallest.
④ DY (Drop Young): the counterpart of DO; when the node cache is filled with messages, discard the most recently generated, i.e. youngest, message.
⑤ DR (Drop Random): when the node cache overflows, discard a random message from the cache queue.
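The five local drop policies differ only in which queue position they evict; a compact sketch (the function name, tuple layout, and seeded RNG are illustrative assumptions):

```python
import random


def drop_on_overflow(queue, policy, rng=random.Random(0)):
    """Return (victim, remaining_queue) for a full cache queue.

    `queue` is ordered oldest-arrival first; each entry is (msg_id, creation_time).
    """
    if policy == "DL":    # Drop Last: discard the last-arriving message
        victim = queue[-1]
    elif policy == "DF":  # Drop Front: discard the message at the queue head
        victim = queue[0]
    elif policy == "DO":  # Drop Oldest: discard the earliest-generated message
        victim = min(queue, key=lambda m: m[1])
    elif policy == "DY":  # Drop Young: discard the most recently generated message
        victim = max(queue, key=lambda m: m[1])
    elif policy == "DR":  # Drop Random: discard a random message
        victim = rng.choice(queue)
    else:
        raise ValueError(policy)
    return victim, [m for m in queue if m is not victim]
```

Note that arrival order (queue position, used by DL/DF) and generation time (used by DO/DY) are independent, which is why the two pairs of policies can pick different victims.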
Specifically, the network-knowledge-based policies include:
① Buffer management based on partial-parameter priority: for example, when the cache is congested, first discard the message that has been forwarded the most times; or determine the drop policy according to message popularity, dropping low-popularity messages first when the cache is full; and so on.
② Optimized cache management: derive a utility function over all cached messages according to some performance criterion, such as delay or delivery rate, with each message assigned a utility gain. When two nodes meet, the messages with the largest gains are sent out first; when the node cache is full, the messages with the smallest gains are discarded, so that the overall utility gain of all messages is maximized.
③ Adaptive cache management: use the historical encounter information of nodes in the network to discard messages with a high encounter probability.
④ Cooperative cache management: treat the caches of two meeting nodes as a whole and allocate them according to certain rules.
Because a Delay-Tolerant Network (DTN) adopts a "store-carry-forward" mode, it handles the link-interruption problem of highly dynamic networks well; meanwhile, its unique bundle layer (BP layer) design shields differences among lower-layer protocols and suits communication between heterogeneous networks. The cache management strategies above can relieve node congestion in a DTN to some extent, and each has its advantages and limitations; but most of them choose to discard part of the messages to relieve congestion when a node is congested, resulting in a low message transmission success rate.
To solve this technical problem, the present application provides a DTN distributed caching method for a near-space vehicle-ground network. Those skilled in the art will understand that distributed caches are mostly applied in computer systems: when clients send requests to a server, much time is wasted if the database is queried anew each time, so data are stored in the memories of multiple client nodes, each request is first looked up in memory, and only on a miss is the database queried, which greatly improves overall efficiency. In view of the above technical problems, the distributed cache management in this application needs to consider the following: ① before temporarily storing distributed forwarding messages, the topology and cache conditions of neighbor nodes must be known; ② suitable messages must be selected for transfer, with a forwarding strength assigned to each message according to certain rules; ③ suitable nodes must be selected to receive forwarded messages according to the known cache conditions of neighbor nodes and information such as links and bandwidth; ④ after the node's congestion is relieved, the forwarded messages must be retrieved from the temporarily caching neighbor nodes.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
In the following, some terms and symbols in the present application will be explained first to facilitate understanding by those skilled in the art:
(1) Contact: in a network, when a node meets another node and can communicate with it, this is called a contact. The contact frequency of two nodes determines the message volume that can be transmitted between them: the higher the contact frequency, the more messages the two nodes can exchange. When the historical contact frequency of two nodes is high, it can be assumed that they will, with high probability, meet again soon, which is the precondition for selecting a forwarding node to temporarily cache messages.
(2) Description of the symbols:
a, b: any two nodes in the network;
N: the total number of nodes in the network;
t_0: a moment in the network;
n_{t_0}(a, b): the historical number of contacts between node a and node b by time t_0;
T: a time interval (T is large enough to be statistically significant);
n_{t_0+T}(a, b): the historical number of contacts between node a and node b by time t_0 + T;
n_{\Delta T}(a, b): the number of contacts between node a and node b during the period \Delta T (\Delta T = T);
f_T^{(k)}(a, b): the average contact frequency of node a and node b in the k-th (k = 1, 2, 3, ...) time period;
f_{MTI}: the message forwarding strength, the index from which the probability that a message is forwarded out when the node is congested is calculated.
(3) Average Contact Frequency (ACF): suppose there are N nodes in the network. At some moment t_0 the historical number of contacts between node a and node b is n_{t_0}(a, b); after a time interval T (T large enough to be statistically significant) it is n_{t_0+T}(a, b). Within the k-th period the contact count of the two nodes is

n_{\Delta T}(a, b) = n_{t_0+kT}(a, b) - n_{t_0+(k-1)T}(a, b), \quad \Delta T = T,

and after every interval T the node recomputes n_{\Delta T}(a, b) (a \neq b, 1 \le b \le N-1) with every other node, with initial value 0 for nodes it has never contacted. The average contact frequency of node a and node b in the k-th (k = 1, 2, 3, ...) time period is calculated as:

f_T^{(k)}(a, b) = \frac{n_{t_0+kT}(a, b) - n_{t_0+(k-1)T}(a, b)}{T}.

Further, let e_t(a, b) in the contact matrix represent the contact state of node a and node b at time t: e_t(a, b) = 1 means the two nodes are in contact at time t, and e_t(a, b) = 0 means they are not, where t_0 + (k-1)T < t \le t_0 + kT, (k = 1, 2, 3, ...). The total contact count of node a and node b in the k-th (k = 1, 2, 3, ...) period can then be expressed as:

n_{\Delta T}(a, b) = \sum_{t = t_0+(k-1)T+1}^{t_0+kT} e_t(a, b).

From this, the following two equations can be derived:

n_{t_0+kT}(a, b) = n_{t_0+(k-1)T}(a, b) + \sum_{t = t_0+(k-1)T+1}^{t_0+kT} e_t(a, b), \qquad f_T^{(k)}(a, b) = \frac{1}{T} \sum_{t = t_0+(k-1)T+1}^{t_0+kT} e_t(a, b).
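The two expressions for the average contact frequency, one from the incremental historical counts and one from the contact-matrix sum, can be checked numerically with a small sketch (function names are illustrative):

```python
def acf_from_counts(n_prev, n_now, T):
    # f_T^(k)(a, b) = (n_{t0+kT}(a, b) - n_{t0+(k-1)T}(a, b)) / T
    return (n_now - n_prev) / T


def acf_from_contact_matrix(e, T):
    # f_T^(k)(a, b) = (1/T) * sum of e_t(a, b) over the k-th period,
    # where e is the list of 0/1 contact indicators for that period.
    return sum(e) / T
```

With three contacts in a period of T = 5 (say e = [1, 0, 1, 1, 0], counts rising from 7 to 10), both routes give the same frequency, as the derivation requires.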
(4) Message Transmission Intensity (MTI): let msg_size_max be the maximum size among all messages, msg_live the lifetime of a message, msg_size the size of the current message, and msg_live_remaining its remaining lifetime. A forwarding strength can then be defined; for example, one form consistent with these quantities is:

f_{MTI} = \frac{msg\_size}{msg\_size\_max} \times \frac{msg\_live\_remaining}{msg\_live}.

The greater a message's forwarding strength, the more likely the message is to be selected as a forwarding message and forwarded to other nodes when the node is congested. f_{MTI} may be attached to the message header as an attribute carried with the message.
Fig. 2 is a schematic flowchart of a DTN distributed caching method for a near-space vehicle-ground network according to another embodiment of the present invention. As shown in fig. 2, the method includes:
201. Detecting whether information congestion occurs at the first node.
In practical applications, the execution subject of this embodiment is the first node or a control device arranged at the first node. The first node may be a DTN network node; for example, it may be an airship (aerostat) or a UAV in a near-space network.
Specifically, before a node in the DTN network encounters its next-hop node, the messages received from other nodes and destined for a target node are stored in the node's cache; but cache resources are limited, and when the node cannot meet a next-hop node for a long time the stored messages exceed the cache, producing congestion and blocking the reception of subsequent messages. Optionally, whether information congestion occurs at the first node may be detected by judging whether the amount of information stored at the first node exceeds the node's cache capacity: if it does, information congestion has occurred; otherwise it has not.
Optionally, in this embodiment, the near-space network where the first node is located may adopt a Delay/Disruption-Tolerant Network (DTN). The DTN adopts a "store-carry-forward" mode, which handles the link-interruption problem of highly dynamic networks well; meanwhile, its unique bundle layer (BP layer) design shields differences among lower-layer protocols and adapts to communication between heterogeneous networks.
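The congestion test of step 201 reduces to a capacity comparison; a minimal sketch (byte units and the helper computing the x% release amount mentioned later in the description are illustrative assumptions):

```python
def is_congested(stored_bytes, cache_capacity_bytes):
    """Step 201: information congestion occurs when the volume of stored
    messages exceeds the node's cache capacity."""
    return stored_bytes > cache_capacity_bytes


def memory_to_release(cache_capacity_bytes, x_percent):
    """Preset amount of memory to be released when congested: x% of the
    node cache (the description sets x < 50)."""
    assert 0 < x_percent < 50
    return cache_capacity_bytes * x_percent / 100
```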
202. And if so, searching a second node which is in contact with the first node at the current moment in the average contact frequency table of the first node according to a preset rule.
In practical applications, an average contact frequency table can be established for each node (UAV, aerostat, ground mobile device) in the near-space network, to count and record the average contact frequency between the first node and each node that has been in contact with it. By establishing and maintaining the average contact frequency table, the first node can grasp how frequently each node interacts with it, making it easy to select a high-quality cache node that frequently interacts with the first node as the second node, which in turn facilitates subsequently retrieving the transferred information or delivering it directly to the target node through the second node.
Optionally, the preset rule by which the second node is searched from the average contact frequency table of the first node may be to look up the nodes with the largest average contact frequencies, and the specific steps may include: arranging the average contact frequencies in the average contact frequency table of the first node in descending order to obtain a first list; sequentially detecting, in the order of the first list, whether each node is in contact with the first node at the current moment, until a preset number of nodes in contact with the first node are obtained; and taking the preset number of nodes as the second nodes.
Specifically, the ACF table maintained by the node (the first node) is traversed and the ACF values in it are sorted in descending order. Then check whether the first-ranked node is in contact with this node at the current moment; if so, it serves as a cache node; if not, check the next node. The maximum number of cache nodes may be set to L; the fewer the cache nodes, the easier it is to retrieve the messages later.
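The lookup of step 202 can be sketched as follows (the `in_contact` predicate and the names are illustrative assumptions):

```python
def select_cache_nodes(acf_table, in_contact, L):
    """Walk the ACF table in descending ACF order and keep the first L peers
    currently in contact with this node (step 202)."""
    selected = []
    for peer, _acf in sorted(acf_table.items(), key=lambda kv: kv[1], reverse=True):
        if in_contact(peer):
            selected.append(peer)
            if len(selected) == L:  # cap the number of cache nodes at L
                break
    return selected
```

Capping at L keeps the set of temporary cache nodes small, which matches the observation that fewer cache nodes make later message retrieval easier.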
203. And selecting forwarding information according to the message forwarding strength of each message in the first node, and sending the forwarding information to the second node so as to send the forwarding information to a target node through the second node.
In practical applications, when a first node is congested, part of information needs to be removed from a cache, so that enough space is reserved for receiving messages subsequently sent by other nodes. Specifically, there may be a plurality of ways to select which messages are used as forwarding information, for example, according to the sequence of information storage, according to the storage location of the information, and according to the popularity of the information. This is not limited in this embodiment.
Optionally, there are many ways to select the forwarding message according to the message forwarding strength, for example, the message with the highest message forwarding strength may be selected as the forwarding message. The specific implementation steps of the method can include:
2031. acquiring the message forwarding strength of each message of the first node at the current moment;
2032. arranging the message forwarding strength of each message according to a descending order;
2033. summing capacities occupied by the messages in the first ranking to the Nth ranking to obtain a total capacity;
2034. judging whether the total capacity is larger than or equal to the preset memory amount to be released of the first node or not;
2035. if not, summing the capacities occupied by the messages from the first rank to the (N + 1) th rank to obtain the total capacity;
2036. repeatedly executing step 2034 and step 2035 until the total capacity is greater than or equal to the preset amount of memory to be released of the first node;
2037. and if so, taking the messages in the current first ranking to the N +1 th ranking as forwarding messages.
Specifically, when the node (the first node) is congested, its messages are sorted in descending order according to their MTI values and are taken as forwarding messages in that order. The amount of memory released by forwarding can be set to x% of the node's cache (x less than 50). If forwarding the first message releases less than x%, the second message is forwarded as well, and so on until x% is reached.
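The selection loop above can be sketched as follows; the tuple layout `(msg_id, mti, size)` and all names are assumptions made for illustration:

```python
def select_forwarding_messages(messages, buffer_size, x):
    """messages: list of (msg_id, mti, size) tuples. Messages are taken in
    descending MTI order until the freed capacity reaches x% of the node's
    buffer. Returns the chosen message ids and the total size freed."""
    target = buffer_size * x / 100.0
    chosen, freed = [], 0
    for msg_id, mti, size in sorted(messages, key=lambda m: m[1], reverse=True):
        chosen.append(msg_id)
        freed += size
        if freed >= target:   # enough memory has been released
            break
    return chosen, freed
```

With a 200-unit buffer and x = 30, the target is 60 units, so the two highest-MTI messages may be forwarded even when the first alone does not reach the target.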
Optionally, there may be multiple ways of sending the forwarding information to the target node through the second node, for example, the forwarding information may be directly sent to the target node through the second node when the second node is in contact with the target node, the forwarding information may also be sent back to the first node through the second node, and the first node sends the forwarding information to the target node when the target node is in contact with the first node. This embodiment is not limited to this.
According to the DTN distributed caching method for the temporary empty vehicle ground network, when the first node is congested, part of information is selected from all information cached in the first node and is sent to the second node, which is located in the communication coverage range of the first node at the current moment and is searched from the average contact frequency table, as forwarding information, so that the forwarding information is sent to the target node through the second node. The DTN distributed caching method for the temporary empty vehicle ground network provided by the embodiment can be used for relieving information congestion on the premise of guaranteeing the integrity of information, and the success rate of message transmission is improved, so that the communication quality is improved, and the user experience is improved. Specifically, according to the idea of distributed caching and the network characteristics of the temporary empty vehicle ground network, the DTN distributed caching method for the temporary empty vehicle ground network provided by the embodiment has the following advantages: 1) network node congestion is relieved, and normal transmission of data is realized; 2) neighbor nodes capable of communicating in the network are fully utilized, the pressure of the congested nodes is dispersed, and the network utilization rate is improved; 3) the success rate of message transmission can be improved, and the average transmission delay of the message can be reduced.
Fig. 3 is a schematic flowchart of a DTN distributed caching method for an empty vehicle ground network according to another embodiment of the present invention. As shown in fig. 3, based on the above embodiment, this embodiment describes the whole process more completely, and the method includes:
301. an average contact frequency table of the first node is created and updated.
Optionally, step 301 may specifically include:
3011. creating an average contact frequency table of the first node; the contact frequency table includes cache capacity information of the first node.
Specifically, each node maintains an ACF table from the moment the network is established; the initial state of the table contains only the FB (free buffer) information. The ACF information, covering all nodes that have contacted this node, is formatted as follows:
(The initial ACF table format is shown as a figure in the original document.)
3012. detecting whether a third node is in contact with the first node.
3013. And if so, acquiring the historical contact times of the first node and the third node at the current moment.
3014. Calculating the average contact frequency between the first node and the third node according to the historical contact times; and updating the contact frequency table with the average contact frequency.
Specifically, in the process of maintaining and updating the ACF table:
Updating the FB information: whenever the node contacts another node, the FB information is updated, regardless of whether that node has been contacted before.
Updating the F (a, x) information: each time a new node is contacted, a row of ACF information (F (a, x)) is added and calculated according to formula (3); if a previously contacted node is contacted again, the contact count in its existing ACF entry is incremented instead of adding a new entry. The final ACF table format is as follows:
(The final ACF table format is shown as a figure in the original document.)
Alternatively, the F (a, x) information in the ACF table may be refreshed at every interval T according to the counted number of contacts.
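The table maintenance just described can be sketched as follows. Formula (3) is not reproduced in this excerpt, so the average contact frequency is assumed here to be the historical contact count divided by the elapsed time, and the elapsed time is passed in explicitly to keep the sketch deterministic; all names are illustrative:

```python
class ACFTable:
    """Per-node ACF table: FB (free buffer) plus one F(a, x) entry per
    contacted node."""
    def __init__(self, free_buffer):
        self.fb = free_buffer   # FB: this node's remaining cache
        self.contacts = {}      # node id -> historical contact count
        self.acf = {}           # node id -> F(a, x)

    def on_contact(self, other, free_buffer, elapsed):
        self.fb = free_buffer   # FB is updated on every contact
        # new node: an entry is added; known node: its count is incremented
        self.contacts[other] = self.contacts.get(other, 0) + 1
        self.acf[other] = self.contacts[other] / elapsed  # assumed form of formula (3)
```

Two contacts with the same node within 20 time units, for instance, would leave its F (a, x) value at 2 / 20 = 0.1.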
302. Whether the first node generates information congestion is detected.
303. If so, search the average contact frequency table of the first node, according to a preset rule, for a second node in contact with the first node at the current moment.
In this embodiment, step 302 and step 303 are similar to step 201 and step 202 in the above embodiment, and are not described again here.
304. And judging whether the cache capacity information of the second node meets a preset condition.
Optionally, the determining method may be of various types, for example, the cache capacity information includes a ratio between a remaining cache capacity of the second node and a total cache capacity, and the determining method may specifically include:
and judging whether the ratio is larger than a preset threshold value or not.
And if so, the cache capacity information of the second node meets a preset condition.
Specifically, the first node communicates with the cache node (the second node) to query the remaining cache (FB) status in the cache node's ACF table. If FB is greater than or equal to p% of the cache node's total cache (p being the minimum value at which this step can be executed), messages are sent to that node and its node number is recorded so that the messages can be retrieved later; if FB is less than p% of the node's total cache, the node is discarded, and the next cache node may be selected in the manner of step 102. If the messages forwarded in this step are smaller than the amount of memory the first node needs to release, further cache nodes may be added in the manner of step 102 to continue forwarding, until the combined capacity of all cache nodes reaches the amount of memory the first node needs to release.
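The FB-threshold check above can be sketched as a simple filter over candidate cache nodes; the tuple layout `(node_id, fb, total_cache)` is an illustrative assumption:

```python
def pick_eligible_cache_node(candidates, p):
    """candidates: iterable of (node_id, fb, total_cache), already ordered
    by preference (e.g. descending ACF). Returns the first node whose
    remaining cache FB is at least p% of its total cache, or None."""
    for node_id, fb, total in candidates:
        if fb >= total * p / 100.0:   # FB >= p% of the node's total cache
            return node_id            # send messages here and record its id
    return None                       # no eligible cache node found
```

A node with 5 units free out of 100 fails a p = 20 check, while one with 30 units free passes.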
305. If the preset condition is met, forwarding information is selected according to the message forwarding strength of each message in the first node and sent to the second node, so that the forwarding information is sent to a target node through the second node.
Step 305 in this embodiment is similar to step 203 in the above embodiment, and is not described again here.
306. And sending the forwarding information to the target node through the second node.
Optionally, there may be multiple ways of sending the forwarding information to the target node through the second node, for example, the forwarding information may be directly sent to the target node through the second node when the second node is in contact with the target node, the forwarding information may also be sent back to the first node through the second node, and the first node sends the forwarding information to the target node when the target node is in contact with the first node. This embodiment is not limited to this.
Specifically, the method of returning the forwarding information to the first node through the second node, so that the first node sends it to the target node, may include the following steps:
judging whether the first node is in contact with the second node or not;
if yes, sending a retrieval control signal to the second node to enable the second node to return the forwarding information to the first node;
and the first node sends the forwarding information to a target node.
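The retrieval steps above can be modelled with a small sketch; the contact set, the cache mapping, and all names are illustrative assumptions:

```python
def retrieval_step(contacts, cached, first, second):
    """contacts: set of frozenset node-id pairs currently in contact.
    cached: dict mapping cache-node id -> list of messages it holds for the
    first node. When the first node regains contact with the second node,
    the forwarded messages are returned to it (modelling the retrieval
    control signal); otherwise nothing happens."""
    if frozenset((first, second)) in contacts:
        return cached.pop(second, [])   # second node returns the messages
    return []                           # not in contact yet: keep waiting
```

After a successful retrieval, the cache entry is cleared, so a repeated call returns nothing; the first node would then deliver the recovered messages to the target itself.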
According to the DTN distributed caching method for the temporary empty vehicle ground network, when the first node is congested, part of information is selected from all information cached in the first node and is sent to the second node, which is located in the communication coverage range of the first node at the current moment and is searched from the average contact frequency table, as forwarding information, so that the forwarding information is sent to the target node through the second node. The DTN distributed caching method for the temporary empty vehicle ground network provided by the embodiment can be used for relieving information congestion on the premise of guaranteeing the integrity of information, and the success rate of message transmission is improved, so that the communication quality is improved, and the user experience is improved.
Fig. 4 is a schematic structural diagram of a DTN distributed cache device for an empty vehicle ground network according to another embodiment of the present invention. As shown in fig. 4, the DTN distributed caching device 40 for the temporary empty vehicle ground network includes: a detection module 401, a lookup module 402, and a sending module 403.
A detecting module 401, configured to detect whether the first node has information congestion.
A searching module 402, configured to search, when congestion occurs in the first node, a second node that is in contact with the first node at the current time in the average contact frequency table of the first node according to a preset rule.
A sending module 403, configured to select forwarding information according to the message forwarding strength of each message in the first node, and send the forwarding information to the second node, so that the forwarding information is sent to a target node through the second node.
According to the DTN distributed cache device for the temporary empty vehicle ground network provided by this embodiment, when the detection module detects information congestion at the first node, the sending module selects part of the information cached in the first node as forwarding information and sends it to the second node, which the searching module finds in the average contact frequency table and which is within the communication coverage of the first node at the current moment, so that the forwarding information is sent to the target node through the second node. The DTN distributed cache device for the temporary empty vehicle ground network provided by this embodiment can relieve information congestion while guaranteeing the integrity of information and improve the success rate of message transmission, thereby improving communication quality and user experience.
Fig. 5 is a schematic structural diagram of a DTN distributed cache device for an empty vehicle ground network according to another embodiment of the present invention. As shown in fig. 5, the DTN distributed caching device 40 for the temporary empty vehicle ground network further includes: a creating module 404, a first judging module 405, an obtaining module 406, and a processing module 407.
Optionally, the apparatus further comprises:
a creating module 404, configured to create an average contact frequency table of the first node; the contact frequency table includes cache capacity information of the first node.
A first determining module 405, configured to detect whether a third node is in contact with the first node;
an obtaining module 406, configured to obtain historical contact times of the first node and the third node at a current time when a third node contacts the first node.
A processing module 407, configured to calculate an average contact frequency between the first node and the third node according to the historical contact times; and updating the contact frequency table with the average contact frequency.
Optionally, the search module is specifically configured to: arranging the average contact frequencies in the average contact frequency table of the first node in a descending order to obtain a first list; sequentially detecting whether each node is in contact with the first node at the current moment or not according to the sequence of the first list until a preset number of nodes in contact with the first node are obtained; and taking the preset number of nodes as the second nodes.
Optionally, the sending module is specifically configured to:
and acquiring the message forwarding strength of each message of the first node at the current moment.
The message forwarding strengths of the messages are arranged in descending order.
And summing the capacities occupied by the messages in the first ranking to the Nth ranking to obtain the total capacity.
And judging whether the total capacity is larger than or equal to the preset memory amount to be released of the first node.
And if not, summing the capacities occupied by the messages from the first rank to the N +1 th rank to obtain the total capacity.
And repeatedly executing the step of judging whether the total capacity is larger than or equal to the preset memory amount to be released of the first node, and if not, summing the capacities occupied by the messages from the first rank to the (N + 1) th rank to obtain the total capacity until the total capacity is larger than or equal to the preset memory amount to be released of the first node.
And if so, taking the messages in the current first ranking to the N +1 th ranking as forwarding messages.
Optionally, the apparatus further comprises:
and the second judgment module is used for judging whether the cache capacity information of the second node meets a preset condition.
The sending module is specifically configured to select forwarding information according to a second preset rule when the cache capacity of the second node meets a preset condition, and send the forwarding information to the second node.
Optionally, the cache capacity information includes a ratio between a remaining cache capacity of the second node and a total cache capacity; the second judgment module is specifically configured to:
and judging whether the ratio is larger than a preset threshold value or not.
And if so, the cache capacity information of the second node meets a preset condition.
Optionally, the apparatus further comprises:
and the third judging module is used for judging whether the first node is in contact with the second node or not after the information congestion of the first node is relieved.
And the retrieval module is used for sending a retrieval control signal to the second node to enable the second node to return the forwarding information to the first node when the first node is in contact with the second node, so that the forwarding information is sent to a target node through the first node.
The DTN distributed cache device for the temporary empty vehicle ground network provided by the embodiment of the present invention may be used to implement the method embodiment described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 6 is a schematic hardware structure diagram of a DTN distributed cache device for an empty vehicle ground network according to another embodiment of the present invention. As shown in fig. 6, the DTN distributed cache device 60 for the temporary empty vehicle ground network provided in this embodiment includes: at least one processor 601 and a memory 602. The DTN distributed caching device 60 for the temporary empty vehicle ground network further comprises a communication component 603. The processor 601, the memory 602, and the communication component 603 are connected by a bus 604.
In a specific implementation process, the at least one processor 601 executes the computer-executable instructions stored in the memory 602, so that the at least one processor 601 executes the DTN distributed caching method for the temporary empty vehicle ground network, which is executed by the DTN distributed caching device 60 for the temporary empty vehicle ground network.
When the first node of this embodiment transmits forwarding information to another node, the communication component 603 may transmit the forwarding information to a node such as the second node or the target node.
For a specific implementation process of the processor 601, reference may be made to the above method embodiments, which implement the principle and the technical effect similarly, and details of this embodiment are not described herein again.
In the embodiment shown in fig. 6, it should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The application also provides a computer-readable storage medium, wherein a computer executing instruction is stored in the computer-readable storage medium, and when a processor executes the computer executing instruction, the DTN distributed caching method for the temporary empty vehicle ground network, which is executed by the DTN distributed caching device for the temporary empty vehicle ground network, is realized.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A DTN distributed caching method for an empty vehicle ground network is characterized by comprising the following steps:
detecting whether the first node generates information congestion;
if information congestion occurs, searching a second node which contacts with the first node at the current moment in the average contact frequency table of the first node according to a preset rule;
selecting forwarding information according to the message forwarding strength of each message in the first node, and sending the forwarding information to the second node so that the forwarding information is sent to a target node through the second node;
before searching for a second node communicating with the first node at the current time in the average contact frequency table of the first node according to a preset rule, the method further includes:
creating an average contact frequency table of the first node; the contact frequency table comprises cache capacity information of the first node;
detecting whether a third node is in contact with the first node;
if yes, acquiring historical contact times of the first node and a third node at the current moment;
calculating the average contact frequency between the first node and the third node according to the historical contact times; and updating the contact frequency table with the average contact frequency.
2. The method according to claim 1, wherein the average contact frequency table includes an average contact frequency between each node in contact with the first node and the first node, and the searching for the second node in contact with the first node at the current time in the average contact frequency table of the first node according to a preset rule includes:
arranging the average contact frequencies in the average contact frequency table of the first node in a descending order to obtain a first list;
sequentially detecting whether each node is in contact with the first node at the current moment or not according to the sequence of the first list until a preset number of nodes in contact with the first node are obtained;
and taking the preset number of nodes as the second nodes.
3. The method of claim 1, wherein selecting forwarding information based on message forwarding strength of each message in the first node comprises:
acquiring the message forwarding strength of each message of the first node at the current moment;
arranging the message forwarding strength of each message according to a descending order;
summing capacities occupied by the messages in the first ranking to the Nth ranking to obtain a total capacity;
judging whether the total capacity is larger than or equal to the preset memory amount to be released of the first node or not;
if not, summing the capacities occupied by the messages from the first rank to the (N + 1) th rank to obtain the total capacity;
repeatedly executing the step of judging whether the total capacity is larger than or equal to the preset memory amount to be released of the first node, if not, summing the capacities occupied by the messages from the first rank to the (N + 1) th rank to obtain the total capacity until the total capacity is larger than or equal to the preset memory amount to be released of the first node;
and if so, taking the messages in the current first ranking to the N +1 th ranking as forwarding messages.
4. The method of claim 1, wherein before selecting forwarding information according to message forwarding strength of each message in the first node and sending the forwarding information to the second node, further comprising:
judging whether the cache capacity information of the second node meets a preset condition or not;
the selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node includes:
and if so, selecting forwarding information according to a second preset rule, and sending the forwarding information to the second node.
5. The method according to claim 4, wherein the cache capacity information includes a ratio between a remaining cache capacity of the second node and a total cache capacity, and the determining whether the cache capacity information of the second node satisfies a preset condition includes:
judging whether the ratio is larger than a preset threshold value or not;
and if so, the cache capacity information of the second node meets a preset condition.
6. The method according to any of claims 1-5, wherein after selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node, further comprising:
after the information congestion of the first node is relieved, judging whether the first node is in contact with the second node or not;
and if so, sending a retrieval control signal to the second node to enable the second node to return the forwarding information to the first node so as to send the forwarding information to a target node through the first node.
7. A DTN distributed caching device for an empty vehicle ground network, comprising:
the detection module is used for detecting whether the first node generates information congestion;
the searching module is used for searching a second node which is in contact with the first node at the current moment in the average contact frequency table of the first node according to a preset rule when the first node is congested;
the sending module is used for selecting forwarding information according to the message forwarding strength of each message in the first node and sending the forwarding information to the second node so as to send the forwarding information to a target node through the second node;
the apparatus further comprises:
a creating module for creating an average contact frequency table of the first node; the contact frequency table comprises cache capacity information of the first node;
the first judgment module is used for detecting whether a third node is in contact with the first node or not;
the acquisition module is used for acquiring the historical contact times of the first node and the third node at the current moment when the third node is in contact with the first node;
the processing module is used for calculating the average contact frequency between the first node and the third node according to the historical contact times; and updating the contact frequency table with the average contact frequency.
8. A DTN distributed caching device for an empty vehicle ground network, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the DTN distributed caching method for an empty vehicle ground network of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the DTN distributed caching method for an empty vehicle ground network according to any one of claims 1 to 6.
CN201910451298.0A 2019-05-28 2019-05-28 DTN distributed caching method and device for temporary empty vehicle ground network Active CN110099410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910451298.0A CN110099410B (en) 2019-05-28 2019-05-28 DTN distributed caching method and device for temporary empty vehicle ground network

Publications (2)

Publication Number Publication Date
CN110099410A CN110099410A (en) 2019-08-06
CN110099410B true CN110099410B (en) 2021-02-05

Family

ID=67449596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910451298.0A Active CN110099410B (en) 2019-05-28 2019-05-28 DTN distributed caching method and device for temporary empty vehicle ground network

Country Status (1)

Country Link
CN (1) CN110099410B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970200B (en) * 2020-08-27 2022-02-01 华中师范大学 Probability routing method based on utility value

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236984A (en) * 2013-02-27 2013-08-07 佳都新太科技股份有限公司 Efficient epidemic routing cache management strategy in delay tolerant network
CN109039934A (en) * 2018-08-17 2018-12-18 华中科技大学 A kind of space DTN method for controlling network congestion and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9597523B2 (en) * 2014-02-12 2017-03-21 Zoll Medical Corporation System and method for adapting alarms in a wearable medical device
CN105656803B (en) * 2016-01-25 2018-07-17 北京交通大学 A kind of space delay tolerant network jamming control method based on QoS

Also Published As

Publication number Publication date
CN110099410A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN103986653B (en) Network node, data transmission method and system
US9185047B2 (en) Hierarchical profiled scheduling and shaping
CN109962760B (en) Service scheduling method suitable for wireless TDMA ad hoc network
US20100284274A1 (en) System and method for determining a transmission order for packets at a node in a wireless communication network
US8989011B2 (en) Communication over multiple virtual lanes using a shared buffer
CN111935031B (en) NDN architecture-based traffic optimization method and system
WO2019153931A1 (en) Data transmission control method and apparatus, and network transmission device and storage medium
CN110351200B (en) Opportunistic network congestion control method based on forwarding task migration
US11502956B2 (en) Method for content caching in information-centric network virtualization
CN112737964B (en) Transmission control method and system integrating push-pull semantics
CN113573419B (en) Multi-hop network channel access method considering multi-priority service
WO2014074802A1 (en) Controlling traffic in information centric networks
CN104618959A (en) Method and system for implementing aeronautical network MAC (medium access control) protocols
CN111314243A (en) LoRa network QoS scheduling management method supporting complex service data transmission
KR20190114404A (en) Network system and data trasmission method based on device clustering in lorawan communication
CN104994152A (en) Web cooperative caching system and method
CN117395167A (en) Service level configuration method and device
CN110099410B (en) DTN distributed caching method and device for near-space vehicle-ground network
WO2020160007A1 (en) Semantics and deviation aware content request and multi-factored in-network content caching
CN116708280B (en) Data center network multipath transmission method based on disorder tolerance
CN109039934B (en) Space DTN network congestion control method and system
An et al. A Congestion Level based end-to-end acknowledgement mechanism for Delay Tolerant Networks
CN109361928A (en) Information-centric network system and video transmission method
CN111585894A (en) Network routing method and device based on weight calculation
KR101885144B1 (en) Hybrid Content Caching Method and System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant