CN117955979B - Cloud network fusion edge information service method based on mobile communication node - Google Patents
Cloud network fusion edge information service method based on mobile communication node
- Publication number
- CN117955979B CN202410355376.8A
- Authority
- CN
- China
- Prior art keywords
- node
- information
- network
- edge
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
- H04L45/124—Shortest path evaluation using a combination of metrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/14—Routing performance; Theoretical aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention provides a cloud network fusion edge information service method based on a mobile communication node, belonging to the technical field of computers. The method comprises the following steps: deploying lightweight, cut-down cache software on communication network nodes and constructing a cache queue that integrates the edge information service nodes with the edge communication network nodes; adopting an information hierarchical caching mechanism to cache information issued by the fixed cloud center at each edge cloud node according to priority; according to the cache position of the information to be transmitted and the network connection conditions between each user terminal and each communication node in the edge cloud node network, calculating the optimal information transmission nodes and paths within a certain period of time with the goals of shortest overall time consumption and most reliable delivery of the information to the end user, so as to construct an information transmission expressway; and, during information transmission, controlling the transmission rate distributed from the edge cloud node network to the user terminal according to the network state, thereby avoiding network congestion and improving the reliability and efficiency of data sharing and transmission in the information system.
Description
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a cloud network fusion edge information service method based on a mobile communication node.
Background
With the development of the information age, cloud computing capability and network communication capability are gradually converging. In the old centralized information processing mode, data processing capability is concentrated in the fixed cloud center: after a user initiates a data processing request at the terminal, the request must be transmitted to the fixed cloud center over the network, and the processing result is transmitted back to the terminal node after processing, so the workflow is inefficient. Current information systems adopt a cloud-edge-end three-level architecture: on the basis of the fixed cloud center and the user terminals, a mobile edge cloud that is close to the user and can follow it for support is added, part of the user data demands are processed nearby at the mobile edge cloud nodes, and the network, storage, computing and other resources of every node in the cloud-edge-end architecture are uniformly managed and integrated, realizing functions such as resource sharing, elastic expansion and automatic operation and maintenance and improving resource utilization and service quality. In practice, however, the distributed edge cloud contains not only information service nodes but also many communication network nodes that simply forward information, and the information processing capability of the servers on these communication network nodes is not fully exploited.
In the cloud-edge-end system, the fixed cloud node is far from the user terminal; because of network delay, bandwidth limitations and other factors, user demands are difficult to answer in time, so some commonly used data must be cached at the edge cloud nodes. However, conventional caching methods suffer from fixed, inflexible priorities and require manual configuration of the cache objects, so they cannot be adjusted dynamically according to request type, network condition, user and other factors. In addition, during data transmission and cache storage, various faults and accidents may occur, such as network interruption, equipment damage or hacking, leading to inconsistent data and in turn to application errors or failures. To guarantee data integrity and availability, special techniques combined with other transmission protocols are required to achieve efficient and reliable data transmission. Researchers have tried to improve the network bandwidth and bandwidth utilization of communication network equipment such as microwave radio stations, but this requires newly developed communication equipment and a relatively long development cycle. A mechanism for improving the reliability and efficiency of edge information sharing and transmission is therefore needed.
Disclosure of Invention
In view of the above, the invention provides a cloud network convergence edge information service method based on mobile communication nodes, which integrates the computing, storage and service resources of edge cloud information service nodes and edge cloud communication network nodes, establishes an edge cache system, and enhances the reliability guarantee capability of the edge information service by dynamically adjusting the priority of cache objects; meanwhile, in order to adapt to the wireless communication network, a method is provided for improving the reliability and efficiency of information sharing and transmission under the limited network conditions of narrow bandwidth and weak connection.
The invention adopts the technical scheme that:
A cloud network convergence edge information service method based on a mobile communication node comprises the following steps:
Step S1, aiming at the characteristic that data of the fixed cloud center is first transmitted to an edge cloud communication network node, deploying lightweight, cut-down cache software on the communication network node, and constructing a cache queue that integrates the edge information service node and the edge communication network node for information caching;
Step S2, aiming at the characteristics of narrow bandwidth and weak connection of a limited network, an information hierarchical caching mechanism is adopted, and information issued by a fixed cloud center is cached at each edge cloud node according to priority, wherein the specific mode is as follows:
taking out the information with highest priority from the cache queue in the step S1, and preferentially sending the information; for the information with low priority, continuing to cache in the queue, and waiting for next information scheduling; in addition, for the information in the cache queue, dynamically adjusting the information dynamic data priority of the cache information according to the original priority, the cache time and the historical transmission times;
Step S3, calculating the optimal information transmission nodes and paths within a certain period of time according to the cache position of the information to be transmitted and the network connection conditions between each user terminal and each communication node in the edge cloud node network, taking the shortest overall time consumption and the most reliable delivery of the information to the end user as targets, so as to construct an information transmission expressway;
Step S4, adopting a hybrid congestion control algorithm under the limited network combined with a reliable timeout retransmission mechanism, and controlling the transmission rate distributed from the edge cloud node network to the user terminal according to the network state during information transmission, thereby avoiding network congestion and improving the reliability and efficiency of data sharing and transmission of the information system.
Further, the specific manner of step S1 is:
According to the role of a preset edge cloud node, determining the relation between the edge cloud node and each user terminal, and determining the mapping association between the communication network node and the IP addresses of the user terminals;
Establishing a unique index for all edge information by adopting the SHA-1 hash algorithm, so that the index relation can be looked up quickly with O(1) time complexity;
Establishing a key-value mapping relation between the edge information and the edge cloud node as well as between the edge cloud node and the user terminal node, wherein the key is the SHA-1 digest of the support information keyword, and the value is the correspondence between the communication node and the user terminal node IP addresses;
and uniformly storing the generated key-value mapping metadata to edge cloud nodes connected with the fixed cloud center.
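As an illustration of the index construction above, the following is a minimal sketch in Python using the standard hashlib library; the function names, field names and sample addresses are illustrative placeholders rather than identifiers defined by the invention:

```python
import hashlib

def sha1_index(info_keyword: str) -> str:
    """SHA-1 digest used as the unique index of one piece of edge information."""
    return hashlib.sha1(info_keyword.encode("utf-8")).hexdigest()

# key-value metadata: SHA-1(support-information keyword) -> node / terminal association
metadata = {}

def register_mapping(info_keyword: str, node_ip: str, terminal_ips: list) -> None:
    """Record which communication node caches the information and which terminals it serves."""
    metadata[sha1_index(info_keyword)] = {"node": node_ip, "terminals": terminal_ips}

def lookup(info_keyword: str):
    """Average O(1) lookup of the node/terminal association for a given keyword."""
    return metadata.get(sha1_index(info_keyword))

register_mapping("weather-report", "10.0.0.6", ["10.0.1.3", "10.0.1.4"])
print(lookup("weather-report"))
```

The generated metadata dictionary plays the role of the key-value mapping that is then stored uniformly on the edge cloud node connected with the fixed cloud center.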
Further, in step S2, the information dynamic data priority is calculated as follows:
DP = α · P0 · e^(−t/T) · (1 − n/N)
wherein DP is the information dynamic data priority, e is the natural constant, α is a coefficient for adjusting the degree of influence of the different factors on the priority, P0 is the original priority of the cached data, t is the time the current data has already spent in the cache queue, T is a time constant used to control the weight of the time factor, n is the historical number of transmissions within the time period, and N is the maximum of the historical transmission times within the time period; when n = N, DP = 0, and at this point the priority of the cached data is minimized.
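A minimal sketch of this priority calculation, assuming the multiplicative form shown above; the default values chosen for α, T and N are purely illustrative:

```python
import math

def dynamic_priority(p0: float, t_in_queue_s: float, n_sent: int,
                     alpha: float = 1.0, time_const_s: float = 300.0,
                     n_max: int = 1000) -> float:
    """DP = alpha * P0 * exp(-t/T) * (1 - n/N): decays with cache time and with
    the historical transmission count, reaching 0 when n_sent == n_max."""
    n_sent = min(n_sent, n_max)
    return alpha * p0 * math.exp(-t_in_queue_s / time_const_s) * (1.0 - n_sent / n_max)

# re-rank the cache queue before each scheduling round: highest DP is sent first
queue = [("situation-map", 8.0, 120.0, 3), ("text-order", 5.0, 10.0, 0)]
queue.sort(key=lambda m: dynamic_priority(m[1], m[2], m[3]), reverse=True)
print([name for name, *_ in queue])
```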
Further, in step S3, the optimal transmission information node and path in a certain period of time are calculated, and the specific manner is as follows:
On the basis of the Dijkstra algorithm, the shortest path is calculated using dynamically computed node distances as the weights of the graph, wherein the node distance is calculated from the network bandwidth and the packet loss rate by the following formula:
D_AB = α / B_AB + β · P
wherein D_AB is the distance between node A and node B, B_AB is the bandwidth between node A and node B, P is the network packet loss rate, and α and β are weight coefficients used to balance the influence of bandwidth and packet loss rate;
Searching out the shortest path from the source node to the target node by means of the greedy strategy of the Dijkstra algorithm; since there are multiple source nodes caching the data, the shortest paths from the target node to all source nodes are compared, and the source node corresponding to the minimum value is selected as the optimal transmission information node.
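The sketch below illustrates the modified shortest-path search, assuming the node distance α / B + β · P introduced above is used as the edge weight; the graph layout, node names and coefficient values are illustrative assumptions:

```python
import heapq

def edge_weight(bandwidth_mbps: float, loss_rate: float,
                alpha: float = 100.0, beta: float = 50.0) -> float:
    """Node distance: higher bandwidth shortens it, higher packet loss lengthens it."""
    return alpha / bandwidth_mbps + beta * loss_rate

def shortest_path(graph: dict, src: str, dst: str):
    """Dijkstra over links weighted by the dynamic node distance.
    graph[u][v] = (bandwidth_mbps, loss_rate) for each directed link u -> v."""
    dist, prev = {src: 0.0}, {}
    heap, visited = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, (bw, loss) in graph.get(u, {}).items():
            nd = d + edge_weight(bw, loss)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return float("inf"), []
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# compare several caching source nodes and pick the one with the shortest path
graph = {
    "src1": {"n6": (50.0, 0.01)},
    "src2": {"n6": (10.0, 0.05)},
    "n6": {"user": (20.0, 0.02)},
    "user": {},
}
best = min(("src1", "src2"), key=lambda s: shortest_path(graph, s, "user")[0])
print(best, shortest_path(graph, best, "user"))
```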
Further, the hybrid congestion control algorithm in step S4 combines both modes of timeout retransmission and fast retransmission, while allowing the user to access two congestion control parameters: congestion window size and message transmission interval; the congestion window is the number of data packets sent at one time;
The mixed congestion control algorithm is divided into three stages of slow start, congestion avoidance and fast recovery, and the three stages are switched to each other based on a strategy of packet loss; if the packet is lost and the packet loss times reach the upper limit, starting to retransmit overtime and entering a slow start stage; if disorder occurs, receiving the data packet after the current sequence number, starting to rapidly retransmit and entering a rapid recovery stage;
The judgment of packet loss depends on the timeout retransmission time RTO of the message, which is calculated as follows:
RTO = SRTT + max(G, 4 · DevRTT)
wherein RTO is the timeout retransmission time, G is the timer granularity, and max denotes taking the maximum value; SRTT is calculated as follows:
SRTT = (1 − 1/8) · SRTT + (1/8) · RTT
wherein RTT is the time from sending a data packet to receiving the reply to that packet, and SRTT is the estimate of the average RTT over a period of time, with its initial value set to the first measured RTT;
DevRTT is calculated as follows:
DevRTT = (1 − 1/4) · DevRTT + (1/4) · |RTT − SRTT|
wherein DevRTT is the estimate of the average deviation of RTT, i.e. the average jitter value, with its initial value set to RTT/2;
In the slow start phase, the congestion window grows exponentially, and once it exceeds the slow start threshold the algorithm enters the congestion avoidance phase; in the congestion avoidance phase, the congestion window grows linearly and the network capacity is gradually filled as the window grows, while packet loss causes the congestion window to be reduced, i.e. packet-loss back-off; if the lost data packet is not retransmitted by timeout, the fast recovery phase is entered; if the data packet is retransmitted by timeout, the slow start phase is re-entered.
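A simplified sketch of the three-phase switching logic described above, assuming an application-layer sender over UDP; the initial window and threshold values are placeholders, and a real deployment would tie on_ack, on_out_of_order and on_timeout to actual ACK and timer events:

```python
class HybridCongestionControl:
    """Slow start / congestion avoidance / fast recovery switched on loss events."""

    def __init__(self, ssthresh: int = 16):
        self.cwnd = 1             # congestion window: packets sent per round
        self.ssthresh = ssthresh  # slow-start threshold
        self.phase = "slow_start"

    def on_ack(self):
        if self.phase == "slow_start":
            self.cwnd *= 2                      # exponential growth
            if self.cwnd >= self.ssthresh:
                self.phase = "congestion_avoidance"
        else:
            self.cwnd += 1                      # roughly linear growth
            self.phase = "congestion_avoidance"

    def on_out_of_order(self):
        """Packets after the current sequence number arrived: fast retransmit."""
        self.ssthresh = max(self.cwnd // 2, 2)
        self.cwnd = self.ssthresh               # back off, then fast recovery
        self.phase = "fast_recovery"

    def on_timeout(self):
        """Loss count hit the upper limit: timeout retransmission, restart slowly."""
        self.ssthresh = max(self.cwnd // 2, 2)
        self.cwnd = 1
        self.phase = "slow_start"

cc = HybridCongestionControl()
for event in ("ack", "ack", "ack", "out_of_order", "ack", "timeout"):
    getattr(cc, "on_" + event)()
    print(event, cc.phase, cc.cwnd)
```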
The beneficial effects of the invention are as follows:
1. The computing capacity of the communication network node can be effectively utilized, information is allowed to be cached to the communication network node, and the edge cloud communication network node and the edge cloud information service node form a more efficient distributed edge cloud service environment;
2. The priority of the cache information in the edge cloud service environment can be adjusted in a self-adaptive mode, and therefore the optimal cache effect is achieved. The method has the advantages of high flexibility, good reliability and the like, and has wide application prospect;
3. Because the distance between nodes in the improved Dijkstra algorithm is set dynamically according to the real-time network bandwidth, the shortest path can be calculated more accurately, network resources are utilized more effectively, and network performance is improved;
4. aiming at the conditions of weak connection and narrow bandwidth of a limited network, a hybrid congestion control algorithm and a timeout retransmission reliable mechanism are adopted, and the efficiency of unstructured information sharing such as images, audios and videos and the reliability of information transmission are improved.
Drawings
FIG. 1 is a schematic diagram of the cloud network converged edge information service framework based on edge communication network nodes of the present invention;
FIG. 2 is a flowchart of the improved Dijkstra algorithm of the present invention;
FIG. 3 is a flowchart of the edge limited network environment message transmission control of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. The advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of the embodiments, but the manner in which the present invention is practiced is not limited to the examples described below.
A cloud network fusion edge information service method based on a mobile communication node comprises the following steps:
Step 1, aiming at the characteristic that data of the fixed cloud center is first transmitted to an edge cloud communication network node, deploying lightweight, cut-down cache software on the communication network node and making full use of the information storage and data analysis capability of the communication network node; integrating the edge communication network nodes and the edge information service nodes, and establishing a global "information parking lot" with unified resources to cache information, forming a distributed edge cloud service environment;
and 2, aiming at the characteristics of narrow bandwidth and weak connection of the limited network, an information hierarchical caching mechanism is adopted to realize caching of information issued by the fixed cloud center at each edge cloud node according to priority. Taking out the information with highest priority from the buffer queue (information parking lot) in the step 1 by means of the thought of 'information traffic lights', and opening a 'green channel' for the information to be sent preferentially; and for the information with low priority, continuing to buffer the information in the queue, and waiting for the next information scheduling. In particular, for the information in the buffer queue, the priority of the buffer information is dynamically adjusted according to the original priority, the buffer time, the historical transmission times and other factors. The dynamic data priority DP of the information may be calculated as follows:
DP = α · P0 · e^(−t/T) · (1 − n/N)
wherein α is a coefficient for adjusting the degree of influence of the different factors on the priority, P0 is the original priority of the cached data, t is the time the current data has already spent in the cache queue, T is a time constant used to control the weight of the time factor, n is the historical number of transmissions within the time period, and N is the maximum of the historical transmission times within the time period; when n = N, DP = 0 and the priority of the cached data is reduced to the lowest;
Step 3, according to the cache position of the information to be transmitted and the network connection conditions between each user terminal and each communication node in the edge cloud node network, calculating the optimal information transmission node and path within a certain period of time with the goals of shortest overall time consumption and most reliable delivery of the information to the end user, by means of the improved Dijkstra algorithm shown in fig. 2, and constructing an information transmission expressway. On the basis of the Dijkstra algorithm, the shortest path is calculated using dynamically computed node distances as the weights of the graph. The node distance based on network bandwidth and packet loss rate is calculated as follows:
D_AB = α / B_AB + β · P
wherein D_AB is the distance between node A and node B, B_AB is the bandwidth between node A and node B, P is the network packet loss rate, and α and β are weight coefficients used to balance the influence of bandwidth and packet loss rate.
Step 4, in order to adapt to the edge limited network environment, the bottom transport layer adopts the UDP protocol, which is faster and more lightweight. Meanwhile, in order to guarantee the final consistency of the data, a retransmission mechanism and a congestion control algorithm are designed with reference to the reliability mechanisms of TCP, so that the transmission rate distributed from the edge cloud node network to the user terminal is controlled flexibly according to the network state during information transmission, network congestion is avoided, and the reliability and efficiency of data sharing and transmission of the information system are further improved, as shown in fig. 3. The retransmission mechanism and the congestion control algorithm are determined by the timeout retransmission time of the data, which is calculated as follows:
RTO = SRTT + max(G, 4 · DevRTT)
wherein SRTT is the estimate of the average round-trip delay over a period of time, DevRTT is the estimate of the average deviation of the round-trip delay, and G is the timer granularity.
The following is a more specific example:
The embodiment considers a 'cloud-edge-end' architecture mode in the field of information systems, and as shown in fig. 1, the architecture mode is composed of a fixed cloud center, a distributed edge cloud and a user terminal. The distributed edge cloud is formed by integrating a plurality of communication network nodes and information service nodes. In fig. 1, cloud node 1 is a communication node specially accessing fixed cloud center information, cloud node 7 and cloud node 2 are mobile edge cloud information service nodes, cloud node 3, cloud node 4 and cloud node 5 are communication network nodes accessing distributed edge cloud data at the end of a user, and cloud node 6 is a network node medium for communication between the information service nodes and cloud node 3, cloud node 4 and cloud node 5.
Aiming at the mode shown in fig. 1, the method comprehensively links basic information system requirements in each field, such as information link connectivity, network and cloud resource guarantee and information service support, and provides edge information service guarantee for end users accessing the edge cloud. Meanwhile, in order to adapt to the limited network conditions at the edge, a method for improving the reliability and efficiency of edge information distribution and transmission under narrow bandwidth and weak connection is provided, an edge cloud information distribution network with high-speed transmission, up-down interworking and preferred bandwidth is established, and edge cloud communication network nodes and information service nodes are supported to access on demand, access quickly and network flexibly.
The method comprises the following specific steps:
step 1, integrating computing storage resources of all edge cloud information service nodes and communication network nodes, intensively constructing an edge cloud network, and constructing a distributed edge information service environment:
Step 1.1, determining the relation (N:N) between the edge cloud node and each user terminal according to the role of a preset edge cloud node, and determining the mapping association of the communication network node with the IP address of the user terminal;
Step 1.2, establishing unique index information of all edge information by adopting a hash algorithm of SHA-1, so that the index relation is quickly searched by O (1) time complexity;
And 1.3, establishing a key-value mapping relation between the edge information and the edge cloud nodes and between the edge cloud nodes and the user terminal nodes. Wherein key=sha-1 (support information key), value=correspondence between IP addresses of the communication node and the user terminal node;
Step 1.4, uniformly storing the key-value mapping metadata generated in step 1.3 into the edge cloud node connected with the fixed cloud center (cloud node 1 in fig. 1), the information service nodes (cloud node 2 and cloud node 7 in fig. 1), and the edge cloud node connected with the user terminals (cloud node 6 in fig. 1);
step 2, aiming at the characteristics of narrow bandwidth and weak connection of the edge limited network, an information hierarchical caching mechanism is adopted to realize caching of edge information in each edge cloud node according to priority, and metadata of the information is synchronously cached in the cloud;
Step 2.1, setting the initial priority P0 of the cache information at the edge cloud node (information service node / communication network facility node); P0 can be configured according to the information type, transmission delay and other factors;
Step 2.2, the edge cloud node receives information from the fixed cloud center, matches it with the preset initial priority, calculates the priority of the current information, and adds the information to the cache queue;
Step 2.3, dynamically adjusting the priority of the information in the cache queue according to the original priority, the cache time, the historical transmission times and other factors. The dynamic data priority DP of the information may be calculated as follows:
DP = α · P0 · e^(−t/T) · λ
wherein P0 is the original priority of the cached data, t is the time the current data has already spent in the cache queue, α is a coefficient for adjusting the degree of influence of the different factors on the priority, T is a time constant, and λ is a reduction factor based on the historical number of transmissions of the current data type. λ can be defined as:
λ = 1 − n / N_max
wherein n is the historical number of transmissions within the time period and N_max is the maximum value of the historical number of transmissions within the time period (e.g., 1000); when n = N_max, λ = 0, and at this point the priority of the cached data is minimized.
Through the above formula, comprehensive consideration of multiple factors can be realized, and the priority of the cache information can be adjusted more intelligently and accurately. Specifically, the meaning of each parameter is as follows:
α: weights the original priority P0 and is generally taken from a fixed range of values;
T: controls the weight of the time factor; as time increases, e^(−t/T) gradually decreases, so that the priority of the cached data gradually decreases;
N_max: reflects the impact of the historical number of transmissions on the priority; N_max is typically an empirical value, such that once the number of transmissions exceeds it the cached data is essentially invalidated;
Step 2.4, when the data in the cache queue exceeds a certain threshold, adopting classical algorithms such as LRU and the like to automatically eliminate the cache data with the lowest priority so as to make room for storing new cache data with high priority;
Step 2.5, periodically updating the priority of the cache object according to the dynamic priority calculation formula in step 2.3 so as to ensure the optimization of the cache effect;
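As an illustration of steps 2.4 and 2.5, the following sketch evicts the lowest-priority entry once the cache exceeds its capacity, using the same dynamic-priority form as the earlier sketch; the capacity value, field layout and sample entries are assumptions for demonstration only:

```python
import math
import time
from collections import OrderedDict

def dynamic_priority(p0: float, age_s: float, n_sent: int,
                     alpha: float = 1.0, time_const_s: float = 300.0,
                     n_max: int = 1000) -> float:
    """Same DP form as in step 2.3: decays with cache time and transmission count."""
    return alpha * p0 * math.exp(-age_s / time_const_s) * (1.0 - min(n_sent, n_max) / n_max)

class EdgeCache:
    """Priority-aware cache: once capacity is exceeded, the entry whose dynamic
    priority is lowest is evicted to make room for new high-priority data."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        # key -> [payload, original_priority, insert_time, sent_count]
        self.entries = OrderedDict()

    def put(self, key, payload, priority: float) -> None:
        self.entries[key] = [payload, priority, time.time(), 0]
        while len(self.entries) > self.capacity:
            self._evict_lowest()

    def _evict_lowest(self) -> None:
        now = time.time()
        victim = min(self.entries, key=lambda k: dynamic_priority(
            self.entries[k][1], now - self.entries[k][2], self.entries[k][3]))
        del self.entries[victim]

cache = EdgeCache(capacity=2)
for name, prio in (("map-tile", 3.0), ("status-report", 9.0), ("video-clip", 6.0)):
    cache.put(name, b"...", prio)
print(list(cache.entries))   # lowest-priority entry ("map-tile") has been evicted
```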
step 3, calculating an optimal transmission information node and path from the edge cloud to the user terminal within a certain period of time:
Step 3.1, according to the global node mapping relation generated in step 1.3, all nodes and connection relations in the edge network are represented as a graph structure, and the bandwidth and packet loss rate between network nodes are measured (using the ping command or the iperf tool). The network bandwidth B_AB between node A and node B can be obtained by the following formula:
B_AB = S / t
wherein S is the size of the transmitted data and t is the time consumed by the data transmission.
Step 3.2, on the basis of the Dijkstra algorithm, the inter-node distance used in the algorithm is generated dynamically by taking network bandwidth and other factors into account, and is used as the weight of the graph. The inter-node distance may be calculated with the following formula:
D_AB = α / B_AB + β · P
wherein D_AB is the distance between node A and node B (unit: ms), B_AB is the bandwidth between node A and node B (unit: Mbps), P is the network packet loss rate (value range: 0 to 1), and α and β are weight coefficients used to balance the influence of bandwidth and packet loss rate, which can be adjusted flexibly according to requirements. In general, bandwidth should be given a higher weight, because the efficiency of network communication tends to be limited more by bandwidth.
Step 3.3, because the network environment changes dynamically, the change of the network state must be adapted to continuously; therefore, the bandwidth matrix and the packet loss rate matrix of the network links need to be updated in time, and the transmission path and strategy continuously adjusted to ensure transmission efficiency and reliability. The bandwidth matrix and the packet loss rate matrix record the bandwidth and packet loss rate information between different nodes.
Step 3.4, searching out the shortest path from the source node to the target node by means of the greedy strategy of the Dijkstra algorithm.
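The following sketch illustrates how the bandwidth and packet loss rate matrices of step 3.3 might be refreshed from probe results; the probe figures are made up, and real measurements would come from ping or iperf as noted in step 3.1:

```python
def measure_link(probe_bytes: int, elapsed_s: float, sent: int, received: int):
    """Bandwidth B = S / t (converted to Mbps) and loss rate from probe counts."""
    bandwidth_mbps = (probe_bytes * 8 / 1e6) / elapsed_s
    loss_rate = 1.0 - received / sent
    return bandwidth_mbps, loss_rate

# bandwidth / packet-loss matrices keyed by directed link (node_a, node_b)
bandwidth_matrix: dict = {}
loss_matrix: dict = {}

def refresh_link(a: str, b: str, probe_bytes: int, elapsed_s: float,
                 sent: int, received: int) -> None:
    """Refresh one link's entries; a real deployment would call this periodically
    and then rebuild the graph weights and re-run the shortest-path search."""
    bw, loss = measure_link(probe_bytes, elapsed_s, sent, received)
    bandwidth_matrix[(a, b)] = bw
    loss_matrix[(a, b)] = loss

refresh_link("cloud-node-6", "cloud-node-3", probe_bytes=5_000_000,
             elapsed_s=2.0, sent=100, received=97)
print(bandwidth_matrix, loss_matrix)
```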
Step 4, aiming at the network conditions of narrow bandwidth and weak connection between the edge cloud and the user terminal, the transport bottom layer adopts UDP for transmission, and a timeout retransmission and congestion control algorithm is designed with reference to the reliability mechanisms of TCP. The method specifically comprises the following steps:
Step 4.1, in order to avoid data fragmentation at the UDP transport layer and improve the reliability of data transmission, adaptive data packet management is carried out at the application layer for the high-priority support information from step 2;
According to the network bandwidth obtained in step 3.3, when the size of the high-priority support information from step 2 is larger than the preset maximum packet length L_max, the data is split into multiple packets before being sent; when the support information length is smaller than L_max, several pieces of data are merged into one packet before being sent;
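A minimal sketch of the adaptive packet management in step 4.1, assuming a placeholder maximum packet length L_max of 1200 bytes (in practice the value would be derived from the measured network bandwidth):

```python
MAX_SEGMENT = 1200   # assumed application-layer segment limit L_max in bytes (placeholder)

def packetize(payload: bytes, max_segment: int = MAX_SEGMENT) -> list:
    """Split payloads longer than the limit into fragments of at most max_segment bytes."""
    return [payload[i:i + max_segment] for i in range(0, len(payload), max_segment)]

def merge_small(payloads: list, max_segment: int = MAX_SEGMENT) -> list:
    """Pack several short messages into one datagram until the limit is reached."""
    batches, current = [], b""
    for p in payloads:
        if len(current) + len(p) > max_segment and current:
            batches.append(current)
            current = b""
        current += p
    if current:
        batches.append(current)
    return batches

print(len(packetize(b"x" * 5000)))          # 5 fragments of <= 1200 bytes
print(len(merge_small([b"a" * 300] * 7)))   # short messages coalesced into 2 datagrams
```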
And 4.2, according to the transmission path of the step 3, adopting a mixed congestion control algorithm to efficiently, completely and reliably distribute the edge information data packet split in the step 4 from the distributed edge cloud to the user terminal, wherein the specific algorithm is as follows:
the link congestion control technique defaults to a hybrid congestion control algorithm that combines two modes of timeout retransmission and fast retransmission while allowing users to access two congestion control parameters: congestion window size and message transmission interval. Congestion control is in fact the process of dynamically calculating the congestion window size. The congestion control strategy of the method is similar to TCP and is divided into three stages of slow start, congestion avoidance and fast recovery. The three stages are switched with each other based on the strategy of packet loss. If the packet is lost and the packet loss times reach the upper limit, starting to retransmit overtime and entering a slow start stage; if disorder occurs, the data packet after the current sequence number is received, the quick retransmission is started, and the quick recovery stage is entered.
The judgment of packet loss depends on the timeout retransmission time of the message; setting a proper timeout retransmission time is the key to guaranteeing real-time transmission performance. The timeout retransmission time should be positively correlated with the data round-trip delay and should be higher than the round-trip delay so as to tolerate a certain degree of jitter. The timeout retransmission time RTO is obtained as follows:
SRTT = (1 − 1/8) · SRTT + (1/8) · RTT
wherein RTT is the time from sending a data packet to receiving the reply to that packet, and SRTT is the estimate of the average RTT over a period of time; the initial value of SRTT is set to the first measured RTT.
DevRTT = (1 − 1/4) · DevRTT + (1/4) · |RTT − SRTT|
wherein DevRTT is the estimate of the average deviation of RTT, i.e. the average jitter value; the initial value of DevRTT is set to RTT/2.
RTO = SRTT + max(G, 4 · DevRTT)
wherein RTO is the timeout retransmission time and G is the timer granularity, i.e. the transmission time interval.
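A sketch of the RTO estimator described above, assuming the conventional TCP smoothing gains (1/8 and 1/4) and a jitter multiplier of 4, consistent with the formulas above; the granularity and sample RTT values are illustrative:

```python
class RtoEstimator:
    """Smoothed RTT plus a jitter margin, floored by the timer granularity."""

    def __init__(self, granularity: float = 0.01,
                 alpha: float = 1 / 8, beta: float = 1 / 4):
        self.granularity = granularity  # G: timer granularity, i.e. transmission interval
        self.alpha, self.beta = alpha, beta
        self.srtt = None                # smoothed round-trip time estimate
        self.devrtt = None              # mean deviation of RTT (average jitter)

    def update(self, rtt: float) -> float:
        if self.srtt is None:           # first sample initialises both estimators
            self.srtt, self.devrtt = rtt, rtt / 2
        else:
            self.devrtt = (1 - self.beta) * self.devrtt + self.beta * abs(rtt - self.srtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.rto()

    def rto(self) -> float:
        return self.srtt + max(self.granularity, 4 * self.devrtt)

est = RtoEstimator()
for sample in (0.12, 0.15, 0.30, 0.14):   # round-trip samples in seconds
    print(round(est.update(sample), 3))
```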
Step 4.3, the congestion window (the number of data packets sent at one time) will increase exponentially in the slow start phase, and the process obviously cannot continue all the time;
Step 4.4, entering the congestion avoidance phase when the congestion window exceeds a threshold. This threshold is referred to as a slow start threshold. During the congestion avoidance phase, the window will increase approximately linearly;
Step 4.5, as the window grows, the network capacity is gradually filled and packet loss inevitably occurs, which means the window should be reduced; this is called packet-loss back-off. If the lost data packet does not require timeout retransmission, the fast recovery phase is entered; if the data packet is retransmitted by timeout, the slow start phase is re-entered;
Step 4.6, the receiving side receives the data packet and returns an ACK packet to the sending side;
Step 4.7, when the sender does not receive the ACK packet for a long time, the network between the edge cloud and the user terminal is considered congested, the current maximum packet length L_max is reduced, and data shorter than L_max is temporarily sent without the packet-merging operation;
Step 4.8, the receiver reassembles or splits the received data packets to obtain the complete information and submits it to the upper-layer application for processing.
The invention allows information to be cached on the communication network nodes by constructing an "information parking lot" formed by integrating the edge communication network nodes and the edge information service nodes, giving full play to the computing capability of the communication network nodes; by designing an information hierarchical caching mechanism under a limited network, high-priority information in the information parking lot is given a "green channel" and sent first, while low-priority information continues to be cached in the queue and waits for the next information scheduling; by designing an information transmission "expressway" mechanism based on the network state of each node, the optimal information transmission nodes and paths within a certain time period are calculated from the network bandwidth, the packet loss rate and the position of the data-demanding node, with the goals of shortest overall time consumption and most reliable delivery of the information to the end user; and by designing a reliable transmission mechanism under the edge limited network, the reliability and efficiency of data sharing and transmission of the information system are improved.
In a word, the edge cloud information service guarantee capability is enhanced by utilizing the computing and storage resources of the communication network nodes. Meanwhile, owing to the particularity of the edge limited network, a dynamic priority algorithm, an improved Dijkstra algorithm and a hybrid congestion control algorithm are used to perform dynamic adaptive control of data caching and information sending, avoiding problems such as unstable routing and congestion caused by traditional algorithms that ignore factors such as network bandwidth and packet loss rate. The invention can also be configured flexibly according to actual demands to adapt to various network environments and application scenarios, effectively improving the quality and efficiency of network communication.
The above-described embodiment is only one applicable mode of the present invention, but the scope of the present invention is not limited thereto. The architecture and optimization algorithm of the present invention have been illustrated and described in detail in the examples, and the present invention is susceptible to any variation without departing from the principles of the technology and is intended to be covered by the scope of the claims.
Claims (2)
1. The cloud network convergence edge information service method based on the mobile communication node is characterized by comprising the following steps of:
Step S1, aiming at the characteristic that data of the fixed cloud center is first transmitted to an edge cloud communication network node, deploying lightweight, cut-down cache software on the communication network node, and constructing a cache queue that integrates the edge information service node and the edge communication network node for information caching; the specific method is as follows:
According to the role of a preset edge cloud node, determining the relation between the edge cloud node and each user terminal, and determining the mapping association between the communication network node and the IP addresses of the user terminals;
Establishing a unique index for all edge information by adopting the SHA-1 hash algorithm, so that the index relation can be looked up quickly with O(1) time complexity;
Establishing a key-value mapping relation between the edge information and the edge cloud node as well as between the edge cloud node and the user terminal node, wherein the key is the SHA-1 digest of the support information keyword, and the value is the correspondence between the communication node and the user terminal node IP addresses;
uniformly storing the generated key-value mapping metadata to an edge cloud node connected with a fixed cloud center;
Step S2, aiming at the characteristics of narrow bandwidth and weak connection of a limited network, an information hierarchical caching mechanism is adopted, and information issued by a fixed cloud center is cached at each edge cloud node according to priority, wherein the specific mode is as follows:
taking out the information with highest priority from the cache queue in the step S1, and preferentially sending the information; for the information with low priority, continuing to cache in the queue, and waiting for next information scheduling; in addition, for the information in the cache queue, dynamically adjusting the information dynamic data priority of the cache information according to the original priority, the cache time and the historical transmission times;
Step S3, calculating the optimal information transmission nodes and paths within a certain period of time according to the cache position of the information to be transmitted and the network connection conditions between each user terminal and each communication node in the edge cloud node network, taking the shortest overall time consumption and the most reliable delivery of the information to the end user as targets, so as to construct an information transmission expressway; the optimal transmission information node and path within a certain period of time are calculated in the following specific manner:
On the basis of the Dijkstra algorithm, the shortest path is calculated using dynamically computed node distances as the weights of the graph, wherein the node distance is calculated from the network bandwidth and the packet loss rate by the following formula:
D_AB = α / B_AB + β · P
wherein D_AB is the distance between node A and node B, B_AB is the bandwidth between node A and node B, P is the network packet loss rate, and α and β are weight coefficients used to balance the influence of bandwidth and packet loss rate;
searching out the shortest path from the source node to the target node by means of the greedy strategy of the Dijkstra algorithm; since there are multiple source nodes caching the data, the shortest paths from the target node to all source nodes are compared, and the source node corresponding to the minimum value is selected as the optimal transmission information node;
Step S4, adopting a mixed congestion control algorithm under a limited network and combining a timeout retransmission reliable mechanism, and controlling the transmission rate distributed from the edge cloud node network to the user terminal according to the network state in the process of transmitting information, thereby avoiding network congestion and improving the reliability and efficiency of data sharing transmission of an information system; wherein the hybrid congestion control algorithm combines two modes of timeout retransmission and fast retransmission, while allowing the user to access two congestion control parameters: congestion window size and message transmission interval; the congestion window is the number of data packets sent at one time;
The mixed congestion control algorithm is divided into three stages of slow start, congestion avoidance and fast recovery, and the three stages are switched to each other based on a strategy of packet loss; if the packet is lost and the packet loss times reach the upper limit, starting to retransmit overtime and entering a slow start stage; if disorder occurs, receiving the data packet after the current sequence number, starting to rapidly retransmit and entering a rapid recovery stage;
The judgment of packet loss depends on the timeout retransmission time RTO of the message, which is calculated as follows:
RTO = SRTT + max(G, 4 · DevRTT)
wherein RTO is the timeout retransmission time, G is the timer granularity, and max denotes taking the maximum value; SRTT is calculated as follows:
SRTT = (1 − 1/8) · SRTT + (1/8) · RTT
wherein RTT is the time from sending a data packet to receiving the reply to that packet, and SRTT is the estimate of the average RTT over a period of time, with its initial value set to the first measured RTT;
DevRTT is calculated as follows:
DevRTT = (1 − 1/4) · DevRTT + (1/4) · |RTT − SRTT|
wherein DevRTT is the estimate of the average deviation of RTT, i.e. the average jitter value, with its initial value set to RTT/2;
A slow start phase, wherein the congestion window grows exponentially, and enters a congestion avoidance phase after the congestion window exceeds a slow start threshold; in the congestion avoidance stage, the congestion window grows linearly, the network capacity is filled gradually along with the growth of the congestion window, and meanwhile, the congestion window is reduced due to packet loss, namely packet loss and back-off; if the data packet is not retransmitted overtime, entering a quick recovery stage; if the data packet is retransmitted overtime, the slow start phase is re-entered.
2. The cloud network convergence edge information service method based on the mobile communication node according to claim 1, wherein in step S2, the information dynamic data priority is calculated as follows:
DP = α · P0 · e^(−t/T) · (1 − n/N)
wherein DP is the information dynamic data priority, e is the natural constant, α is a coefficient for adjusting the degree of influence of the different factors on the priority, P0 is the original priority of the cached data, t is the time the current data has already spent in the cache queue, T is a time constant used to control the weight of the time factor, n is the historical number of transmissions within the time period, and N is the maximum of the historical transmission times within the time period; when n = N, DP = 0, and at this point the priority of the cached data is minimized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410355376.8A CN117955979B (en) | 2024-03-27 | 2024-03-27 | Cloud network fusion edge information service method based on mobile communication node |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410355376.8A CN117955979B (en) | 2024-03-27 | 2024-03-27 | Cloud network fusion edge information service method based on mobile communication node |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117955979A CN117955979A (en) | 2024-04-30 |
CN117955979B true CN117955979B (en) | 2024-06-18 |
Family
ID=90805176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410355376.8A Active CN117955979B (en) | 2024-03-27 | 2024-03-27 | Cloud network fusion edge information service method based on mobile communication node |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117955979B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118041924B (en) * | 2024-04-11 | 2024-07-05 | 成都云智天下科技股份有限公司 | Method and system for improving cloud network fusion performance |
CN118413295A (en) * | 2024-06-24 | 2024-07-30 | 嘉环科技股份有限公司 | 6G communication data processing system and method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108449270A (en) * | 2018-03-21 | 2018-08-24 | 中南大学 | Buffer memory management method priority-based in opportunistic network |
CN115361333A (en) * | 2022-10-19 | 2022-11-18 | 中国电子科技集团公司第二十八研究所 | Network cloud fusion information transmission method based on QoS edge self-adaption |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103200125B (en) * | 2013-03-28 | 2015-10-21 | 广东电网公司电力调度控制中心 | Electric power data network node congestion bypassing method and system |
CN111464611B (en) * | 2020-03-30 | 2022-07-12 | 中科边缘智慧信息科技(苏州)有限公司 | Method for efficiently accessing service between fixed cloud and edge node in dynamic complex scene |
CN112020103B (en) * | 2020-08-06 | 2023-08-08 | 暨南大学 | Content cache deployment method in mobile edge cloud |
CN112804125B (en) * | 2021-02-09 | 2022-03-18 | 河南科技大学 | Named data network congestion control method based on fuzzy comprehensive evaluation algorithm |
CN113986486B (en) * | 2021-10-15 | 2024-06-18 | 东华大学 | Combined optimization method for data caching and task scheduling in edge environment |
CN115102986B (en) * | 2022-06-15 | 2023-12-01 | 之江实验室 | Internet of things data distribution and storage method and system in edge environment |
CN117156167A (en) * | 2023-09-13 | 2023-12-01 | 玉林师范学院 | Self-adaptive data transmission method and device of fusion transmission system |
-
2024
- 2024-03-27 CN CN202410355376.8A patent/CN117955979B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108449270A (en) * | 2018-03-21 | 2018-08-24 | 中南大学 | Buffer memory management method priority-based in opportunistic network |
CN115361333A (en) * | 2022-10-19 | 2022-11-18 | 中国电子科技集团公司第二十八研究所 | Network cloud fusion information transmission method based on QoS edge self-adaption |
Also Published As
Publication number | Publication date |
---|---|
CN117955979A (en) | 2024-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117955979B (en) | Cloud network fusion edge information service method based on mobile communication node | |
Tran et al. | Congestion adaptive routing in mobile ad hoc networks | |
CN110139319B (en) | Routing method for minimizing transmission delay of high dynamic delay network | |
CN110198278B (en) | Lyapunov optimization method for vehicle networking cloud and edge joint task scheduling | |
US10523777B2 (en) | System and method for joint dynamic forwarding and caching in content distribution networks | |
US11558302B2 (en) | Data transmission method and apparatus | |
US11937123B2 (en) | Systems and methods for congestion control on mobile edge networks | |
JP2002368800A (en) | Method for managing traffic and system for managing traffic | |
WO2022001175A1 (en) | Data packet sending method and apparatus | |
US11502956B2 (en) | Method for content caching in information-centric network virtualization | |
CN110351200B (en) | Opportunistic network congestion control method based on forwarding task migration | |
CN104683259A (en) | TCP congestion control method and device | |
CN111698732B (en) | Time delay oriented cooperative cache optimization method in micro-cellular wireless network | |
CN104994152A (en) | Web cooperative caching system and method | |
US11606409B1 (en) | Optimizing quality of experience (QoE) levels for video streaming over wireless/cellular networks | |
Choudhary et al. | Novel multipipe quic protocols to enhance the wireless network performance | |
Yang et al. | Edge caching with real-time guarantees | |
CN114499777B (en) | Data transmission method for cluster unmanned system | |
CN111464444B (en) | Sensitive information distribution method | |
Wang et al. | Energy-efficient multi-tier caching and node association in heterogeneous fog networks | |
CN116708300A (en) | Congestion control method, device and system | |
Sreekanth et al. | Performance improvement of DTN routing protocols with enhanced buffer management policy | |
US20240163219A1 (en) | System and method for data transfer and request handling among a plurality of resources | |
CN115002036B (en) | NDN network congestion control method, electronic equipment and storage medium | |
Gong et al. | Nuwa-RL: A Reinforcement Learning based Receiver-side Congestion Control Algorithm to Meet Applications Demands over Dynamic Wireless Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |