CN115766598A - Method and system for flow management

Info

Publication number
CN115766598A
Authority
CN
China
Prior art keywords
application
link
traffic
processed
candidate
Prior art date
Legal status
Pending
Application number
CN202211342571.4A
Other languages
Chinese (zh)
Inventor
卢国鸣
Current Assignee
Shanghai Xingrong Information Technology Co ltd
Original Assignee
Shanghai Xingrong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xingrong Information Technology Co ltd
Priority to CN202211342571.4A
Publication of CN115766598A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present specification provides a method and a system for traffic management. The method includes: acquiring target traffic and determining at least one application corresponding to the target traffic, where the target traffic is to-be-forwarded traffic received by an access node; in response to there being, among the at least one application, an application to be processed to which a corresponding transmission link has not been allocated: acquiring the application characteristics of the application to be processed and the link characteristics of at least one transmission link; and determining the preferred link of the application to be processed based on the application characteristics and the link characteristics. Link risk is judged and link priority is determined based on a model so as to manage traffic, which can reduce stuttering and screen-artifact phenomena during application use.

Description

Method and system for traffic management
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a method and a system for traffic management.
Background
When multiple applications use the same router, they may select different links for traffic transmission. When the link used by an application suddenly fails, data transmission is interrupted and the application may stall; some applications may remain stalled for a long time.
Traffic management needs to implement a scheduling policy that satisfies users' overall requirements on quality-of-service indexes such as bandwidth and priority, so that complete quality-of-service assurance is achieved. Therefore, a traffic management method is needed that implements a traffic scheduling policy to meet users' overall traffic demands and reduces or avoids experience-degrading events, such as stuttering and screen artifacts, during application use.
Disclosure of Invention
One or more embodiments of the present specification provide a method of traffic management. The method comprises the following steps: acquiring target traffic and determining at least one application corresponding to the target traffic, where the target traffic is to-be-forwarded traffic received by an access node; in response to there being, among the at least one application, an application to be processed to which a corresponding transmission link has not been allocated: acquiring the application characteristics of the application to be processed and the link characteristics of at least one transmission link; and determining a preferred link of the application to be processed based on the application characteristics and the link characteristics.
One or more embodiments of the present description provide a system for traffic management. The system comprises: a traffic obtaining module, configured to obtain target traffic and determine at least one application corresponding to the target traffic, where the target traffic is to-be-forwarded traffic received by an access node; a feature obtaining module, configured to obtain the application characteristics of an application to be processed and the link characteristics of at least one transmission link when there is, among the at least one application, an application to be processed to which a corresponding transmission link has not been allocated; and a link determining module, configured to determine a preferred link of the application to be processed based on the application characteristics and the link characteristics.
One or more embodiments of the present specification provide a traffic management apparatus including a processor for performing a traffic management method.
One or more embodiments of the present specification provide a computer-readable storage medium storing computer instructions, and when the computer instructions in the storage medium are read by a computer, the computer executes a traffic management method.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a traffic management system according to some embodiments of the present description;
FIG. 2 is a block schematic diagram of a traffic management system according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a traffic management method according to some embodiments of the present description;
FIG. 4 is an exemplary diagram illustrating the determination of a preferred link for a pending application according to some embodiments of the present description;
FIG. 5 is an exemplary diagram illustrating the determination of a preferred link for a pending application according to further embodiments of the present description;
FIG. 6 is an exemplary flow diagram illustrating updating an application link load correspondence table according to some embodiments of the present description;
fig. 7 is an exemplary flow diagram illustrating determining an application link load correspondence table according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, without inventive effort, the present description can also be applied to other similar contexts on the basis of these drawings. Unless otherwise apparent from the context, or stated otherwise, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the terms "a," "an," and/or "the" do not refer exclusively to the singular and may also include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed exactly in the order shown. Rather, the steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to or removed from these processes.
Traffic management may refer to allocating transmission links for each application's traffic requests when multiple applications are running on one or more terminals. Proper traffic management can resolve the stalling problems caused by improper link allocation, so that each application runs smoothly.
Fig. 1 is a schematic diagram of an application scenario of a traffic management system according to some embodiments of the present disclosure. As shown in fig. 1, a user terminal 110, an access node 120, an intermediate node 130, a request destination 140, and a transmission link 150 may be included in the application scenario 100.
User terminal 110 refers to one or more terminal devices or software used by a user. In some embodiments, the user terminal 110 may be used by one or more users, and may include users who directly use the service, and may also include other related users. In some embodiments, the user terminal 110 may be one or any combination of a mobile device, a tablet computer, a laptop computer, a desktop computer, or other device having input and/or output capabilities. In some embodiments, the mobile device may include a wearable apparatus, a smart mobile device, and the like, or any combination thereof.
In some embodiments, the user terminal 110 includes a plurality of terminal devices, each of which may be connected to the same communication network, such as a WIFI network.
In some embodiments, each terminal device may run multiple applications. For example, the user terminal 110 includes a plurality of APP applications. The APP application refers to software installed on the terminal device. In some embodiments, the APP application may be pre-installed software or may be third-party application software installed by the user. The above examples are intended only to illustrate the breadth of the user terminal 110 and not to limit its scope.
The access node 120 may refer to a node comprising a user terminal of a party or a cluster of user terminal devices belonging to a party and connected to an intermediate node via a network interface. The access node may obtain the target traffic. In some embodiments, the cluster of devices may be centralized or distributed. In some embodiments, the cluster of devices may be regional or remote. In some embodiments, access node 120 may comprise a host, terminal, or the like. Such as routers, computers with computing resources, and the like.
In some embodiments, the plurality of terminal devices included in the user terminal 110 are all connected to the network provided by the access node 120; for example, the access node 120 is a Wi-Fi router in the user's home, and all smart devices in the home are connected to the Wi-Fi network broadcast by the router. In some embodiments, traffic requests issued by the user terminal 110 may be collected and aggregated by the access node 120, and each traffic request may be redistributed to a corresponding intermediate node 130 according to the link allocation.
The intermediate node 130 may comprise a network node that functions as a data exchange and relay in network communications. The intermediate node 130 may refer to a node that includes a single device of a party or a cluster of devices belonging to a party and is connected to the access network via a network interface. In some embodiments, the cluster of devices may be centralized or distributed. In some embodiments, the cluster of devices may be regional or remote.
In some embodiments, one or more intermediate points (e.g., deployed base stations) may exist in the process from the access node 120 to the base station or application server, and these intermediate points are the intermediate node 130. In some embodiments, the intermediate nodes 130 may be planned and laid out in advance by the relevant government departments or operators. In some embodiments, the intermediate node 130 may comprise a wired or wireless network access point, such as a base station and/or a network switching point.
The request destination 140 may be used to process data and/or information of at least one component in the application scenario 100 or of an external data source (e.g., a cloud data center). In some embodiments, the request destination 140 may be a single server or a group of servers. The group of servers can be centralized or distributed (e.g., the servers can be a distributed system), dedicated, or served by other devices or systems.
In some embodiments, when the request destination 140 corresponds to a server, the server may correspond to the APP application that issues the traffic request. For example, when a user is browsing a Baidu webpage, the user sends a traffic request to the Baidu server, and the Baidu server (or a third-party webpage service provider) returns the content requested by the user; this server is the request destination 140. In some embodiments, when the user sends a traffic request to the Baidu server, or when the Baidu server returns the requested content to the user terminal, the request may first transit through an intermediate base station (for example, when the direct transmission distance is too long), in which case the base station may be regarded as the intermediate node 130.
In some embodiments, the request destination 140 may be regional or remote. In some embodiments, the request destination 140 may be implemented on a cloud platform or provided in a virtual manner. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
In some embodiments, the request destination 140 may also correspond to a base station. When the request destination 140 is a base station, the process of transmitting further to a more remote party (e.g., an application server) after the traffic request arrives at the base station may not be considered.
For example only, the transmission route of a traffic request is: user application → wireless router in the user's home → base station 1 → base station 2 → … → base station n → destination server of the application. In this transmission route, the wireless router is the access node 120; base station 1, base station 2, …, base station n are the intermediate nodes 130; and the destination server of the application is the request destination 140.
The transmission link 150 may be a path in the transmission from the access node 120 to the request destination 140. Different paths may be selected from the access node 120 to the request destination 140; different paths correspond to different transmission links 150, and each path takes a different route to reach the request destination 140. In some embodiments, the transmission links may be physically distinct links; for example, different traffic requests select different physical underlying lines. In some embodiments, the transmission links may be logically differentiated (physically indistinguishable) links; for example, three virtual logical links (each assigned a respective bandwidth) are divided inside router 1, and different applications use different logical links. The above is merely an illustrative example, and the transmission link in the present embodiment may include, but is not limited to, the foregoing cases.
Fig. 2 is a block diagram of a traffic management system in accordance with some embodiments of the present description. In some embodiments, the traffic management system 200 may include a traffic acquisition module 210, a feature acquisition module 220, and a link determination module 230.
The traffic obtaining module 210 is configured to obtain target traffic and determine at least one application corresponding to the target traffic, where the target traffic is to-be-forwarded traffic received by an access node.
The feature obtaining module 220 is configured to obtain the application characteristics of the application to be processed and the link characteristics of at least one transmission link when there is, among the at least one application, an application to be processed to which a corresponding transmission link has not been allocated.
A link determining module 230, configured to determine a preferred link of the application to be processed based on the application characteristic and the link characteristic. For more description on determining the preferred link, refer to the related description of fig. 4 and 5.
In some embodiments, the traffic management system 200 may further include the following modules:
a table construction module 240, configured to construct an application link load correspondence table based on each application in the at least one application and its corresponding transmission link; the application link load corresponding table comprises the binding relationship between each application and a transmission link. For more description on the construction of the table, refer to the related description of fig. 6 and fig. 7.
An information obtaining module 250, configured to obtain load information of each transmission link in the application link load correspondence table periodically or when a preset update condition is met. For more description of obtaining load information, refer to the related description of fig. 6.
A table updating module 260, configured to update the binding relationship between each application and the transmission link based on the load information of each transmission link in the application link load corresponding table, so as to update the application link load corresponding table. For more description of the update table, refer to the related description of fig. 6 and fig. 7.
It should be appreciated that the system and its modules illustrated in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments the link determination module may determine the preferred link for the pending application from the application characteristics and the link characteristics through a genetic algorithm or machine learning.
It should be noted that the above descriptions of the traffic management system and its modules are only for convenience of description and should not be construed as limiting the present disclosure to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, modules may be combined arbitrarily, or connected to other modules as sub-systems, without departing from those teachings. In some embodiments, the traffic acquisition module, the feature acquisition module, and the link determination module disclosed in fig. 2 may be different modules in one system, or one module may implement the functions of two or more of the modules described above. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
Fig. 3 is an exemplary flow diagram of a traffic management method according to some embodiments of the present description. In some embodiments, the process 300 may be performed by the traffic management system 200. As shown in fig. 3, the process 300 includes the following steps:
step 310, obtaining the target flow and determining at least one application corresponding to the target flow. Step 310 may be performed by traffic acquisition module 210.
The target traffic may refer to to-be-forwarded traffic received by the access node.
When the user terminal sends traffic request data to an application server or a network, an access node sits between the user terminal and the application server or network. The traffic request first reaches the access node, and the access node then allocates a corresponding link over which the traffic flows to the application server, base station network, or the like. In this process, the access node needs to forward the received traffic request onto the link corresponding to that request for subsequent processing. Traffic requests that have not yet been forwarded to their corresponding links constitute the traffic to be forwarded. Any traffic to be forwarded received by the access node can be used as the target traffic.
The target traffic may be obtained by a router or computer corresponding to the access node. For more description of the access node, see the description related to fig. 1.
The at least one application corresponding to the target traffic refers to the one or more applications that generate the traffic to be forwarded. For example, if the current traffic to be forwarded comes from WeChat and Taobao respectively, the at least one application corresponding to the target traffic includes WeChat and Taobao.
In some embodiments, the traffic acquisition module 210 may determine the at least one application corresponding to the target traffic in a variety of ways. For example, the access node may determine the application corresponding to the target traffic by analyzing the destination address of the target traffic. For example only, the destination of the target traffic generated by the WeChat APP may be a remote WeChat server; therefore, if analysis determines that the destination address of the target traffic is the address of a remote WeChat server, the application corresponding to the target traffic may be considered to be WeChat, that is, the target traffic is considered to come from the WeChat application. For another example, the access node may unpack the target traffic to obtain some traffic characteristics of the target traffic, and determine the application corresponding to the target traffic based on a search of those traffic characteristics in a preset feature matching library.
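For illustration only, the following is a minimal Python sketch of this kind of application identification; the destination-address map and feature-matching library are hypothetical examples, not data from this description.

```python
# Minimal sketch (not the patented implementation): identify the application behind
# to-be-forwarded traffic from its destination address or, failing that, from
# unpacked traffic features matched against a preset feature library.
# The address map and feature library below are hypothetical.

DEST_ADDRESS_MAP = {
    "wechat.example.com": "WeChat",   # hypothetical server addresses
    "taobao.example.com": "Taobao",
}

FEATURE_LIBRARY = {
    ("tcp", 443, "video"): "ShortVideoApp",  # hypothetical feature signature
}

def identify_application(dest_address, traffic_features=None):
    """Return the application name for a piece of target traffic, or None."""
    app = DEST_ADDRESS_MAP.get(dest_address)
    if app is not None:
        return app
    if traffic_features is not None:
        return FEATURE_LIBRARY.get(tuple(traffic_features))
    return None

print(identify_application("wechat.example.com"))  # -> "WeChat"
```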
Step 320, in response to that there is an application to be processed in at least one application to which a corresponding transmission link is not allocated, the operations of step 321 and step 322 may be performed. Step 320 may be performed by feature acquisition module 220.
A transmission link may consist of two end nodes and communication lines between the nodes. For further details of the transmission link, reference is made to the description relating to fig. 1.
A pending application to which a corresponding transmission link has not been allocated may refer to an application for which a transmission link is yet to be allocated. For example, if game B has not been allocated a transmission link, game B is a pending application to which a corresponding transmission link has not been allocated.
In some embodiments, when there is, in the access node, a pending application to which a corresponding transmission link has not been allocated, the following steps may be performed:
step 321, obtaining the application characteristics of the application to be processed and the link characteristics of at least one transmission link. Step 321 may be performed by feature acquisition module 220.
Application characteristics may refer to characteristic data relating to the attributes of the application itself to be processed. For example, application characteristics may include application type, application space footprint size, application update frequency, and the like. For example, application types may include games, video software, online chat software, and the like.
Link characteristics may refer to characteristic data relating to the attributes of the transmission link itself. For example, the link characteristics may include rated characteristics, real-time characteristics, average characteristics, etc. of the link. For example, the rated characteristics may include the set maximum bandwidth and maximum load, etc.; the real-time characteristics may include the current remaining bandwidth and current load, etc.; and the average characteristics may include the average remaining bandwidth, average load, and the like.
The feature obtaining module 220 may obtain the application characteristics of the application to be processed in various ways. For further description of obtaining the application type, refer to the details of step 310. The application space occupation size and the application update frequency may be determined by reading the application information on the mobile phone.
The feature obtaining module 220 may obtain the link characteristics of the transmission link in various ways. For example, the feature obtaining module 220 may obtain the link characteristics through certain network commands. Network commands are test tools used to detect network-related problems.
Step 322, determining a preferred link of the application to be processed based on the application characteristic and the link characteristic. Step 322 may be performed by link determination module 230.
The preferred link may refer to the best link for transmitting the traffic data corresponding to the pending application. In some embodiments, the link that causes the least stuttering and screen artifacts when transmitting the traffic data corresponding to the application to be processed may be considered the preferred link.
In some embodiments, the link determination module 230 may determine the preferred link for the pending application based on the application characteristics and the link characteristics in a variety of ways. For example, the link determining module 230 may determine, based on the application characteristics and the link characteristics, a link with the lowest link load as a preferred link of the to-be-processed application, from among links that can bear transmission of traffic data corresponding to the to-be-processed application. For another example, the link determining module 230 may determine a processing priority based on the user and the application category corresponding to the pending application, and then determine a link meeting the processing priority as a preferred link of the pending application.
In some embodiments, the link determination module 230 may determine the estimated traffic characteristics of the pending application based on the application characteristics, and determine the preferred link of the application to be processed based on the estimated traffic characteristics and the link characteristics. Further description is provided with reference to the details of fig. 4.
By determining the preferred link of the application to be processed based on the actual conditions reflected by the application characteristics and the link characteristics, applications with high traffic demands can be preferentially allocated to stable links, thereby reducing the frequency of problems such as stuttering.
Fig. 4 is an exemplary diagram illustrating determining a preferred link for a pending application according to some embodiments of the present description.
In some embodiments, determining the preferred link of the pending application based on the application characteristics and the link characteristics by the link determining module 230 may include: determining the estimated traffic characteristics of the application to be processed based on the application characteristics; and determining the preferred link of the application to be processed based on the estimated traffic characteristics and the link characteristics.
The estimated traffic characteristics may refer to estimated information about the traffic data characteristics of a particular application, such as bandwidth occupancy, average size of transmitted packets, average frequency of transmitted packets, average size of received packets, average frequency of received packets, and the like. The estimated traffic characteristics may be represented by a vector; for example, (0.5, 1, 1, 0.5, 2) may represent estimated traffic characteristics with a bandwidth occupancy of 50%, an average transmitted-packet size of 1 kB, an average transmitted-packet frequency of 1 packet/second, an average received-packet size of 0.5 kB, and an average received-packet frequency of 2 packets/second.
In some embodiments, the link determining module 230 may determine the estimated traffic characteristics of the application to be processed according to a first preset rule based on the application characteristics. The first preset rule may be set empirically. For example, the first preset rule may be: when the application type is a game and the occupied application space is A MB, the estimated traffic characteristics of the application are bandwidth occupancy rate a, average transmitted-packet size b, average transmitted-packet frequency c, average received-packet size d, and average received-packet frequency e, represented as the vector (a, b, c, d, e).
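As an illustration only, a minimal sketch of such a rule lookup is given below; the rule table and all numeric values are assumptions rather than values from this description.

```python
# Minimal sketch of a rule-based estimate: map the application type to an estimated
# traffic characteristic vector (bandwidth occupancy, avg tx size kB, avg tx freq
# pkt/s, avg rx size kB, avg rx freq pkt/s). Values are illustrative assumptions.

PRESET_RULES = {
    "game":  (0.5, 1.0, 1.0, 0.5, 2.0),
    "video": (0.8, 0.5, 1.0, 4.0, 10.0),
    "chat":  (0.1, 0.2, 0.5, 0.2, 0.5),
}

def estimate_traffic_characteristics(app_type, default=(0.3, 0.5, 1.0, 0.5, 1.0)):
    """Look up the estimated traffic characteristic vector for an application type."""
    return PRESET_RULES.get(app_type, default)

print(estimate_traffic_characteristics("game"))  # -> (0.5, 1.0, 1.0, 0.5, 2.0)
```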
In some embodiments, the predicted flow characteristics are also related to user usage characteristics.
User usage characteristics may refer to characteristic information about a user's use of a certain application. In some embodiments, the user usage characteristics may include the user usage frequency, the average time of a single use, and the like.
The user usage frequency may refer to the number of times the user starts the application program within a certain period of time. For example, if the user started application A 6 times within the past 3 days, the user's usage frequency for application A is 2 times/day.
The average time of a single use refers to the average length of time the user uses a certain application. For example, if the user started application A 6 times within the past 3 days and used it for 0.5 hours, 1.5 hours, 3 hours, 4.5 hours, 1 hour, and 1.5 hours respectively, the user's average time of a single use for application A is 2 hours.
In some embodiments, the link determination module 230 may derive the user usage characteristics based on an analysis of the historical behavior of users of the access node. An access node may refer to a device through which a user terminal accesses the network, e.g., a router or the like. The user historical behavior may refer to the user's interactions with an application at historical times; for example, it may refer to the user searching for a product in a shopping application at a historical time. For example, based on the historical behavior recorded at the router, the link determining module 230 may find that the user searched for product B in application A 3 times over the past 3 days and browsed for 6 hours in total, and thereby obtain the user usage characteristics that the user's usage frequency for application A is 1 time/day and the average time of a single use is 2 hours.
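The following minimal sketch shows how such usage characteristics could be computed from per-session records kept at the access node; the record format is an assumption.

```python
# Minimal sketch: derive user usage characteristics (usage frequency, average
# single-use time) from hypothetical per-session duration records at the access node.

def user_usage_characteristics(session_hours, observed_days):
    """session_hours: list of session durations in hours; observed_days: window length."""
    frequency_per_day = len(session_hours) / observed_days
    avg_single_use_hours = sum(session_hours) / len(session_hours) if session_hours else 0.0
    return frequency_per_day, avg_single_use_hours

# The example from the text: 6 sessions over 3 days, 12 hours in total.
print(user_usage_characteristics([0.5, 1.5, 3, 4.5, 1, 1.5], 3))  # -> (2.0, 2.0)
```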
In some embodiments, the predicted flow characteristics may be related to user usage characteristics. For example, the predicted traffic characteristics may be proportional to the user usage characteristics, and the higher the user usage frequency and the longer the average time of single usage in the user usage characteristics, the higher the bandwidth occupancy rate, the larger the average size of the transmitted data packets and the received data packets, and the lower the average frequency of transmitting the data packets and receiving the data packets in the predicted traffic characteristics.
In some embodiments of the present specification, by relating the estimated flow characteristics to the user usage characteristics, the usage habits of the user on the application to be processed may be combined to obtain more accurate estimated flow characteristics.
In some embodiments, as shown in fig. 4, the link determination module 230 may predict the estimated traffic characteristics via an estimated traffic characteristic prediction model 430; the estimated traffic characteristic prediction model 430 is a machine learning model.
In some embodiments, the estimated traffic characteristic prediction model 430 may include a Deep Neural Network (DNN) model, a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model, a Graph Neural Network (GNN) model, or the like, or any combination thereof.
In some embodiments, the inputs to the estimated traffic characteristic prediction model 430 may be the application characteristics 410 and the historical traffic characteristics 470. The output of the estimated traffic characteristic prediction model 430 may be the estimated traffic characteristics 440.
The application characteristics 410 may be represented by a vector; for example, (1, 100, 1) may represent an application type of 1, an application space occupation size of 100 MB, and an application update frequency of 1 time/month. Different application types may be indicated by different numbers, e.g., game = 1, video software = 2, etc. For more on the application characteristics, see fig. 3 and its related description.
The historical traffic characteristics 470 may refer to traffic characteristics stored on the internet/in a database from network users' actual use of a certain application; for example, the historical traffic characteristics 470 may be the bandwidth occupancy, average transmitted-packet size, average transmitted-packet frequency, average received-packet size, average received-packet frequency, and the like, of users of application A over the past year. The historical traffic characteristics 470 may be represented by a vector; for example, (0.6, 1, 2, 1, 2) may represent historical traffic characteristics with a bandwidth occupancy of 60% over a historical period, an average transmitted-packet size of 1 kB, an average transmitted-packet frequency of 2 packets/second, an average received-packet size of 1 kB, and an average received-packet frequency of 2 packets/second.
In some embodiments, if the historical traffic characteristics 470 are not stored in the internet/database, they can be completed by using a 0-padding method, etc., and the training process is performed accordingly. In some embodiments, for an application that has been running on a transmission link, the input historical traffic characteristics 470 can be traffic characteristics that the application has been running on the transmission link for a period of time.
In some embodiments, the estimated traffic characteristic prediction model 430 may be obtained through training. For example, first training samples are input into an initial estimated traffic characteristic prediction model, a loss function is established based on the labels and the output results of the initial model, and the parameters of the initial model are updated; when the loss function of the initial estimated traffic characteristic prediction model meets a preset condition, model training is completed, and the trained estimated traffic characteristic prediction model is obtained. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like.
In some embodiments, the first training sample may include actual application characteristics and historical flow characteristics of the sample application. The first training sample may be obtained based on historical data. The label of the first training sample may be the sample actual traffic characteristic corresponding to the sample application. The label of the first training sample can be determined by manual labeling or automatic labeling.
In some embodiments, the input to the predictive flow characteristic prediction model may also include user usage characteristics 420. In some embodiments, the user usage characteristics 420 may be represented using a vector, for example, (3, 2) may represent a user usage characteristic of 3 times/day for user usage and 2 hours for average time per use. For more on the user usage characteristics, user usage frequency and average time per use, see the foregoing and related description.
In some embodiments, where the input to the predictive flow characteristics prediction model includes user usage characteristics 420, the first training sample may include actual application characteristics for the sample application, historical flow characteristics, and user usage characteristics for the corresponding sample application.
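For illustration, a minimal training sketch is given below using a small fully connected network in PyTorch; the architecture, feature dimensions, and hyperparameters are assumptions rather than anything specified in this description.

```python
# Minimal sketch (assumed architecture, not the patented model): an MLP mapping
# concatenated application characteristics, historical traffic characteristics and
# user usage characteristics to the estimated traffic characteristic vector,
# trained with an MSE loss against the actual traffic characteristics.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(3 + 5 + 2, 32),  # app features (3) + historical traffic (5) + usage (2)
    nn.ReLU(),
    nn.Linear(32, 5),          # estimated traffic characteristic vector
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(sample_inputs, sample_labels):
    """One update on a batch of training samples (labels = actual traffic characteristics)."""
    optimizer.zero_grad()
    loss = loss_fn(model(sample_inputs), sample_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch of 8 samples, only to show the shapes involved.
print(train_step(torch.randn(8, 10), torch.rand(8, 5)))
```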
In some embodiments of the present description, by processing the application characteristics and the user usage characteristics with the estimated traffic characteristic prediction model, the estimated traffic characteristics can be determined more conveniently and accurately.
In some embodiments, the link determination module 230 may determine the preferred link 460 of the pending application based on the estimated traffic characteristics 440 and the link characteristics 450 in a variety of ways. For example, the link determining module 230 may directly select the transmission link with the lowest current load as the preferred link of the application to be processed based on the estimated traffic characteristics and the link characteristics. For another example, the link determining module 230 may obtain the average load of each transmission link based on the estimated traffic characteristics and the link characteristics, and select the transmission link with the lowest average load as the preferred link of the application to be processed. The average load may refer to the average amount of load on a transmission link over a certain time period; for example, if the link determining module 230 finds that the maximum load of a certain transmission link over a historical time period is 100 MB and the minimum load is 60 MB, then the average load of that transmission link over that historical time period is 80 MB.
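A minimal sketch combining the two ideas above (links that can bear the traffic, then lowest average load) is shown below; the link record fields are illustrative assumptions.

```python
# Minimal sketch: among candidate links whose remaining bandwidth can bear the
# application's estimated bandwidth demand, pick the one with the lowest average
# load. Field names are illustrative assumptions.

def choose_preferred_link(estimated_bandwidth_mbps, links):
    """links: list of dicts with 'name', 'remaining_bandwidth_mbps', 'average_load_mb'."""
    feasible = [l for l in links if l["remaining_bandwidth_mbps"] >= estimated_bandwidth_mbps]
    if not feasible:
        return None
    return min(feasible, key=lambda l: l["average_load_mb"])["name"]

links = [
    {"name": "link1", "remaining_bandwidth_mbps": 4, "average_load_mb": 50},
    {"name": "link2", "remaining_bandwidth_mbps": 6, "average_load_mb": 80},
]
print(choose_preferred_link(5, links))  # -> "link2"
```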
In some embodiments, the link determining module 230 may predict the estimated fluency of the application to be processed after the application to be processed joins the candidate link through the fluency prediction model, and determine the preferred link of the application to be processed based on the estimated fluency of the application to be processed for each of the at least one transmission link. For more on determining a preferred link based on the estimated fluency of the pending application, reference is made to FIG. 5 and its associated description.
In some embodiments of the present description, the preferred link for the pending application is determined by first determining the estimated traffic characteristics based on the application characteristics. This takes the actual usage of the application to be processed into account, making the determination of the preferred link more accurate and efficient and further ensuring a good transmission effect.
FIG. 5 is an exemplary diagram illustrating determining a preferred link for a pending application according to further embodiments of the present description. In some embodiments, flow 500 may be performed by traffic management system 200. As shown in fig. 5, the process 500 includes the following steps:
step 510, predicting the estimated fluency of the application to be processed after the application to be processed is added into the candidate link through a fluency prediction model based on the estimated flow characteristics and the link characteristics; the candidate link is any one of at least one transmission link; the fluency prediction model is a machine learning model. Step 510 may be performed by the link determination module 230.
A candidate link may refer to a link that is available for transmitting the traffic data corresponding to the pending application. For example, if link A, link B, and link C are available for transmitting the traffic data, then link A, link B, and link C may all be candidate links.
The estimated fluency may refer to the predicted operating fluency of the application on the user terminal when the application's traffic data is transmitted over the corresponding candidate link. For example, the estimated fluency may be 60 frames per second (FPS).
In some embodiments, as shown in fig. 5, the fluency prediction model 513 may be used to process the predicted traffic characteristics 512 of the application to be processed and the link characteristics 511 of the candidate links to predict the predicted fluency 514 of the application to be processed.
In some embodiments, fluency prediction model 513 may include a DNN model, a CNN model, an RNN model, a GNN model, the like, or any combination thereof.
In some embodiments, the inputs to the fluency prediction model 513 may be the predicted traffic characteristics 512 of the application to be processed and the link characteristics 511 of the candidate links. For more on the predicted flow characteristics, see fig. 4 and its associated description. The link characteristics may be represented by vectors, for example ((10, 100), (4, 60), (5, 50)) may represent that the maximum bandwidth set in the link characteristics is 10MB/s, the maximum load is 100MB, the current residual bandwidth is 4MB/s, the current load is 60MB, the average residual bandwidth is 5MB/s, and the average load is 50MB, and further contents about the link characteristics may be found in fig. 3 and its related description.
The output of the fluency prediction model 513 can be the estimated fluency 514 of the pending application.
In some embodiments, fluency prediction model 513 may be derived through training. For example, a second training sample is input into the initial fluency prediction model, a loss function is established based on the second training sample label and the output result of the initial fluency prediction model, the parameters of the initial fluency prediction model are updated, model training is completed when the loss function of the initial fluency prediction model meets preset conditions, and the trained fluency prediction model is obtained. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like.
In some embodiments, the second training sample may include actual traffic characteristics of the sample application and actual link characteristics of the sample link over which the sample application was transmitted. The second training sample may be obtained based on historical data. The label of the second training sample may be the corresponding fluency of the actual application of the sample application after joining the sample link. The label of the second training sample can be determined by manual labeling or automatic labeling.
In some embodiments, the input of the fluency prediction model may be the estimated traffic characteristics of multiple applications and the link characteristics of the candidate link; the output may be the corresponding estimated fluency of each application.
Multiple applications may refer to applications on the same transmission link; e.g., application C, application D, and application E may run simultaneously on link 1. Illustratively, if application C and application D are already running on link 1 and the link determining module 230 evaluates adding the to-be-processed application E to link 1, then when determining the estimated fluency of application E, the input of the fluency prediction model is the estimated traffic characteristics of application C, application D, and application E together with the link characteristics of link 1, and the output is the estimated fluency corresponding to each of application C, application D, and application E. The estimated traffic characteristics of application C and application D can be determined based on the estimated traffic characteristic prediction model.
For example, the input of the fluency prediction model is ((a1, b1, c1, d1, e1), (a2, b2, c2, d2, e2), (a3, b3, c3, d3, e3)), where a1-a3 represent the bandwidth occupancy of applications C to E, b1-b3 represent the average transmitted-packet size of applications C to E, c1-c3 represent the average transmitted-packet frequency of applications C to E, d1-d3 represent the average received-packet size of applications C to E, and e1-e3 represent the average received-packet frequency of applications C to E; the output of the fluency prediction model is (k, l, m), where k, l, and m respectively represent the estimated fluency of application C, application D, and application E.
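The sketch below only illustrates this input/output layout; the trained model itself is replaced by a placeholder heuristic, and all numbers are assumptions.

```python
# Minimal sketch of the input/output layout described above: one estimated traffic
# characteristic vector per application on (or joining) the candidate link, plus
# the link characteristics, giving one estimated fluency value (FPS) per application.

def predict_fluency(app_traffic_vectors, link_characteristics):
    """Stub standing in for the trained fluency prediction model."""
    # Placeholder heuristic: more total bandwidth demand -> lower fluency for everyone.
    total_demand = sum(v[0] for v in app_traffic_vectors)
    return [round(60.0 / (1.0 + total_demand), 1) for _ in app_traffic_vectors]

apps_on_link1 = [(0.5, 1, 1, 0.5, 2),   # application C
                 (0.3, 1, 2, 1, 2),     # application D
                 (0.4, 2, 1, 1, 1)]     # application E (to be processed)
link1 = ((10, 100), (4, 60), (5, 50))   # rated / real-time / average characteristics
print(predict_fluency(apps_on_link1, link1))  # one estimated FPS value per application
```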
In some embodiments, the fluency prediction model that can simultaneously obtain the estimated fluency corresponding to multiple applications can be obtained through training. For example, a third training sample is input into the initial fluency prediction model, a loss function is established based on the label of the third training sample and the output result of the initial fluency prediction model, the parameters of the initial fluency prediction model are updated, the model training is completed when the loss function of the initial fluency prediction model meets the preset conditions, and the trained fluency prediction model is obtained. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like.
In some embodiments, the third training sample may include actual traffic characteristics for a plurality of sample applications and link characteristics for the sample link. The third training sample may be obtained based on historical data. The label of the third training sample may be the actual fluency of the application of the plurality of samples. The label of the third training sample can be determined by manual labeling or automatic labeling.
In some embodiments of the present description, estimated flow characteristics of multiple applications and link characteristics of candidate links are processed through a fluency prediction model to obtain estimated fluency of each application, and mutual influences and interactions between multiple applications on the same transmission link can be considered at the same time, so that the estimated fluency of an application to be processed is determined more accurately.
Step 520, determining the preferred link of the application to be processed based on the estimated fluency of the application to be processed on each of the at least one transmission link. Step 520 may be performed by the link determination module 230.
In some embodiments, the link determining module 230 can determine the preferred link of the pending application based on the estimated fluency of each of the at least one transmission link for the pending application in a variety of ways. For example, the link determination module 230 can directly determine the transmission link with the highest estimated fluency as the preferred link for the pending application.
In some embodiments, the link determination module 230 can predict a future overload frequency for each transmission link based on the overload prediction model, the selection of the preferred link also being related to the future overload frequency for each transmission link.
In some embodiments, the overload prediction model may include a DNN model, a CNN model, an RNN model, a GNN model, or the like, or any combination thereof.
In some embodiments, the inputs to the overload prediction model may be the estimated traffic characteristics of the pending application and the rated characteristics among the link characteristics of the candidate link. The output of the overload prediction model may be the future overload frequency of the candidate link. For more details on the estimated traffic characteristics, refer to fig. 4 and its related description.
The rated characteristics may refer to data related to the rated operation of the transmission link, such as the maximum bandwidth and maximum load set for the transmission link. The rated characteristics may be represented by a vector; for example, (10, 100) may represent a set maximum bandwidth of 10 MB/s and a maximum load of 100 MB. For more on the rated characteristics, see fig. 3 and its related description.
Overload frequency may refer to the frequency with which an overload condition occurs on the transmission link. An overload condition may refer to a condition in which the transmission link bandwidth is fully occupied; for example, an overload condition may be a transmission link bandwidth occupancy of 100%. In some embodiments, the overload frequency may be expressed as the number of times an overload condition occurs within a certain time period, or as the percentage of time an overload condition lasts within a certain time period. For example, if an overload condition occurred 5 times in one hour of history, the overload frequency may be 5 times/hour. As another example, if an overload condition lasted 10 minutes during one hour of history, the overload frequency may be 1/6 of the time. The future overload frequency may refer to the overload frequency of the transmission link over a future period of time; for example, the future overload frequency may be 7 times/hour over the coming day.
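A minimal sketch of one such calculation (counting full-occupancy samples per hour) follows; the sampling scheme is an assumption.

```python
# Minimal sketch: compute an overload frequency (overloads per hour) from a series
# of hypothetical bandwidth-occupancy samples taken on one link.

def overload_frequency(occupancy_samples, window_hours):
    """occupancy_samples: fractions in [0, 1]; counts samples at full occupancy."""
    overload_events = sum(1 for occ in occupancy_samples if occ >= 1.0)
    return overload_events / window_hours

samples = [0.4, 1.0, 0.7, 1.0, 0.9, 1.0, 0.6, 1.0, 1.0]  # one hour of samples
print(overload_frequency(samples, window_hours=1))        # -> 5.0 times/hour
```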
In some embodiments, the overload prediction model may be derived by training. For example, a fourth training sample is input into the initial overload prediction model, a loss function is established based on the label of the fourth training sample and the output result of the initial overload prediction model, the parameters of the initial overload prediction model are updated, when the loss function of the initial overload prediction model meets the preset conditions, the model training is completed, and the trained overload prediction model is obtained. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, etc.
In some embodiments, the fourth training sample may include actual traffic characteristics and sample link characteristics of the sample application. The fourth training sample may be obtained based on historical data. The label of the fourth training sample may be the actual overload frequency of the sample link over a period of time. The label of the fourth training sample can be determined by manual labeling or automatic labeling.
In some embodiments, the link determining module 230 may determine a preference value for each transmission link according to a fifth preset rule based on the obtained estimated fluency and future overload frequency, and then select the transmission link with the highest preference value as the preferred link. The fifth preset rule may be set empirically. For example, the fifth preset rule may be: when the estimated fluency is 0-30 FPS or the future overload frequency is more than 10 times/hour, the preference value is 0.3; when the estimated fluency is 30-60 FPS or the future overload frequency is 5-10 times/hour, the preference value is 0.6; and when the estimated fluency is more than 60 FPS or the future overload frequency is 0-5 times/hour, the preference value is 0.9. Based on the obtained estimated fluency and future overload frequency, a preference value can be determined for each by the fifth preset rule, and the average or weighted sum of the two preference values is then used as the preference value of the transmission link.
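The sketch below illustrates the kind of mapping described by this rule; the behavior at the interval boundaries and the equal weighting of the two preference values are assumptions.

```python
# Minimal sketch of the fifth preset rule described above: map estimated fluency and
# future overload frequency to individual preference values, then combine them
# (here by a simple average; the weighting is an assumption).

def fluency_preference(fps):
    return 0.3 if fps < 30 else 0.6 if fps < 60 else 0.9

def overload_preference(times_per_hour):
    return 0.9 if times_per_hour < 5 else 0.6 if times_per_hour <= 10 else 0.3

def link_preference(fps, overload_per_hour):
    return (fluency_preference(fps) + overload_preference(overload_per_hour)) / 2

print(link_preference(65, 2))   # -> 0.9
print(link_preference(40, 12))  # -> 0.45
```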
In some embodiments of the present description, the overload prediction model is used to predict the future overload frequency of each transmission link, and the preferred link of the application to be processed is determined with this frequency taken into account. This allows the future overload frequency to be obtained efficiently and accurately, makes the determination of the preferred link more accurate and reasonable, and further ensures smooth application use.
In some embodiments of the description, the estimated fluency after the application to be processed is added into the candidate link is predicted through the fluency prediction model, the preferred link is determined based on the estimated fluency, and the preferred link can be determined from multiple dimensions according to the use condition of the candidate link, so that the determination of the preferred link is more efficient and reasonable, the fluency of application operation is improved, and a user has better application use experience.
Fig. 6 is an exemplary flow diagram illustrating updating an application link load correspondence table according to some embodiments of the present description. In some embodiments, the flow 600 may be performed by the traffic management system 200, for example, based on the table building module 240. As shown in fig. 6, the process 600 includes the following steps:
step 610, an application link load corresponding table is constructed based on each application in the at least one application and its corresponding transmission link.
The application link load correspondence table may refer to a table reflecting the correspondence between different applications and transmission links. For example, a table may be constructed with the application name as the first column and the transmission link corresponding to the application's traffic as the second column.
In some embodiments, the application link load correspondence table may include the binding relationship between each application and a transmission link. The binding relationship may refer to the correspondence between an application and the transmission link of its traffic; for example, if the traffic of application A is transmitted through link 1, a binding relationship exists between application A and link 1.
In some embodiments, the table building module 240 may build the application link load correspondence table in a variety of ways. For example, the link determination module 230 may predict future overload frequencies of each link based on the overload prediction model, respectively, and transfer a portion of applications in transmission links whose future overload frequencies exceed a first threshold to transmission links whose future overload frequencies are below a second threshold. Wherein the first threshold and the second threshold may be set empirically. The table building module 240 may build the application link load corresponding table with the application name as a first column and the transmission link of the traffic corresponding to the application as a second column based on the transferred application and the corresponding link thereof.
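For illustration, a minimal sketch of building and rebalancing such a table as a Python dict follows; the thresholds and the move-one-application policy are assumptions.

```python
# Minimal sketch: keep the application-link load correspondence table as a dict and
# move an application off a link whose predicted future overload frequency exceeds a
# first threshold onto a link below a second threshold. Thresholds are assumptions.

def rebalance_table(table, future_overload, first_threshold=10, second_threshold=3):
    """table: {application: link}; future_overload: {link: predicted times/hour}."""
    cool_links = [l for l, f in future_overload.items() if f < second_threshold]
    if not cool_links:
        return table
    for app, link in list(table.items()):
        if future_overload.get(link, 0) > first_threshold:
            table[app] = cool_links[0]   # move one application to a lightly loaded link
            break
    return table

table = {"WeChat": "link1", "Taobao": "link1", "GameB": "link2"}
print(rebalance_table(table, {"link1": 12, "link2": 1}))
```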
In some embodiments, as shown in fig. 7, the table building module 240 may determine the application link load correspondence table through a preset algorithm. Fig. 7 is an exemplary flow diagram illustrating the determination of an application link load correspondence table based on a predetermined algorithm according to some embodiments of the present description. The process 700 may be performed based on the table building module 240, as shown in fig. 7, the process 700 may include:
step 710, generating a plurality of initial candidate corresponding tables; the initial candidate correspondence table includes a plurality of sets of "application-link pairs".
The initial candidate correspondence table may refer to a correspondence table of the application and transmission link as candidates, which is initially constructed. For example, the initial candidate correspondence table may be a table containing a plurality of sets of "application-link pairs", and at least some of the "application-link pairs" are different in different initial candidate correspondence tables.
An "application-link pair" may refer to a corresponding binding relationship formed by an application and a corresponding traffic transmission link, for example, if the traffic transmission link corresponding to the application a is link 1, the "application-link pair" corresponding to the application a may be "application a-link 1".
In some embodiments, the table construction module 240 may construct the initial candidate correspondence table in a variety of ways. For example, the table building module 240 may obtain historical data from a storage device or the like inside or outside the traffic management system 200, and directly build the initial candidate correspondence table according to the correspondence relationship between the application and the corresponding traffic transmission link in the historical data. For another example, the table building module 240 may randomly pair the application and the transmission link, build a corresponding relationship between the application and the corresponding traffic transmission link, and further build the initial candidate mapping table.
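A minimal sketch of the random-pairing construction is given below, assuming the tuple-of-pairs representation that the description later uses for candidate correspondence tables; the function name, population size, and seed handling are illustrative assumptions.

```python
import random

# Hypothetical sketch of step 710: build several initial candidate correspondence
# tables by randomly pairing each application with one of the transmission links.
def generate_initial_tables(apps, links, population=10, seed=None):
    rng = random.Random(seed)
    # one table = a tuple of "application-link pairs", e.g. (("A", 1), ("B", 2), ...)
    return [tuple((app, rng.choice(links)) for app in apps) for _ in range(population)]

print(generate_initial_tables(["A", "B", "C"], [1, 2, 3, 4, 5, 6], population=3, seed=0))
```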
At step 720, evaluation values of the respective first candidate correspondence tables are determined.
In some embodiments, when the number of iteration rounds =1, the first candidate correspondence table is an initial candidate correspondence table, and when the number of iteration rounds >1, the first candidate correspondence table is a third candidate correspondence table of a previous iteration round.
The first candidate correspondence table may refer to a correspondence table of applications and transmission links that need to be subjected to iteration processing in each iteration. In a first iteration, the first candidate correspondence table may be an initial candidate correspondence table containing multiple sets of "application-link pairs".
In some embodiments, the first candidate correspondence table may be represented in a vector-based manner. For example, a first candidate correspondence table may include "application a-link 1", "application B-link 2", and "application C-link 3", and the first candidate correspondence table may be expressed as ((a, 1), (B, 2), (C, 3)).
The first candidate correspondence table may be determined based on a result of a previous iteration round, or based on an initial candidate correspondence table, for example, in a first iteration round, the first candidate correspondence table may be an initial candidate correspondence table, and in a subsequent iteration, the first candidate correspondence table is determined based on a third candidate correspondence table of the previous iteration round. See below for a detailed description of the third candidate correspondence table.
The evaluation value may refer to a parameter for evaluating how good a first candidate correspondence table is. The evaluation value may be positively correlated with the quality of the first candidate correspondence table: the better the transmission effect corresponding to the preferred link determined in the first candidate correspondence table, the larger the evaluation value of that table. In some embodiments, the evaluation value may be represented by a number from 0 to 10 or by words such as "excellent", "fair", etc.
In some embodiments, the evaluation value may be determined in various ways. For example, the determination may be performed by manual calculation, or may be performed by using an algorithm model or the like.
In some embodiments, based on the first candidate correspondence table, the table construction module 240 may use the fluency prediction model to predict the estimated fluency of each application after it is added to the transmission link assigned to it in that table, use the overload prediction model to predict the future overload frequency of each transmission link after the applications are added, and determine the evaluation value based on the estimated fluency and the future overload frequency.
For example, the table construction module 240 can determine the evaluation value according to a second preset rule based on the predicted estimated fluency and future overload frequency. For more details on predicting the estimated fluency of each application through the fluency prediction model and the future overload frequency of each transmission link through the overload prediction model, refer to fig. 5 and its associated description. The second preset rule may be set empirically. For example, the second preset rule may be: when the estimated fluency is 0-30 FPS or the future overload frequency is 10 times/hour or more, the evaluation value is 0.3; when the estimated fluency is 30-60 FPS or the future overload frequency is 5-10 times/hour, the evaluation value is 0.6; when the estimated fluency is 60 FPS or above or the future overload frequency is 0-5 times/hour, the evaluation value is 0.9. Based on the estimated fluency and the overload frequency respectively, two evaluation values may be determined for an "application-link pair", and their average may be taken as the final evaluation value of that "application-link pair"; the average or weighted sum of the evaluation values of the "application-link pairs" contained in the first candidate correspondence table may then be taken as the evaluation value of the first candidate correspondence table. When the evaluation values of the "application-link pairs" are weighted and summed to obtain the evaluation value of the first candidate correspondence table, the weights are related to the user usage characteristics.
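Read literally, the second preset rule maps the predicted fluency and the predicted future overload frequency to scores separately and averages them per "application-link pair". A sketch under that reading follows; the band boundaries (for example, exactly 60 FPS) and the function names are assumptions.

```python
# Illustrative scoring following the example bands of the second preset rule.
def fluency_score(fps):
    if fps < 30:
        return 0.3
    if fps < 60:
        return 0.6
    return 0.9            # 60 FPS and above

def overload_score(times_per_hour):
    if times_per_hour >= 10:
        return 0.3
    if times_per_hour >= 5:
        return 0.6
    return 0.9            # 0-5 times/hour

def pair_evaluation(fps, times_per_hour):
    # the average of the two scores is the final evaluation value of one "application-link pair"
    return (fluency_score(fps) + overload_score(times_per_hour)) / 2

print(pair_evaluation(45, 3))   # 0.6 and 0.9 -> 0.75
```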
In some embodiments, the table construction module 240 may determine the evaluation value of each "application-link pair" in each first candidate correspondence table based on the estimated fluency of each application determined as described above, and then determine the evaluation value of each first candidate correspondence table as a weighted sum of the evaluation values of its "application-link pairs", where the weight of each "application-link pair" is related to the user usage characteristics. The user usage characteristics may include the frequency with which the user uses the corresponding application, the average time of a single use, and the like. For example, the more frequently the user uses an application and the longer its average single-use time, the greater the weight the table building module 240 may assign to the evaluation value of that application's "application-link pair".
The specific weight value may be set according to a fourth preset rule. For example, the fourth preset rule may be: when the application's usage frequency is 0-3 times/day, the weight is 0.3; for 3-6 times/day, 0.5; for more than 6 times/day, 0.7; when the single-use average time is 0-1 hour, the weight is 0.3; for 1-2 hours, 0.5; for more than 2 hours, 0.7; and the total weight may be the average of the weight corresponding to the usage frequency and the weight corresponding to the single-use average time. The fourth preset rule may be set empirically.
By way of example only, suppose that in the first candidate correspondence table 1 ((a, 1), (B, 2), (C, 3)) acquired by the table construction module 240, the evaluation value of "application A-link 1" is 0.4, that of "application B-link 2" is 0.8, and that of "application C-link 3" is 0.2; the user's usage frequencies of applications A, B, and C are 4, 1, and 7 times per day, respectively; and their average single-use times are 1.5, 2.5, and 0.5 hours, respectively. According to the fourth preset rule in the foregoing embodiment, the table construction module 240 may determine that the weights corresponding to "application A-link 1", "application B-link 2", and "application C-link 3" are each 0.5, and that the evaluation value of the first candidate correspondence table 1 is 0.7.
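The fourth preset rule and the weighted-sum aggregation of the worked example can be reproduced with the following sketch; the handling of values exactly on a band boundary (3 times/day, 6 times/day, 1 hour, 2 hours) is an assumption, since the description leaves it open.

```python
# Weight of one "application-link pair" from the fourth preset rule (illustrative bands).
def usage_weight(uses_per_day, avg_hours_per_use):
    freq_w = 0.3 if uses_per_day <= 3 else (0.5 if uses_per_day <= 6 else 0.7)
    time_w = 0.3 if avg_hours_per_use <= 1 else (0.5 if avg_hours_per_use <= 2 else 0.7)
    return (freq_w + time_w) / 2

# Evaluation value of a first candidate correspondence table as the weighted sum
# of its "application-link pair" evaluation values.
def table_evaluation(pair_scores, usage):
    """pair_scores: dict app -> evaluation value; usage: dict app -> (uses/day, hours/use)."""
    return sum(score * usage_weight(*usage[app]) for app, score in pair_scores.items())

scores = {"A": 0.4, "B": 0.8, "C": 0.2}
usage = {"A": (4, 1.5), "B": (1, 2.5), "C": (7, 0.5)}
print(table_evaluation(scores, usage))   # 0.5*0.4 + 0.5*0.8 + 0.5*0.2 = 0.7
```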
Step 730, determine a second candidate mapping table.
The second candidate correspondence table may refer to a candidate correspondence table screened based on the evaluation value of the first candidate correspondence table.
In some embodiments, the table construction module 240 may determine a plurality of second candidate correspondence tables from the plurality of first candidate correspondence tables based on the evaluation value corresponding to each first candidate correspondence table. For example, a first candidate correspondence table whose evaluation value is larger than a preset evaluation value may be determined as a second candidate correspondence table, where the preset evaluation value may be a parameter set in advance. By way of example only, if the evaluation value of the first candidate correspondence table 1 ((a, 1), (B, 2), (C, 1)) is 0.6, that of the first candidate correspondence table 2 ((a, 4), (B, 5), (C, 6)) is 0.7, that of the first candidate correspondence table 3 ((a, 4), (B, 4), (C, 6)) is 0.2, and the preset evaluation value is 0.5, the table construction module 240 may screen out the first candidate correspondence table 1 and the first candidate correspondence table 2, whose evaluation values are greater than the preset evaluation value, as second candidate correspondence tables.
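A one-step screening sketch matching the example above (the preset evaluation value 0.5 is taken from that example and is not a required setting):

```python
# Step 730 sketch: keep first candidate correspondence tables whose evaluation
# value exceeds the preset evaluation value; they become second candidate tables.
def screen_second_candidates(scored_tables, preset_value=0.5):
    """scored_tables: iterable of (table, evaluation_value) pairs."""
    return [table for table, value in scored_tables if value > preset_value]

scored = [("table_1", 0.6), ("table_2", 0.7), ("table_3", 0.2)]
print(screen_second_candidates(scored))   # ['table_1', 'table_2']
```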
Step 740, transforming the second candidate mapping table to determine a third candidate mapping table.
The third candidate mapping table may refer to a candidate mapping table after the second candidate mapping table is further processed.
In some embodiments, the third candidate correspondence table may be determined by subjecting the second candidate correspondence table to a transformation process. Wherein the transformation process may include a first transformation and a second transformation.
In some embodiments, the first transformation may include: selecting at least two second candidate corresponding tables from the plurality of second candidate corresponding tables, exchanging binding relations of one or more application-link pairs in the selected at least two second candidate corresponding tables to generate at least two third candidate tables, and taking the third candidate tables as third candidate corresponding tables.
The third candidate table may refer to a candidate table obtained by applying the first transformation to a second candidate correspondence table. For example, the second candidate correspondence table ((a, 1), (B, 2), (C, 1)) is subjected to the first transformation to obtain the third candidate table ((a, 4), (B, 2), (C, 1)).
The first transformation may refer to an operation for exchanging binding relationships of application-link pairs. In some embodiments, the first transformation may be an exchange of transmission links corresponding to different applications in a plurality of different second candidate correspondence tables. For example, the second candidate correspondence table 1 is ((a, 1), (B, 2), (C, 1)), and the second candidate correspondence table 2 is ((a, 4), (B, 5), (C, 6)), and the table construction module 240 may swap the transmission links corresponding to the application a in the second candidate correspondence table 1 and the second candidate correspondence table 2 to generate a third candidate table, such as the third candidate table 1 is ((a, 4), (B, 2), (C, 1)), and the third candidate table 2 is ((a, 1), (B, 5), (C, 6)).
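The first transformation amounts to a crossover-style exchange between two tables. A sketch under the tuple representation used above follows; which application is swapped is left to the caller here, whereas the description also allows targeting a less effective pair.

```python
# First transformation sketch: swap the transmission links bound to one application
# between two second candidate correspondence tables.
def first_transformation(table_a, table_b, app):
    a, b = dict(table_a), dict(table_b)
    a[app], b[app] = b[app], a[app]
    return tuple(a.items()), tuple(b.items())

t1 = (("A", 1), ("B", 2), ("C", 1))
t2 = (("A", 4), ("B", 5), ("C", 6))
print(first_transformation(t1, t2, "A"))
# ((('A', 4), ('B', 2), ('C', 1)), (('A', 1), ('B', 5), ('C', 6)))
```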
In some embodiments, the table construction module 240 may also preferentially swap the less effective "application-link pairs" in the two second candidate correspondence tables to improve the efficiency of determining the preferred link. A less effective "application-link pair" in a second candidate correspondence table may be identified based on a test: for example, if adjusting the transmission link corresponding to one of the applications would greatly improve that application's fluency and reduce the future overload frequency of the original transmission link, the "application-link pair" may be considered less effective.
The second transformation may refer to an operation for updating the binding relationship of the application-link pair. In some embodiments, the second transformation may include: updating the binding relation of at least one application-link pair in the prepared corresponding table to generate at least one third candidate corresponding table; the prepared corresponding table is a second candidate corresponding table or a third candidate table.
The preliminary correspondence table may refer to a correspondence table to which the second conversion is to be performed. In some embodiments, the preliminary correspondence table is a second candidate correspondence table or a third candidate table.
In some embodiments, for each of the plurality of preliminary correspondence tables, the table building module 240 may update the binding relationship of at least one application-link pair in the preliminary correspondence table to generate at least one third candidate correspondence table. For example, if the preliminary correspondence table 1 is ((a, 4), (B, 2), (C, 1)), the transmission link corresponding to the application B can be adjusted, that is, (B, 2) is modified to (B, 3), and the updated third candidate correspondence table is ((a, 4), (B, 3), (C, 1)).
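The second transformation is a mutation-style update of a single binding. A sketch follows; the random choice of the application and of the new link is an assumption, since the description also allows preferentially updating a less effective pair.

```python
import random

# Second transformation sketch: rebind one application in a preliminary
# correspondence table to a different transmission link.
def second_transformation(table, links, app=None, rng=random):
    entries = dict(table)
    app = app or rng.choice(list(entries))
    entries[app] = rng.choice([l for l in links if l != entries[app]])
    return tuple(entries.items())

print(second_transformation((("A", 4), ("B", 2), ("C", 1)), links=[1, 2, 3, 4], app="B"))
# e.g. (('A', 4), ('B', 3), ('C', 1))
```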
In some embodiments, the table building module 240 may preferentially update the less effective "application-link pairs" in the preliminary correspondence table to increase the efficiency of determining the preferred link. For a description of the less effective "application-link pair" see the description of the less effective "application-link pair" in the aforementioned second candidate correspondence table.
In some embodiments, the probability with which the table building module 240 selects an "application-link pair" from the second candidate correspondence table or the preliminary correspondence table for transformation processing may be related to the user usage characteristics corresponding to that "application-link pair". For example, when the table building module 240 selects one of "application A-link 1" and "application B-link 2" to change, if the user usage characteristics indicate that the user uses application A more frequently, the probability that the table building module 240 selects "application A-link 1" for transformation processing is higher.
In some embodiments of the present specification, by relating the selection probability of the "application-link pair" selected for the transformation processing in the second candidate correspondence table or the preliminary correspondence table to the user usage characteristics, the applications more frequently used by the user can be preferentially processed, and the determination of the preferred link can be quickly performed while trying more possible combinations, so that the probability that the applications more frequently used by the user correspond to the preferred link is higher.
In some embodiments, the table construction module 240 may further process the obtained third candidate correspondence table based on the following steps.
Step 750, determining the reference value of the third candidate correspondence table.
The reference value may refer to a probability that any one of the plurality of third candidate correspondence tables is selected as the first candidate correspondence table for the next iteration or as the application link load correspondence table.
In some embodiments, the table construction module 240 may determine the third candidate correspondence table reference value in a variety of ways.
For example, the table construction module 240 may directly take the evaluation value of the third candidate correspondence table as its reference value; for another example, the table building module 240 may determine the reference value of the third candidate correspondence table based on a preset correspondence between reference values and evaluation values. For more on determining the evaluation value, see the related description above.
For another example, the table construction module 240 may process the evaluation value of the third candidate correspondence table based on a program, an algorithm, and the like to obtain the third candidate correspondence table reference value.
For example, the reference value of one of the third candidate correspondence tables may be the ratio of its evaluation value to the sum of the evaluation values of all the third candidate correspondence tables. For example, if there are 2 third candidate correspondence tables in total, the evaluation value of the third candidate correspondence table 1 is 0.3, and the evaluation value of the third candidate correspondence table 2 is 0.2, then the reference value of the third candidate correspondence table 1 is 0.3/(0.3 + 0.2) = 0.6.
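The ratio-based reference value is simply a normalization of the evaluation values, as in the numbers above; the helper name is an assumption.

```python
# Step 750 sketch: normalize evaluation values so each third candidate
# correspondence table gets a selection probability (its reference value).
def reference_values(evaluations):
    total = sum(evaluations.values())
    return {name: value / total for name, value in evaluations.items()}

print(reference_values({"table_1": 0.3, "table_2": 0.2}))
# {'table_1': 0.6, 'table_2': 0.4}
```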
And 760, screening the third candidate corresponding table to obtain the screened third candidate corresponding table.
In some embodiments, the table construction module 240 may filter the third candidate correspondence table based on the reference value of the third candidate correspondence table, and use the filtered third candidate correspondence table as the first candidate correspondence table of the next iteration or for determining the application link load correspondence table.
In some embodiments, the table construction module 240 may determine the first candidate correspondence table to enter the next round from the plurality of third candidate correspondence tables based on the size of the reference value. For example, the reference values may be sorted from large to small, and several third candidate correspondence tables with top ranks may be determined as the first candidate correspondence table for entering the next iteration.
In some embodiments, the table construction module 240 may use the third candidate correspondence table with the reference value greater than the preset reference value as the first candidate correspondence table of the next iteration. For example, the total number of the third candidate correspondence tables is 4, where the reference value of the third candidate correspondence table 1 is 0.1, the reference value of the third candidate correspondence table 2 is 0.8, the reference value of the third candidate correspondence table 3 is 0.7, the reference value of the third candidate correspondence table 4 is 0.2, and the preset reference value thereof is 0.6, and since the reference values of the third candidate correspondence table 2 and the third candidate correspondence table 3 are both greater than the preset reference value, the third candidate correspondence table 2 and the third candidate correspondence table 3 may be used as the first candidate correspondence table of the next round. The preset reference value may be a probability parameter set in advance.
Step 770, determining the application link load correspondence table.
The table constructing module 240 may regard the screened third candidate correspondence table as the first candidate correspondence table of the next round, repeatedly perform steps 720 to 770, and continue the iterative updating until a preset iteration condition is met, whereupon the third candidate correspondence table with the largest reference value across all iterations is determined as the application link load correspondence table.
In some embodiments, the preset iteration condition may include that the number of iteration rounds is not less than a preset round-number value. The preset round-number value can be determined directly from past experience, or determined by testing. For example, a small value (e.g., 50) may be set first and then gradually expanded to a reasonable range based on the iteration results.
In some embodiments, the preset iteration condition may include that the evaluation value of the first candidate correspondence table is not less than a preset evaluation value. The preset evaluation value may be the minimum evaluation value, determined empirically, corresponding to an estimated fluency and future overload frequency of the preferred link that give the client a good application experience. When the evaluation value of the first candidate correspondence table is not less than the preset evaluation value, it indicates that an application link load correspondence table from which the preferred link can be obtained has been generated.
In some embodiments, the preset iteration condition may further include that, in at least two consecutive iterations, a variation range of the evaluation value of the first candidate correspondence table is smaller than a preset variation value. The preset variation value may be a minimum variation requirement that the evaluation value of the first candidate correspondence table needs to satisfy before and after the iteration. If the variation range of the evaluation value of the first candidate correspondence table is smaller than the preset variation value in at least two consecutive iterations, it can be considered that the third candidate correspondence table before and after the iteration has no variation or has small variation, and the iteration can be stopped at this time.
The preset iteration condition may be preset by a user. In some embodiments, the preset iteration condition may include at least one of the above conditions.
In some embodiments of the present specification, the estimated fluency and the future overload frequency are used to determine the evaluation value of the first candidate correspondence table, so that the evaluation value is determined more accurately and the operation efficiency is improved; exchanging or adjusting the binding relationships between applications and transmission links in the first candidate correspondence table over multiple iterations can effectively improve the iteration efficiency, so that the application link load correspondence table can be determined quickly.
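For orientation only, the whole of flow 700 can be strung together as an iteration loop that reuses the hypothetical helpers sketched above (generate_initial_tables, first_transformation, second_transformation). This driver simplifies steps 750-760 by keeping the highest-scoring tables instead of sampling in proportion to the reference values, and all names, thresholds, and the population size are assumptions rather than requirements.

```python
# End-to-end sketch of flow 700 (steps 710-770), not the mandated algorithm.
def determine_load_table(apps, links, evaluate_table,
                         max_rounds=50, target=0.9, keep=4):
    candidates = generate_initial_tables(apps, links, population=8)    # step 710
    best_table, best_score = None, float("-inf")
    for _ in range(max_rounds):                                        # preset round-number condition
        scored = [(t, evaluate_table(t)) for t in candidates]          # step 720
        survivors = ([t for t, s in scored if s > 0.5]
                     or [max(scored, key=lambda x: x[1])[0]])          # step 730
        children = []
        for i in range(0, len(survivors) - 1, 2):                      # step 740: first transformation
            children.extend(first_transformation(survivors[i], survivors[i + 1],
                                                 app=survivors[i][0][0]))
        children.extend(second_transformation(t, links) for t in survivors)   # second transformation
        ranked = sorted(((t, evaluate_table(t)) for t in children),
                        key=lambda x: x[1], reverse=True)              # steps 750-760, simplified
        candidates = [t for t, _ in ranked[:keep]]
        if ranked[0][1] > best_score:
            best_table, best_score = ranked[0]
        if best_score >= target:                                       # preset evaluation condition
            break
    return best_table                                                  # step 770
```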
And step 620, acquiring load information of each transmission link in the application link load corresponding table periodically or when a preset updating condition is met.
In some embodiments, the preset update condition may be manually set empirically, for example, the preset update condition may be that the overload number of a certain transmission link exceeds a third threshold or the future overload frequency of a certain transmission link is higher than a fourth threshold, and the like. The overload number may refer to the number of times that a certain transmission link exceeds the carrying capacity of the transmission link in a certain time period, for example, the overload number of the link 1 in one day may be 4 times, and if the third threshold is 3 times, the preset update condition may be considered to be satisfied. The third threshold and the fourth threshold may be set empirically by the system automatically or manually.
Load information may refer to information relating to the load condition of the transmission link. For example, the load information may be an overload frequency of the transmission link, the number of applications in the transmission link with a binding relationship, and the like.
In some embodiments, the information obtaining module 250 may obtain the load information of each transmission link in the application link load correspondence table periodically or in a plurality of ways when a preset update condition is met.
For example, the information obtaining module 250 may obtain the load information of each transmission link in the application link load corresponding table through an overload prediction model, a client, a storage device, or the like every 1 hour or when a preset update condition is satisfied. For more on load information obtained by the overload prediction model, see fig. 5 and its associated description.
Step 630, updating the binding relationship between each application and the transmission link based on the load information of each transmission link in the application link load corresponding table, so as to update the application link load corresponding table.
In some embodiments, the table updating module 260 may update the binding relationship between each application and the transmission link according to a third preset rule based on the load information of each transmission link in the application link load corresponding table, so as to update the application link load corresponding table.
The third preset rule may be set empirically. For example, the third preset rule may be that, if the future overload frequency of a certain transmission link is higher than a fifth threshold, a binding relationship is established between an application on that link whose estimated fluency is lower than a sixth threshold and a transmission link whose future overload frequency is lower than the fifth threshold. In some embodiments, the fifth threshold should be lower than the fourth threshold.
Illustratively, suppose the third preset rule is: if the future overload frequency of a transmission link is higher than 5 times/hour, establish a binding relationship between an application on that link whose estimated fluency is lower than 60 FPS and a transmission link whose future overload frequency is lower than 5 times/hour. If, in a certain application link load correspondence table ((a, 1), (B, 1), (C, 2)), the future overload frequency of link 1 is 8 times/hour, the future overload frequency of link 2 is 3 times/hour, applications A and B are located on link 1, the estimated fluency of application A is 40 FPS, and the estimated fluency of application B is 70 FPS, then the table updating module 260 may establish a binding relationship between application A on link 1 and link 2, update the application link load correspondence table, and perform steps 620-630 based on the updated application link load correspondence table.
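The worked example above can be reproduced with a small helper; the threshold values are those of the example, and the choice of the first suitable low-overload link is an assumption where the description says only that a lower-overload link is used.

```python
# Step 630 sketch of the third preset rule: if a link's predicted future overload
# frequency exceeds 5 times/hour, rebind its applications whose estimated fluency
# is below 60 FPS to a link whose predicted frequency is below 5 times/hour.
def update_bindings(table, overload, fluency, freq_threshold=5, fps_threshold=60):
    updated = dict(table)
    cool = [l for l, f in overload.items() if f < freq_threshold]
    for app, link in table:
        if (overload.get(link, 0) > freq_threshold
                and fluency.get(app, fps_threshold) < fps_threshold and cool):
            updated[app] = cool[0]
    return tuple(updated.items())

table = (("A", 1), ("B", 1), ("C", 2))
print(update_bindings(table, overload={1: 8, 2: 3}, fluency={"A": 40, "B": 70}))
# (('A', 2), ('B', 1), ('C', 2))
```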
In some embodiments, the binding relationship between each application and the transmission link is updated based on the load information, and the application link load correspondence table is updated, so that the transmission link which is possibly jammed can be adjusted and updated in time, and the reduction of application use experience caused by the fact that a plurality of applications use the same transmission link and the like is avoided.
In some embodiments, by constructing the application link load correspondence table, acquiring the load information, and updating the table periodically or when a preset updating condition is met, the binding relationship between applications and transmission links can be updated in time according to the user's actual usage, a more accurate preferred link can be determined efficiently, the probability of events that degrade user experience such as jamming and frame dropping is reduced, and the user's experience is improved.
One or more embodiments of the present specification provide a traffic management device including a processor for performing a traffic management method.
One or more embodiments of the present specification also provide a computer-readable storage medium storing computer instructions, and when the computer instructions in the storage medium are read by a computer, the computer performs the method for traffic management as described in any one of the above embodiments.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, though not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested in this specification, and are intended to be within the spirit and scope of the exemplary embodiments of this specification.
Also, the description uses specific words to describe embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means a feature, structure, or characteristic described in connection with at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
Additionally, the order in which elements and sequences are described in this specification, the use of numerical letters, or other designations are not intended to limit the order of the processes and methods described in this specification, unless explicitly stated in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the foregoing description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in a claim. Indeed, the claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents of each are hereby incorporated by reference into the specification. Except where the application history document does not conform to or conflict with the contents of the present specification, it is to be understood that the application history document, as used herein in the present specification or appended claims, is intended to define the broadest scope of the present specification (whether presently or later in the specification) rather than the broadest scope of the present specification. It is to be understood that the descriptions, definitions and/or uses of terms in the accompanying materials of the present specification shall control if they are inconsistent or inconsistent with the statements and/or uses of the present specification.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of traffic management, comprising:
acquiring target flow and determining at least one application corresponding to the target flow; the target traffic is traffic to be forwarded received by the access node;
in response to there being a pending application of the at least one application for which a corresponding transmission link is not allocated:
acquiring application characteristics of the application to be processed and link characteristics of at least one transmission link;
and determining a preferred link of the application to be processed based on the application characteristic and the link characteristic.
2. The method of claim 1, the determining a preferred link for the pending application based on the application characteristic and the link characteristic comprising:
determining the estimated flow characteristics of the application to be processed based on the application characteristics;
and determining the preferred link of the application to be processed based on the estimated flow characteristics and the link characteristics.
3. The method of claim 2, the determining a preferred link for the pending application based on the projected traffic characteristics and the link characteristics comprising:
predicting the estimated fluency of the application to be processed after the application to be processed is added into the candidate link through a fluency prediction model based on the estimated flow characteristics and the link characteristics; the candidate link is any one of the at least one transmission link; the fluency prediction model is a machine learning model;
determining the preferred link for the pending application based on the estimated fluency for each of the at least one transmission link for the pending application.
4. The method of claim 1, further comprising:
constructing an application link load corresponding table based on each application in the at least one application and a corresponding transmission link thereof; the application link load corresponding table comprises the binding relationship between each application and a transmission link;
acquiring load information of each transmission link in the application link load corresponding table periodically or when a preset updating condition is met;
and updating the binding relationship between each application and the transmission link based on the load information of each transmission link in the application link load corresponding table so as to update the application link load corresponding table.
5. A system for traffic management, comprising:
a traffic obtaining module, configured to obtain a target traffic and determine at least one application corresponding to the target traffic, where the target traffic is a traffic to be forwarded and received by an access node;
the system comprises a characteristic acquisition module, a transmission link acquisition module and a processing module, wherein the characteristic acquisition module is used for acquiring the application characteristic of the application to be processed and the link characteristic of at least one transmission link when the application to be processed which is not allocated with the corresponding transmission link exists in the at least one application;
and the link determining module is used for determining the preferred link of the application to be processed based on the application characteristic and the link characteristic.
6. The system of claim 5, the link determination module further to:
determining the pre-estimated flow characteristics of the application to be processed based on the application characteristics;
and determining the preferred link of the application to be processed based on the estimated flow characteristics and the link characteristics.
7. The system of claim 6, the link determination module further to:
predicting the estimated fluency of the application to be processed after the application to be processed is added into the candidate link through a fluency prediction model based on the estimated flow characteristics and the link characteristics; the candidate link is any one of the at least one transmission link; the fluency prediction model is a machine learning model;
determining the preferred link for the application to be processed based on the estimated fluency of the application to be processed for each of the at least one transmission link.
8. The system of claim 5, further comprising:
the table building module is used for building an application link load corresponding table based on each application in the at least one application and the corresponding transmission link thereof; the application link load corresponding table comprises the binding relationship between each application and a transmission link;
the information acquisition module is used for acquiring the load information of each transmission link in the application link load corresponding table periodically or when a preset updating condition is met;
and the table updating module is used for updating the binding relationship between each application and the transmission link based on the load information of each transmission link in the application link load corresponding table so as to update the application link load corresponding table.
9. A traffic management apparatus, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any one of claims 1-4.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 4.
CN202211342571.4A 2022-10-31 2022-10-31 Method and system for flow management Pending CN115766598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211342571.4A CN115766598A (en) 2022-10-31 2022-10-31 Method and system for flow management


Publications (1)

Publication Number Publication Date
CN115766598A true CN115766598A (en) 2023-03-07

Family

ID=85354305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211342571.4A Pending CN115766598A (en) 2022-10-31 2022-10-31 Method and system for flow management

Country Status (1)

Country Link
CN (1) CN115766598A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 10g27, No. 2299, Yan'an west road, Changning District, Shanghai 200336

Applicant after: Xingrong (Shanghai) Information Technology Co.,Ltd.

Address before: Room 10g27, No. 2299, Yan'an west road, Changning District, Shanghai 200336

Applicant before: SHANGHAI XINGRONG INFORMATION TECHNOLOGY Co.,Ltd.
