CN103023801A - Network intermediate node cache optimization method based on flow characteristic analysis - Google Patents

Network intermediate node cache optimization method based on flow characteristic analysis Download PDF

Info

Publication number
CN103023801A
CN103023801A (application number CN201210506024.5A; granted publication CN103023801B)
Authority
CN
China
Prior art keywords
cache
flow
traffic
classification
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105060245A
Other languages
Chinese (zh)
Other versions
CN103023801B (en
Inventor
赵进
余浩淼
王新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201210506024.5A priority Critical patent/CN103023801B/en
Publication of CN103023801A publication Critical patent/CN103023801A/en
Application granted granted Critical
Publication of CN103023801B publication Critical patent/CN103023801B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the field of computer network communication and particularly relates to a network intermediate node cache optimization method based on traffic characteristic analysis. When traffic passes through the intermediate node, it is first classified; then, within the buffer space allocated to its class, the system decides how the traffic is handled according to the caching state of that space, in combination with a least recently used (LRU) algorithm. The system periodically updates the traffic classification model and redistributes buffer space across the new classes. The method focuses on the transparency problem of intermediate-node caching strategies: the cache algorithm is designed independently of any specific user-layer protocol, and the efficiency of a strategy satisfying this programming transparency is optimized.

Description

Network intermediate node cache optimization method based on traffic characteristic analysis
Technical field
The invention belongs to the field of computer network communication, and specifically relates to a network intermediate node cache optimization method based on traffic characteristic analysis.
Background technology
Intermediate-node caching is a long-established technique for coping with scarce network resources: by caching data that is likely to be accessed repeatedly at an intermediate node, network load can be reduced considerably, so that network resources are used rationally and better network service quality is obtained.
Traditional intermediate-node caching strategies, however, do not follow the basic principle of intermediate-node programming transparency: the usual design pattern targets a particular known user-layer protocol. With the explosive growth of network applications, a large number of proprietary protocols and multiplexed uses of common protocols have appeared in the network, so traditional caching strategies can no longer cover today's vast number of network applications, and their effectiveness is greatly diminished.
Traffic characteristic analysis has been studied and applied extensively in the information security field. From the perspective of deep packet inspection, algorithms that automatically generate protocol state machines have been proposed, which can build preliminary state machines for non-public protocols and recover their interaction processes. Traffic classification techniques have likewise introduced many methods for handling proprietary protocols: by analyzing the features of data flows and clustering them, the protocol to which the data in a flow belongs can be distinguished. These results show that programming transparency can be achieved at the intermediate node, i.e. the design and optimization of the caching strategy no longer depends on the characteristics of any specific protocol.
Summary of the invention
The problem solved by the present invention is to make the intermediate-node caching strategy transparent: by analyzing the distinct characteristics of different traffic, the cache resource allocation strategy is adjusted so as to raise the cache hit rate.
The intermediate-node cache optimization method based on traffic characteristic analysis provided by the invention does not target any specific user-layer protocol. Instead, by analyzing the characteristic information of traffic passing through the intermediate node and combining it with cache hit-rate statistics, a traffic-characteristic cache prediction model is built that predicts the cache weight of a set of flows with a given type of features, and the corresponding cache size is then allocated to that set.
When traffic passes through the intermediate node, it is first classified; then, within the buffer space allocated to that class, the system decides, according to the caching state of that space and in combination with the LRU algorithm, how the traffic should be handled. Periodically, the system updates the traffic classification model and redistributes cache space across the new classes, ensuring that the caching strategy remains strongly correlated with the node's recent traffic conditions.
The intermediate-node traffic caching optimization method provided by the invention exploits the fact that traffic with different characteristics deserves buffers of different sizes: data flows with similar traffic characteristics are clustered, the cache value of each cluster is analyzed and quantified, and the cache resources assigned to the flows of each cluster are then adjusted according to the quantified values, thereby maximizing cache resource utilization.
The concrete steps of the intermediate-node cache optimization method based on traffic characteristic analysis provided by the invention are as follows:
1) Deploy the system at the intermediate node and set the system parameters: the number of classes, the hit-rate threshold, and the time threshold. The user can change these key parameters according to actual needs at deployment time.
2) According to the number of classes set in step 1), divide the buffer area evenly into cache spaces of equal size. When traffic passes through the intermediate node, the traffic statistics module of the system collects the traffic characteristic information, quantizes and standardizes it, and stores the resulting vectors in a database. At this stage, the system decides according to the LRU algorithm whether the current traffic should be stored in the buffer.
3) As traffic continues to pass through the intermediate node and be cached, and users keep requesting data from the cache, once the overall cache hit rate reaches the hit-rate threshold preset in step 1), the system classifies the quantized feature vectors in the database, obtains a number of traffic classes, and builds the corresponding traffic classification model; it then computes the cache weight of each traffic class and, from the resulting class weights, builds the cache space allocation model.
4) After the models are built, the system redistributes the cache space according to the modeling results. When new traffic enters the system, the new traffic classification model determines which class the traffic belongs to; once the class is determined, the LRU algorithm decides, within the corresponding cache space, how the traffic is handled.
5) After the system enters normal operation, the user can dynamically adjust the previously set hit-rate and time thresholds and thereby the modeling frequency. The traffic passing through the intermediate node is periodically reclassified, the traffic classification model is updated, the cache weight of each class is recomputed, and the cache space allocation model is refreshed, ensuring that the cache allocation strategy remains closely tied to the node's recent traffic and that cache resource utilization is maximized.
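Steps 2) and 4) above — equal-sized per-class buffers, each managed independently by LRU — can be sketched as follows. This is a minimal illustration under assumptions of our own (a byte-based capacity model, in-memory storage, and the class/key names used here); it is not the patent's implementation.

```python
from collections import OrderedDict

class ClassPartitionedLRUCache:
    """Per-class cache partitions, each evicted least-recently-used first."""

    def __init__(self, class_capacities):
        # class_capacities: {class_id: capacity in bytes} (assumed model)
        self.capacity = dict(class_capacities)
        self.buffers = {cid: OrderedDict() for cid in class_capacities}
        self.used = {cid: 0 for cid in class_capacities}

    def get(self, class_id, key):
        buf = self.buffers[class_id]
        if key in buf:
            buf.move_to_end(key)   # mark as most recently used
            return buf[key]
        return None                # cache miss

    def put(self, class_id, key, data):
        buf = self.buffers[class_id]
        size = len(data)
        if size > self.capacity[class_id]:
            return                 # item cannot fit in this partition at all
        if key in buf:
            self.used[class_id] -= len(buf.pop(key))
        # evict LRU entries of this class until the new item fits
        while self.used[class_id] + size > self.capacity[class_id]:
            _, old = buf.popitem(last=False)
            self.used[class_id] -= len(old)
        buf[key] = data
        self.used[class_id] += size
```

Note that eviction only ever touches the partition of the class being inserted into, which is what keeps the classes' cache shares independent between remodeling rounds.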
In the present invention, whether the intermediate node is a gateway proxy, a router, or similar, its main task is to provide the corresponding network service; the caching function therefore must not consume so many computing resources that it affects the node's other primary services.
The system architecture is shown in Fig. 1. The whole system consists of three parts: a model generation module, a prediction module, and a traffic statistics module.
The model generation module uses the collected historical traffic characteristic data and historical per-class hit-rate data to generate the traffic classification model and the cache space allocation model.
The prediction module evaluates new traffic to determine which class it belongs to, and decides according to the caching policy of that class's region whether to load the traffic into the cache, possibly replacing less recently used flows.
The traffic statistics module is responsible for the data collection needed for modeling, mainly the extraction and quantization of traffic characteristics and the hit-rate information of each cached data flow. This information must be extracted and stored from the traffic as promptly as possible, so that it can later be used to build the models and to classify traffic with them.
In the present invention, the clustering flow is shown in Fig. 2.
The invention uses a clustering algorithm to obtain the traffic classification model; the main input of the clustering module is the quantized feature vectors collected by the traffic statistics module. The crucial choices are which features to quantize and which algorithm to use to build the clustering model. The present invention adopts the currently popular K-means clustering algorithm, which has low time complexity and is simple to implement.
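A minimal K-means sketch over standardized traffic feature vectors follows. The patent only names the algorithm; the initialization strategy, iteration count, and distance measure below are ordinary K-means choices of ours, not details from the patent.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Cluster feature vectors into k classes with plain K-means
    (squared Euclidean distance, mean-based center updates)."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        # assign each vector to its nearest current center
        clusters = [[] for _ in range(k)]
        for v in vectors:
            d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centers]
            clusters[d.index(min(d))].append(v)
        # recompute each center as the per-dimension mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(x) / len(cl) for x in zip(*cl)]
    labels = [min(range(k),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(v, centers[i])))
              for v in vectors]
    return centers, labels
```

In a deployment one would more likely use a library implementation (e.g. scikit-learn's `KMeans`) with k set to the configured number of classes.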
In the present invention, the traffic characteristic information is collected by network measurement tools. The selected traffic feature tuple comprises: connection interaction time statistics, occupied bandwidth, packet length within the flow, traffic data volume, and interaction interval. The values of these features are closely related to the cache value of the traffic: they directly reflect how the traffic occupies network bandwidth resources and how much data the flow carries. Experiments show that flows that occupy more bandwidth and transfer more data per connection on average have higher cache value.
Since model building depends on the corresponding data collection period, different modeling periods affect the precision of the clustering model: if the period is too long, the data used for modeling may no longer be recent, valid traffic statistics; if it is too short, some data that would have been useful cannot take effect, and frequent modeling also costs considerable computation. Therefore, both a time threshold and a hit-rate threshold are set: the model is rebuilt at the preset period, and is also rebuilt automatically whenever the hit rate falls below a certain value, making the modeling frequency more adaptive.
In the present invention, the cache weight determined by the traffic characteristics in step 3) is obtained by computing from the weighted-average feature vector values within each cluster.
First, all data waiting to be fed into the model are standardized. Suppose the model input data consist of n groups, with m features per vector; each feature value is standardized by the formula
x′_ij = (x_ij − μ_j) / σ_j
where x′_ij is the standardized value of feature j in group i, x_ij is the initial value of feature j in group i, and μ_j and σ_j are respectively the mean and the standard deviation of feature j over all collected raw data.
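The column-wise z-score standardization just described can be written directly. (The patent does not say whether the population or the sample standard deviation is used; the population form is assumed here, as is the zero-fill for constant columns.)

```python
import statistics

def standardize(groups):
    """Standardize an n-groups-by-m-features matrix column by column:
    x'_ij = (x_ij - mu_j) / sigma_j."""
    cols = list(zip(*groups))
    mu = [statistics.mean(c) for c in cols]
    sigma = [statistics.pstdev(c) for c in cols]  # population std (assumed)
    return [[(x - m) / s if s else 0.0            # guard constant columns
             for x, m, s in zip(row, mu, sigma)]
            for row in groups]
```

After standardization every column has zero mean, so features measured in very different units (bytes, seconds, packets) become comparable before clustering.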
After all data vectors have been standardized, the standardized class mean of each cluster is computed, giving a new vector x̄_i for each class i.
The cache weight determined by the traffic characteristics of each class is then obtained by the formula
y_i = (1/k) Σ_{j=1..k} x̄_ij
where y_i is the feature-determined cache weight of class i and x̄_ij (j = 1, 2, …, k) is the j-th component of the standardized class mean of class i, with k components in total. Using this formula, the feature-determined cache weights of all classes are finally obtained.
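A sketch of this per-class weight, under the assumption stated above that the k standardized class-mean components are combined by a plain average (the original formula appears only as an image, so the exact combination rule is reconstructed):

```python
def feature_cache_weight(class_mean):
    """y_i: average of the k standardized class-mean components of class i
    (averaging rule is an assumption; the patent text only says y_i is
    computed from the standardized class means)."""
    return sum(class_mean) / len(class_mean)
```

A class whose standardized means are above zero on cache-relevant features (bandwidth, per-connection data volume) thus receives a positive weight.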
In step 5) of the present invention, a cache allocation weight algorithm is adopted: the cache weight determined by the traffic characteristics of each class is combined with the cache weight given by the mean hit rate of that class's cache space over a recent period.
The basic idea is to add the feature-determined cache weight to the standardized mean hit rate of the class; the resulting value is the cache allocation weight of that class, and the corresponding allocation proportion is this weight as a percentage of the sum of all class weights. Mathematically:
w_i = y_i + x_i
p_i = w_i / Σ_{j=1..n} w_j
where y_i is the feature-determined cache weight of traffic group i, x_i is the hit-rate-determined cache weight of group i, w_i is the cache allocation weight of group i, p_i is the buffer percentage obtained by group i, and n is the total number of groups.
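These two formulas translate directly into code. (Note that standardized quantities can be negative; a production system would presumably clamp or shift the weights, which the patent does not discuss — the sketch assumes non-negative sums.)

```python
def allocation_percentages(y, x):
    """Combine feature weights y_i and hit-rate weights x_i into buffer
    shares: w_i = y_i + x_i, p_i = w_i / sum_j w_j."""
    w = [yi + xi for yi, xi in zip(y, x)]
    total = sum(w)
    return [wi / total for wi in w]
```

The resulting p_i sum to 1 and are multiplied by the total buffer size to resize each class's partition after every remodeling round.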
In the present invention, because the characteristics of traffic are independent of the user-layer protocol details of the application generating it, the transparency of the deployed strategy is guaranteed.
Description of drawings
Fig. 1 is the overall system architecture diagram.
Fig. 2 is the clustering flow chart.
Fig. 3 is the overall system flow chart.
Embodiment
The present invention is mainly used in intermediate network devices with computing capability, such as routers and gateway proxies, serving an internal network that can reach the outside network only through this node. The device must have sufficient storage and computing capacity to ensure that the algorithms of each module run smoothly. The overall system flow is shown in Fig. 3.
Here, suppose a user needs to install the system of the present invention at the intermediate node of a corporate intranet; the system will be installed on the gateway proxy server connecting this intranet with the external network. According to the scale of the intranet and the usual volume and frequency of its interaction with the external network, the gateway proxy server allocates a certain amount of storage as the network traffic cache area; for example, the buffer is initially set to 100 MB.
Step 1) The user deploys the system on this gateway proxy server. During deployment, the user sets the number of classes, the hit-rate threshold, and the time threshold of the system. Based on earlier tests of a typical system, these three values are preset to 10, 50%, and 1 hour. The user may adjust them according to the needs of his own network conditions, thereby controlling, during operation, the number of classes and the modeling frequency.
Step 2) Before the system starts, no data flows through the gateway, so the classification model and the space allocation model cannot yet be built from traffic data. The cache space is therefore divided evenly according to the preset number of classes; with the default of 10 classes, each class gets 10 MB.
Step 3) The system starts running. Intranet users begin requesting external data. Until both the hit-rate threshold and the time threshold are reached, the system neither classifies the data nor partitions storage by class: caching follows the LRU algorithm over the whole 100 MB buffer. Meanwhile, the traffic statistics module records all traffic data passing through the gateway and stores it in the database.
Step 4) As the system continues to run, the hit-rate threshold and the time threshold are eventually reached, at which point the system builds the traffic classification model from the stored traffic data using the K-means algorithm.
Step 5) After the traffic classification model is built, the system classifies all traffic records that have passed through the gateway (including data currently cached and data no longer cached). For the data in each class, the system applies the algorithm described above: all model input data are first standardized with the formula x′_ij = (x_ij − μ_j) / σ_j, and the cache weight determined by the traffic characteristics of each class is then computed from the standardized class means. In this example there are 10 classes, yielding 10 values y_i.
Step 6) Using the 10 obtained values y_i, and since no per-class hit-rate information exists yet, the total cache allocation weights are computed as w_i = y_i + x_i with x_i = 0 (no hit-rate data at this point). The formula p_i = w_i / Σ_j w_j then gives the buffer percentage obtained by each class, i.e. the cache allocation model.
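As a numeric illustration of this first modeling round: with all x_i = 0 the allocation reduces to p_i = y_i / Σ_j y_j. The y values below are hypothetical — the patent gives no concrete weights — and the 100 MB buffer follows the example above.

```python
# Hypothetical first-round weights for the 10 classes; all hit-rate
# weights x_i are zero because no per-class hit statistics exist yet.
y = [0.5, 1.5, 1.0, 2.0, 0.5, 1.0, 1.5, 0.5, 1.0, 0.5]   # assumed values
x = [0.0] * 10
w = [yi + xi for yi, xi in zip(y, x)]
shares_mb = [100 * wi / sum(w) for wi in w]               # per-class MB
```

With these numbers, class 3 (y = 2.0 out of a total of 10.0) receives 20 MB, while classes with y = 0.5 receive 5 MB each, instead of the uniform 10 MB of the bootstrap phase.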
Step 7) Once the models are built, the system immediately redistributes the buffer space according to the modeling results and the proportion obtained by each class. As the system continues to run, new traffic passing through the gateway proxy is assigned to a class by the established classification model; once the class is determined, the LRU algorithm governs caching within the space allocated to that class.
Step 8) When the system again reaches either the hit-rate threshold or the time threshold, it repeats the process of steps 5) and 6) to obtain a new cache allocation model; this time, since each class now has hit-rate data, the x_i in the formula w_i = y_i + x_i no longer equal 0.
While the system runs, the user can adjust the hit-rate threshold and the time threshold according to its behavior: if the modeling period is too short and the frequency too high, increase the time threshold and lower the hit-rate value; if the period is too long and the frequency too low, do the opposite.
Note that the caching of traffic and the storage of its characteristics are independent: a flow may be replaced in the buffer while its characteristic information remains stored in the traffic statistics module for use in the next modeling round. The user can dynamically tune the modeling frequency through the two thresholds, so that the classification frequency better matches the traffic characteristics the system faces and system utilization is maximized.

Claims (8)

1. A network intermediate node cache optimization method based on traffic characteristic analysis, characterized in that, by analyzing the characteristic information of traffic passing through the intermediate node and combining it with cache hit-rate statistics, a corresponding traffic-characteristic cache prediction model is built and corresponding cache space is allocated; the concrete steps are as follows:
1) deploying the system at the intermediate node requiring caching, and setting the system parameters to be used during operation, including the number of classes, the default hit rate, and the time threshold;
2) according to the number of classes set in step 1), dividing the buffer area evenly into cache spaces of equal size; when traffic passes through the intermediate node, the traffic statistics module collects the traffic characteristic information, quantizes and standardizes it, and stores the resulting vectors in a database; at this stage, the system decides according to the least recently used (LRU) algorithm whether the current traffic should be stored in the buffer;
3) as traffic continues to pass through the intermediate node and be cached, and users keep requesting data from the cache, once the overall cache hit rate reaches the hit-rate threshold preset in step 1), classifying the quantized feature vectors in the database to obtain a number of traffic classes, and building the corresponding traffic classification model; then computing the cache weight of each traffic class and, from the resulting class weights, building the cache space allocation model;
4) after the models are built, redistributing the cache space according to the modeling results; when new traffic enters the system, determining according to the new traffic classification model which class the traffic belongs to; once the class is determined, deciding within the corresponding cache space, according to the LRU algorithm, how the traffic is handled;
5) after the system enters normal operation, allowing the user to dynamically adjust the previously set hit-rate and time thresholds and thereby the modeling frequency; periodically reclassifying the traffic passing through the intermediate node, updating the traffic classification model, recomputing the cache weight of each traffic class, and updating the cache space allocation model, thereby ensuring that the cache allocation strategy remains closely tied to the node's recent traffic and improving cache resource utilization.
2. The method according to claim 1, characterized in that the system mainly consists of a model generation module, a prediction module, and a system data module.
3. The method according to claim 1, characterized in that the intermediate node is a router or a gateway proxy.
4. The intermediate node cache optimization method according to claim 1, characterized in that the traffic characteristic information in step 1) comprises: connection interaction time statistics, occupied bandwidth, packet length within the flow, traffic data volume, and interaction interval; the traffic characteristic information is collected by network measurement tools.
5. The intermediate node cache optimization method according to claim 1, characterized in that the traffic characteristics of the data in step 2) are quantized and standardized using the formula
x′_ij = (x_ij − μ_j) / σ_j
where x′_ij is the final standardized value of feature j in group i, x_ij is the initial value of feature j in group i, and μ_j and σ_j are respectively the mean and the standard deviation of feature j over all collected raw data.
6. The intermediate node cache optimization method according to claim 1, characterized in that the classification of traffic in steps 3) and 5) adopts the K-means clustering algorithm.
7. The intermediate node cache optimization method according to claim 1, characterized in that the cache weight of each traffic class in step 3) is computed by the formula
y_i = (1/k) Σ_{j=1..k} x̄_ij
where y_i is the feature-determined cache weight of class i and x̄_ij (j = 1, 2, …, k) is the j-th component of the standardized class mean of class i, with k components in total.
8. The intermediate node cache optimization method according to claim 1, characterized in that the cache allocation weight algorithm in step 5) combines the feature-determined cache weight with the traffic hit rate to determine each class's cache share, according to the formulas
w_i = y_i + x_i
p_i = w_i / Σ_{j=1..n} w_j
where y_i is the feature-determined cache weight of traffic group i, x_i is the hit-rate-determined cache weight of group i, w_i is the cache allocation weight of group i, p_i is the buffer percentage obtained by group i, and n is the total number of groups.
CN201210506024.5A 2012-12-03 2012-12-03 A kind of network intermediate node cache optimization method analyzed based on traffic characteristic Expired - Fee Related CN103023801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210506024.5A CN103023801B (en) 2012-12-03 2012-12-03 A kind of network intermediate node cache optimization method analyzed based on traffic characteristic


Publications (2)

Publication Number Publication Date
CN103023801A true CN103023801A (en) 2013-04-03
CN103023801B CN103023801B (en) 2016-02-24

Family

ID=47971944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210506024.5A Expired - Fee Related CN103023801B (en) 2012-12-03 2012-12-03 A kind of network intermediate node cache optimization method analyzed based on traffic characteristic

Country Status (1)

Country Link
CN (1) CN103023801B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105099732A (en) * 2014-04-28 2015-11-25 华为技术有限公司 Abnormal IP data flow identification method, device and system
CN105227396A (en) * 2015-09-01 2016-01-06 厦门大学 A kind of inferior commending contents dissemination system towards mobile communications network and method thereof
CN106021126A (en) * 2016-05-31 2016-10-12 腾讯科技(深圳)有限公司 Cache data processing method, server and configuration device
US9923794B2 (en) 2014-04-28 2018-03-20 Huawei Technologies Co., Ltd. Method, apparatus, and system for identifying abnormal IP data stream
CN107943720A (en) * 2017-11-29 2018-04-20 武汉理工大学 Algorithm is optimized based on the LRU cache of file income and priority weighting in mixed cloud
CN108696446A (en) * 2018-07-30 2018-10-23 网宿科技股份有限公司 A kind of update method of traffic characteristic information, device and Centroid server
CN108880913A (en) * 2018-07-30 2018-11-23 网宿科技股份有限公司 A kind of management method of traffic characteristic, device and central node server
WO2020056633A1 (en) * 2018-09-19 2020-03-26 华为技术有限公司 Method for estimating network rate and estimation device
CN110943883A (en) * 2019-11-13 2020-03-31 深圳市东进技术股份有限公司 Network flow statistical method, system, gateway and computer readable storage medium
CN111030922A (en) * 2019-12-17 2020-04-17 腾讯云计算(北京)有限责任公司 Session display method and device in instant messaging, storage medium and electronic device
CN112152939A (en) * 2020-09-24 2020-12-29 宁波大学 Double-queue cache management method for inhibiting non-response flow and service differentiation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070226803A1 (en) * 2006-03-22 2007-09-27 Woonyon Kim System and method for detecting internet worm traffics through classification of traffic characteristics by types
US20080130497A1 (en) * 2006-12-01 2008-06-05 Electronics And Telecommunications Research Institute Apparatus and method for merging internet traffic mirrored from multiple links
CN102394827A (en) * 2011-11-09 2012-03-28 浙江万里学院 Hierarchical classification method for internet flow


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG Changyi et al.: "Content distribution strategies for P2P video-on-demand" (P2P视频点播内容分发策略), Journal of Software (软件学报) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105099732B (en) * 2014-04-28 2018-11-20 华为技术有限公司 Method, apparatus, and system for identifying abnormal IP traffic
US9923794B2 (en) 2014-04-28 2018-03-20 Huawei Technologies Co., Ltd. Method, apparatus, and system for identifying abnormal IP data stream
CN105099732A (en) * 2014-04-28 2015-11-25 华为技术有限公司 Abnormal IP data flow identification method, device, and system
CN105227396A (en) * 2015-09-01 2016-01-06 厦门大学 Suboptimal content recommendation and distribution system for mobile communication networks and method thereof
CN105227396B (en) * 2015-09-01 2018-09-18 厦门大学 Suboptimal content recommendation and distribution system for mobile communication networks and method thereof
CN106021126A (en) * 2016-05-31 2016-10-12 腾讯科技(深圳)有限公司 Cache data processing method, server and configuration device
CN107943720A (en) * 2017-11-29 2018-04-20 武汉理工大学 LRU cache optimization algorithm based on file benefit and priority weighting in a hybrid cloud
CN108880913A (en) * 2018-07-30 2018-11-23 网宿科技股份有限公司 Traffic characteristic management method and apparatus, and central node server
CN108696446A (en) * 2018-07-30 2018-10-23 网宿科技股份有限公司 Traffic characteristic information update method and apparatus, and central node server
CN108880913B (en) * 2018-07-30 2020-01-31 网宿科技股份有限公司 Traffic characteristic management method and apparatus, and central node server
WO2020024402A1 (en) * 2018-07-30 2020-02-06 网宿科技股份有限公司 Traffic feature management method and apparatus, and central node server
CN108696446B (en) * 2018-07-30 2022-01-25 网宿科技股份有限公司 Traffic characteristic information update method and apparatus, and central node server
WO2020056633A1 (en) * 2018-09-19 2020-03-26 华为技术有限公司 Method for estimating network rate and estimation device
CN110943883A (en) * 2019-11-13 2020-03-31 深圳市东进技术股份有限公司 Network traffic statistics method, system, gateway, and computer-readable storage medium
CN110943883B (en) * 2019-11-13 2023-01-31 深圳市东进技术股份有限公司 Network traffic statistics method, system, gateway, and computer-readable storage medium
CN111030922A (en) * 2019-12-17 2020-04-17 腾讯云计算(北京)有限责任公司 Session display method and device in instant messaging, storage medium and electronic device
CN112152939A (en) * 2020-09-24 2020-12-29 宁波大学 Dual-queue cache management method for suppressing unresponsive flows and differentiating services
CN112152939B (en) * 2020-09-24 2022-05-17 宁波大学 Dual-queue cache management method for suppressing unresponsive flows and differentiating services

Also Published As

Publication number Publication date
CN103023801B (en) 2016-02-24

Similar Documents

Publication Publication Date Title
CN103023801B (en) Network intermediate node cache optimization method based on traffic characteristic analysis
Lai et al. Oort: Efficient federated learning via guided participant selection
CN102081622B (en) Method and device for evaluating system health degree
WO2023103349A1 (en) Load adjustment method, management node, and storage medium
Fu et al. Layered virtual machine migration algorithm for network resource balancing in cloud computing
CN103516807A (en) Cloud computing platform server load balancing system and method
CN106793031B (en) Smart phone energy consumption optimization method based on set competitive optimization algorithm
CN105975345B (en) Dynamic balanced memory management method for video frame data based on distributed memory
CN104063501B (en) Replica balancing method based on HDFS
CN102025732B (en) Dynamic adaptive cognitive network quality of service (QoS) mapping method
CN103207920A (en) Parallel metadata acquisition system
CN115392481A (en) Federal learning efficient communication method based on real-time response time balancing
CN110061881A (en) Energy-consumption-aware virtual network mapping algorithm for the Internet of Things
Li et al. Scalable replica selection based on node service capability for improving data access performance in edge computing environment
CN103916478B (en) Method and apparatus for streaming construction of data cubes based on a distributed system
CN105022823B (en) Cloud service performance early-warning event generation method based on data mining
CN116050540B (en) Self-adaptive federal edge learning method based on joint bi-dimensional user scheduling
CN115718644A (en) Computing task cross-region migration method and system for cloud data center
CN105227396B (en) Suboptimal content recommendation and distribution system for mobile communication networks and method thereof
CN106528804A (en) User grouping method based on fuzzy clustering
CN105393518B (en) Distributed cache control method and device
CN110597598B (en) Control method for virtual machine migration in cloud environment
CN115525230A (en) Storage resource allocation method and device, storage medium and electronic equipment
Cen et al. Developing a disaster surveillance system based on wireless sensor network and cloud platform
CN103118102A (en) System and method for analyzing and controlling spatial data access patterns in a cloud computing environment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160224
Termination date: 20191203