CN104507124A - Management method for base station cache and user access processing method - Google Patents

Management method for base station cache and user access processing method

Info

Publication number
CN104507124A
CN104507124A
Authority
CN
China
Prior art keywords
request
content
node
base station node
Prior art date
Legal status: Pending
Application number
CN201410817493.8A
Other languages
Chinese (zh)
Inventor
Li Yang (李杨)
Xin Yonghui (辛永辉)
Lin Tao (林涛)
Zhou Xu (周旭)
Current Assignee
Institute of Acoustics CAS
Original Assignee
Institute of Acoustics CAS
Priority date
Filing date
Publication date
Application filed by Institute of Acoustics CAS filed Critical Institute of Acoustics CAS
Priority to CN201410817493.8A
Publication of CN104507124A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a management method for a base station cache. The method comprises the following steps: step 101), generating a content popularity table on the centralized gateway controller node; step 102), dividing the local cache of each base station node into a shared cache and a non-shared cache; step 103), storing the requested contents of the content popularity table into the local cache of each base station node according to their rank; step 104), establishing a pending request table on the centralized gateway controller node and on each base station node. On the basis of this cache management method, the invention further provides a processing method for user access, realized through a request aggregation and asynchronous multicast strategy. The disclosed management method lowers the redundancy of the base station caches, increases cache utilization, reduces repeated traffic inside and outside the network, speeds up network response, and improves the user's browsing experience.

Description

Management method for a base station cache and processing method for user access
Technical field
The present invention relates to the technical field of mobile networks, and in particular to a management method for a base station cache and a processing method for user access.
Background technology
With the worldwide growth in the number of smartphones and tablets, the volume of Internet traffic generated by mobile devices (Mobile Device, MD) is expected to increase rapidly (document [1]: White Paper, "Cisco Visual Networking Index: Global Mobile Data," 2010-2015). When an MD accesses Internet content, that content must be obtained from the origin server over the mobile network. A content delivery network (Content Delivery Network, CDN) adds a new layer to the existing Internet architecture that publishes the content of a website to the network "edge" closest to the users, so that a user can obtain the required content nearby. This relieves Internet congestion and improves the response speed of user access to websites, comprehensively addressing the slow responses caused by small network bandwidth, large user access volume, and the uneven distribution of network points of presence.
Document [2] (G. Pallis, A. Vakali, "Insight and perspectives for content delivery networks," Communications of the ACM, vol. 49, issue 1, January 2006) and document [3] (Akamai Technologies, http://www.akamai.com) point out that a CDN helps to reduce Internet bandwidth consumption and the associated delay and jitter. However, before requested content reaches an MD, it must pass through the cellular operator's core network (Core Network, CN) and radio access network (Radio Access Network, RAN). If all requests obtain content from the CDN, great traffic pressure is placed on the operator's CN and RAN; under limited network capacity, a large number of concurrent requests easily causes network congestion and significantly increases network delay. In addition, today's mobile networks are evolving from 3G to LTE, and in practical network configurations a centralized network gateway controller (SAE-GW) can directly control more base stations (Evolved Node B, eNodeB) than before. Because RAN edge caches are much smaller than the CDN in the Internet, content should be stored cooperatively across these numerous eNodeBs so that a greater variety of content is stored in the RAN. The gateway controller can then use these storage resources in a peer-to-peer manner to serve the requests of the base station nodes in its area without fetching from the CDN in the Internet, thus improving overall service performance. To support the largest possible number of requests in the RAN, researchers have proposed some typical optimization-based cache policies:
Non-cooperative cache policy: all base station nodes work independently, and each base station node only processes the requests of the users it serves. If the local cache holds the content a user requests, it is returned to the user directly; if not, the content is fetched from the external network. The local cache is managed with a typical replacement policy based on content access frequency or other historical usage. Non-cooperative caching is easy to implement, has low complexity, keeps storage and service independent between nodes, and adds no management overhead to the network, but it causes great redundancy among the caches inside the network: few content types are cached, cache utilization is low, the average download delay of users is large, and the network egress traffic is large.
Cooperative cache policy: all base station nodes work together under the unified management of a gateway controller. After a node receives a user request, if the local cache misses but a peer node has the content cached, the content is obtained from the peer node; if no peer node has it, it is fetched from the server. The cache of each node is controlled jointly by the gateway controller and local statistics. The cache nodes in cooperative caching coordinate with each other, and the gateway controller manages the local cache resources in a unified way, which avoids excessive cache redundancy and serves the received user requests better, but it requires the arrival rate of each content request to be predicted in advance (document [4]: X. Tang, S. T. Chanson, "Coordinated En-Route Web Caching," IEEE Transactions on Computers, vol. 51, no. 6, pp. 595-607, 2002). In practical applications the request rate of a content changes with time and popularity, so statistics from the past are no longer applicable; moreover, because content is transmitted between peer nodes by unicast, a large amount of repeated traffic is generated in the network.
The application of the two policies is illustrated below. As shown in Fig. 1, a RAN comprises three eNodeBs, eNB1, eNB2 and eNB3, and one origin server O that can provide four contents a, b, c, d. The storage capacity of each eNodeB can hold only one content object, the SAE-GW has no storage function, and each uplink from an eNodeB to the SAE-GW can carry only one requested object at a time. Let τ denote the average waiting time for a request served by a peer eNodeB within a given RAN. Assume the popularity distribution of the contents is the same everywhere: in each cell several groups of MDs send identical request streams to their eNodeB node, each represented by the repeating sequence {aaaabbbccd}. We assume that obtaining content from a peer eNodeB node in the same RAN performs much better (shorter waiting time) than obtaining it from the origin server O, because in a real network the origin server may be located quite far away.
Non-cooperative cache policy: every eNodeB accumulates more requests for content a than for b, c and d, so all eNodeBs store a and nothing else. Subsequent MD requests for b, c and d first reach the eNodeB, are then forwarded to the SAE-GW, and are finally sent to the server O. Thus the non-cooperative cache policy yields high overall cache redundancy and low utilization, and the average delay for users to obtain content grows, giving a poor experience.
Cooperative cache policy: the eNodeBs cooperate with each other; suppose eNB1 stores a, eNB2 stores b and eNB3 stores c, so that requests for a, b and c can be served within this RAN, while a request for d, which misses both on the serving eNodeB and on the other eNodeBs in the RAN, can only be satisfied by fetching the content from the origin server O through the SAE-GW. Because the popularity distributions of the contents accessed at different nodes are similar, multiple MDs may fetch the same content from the same neighbouring eNodeB at the same time; with the independent peer-to-peer unicast transmission mechanism of cooperative caching, this produces a large amount of repeated traffic (both requests and responses) and consumes valuable link bandwidth.
Summary of the invention
The object of the present invention is to overcome the above defects of current CDN caching techniques by proposing a new base station cache management method and, on this basis, a processing method for user access built on a request aggregation and asynchronous multicast strategy.
To achieve these goals, the invention provides a management method for a base station cache, comprising:
Step 101) generating a content popularity table on the centralized gateway controller node;
Step 102) dividing the local cache of each base station node into a shared cache and a non-shared cache;
Step 103) storing the requested contents of the content popularity table into the local caches of the base station nodes according to their rank;
Step 104) establishing a pending request table (PRT) on the centralized gateway controller node and on each base station node.
In the above technical scheme, said step 101) further comprises:
Within a fixed time period, each base station node counts the number of accesses to each requested content according to the user access requests and reports them to the centralized gateway controller node; the centralized gateway controller node aggregates the reports of all base station nodes and generates the content popularity table, which records the name, the total number of accesses, and the historical access times of each content.
In the above technical scheme, said step 102) further comprises:
Suppose the local cache size of each base station node is c; the local cache of each base station node is divided into two parts: one part is the shared cache, of size x, which stores content that other base station nodes may access; the other part is the non-shared cache, of size c-x;
Suppose every base station node has the same uplink bandwidth U and that the average delay from a user issuing a request to the base station node until receiving the requested content is τ; then the request rate of users at a base station node is at most 1/τ, the request rate sent by the centralized gateway controller node is at most 2/τ, and the value of x is:
x = τU/2.
In the above technical scheme, said step 103) further comprises:
Suppose the content block unit is 1, and all requested contents are divided into content blocks of equal size; the c-x top-ranked content blocks in the content popularity table are stored in the non-shared cache of every base station node, to provide download service for the local users it serves; the requested contents ranked c-x+1 to c-x+Nx in the content popularity table are then deposited in order into the shared caches of the N base station nodes, to provide download service for all users in the network.
In the above technical scheme, said step 104) further comprises:
The pending request table of a base station node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the users that sent the request;
The pending request table of the centralized gateway controller node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the base stations that sent the request.
On the basis of this management method for a base station cache, the invention also provides a processing method for user access, comprising:
Step 201) a user sends a request k to the base station node n it accesses;
Step 202) the requested content k is looked up in the local cache of base station node n; if the local cache holds content k, it is sent to the user according to the user address; otherwise, go to step 203);
Step 203) the requests k not found in the local cache of base station node n are aggregated, and the pending request table of base station node n is updated;
Step 204) the centralized gateway controller node aggregates the requests of all base station nodes and updates its pending request table;
Step 205) the centralized gateway controller node queries whether the shared cache of a base station node other than n holds the requested content k;
Step 206) if the shared cache of some base station node m holds content k, content k is sent to the centralized gateway controller node; go to step 208); otherwise, go to step 207);
Step 207) the centralized gateway controller node sends request k to the origin server; after the origin server finds content k, it sends content k to the centralized gateway controller node;
Step 208) the centralized gateway controller node delivers content k by asynchronous multicast to the base station nodes in the client list of the corresponding entry of its pending request table, and deletes the entry for request k from the pending request table;
Step 209) after base station node n receives content k, it delivers it by asynchronous multicast to the users in the client list of the corresponding entry of its pending request table, and simultaneously deletes the entry for request k from the pending request table.
In the above technical scheme, said step 203) further comprises:
Step 203-1) matching request k against the pending request table of base station node n;
Step 203-2) if the pending request table contains a completely matching entry, the requesting user's address is added to the client list of that entry and request k is then discarded; otherwise, go to 203-3);
Step 203-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table and request k is forwarded.
In the above technical scheme, for a request k sent by base station node n, said step 204) further comprises:
Step 204-1) matching request k against the pending request table of the centralized gateway controller node;
Step 204-2) if the pending request table contains a completely matching entry, base station node n is added to the client list of that entry and request k is then discarded; otherwise, go to 204-3);
Step 204-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table.
The advantages of the invention are:
1. The proposed base station cache management method lets base station nodes serve their users effectively while facilitating cooperation between nodes, reducing the redundancy of the base station caches and improving cache utilization;
2. In the proposed processing method for user access, the base station nodes and the gateway node each aggregate multiple user requests for the same content into one through the pending request table, expanding the number of user requests a node can support under limited uplink bandwidth;
3. In the proposed processing method for user access, once the base station nodes and the gateway node obtain the requested content, they deliver it by asynchronous multicast, reducing the repeated traffic inside and outside the network, speeding up network response, and improving the user's browsing experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of a mobile network processing user requests with cooperative caching;
Fig. 2 is a schematic diagram of the base station cache design method of the present invention.
Embodiments
The application scenario of the present invention is a RAN comprising one SAE-GW node and N eNodeB nodes, with the SAE-GW node administering the N eNodeB nodes.
The present invention is described further below with reference to the drawings and specific embodiments.
As shown in Fig. 2, the invention provides a management method for a base station cache, comprising:
Step 101) generating a content popularity table on the SAE-GW node;
Within a fixed time period, each eNodeB node counts the number of accesses to each requested content according to the user access requests and reports them to the SAE-GW node; the SAE-GW node aggregates the reports of all eNodeB nodes and generates the content popularity table, which records the name, the total number of accesses, and the historical access times of each content.
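For illustration only, a minimal Python sketch of this aggregation step (the data layout and function names are our assumptions; the patent does not prescribe them): each eNodeB reports per-content access counts and last access times for the period, and the SAE-GW merges them into a table ranked by total access count.

from collections import defaultdict

def build_popularity_table(reports):
    # reports: one dict per eNodeB mapping
    # content name -> (access count, last access time) for the period.
    totals = defaultdict(lambda: [0, 0.0])   # name -> [total count, latest access time]
    for report in reports:
        for name, (count, last_seen) in report.items():
            totals[name][0] += count
            totals[name][1] = max(totals[name][1], last_seen)
    # Rank by total access count, most popular first.
    return sorted(((name, cnt, ts) for name, (cnt, ts) in totals.items()),
                  key=lambda entry: entry[1], reverse=True)

# Two eNodeBs reporting in one period:
table = build_popularity_table([
    {"a": (40, 1003.0), "b": (30, 1001.0)},
    {"a": (35, 1002.0), "c": (20, 1004.0)},
])
# table == [("a", 75, 1003.0), ("b", 30, 1001.0), ("c", 20, 1004.0)]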
Step 102) dividing the local cache of each eNodeB node into a shared cache and a non-shared cache;
Suppose the local cache size of each eNodeB node is c; the local cache of each eNodeB node is divided into two parts: one part is the shared cache, of size x, which stores content that other eNodeB nodes may access; the other part is the non-shared cache, of size c-x.
Suppose every eNodeB node has the same uplink bandwidth U and that the average delay from a user issuing a request to the eNodeB node until receiving the requested content is τ; then the request rate of users at an eNodeB node is at most 1/τ, the request rate sent by the SAE-GW node is at most 2/τ, and the value of x is:
x = τU/2
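As a worked example of this split (the numbers are invented; U is expressed in content blocks per second here so that x comes out in blocks):

def shared_cache_size(tau, U, c):
    # Shared part x = tau*U/2, never more than the whole cache c.
    return min(tau * U / 2, c)

x = shared_cache_size(0.2, 100, 50)   # tau = 0.2 s, U = 100 blocks/s, c = 50 blocks
# x == 10 blocks shared, c - x == 40 blocks non-shared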
Step 103) storing the requested contents of the content popularity table into the local caches of the eNodeB nodes according to their rank;
Suppose the content block unit is 1, and all requested contents are divided into content blocks of equal size; the c-x top-ranked content blocks in the content popularity table are stored in the non-shared cache of every eNodeB node, to provide download service for the local users it serves; the requested contents ranked c-x+1 to c-x+Nx in the content popularity table are then deposited in order into the shared caches of the N eNodeB nodes, to provide download service for all users in the network.
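A sketch of this placement rule (Python, with hypothetical names; `ranking` is the content popularity table of step 101): the top c-x blocks are replicated on every node, and the next N*x blocks are spread over the shared caches.

def place_contents(ranking, n_nodes, c, x):
    # ranking: content names, most popular first; returns (private, shared)
    # where `private` is cached by every node and shared[i] belongs to node i.
    private = ranking[: c - x]                     # ranks 1 .. c-x, on all nodes
    pool = ranking[c - x : c - x + n_nodes * x]    # ranks c-x+1 .. c-x+N*x, once per RAN
    shared = [pool[i * x : (i + 1) * x] for i in range(n_nodes)]
    return private, shared

ranking = ["content%d" % r for r in range(1, 12)]
private, shared = place_contents(ranking, n_nodes=3, c=4, x=2)
# private == ["content1", "content2"]
# shared  == [["content3", "content4"], ["content5", "content6"], ["content7", "content8"]]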
Step 104) establishing a pending request table (PRT) on the SAE-GW node and on each eNodeB node.
The pending request table of an eNodeB node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the users that sent the request.
When a content request that misses the local cache is received, a new entry is created recording the name of the requested content, the requesting user's address is added to the client list of this entry, and the request is forwarded; if requests for this content are subsequently received from other users, their addresses are added to the client list and those requests are not forwarded again.
The pending request table of the SAE-GW node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the base stations that sent the request.
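A minimal sketch of such a pending request table covering both node types (class and method names are illustrative): an entry maps a request name to the list of waiting clients, and only the first request for a name is forwarded upstream.

class PendingRequestTable:
    # On an eNodeB node the clients are user addresses; on the SAE-GW
    # node they are eNodeB addresses.
    def __init__(self):
        self.entries = {}   # request name -> client list

    def aggregate(self, name, client):
        # Returns True if the request must be forwarded (first occurrence),
        # False if it was merged into an existing entry and dropped.
        if name in self.entries:
            self.entries[name].append(client)
            return False
        self.entries[name] = [client]
        return True

    def pop_waiters(self, name):
        # On arrival of the content: take the client list, delete the entry.
        return self.entries.pop(name, [])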
On the basis of this management method for a base station cache, the invention also provides a processing method for user access, comprising:
Step 201) a user sends a request k to the eNodeB node n it accesses;
Step 202) the requested content k is looked up in the local cache of eNodeB node n; if the local cache holds content k, it is sent to the user according to the user address; otherwise, go to step 203);
Step 203) the requests k not found in the local cache of eNodeB node n are aggregated, and the pending request table of eNodeB node n is updated; this comprises:
Step 203-1) matching request k against the pending request table of eNodeB node n;
Step 203-2) if the pending request table contains a completely matching entry, the requesting user's address is added to the client list of that entry and request k is then discarded; otherwise, go to 203-3);
Step 203-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table and request k is forwarded;
Step 204) the SAE-GW node aggregates the requests of all eNodeB nodes and updates its pending request table;
For a request k sent by eNodeB node n, step 204) comprises:
Step 204-1) matching request k against the pending request table of the SAE-GW node;
Step 204-2) if the pending request table contains a completely matching entry, eNodeB node n is added to the client list of that entry and request k is then discarded; otherwise, go to 204-3);
Step 204-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table;
Step 205) the SAE-GW node queries whether the shared cache of an eNodeB node other than n holds the requested content k;
The SAE-GW node holds a view of the cache contents of all the eNodeB nodes it administers.
Step 206) if the shared cache of some eNodeB node m holds content k, content k is sent to the SAE-GW node; go to step 208); otherwise, go to step 207);
Step 207) the SAE-GW node sends request k to the origin server; after the origin server finds content k, it sends content k to the SAE-GW node;
Step 208) the SAE-GW node delivers content k by asynchronous multicast to the eNodeB nodes in the client list of the corresponding entry of its pending request table, and deletes the entry for request k from the pending request table;
Step 209) after eNodeB node n receives content k, it delivers it by asynchronous multicast to the users in the client list of the corresponding entry of its pending request table, and simultaneously deletes the entry for request k from the pending request table.
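Steps 201)-209) can be tied together in a runnable sketch (reusing the PendingRequestTable above; all names are illustrative, delivery is simplified to print statements, and the shared/non-shared cache split is ignored for brevity):

class ENodeB:
    def __init__(self, name, gateway, cache):
        self.name, self.gateway = name, gateway
        self.local_cache = dict(cache)             # content name -> data
        self.prt = PendingRequestTable()
        gateway.enbs.append(self)

    def on_user_request(self, user, k):            # steps 201)-203)
        if k in self.local_cache:                  # step 202): local hit
            print(f"{self.name}: {k} -> {user} (local cache)")
        elif self.prt.aggregate(k, user):          # step 203): forward only the first request
            self.gateway.on_enb_request(self, k)

    def on_content(self, k, data):                 # step 209): multicast to waiting users
        for user in self.prt.pop_waiters(k):
            print(f"{self.name}: {k} -> {user} (multicast)")

class Gateway:
    def __init__(self, origin):
        self.enbs, self.origin = [], origin        # origin: content name -> data
        self.prt = PendingRequestTable()

    def on_enb_request(self, enb, k):              # steps 204)-207)
        if not self.prt.aggregate(k, enb):         # step 204): merged with an in-flight request
            return
        holder = next((e for e in self.enbs if k in e.local_cache), None)
        if holder is not None:                     # steps 205)-206): peer cache hit
            self.on_content(k, holder.local_cache[k])
        else:                                      # step 207): ask the origin server
            print(f"SAE-GW: {k} requested from origin server")

    def origin_reply(self, k):                     # the origin's response arrives later
        self.on_content(k, self.origin[k])

    def on_content(self, k, data):                 # step 208): multicast to waiting eNodeBs
        for enb in self.prt.pop_waiters(k):
            enb.on_content(k, data)

gw = Gateway(origin={"d": b"..."})
enb1 = ENodeB("eNB1", gw, {"a": b"..."})
enb2 = ENodeB("eNB2", gw, {"b": b"..."})
enb1.on_user_request("user1", "b")   # served from eNB2's cache via the SAE-GW
enb1.on_user_request("user2", "d")   # miss everywhere: forwarded to the origin server
enb1.on_user_request("user3", "d")   # aggregated in eNB1's PRT, not forwarded again
gw.origin_reply("d")                 # one reply, multicast down to user2 and user3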
Below, the method proposed by the present invention is compared with the existing methods by simulation.
First the source load performance of the three methods is compared; the source load is measured as the percentage of all requests that are sent directly to the origin server.
When the non-cooperative cache policy is used to process the user requests, all eNodeB caches store content a, so after a user requests content a it is served directly by the accessed eNodeB, while requests for contents b, c and d must be served by the origin server. This means that 6/10 of each of the three request streams, i.e. 18 of the 30 requests, are forwarded to the origin server: the source load is 0.6.
When the cooperative cache policy is used to process the user requests, suppose the requests for contents a, b and c are all satisfied within this RAN because the request rate is low (the best case); then only the requests for d, 3 of the 30 requests across the three streams, are forwarded to the origin server. But suppose all requests arrive simultaneously (the worst case): all uplinks are then busy and each link can serve only one request, so the remaining requests for a, b and c are still sent to the origin server, raising the forwarded share to 6 of 30, i.e. 0.2. The source load is therefore at least 0.1 and at most 0.2.
Let λ_k denote the arrival rate of requests for content k. When τλ_k ≥ 1, new requests for content k arrive while an earlier request for k is still unsatisfied. In this case, with the request aggregation strategy of the present invention, an eNodeB node filters the subsequent user requests for content k and records the requesting clients in its PRT; only the requests recorded in the PRT are sent out, either to another eNodeB node in the network or to the origin server O.
When the method of the present invention is used, in the above worst case all identical requests are aggregated at the eNodeB nodes; for example, the requests of its users for content a are recorded in the PRT of eNB2. In addition, the SAE-GW node aggregates the requests of all the eNodeB nodes it administers; as shown in Fig. 1, the requests of eNB1 and eNB2 for a are recorded in one entry of the SAE-GW node's PRT. Each content is therefore requested at most once, and only the single request for content d is sent to the origin server, i.e. 1 of the 30 requests. If each request arrives only after the reply to its predecessor has returned (the best case), no aggregation takes place and the request aggregation strategy behaves like the cooperative cache policy. The source load of the origin server is therefore at least 0.033 and at most 0.1.
Next the traffic generated by the three methods is compared. The traffic generated is computed as the product of the average number of network hops and the content size; since contents are unit-sized blocks, the traffic generated by a single request is simply the number of network hops traversed to obtain the content.
When the non-cooperative cache policy is used to process the user requests, a user requesting content a obtains it directly from the eNodeB node it accesses, without traversing any peer eNodeB; requests for b, c and d must be sent through the SAE-GW node to the origin server. Assuming the hop count from the SAE-GW node to the origin server is 2, i.e. 3 hops in total per such request, the average traffic per request is (3×6×3)/30 = 1.8 (three streams, six such requests per stream, 3 hops each).
When the cooperative cache policy is used, in the best case the requested contents a, b and c are served within the RAN and only the requests for d are sent to the origin server; with 2 hops for a peer fetch and 3 hops for an origin fetch, the average traffic per request is [(2×5+3×1)+(2×6+3×1)+(2×7+3×1)]/30 = 1.5. In the worst case, when all requests arrive at the eNodeB nodes simultaneously, contents a, b and c cannot all be obtained from peers and must partly be fetched from the origin server, and the average traffic per request becomes [(2×2+3×4)+(2×2+3×5)+(2×2+3×6)]/30 = 1.9.
When the method of the present invention is applied to the processing of user requests, all repeated requests are filtered at the eNodeB nodes and the SAE-GW node, and the average traffic per request is [(2×2+3×1)+(2×2+1×1)+(2×2+1×1)]/30 ≈ 0.56.
The above simulation results are summarized in Table 1:

Table 1

                        Non-cooperative    Cooperative     Method of the
                        cache policy       cache policy    present invention
Origin server load      0.6                0.1~0.2         0.033~0.1
Traffic per request     1.8                1.5~1.9         0.56~1.5
As can be seen from Table 1, compared with the conventional methods the method of the present invention not only imposes a lower load on the origin server but also generates less traffic.
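The entries of Table 1 can be recomputed from the expressions above; a short arithmetic check in Python (reading each product m×n in the bracketed sums as n requests travelling m hops, with 2 hops for a peer fetch and 3 for an origin fetch, is our interpretation):

non_coop_load = 18 / 30                                               # 0.6
coop_load     = (3 / 30, 6 / 30)                                      # 0.1 .. 0.2
proposed_load = (1 / 30, 3 / 30)                                      # 0.033 .. 0.1

non_coop_traffic   = (3 * 6 * 3) / 30                                 # 1.8
coop_traffic_best  = ((2*5 + 3*1) + (2*6 + 3*1) + (2*7 + 3*1)) / 30   # 1.5
coop_traffic_worst = ((2*2 + 3*4) + (2*2 + 3*5) + (2*2 + 3*6)) / 30   # 1.9
proposed_traffic   = ((2*2 + 3*1) + (2*2 + 1*1) + (2*2 + 1*1)) / 30   # 17/30 ≈ 0.56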
It should be noted that the embodiment introduced above is just one way in which the provided scheme can be implemented; the description of this embodiment is intended only to help understand the method of the present invention and its core idea, and is not limiting. A person of ordinary skill in the art should appreciate that any modification of or equivalent substitution for the technical solution of the present invention that does not depart from its spirit and scope shall be encompassed by the claims of the present invention.

Claims (8)

1. A management method for a base station cache, said method comprising:
Step 101) generating a content popularity table on the centralized gateway controller node;
Step 102) dividing the local cache of each base station node into a shared cache and a non-shared cache;
Step 103) storing the requested contents of the content popularity table into the local caches of the base station nodes according to their rank;
Step 104) establishing a pending request table on the centralized gateway controller node and on each base station node.
2. The management method for a base station cache according to claim 1, characterized in that said step 101) further comprises:
Within a fixed time period, each base station node counts the number of accesses to each requested content according to the user access requests and reports them to the centralized gateway controller node; the centralized gateway controller node aggregates the reports of all base station nodes and generates the content popularity table; the content popularity table records the name, the total number of accesses, and the historical access times of each content.
3. The management method for a base station cache according to claim 2, characterized in that said step 102) further comprises:
Suppose the local cache size of each base station node is c; the local cache of each base station node is divided into two parts: one part is the shared cache, of size x, which stores content that other base station nodes may access; the other part is the non-shared cache, of size c-x;
Suppose every base station node has the same uplink bandwidth U and that the average delay from a user issuing a request to the base station node until receiving the requested content is τ; then the request rate of users at a base station node is at most 1/τ, the request rate sent by the centralized gateway controller node is at most 2/τ, and the value of x is:
x = τU/2.
4. The management method for a base station cache according to claim 3, characterized in that said step 103) further comprises:
Suppose the content block unit is 1, and all requested contents are divided into content blocks of equal size; the c-x top-ranked content blocks in the content popularity table are stored in the non-shared cache of every base station node, to provide download service for the local users it serves; the requested contents ranked c-x+1 to c-x+Nx in the content popularity table are then deposited in order into the shared caches of the N base station nodes, to provide download service for all users in the network.
5. The management method for a base station cache according to claim 4, characterized in that said step 104) further comprises:
The pending request table of a base station node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the users that sent the request;
The pending request table of the centralized gateway controller node consists of a series of entries, each comprising a request name and a client list of the senders of that request; the client list records the addresses of the base stations that sent the request.
6. A processing method for user access, realized on the basis of the management method for a base station cache according to any one of claims 1-5, the method comprising:
Step 201) a user sends a request k to the base station node n it accesses;
Step 202) the requested content k is looked up in the local cache of base station node n; if the local cache holds content k, it is sent to the user according to the user address; otherwise, go to step 203);
Step 203) the requests k not found in the local cache of base station node n are aggregated, and the pending request table of base station node n is updated;
Step 204) the centralized gateway controller node aggregates the requests of all base station nodes and updates its pending request table;
Step 205) the centralized gateway controller node queries whether the shared cache of a base station node other than n holds the requested content k;
Step 206) if the shared cache of some base station node m holds content k, content k is sent to the centralized gateway controller node; go to step 208); otherwise, go to step 207);
Step 207) the centralized gateway controller node sends request k to the origin server; after the origin server finds content k, it sends content k to the centralized gateway controller node;
Step 208) the centralized gateway controller node delivers content k by asynchronous multicast to the base station nodes in the client list of the corresponding entry of its pending request table, and deletes the entry for request k from the pending request table;
Step 209) after base station node n receives content k, it delivers it by asynchronous multicast to the users in the client list of the corresponding entry of its pending request table, and simultaneously deletes the entry for request k from the pending request table.
7. The processing method for user access according to claim 6, characterized in that said step 203) further comprises:
Step 203-1) matching request k against the pending request table of base station node n;
Step 203-2) if the pending request table contains a completely matching entry, the requesting user's address is added to the client list of that entry and request k is then discarded; otherwise, go to 203-3);
Step 203-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table and request k is forwarded.
8. The processing method for user access according to claim 6, characterized in that, for a request k sent by base station node n, said step 204) further comprises:
Step 204-1) matching request k against the pending request table of the centralized gateway controller node;
Step 204-2) if the pending request table contains a completely matching entry, base station node n is added to the client list of that entry and request k is then discarded; otherwise, go to 204-3);
Step 204-3) if no entry in the pending request table matches completely, a new entry for request k is added to the pending request table.
CN201410817493.8A 2014-12-24 2014-12-24 Management method for base station cache and user access processing method Pending CN104507124A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410817493.8A CN104507124A (en) 2014-12-24 2014-12-24 Management method for base station cache and user access processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410817493.8A CN104507124A (en) 2014-12-24 2014-12-24 Management method for base station cache and user access processing method

Publications (1)

Publication Number Publication Date
CN104507124A true CN104507124A (en) 2015-04-08

Family

ID=52948824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410817493.8A Pending CN104507124A (en) 2014-12-24 2014-12-24 Management method for base station cache and user access processing method

Country Status (1)

Country Link
CN (1) CN104507124A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105471962A (en) * 2015-11-13 2016-04-06 上海斐讯数据通信技术有限公司 Access control method and system of network resources, and base station
CN105656997A (en) * 2015-12-25 2016-06-08 中国科学院信息工程研究所 Hotness cache content active pushing method based on mobile user relationship
CN106535231A (en) * 2016-12-08 2017-03-22 北京邮电大学 Content delivery method based on Cache deployment oriented to 5G (5th generation) user-centered network
WO2017113373A1 (en) * 2015-12-31 2017-07-06 华为技术有限公司 Caching method and packet data network gateway
CN107733949A (en) * 2016-08-11 2018-02-23 北京邮电大学 A kind of wireless access network caching method and system
WO2018090336A1 (en) * 2016-11-18 2018-05-24 华为技术有限公司 Cache data acquisition method, related device and communication system
CN108200178A (en) * 2018-01-04 2018-06-22 海信集团有限公司 A kind of method and apparatus for downloading resource
CN108259521A (en) * 2016-12-28 2018-07-06 中国移动通信集团浙江有限公司 Hot point resource sharing method and device
CN110022579A (en) * 2019-04-23 2019-07-16 重庆邮电大学 Content caching management method based on base station collaboration
CN107493328B (en) * 2017-08-14 2019-10-11 武汉大学 A kind of Cooperative caching method based on Fusion Features
CN110868333A (en) * 2019-10-28 2020-03-06 云深互联(北京)科技有限公司 Data caching method and system for gateway
CN110908612A (en) * 2019-11-27 2020-03-24 腾讯科技(深圳)有限公司 Cache management method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014053A (en) * 2010-11-17 2011-04-13 华为技术有限公司 Service transmitting method and device and communication system
CN102204324A (en) * 2011-04-27 2011-09-28 华为技术有限公司 Method and device for improving user access speed of mobile broadband internet
CN102244900A (en) * 2011-07-15 2011-11-16 上海华为技术有限公司 Method, device and system for synchronizing cached data heat value, and distributed caching method
CN103001870A (en) * 2012-12-24 2013-03-27 中国科学院声学研究所 Collaboration caching method and system for content center network
CN103686860A (en) * 2011-07-15 2014-03-26 上海华为技术有限公司 Distribution caching method and device of cached data
US20140237071A1 (en) * 2011-09-28 2014-08-21 Telefonaktiebolaget L M Ericsson (Publ) Caching in Mobile Networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014053A (en) * 2010-11-17 2011-04-13 华为技术有限公司 Service transmitting method and device and communication system
CN102204324A (en) * 2011-04-27 2011-09-28 华为技术有限公司 Method and device for improving user access speed of mobile broadband internet
CN102244900A (en) * 2011-07-15 2011-11-16 上海华为技术有限公司 Method, device and system for synchronizing cached data heat value, and distributed caching method
CN103686860A (en) * 2011-07-15 2014-03-26 上海华为技术有限公司 Distribution caching method and device of cached data
US20140237071A1 (en) * 2011-09-28 2014-08-21 Telefonaktiebolaget L M Ericsson (Publ) Caching in Mobile Networks
CN103001870A (en) * 2012-12-24 2013-03-27 中国科学院声学研究所 Collaboration caching method and system for content center network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Li et al., "A Novel Coordinated Edge Caching with Request Filtration in Radio Access Network," The Scientific World Journal: Next-Generation Internet and Communication *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105471962A (en) * 2015-11-13 2016-04-06 上海斐讯数据通信技术有限公司 Access control method and system of network resources, and base station
CN105656997A (en) * 2015-12-25 2016-06-08 中国科学院信息工程研究所 Hotness cache content active pushing method based on mobile user relationship
WO2017113373A1 (en) * 2015-12-31 2017-07-06 华为技术有限公司 Caching method and packet data network gateway
CN107733949B (en) * 2016-08-11 2021-03-16 北京邮电大学 Wireless access network caching method and system
CN107733949A (en) * 2016-08-11 2018-02-23 北京邮电大学 A kind of wireless access network caching method and system
CN109891929A (en) * 2016-11-18 2019-06-14 华为技术有限公司 Data cached acquisition methods, relevant device and communication system
WO2018090336A1 (en) * 2016-11-18 2018-05-24 华为技术有限公司 Cache data acquisition method, related device and communication system
CN106535231B (en) * 2016-12-08 2019-12-13 北京邮电大学 Content transmission method for 5G user-oriented central network Cache deployment
CN106535231A (en) * 2016-12-08 2017-03-22 北京邮电大学 Content delivery method based on Cache deployment oriented to 5G (5th generation) user-centered network
CN108259521A (en) * 2016-12-28 2018-07-06 中国移动通信集团浙江有限公司 Hot point resource sharing method and device
CN108259521B (en) * 2016-12-28 2021-02-23 中国移动通信集团浙江有限公司 Hot resource sharing method and device
CN107493328B (en) * 2017-08-14 2019-10-11 武汉大学 A kind of Cooperative caching method based on Fusion Features
CN108200178A (en) * 2018-01-04 2018-06-22 海信集团有限公司 A kind of method and apparatus for downloading resource
CN110022579A (en) * 2019-04-23 2019-07-16 重庆邮电大学 Content caching management method based on base station collaboration
CN110868333A (en) * 2019-10-28 2020-03-06 云深互联(北京)科技有限公司 Data caching method and system for gateway
CN110908612A (en) * 2019-11-27 2020-03-24 腾讯科技(深圳)有限公司 Cache management method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104507124A (en) Management method for base station cache and user access processing method
Tran et al. Cooperative hierarchical caching in 5G cloud radio access networks
CN104683485B (en) A kind of Internet content cache based on C RAN preloads method and system
US8355384B2 (en) System and method of handover in wireless network
WO2018120802A1 (en) Collaborative content cache control system and method
CN103781115B (en) Distributed base station buffer replacing method based on transmission cost in a kind of cellular network
Sinky et al. Responsive content-centric delivery in large urban communication networks: A LinkNYC use-case
Zheng et al. 5G network-oriented hierarchical distributed cloud computing system resource optimization scheduling and allocation
CN108541025B (en) Wireless heterogeneous network-oriented base station and D2D common caching method
Sun et al. QoE driven BS clustering and multicast beamforming in cache-enabled C-RANs
Jiang et al. Cooperative content distribution for 5G systems based on distributed cloud service network
Gür Energy-aware cache management at the wireless network edge for information-centric operation
CN108449388A (en) A kind of multinode idleness of equipment aggregated bandwidth utilizes method and system
CN106686399A (en) Intra-network video buffering method based on combined buffering architecture
Kottursamy et al. Optimized D-RAN aware data retrieval for 5G information centric networks
Alduayji et al. PF-EdgeCache: Popularity and freshness aware edge caching scheme for NDN/IoT networks
Reshadinezhad et al. An efficient adaptive cache management scheme for named data networks
Tang et al. A user‐centric cooperative edge caching scheme for minimizing delay in 5G content delivery networks
Xu et al. Social-aware D2D caching content deployment strategy over edge computing wireless networks
Guo et al. Joint optimization of caching and routing strategies in content delivery networks: A big data case
CN106487923B (en) Configurable mobile terminal cooperation caching method
CN113766540B (en) Low-delay network content transmission method, device, electronic equipment and medium
CN113709853B (en) Network content transmission method and device oriented to cloud edge collaboration and storage medium
Zheng et al. Game theoretic approaches to massive data processing in wireless networks
Zhang et al. A cooperation-driven ICN-based caching scheme for mobile content chunk delivery at RAN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150408

RJ01 Rejection of invention patent application after publication