CN106412040A - Cache resource allocation method and apparatus - Google Patents
- Publication number
- CN106412040A CN106412040A CN201610832372.XA CN201610832372A CN106412040A CN 106412040 A CN106412040 A CN 106412040A CN 201610832372 A CN201610832372 A CN 201610832372A CN 106412040 A CN106412040 A CN 106412040A
- Authority
- CN
- China
- Prior art keywords
- cache
- matrix
- cache resource
- current
- resource allocation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Abstract
The embodiment of the invention provides a cache resource allocation method and apparatus in the technical field of resource utilization. The method comprises the following steps: slicing a plurality of cache nodes in a core network into a plurality of network slices; first carrying out cache resource allocation on the cache nodes at random to obtain a set of current cache resource allocation matrices; selecting a current cache resource allocation matrix from the set and optimizing it, until the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, thereby obtaining a finally optimized cache resource allocation matrix; and carrying out cache resource allocation according to the obtained cache resource allocation matrix. By combining in-network caching technology with network slicing technology, the cache resources of the cache nodes are allocated to a plurality of network slices, and the cache resource allocation of the cache nodes is iteratively optimized, improving the cache resource allocation rate and the resource utilization rate.
Description
Technical field
The present invention relates to the technical field of resource utilization, and more particularly to a cache resource allocation method and device.
Background technology
To cope with the rapid development of mobile network service applications such as HD video, virtual reality and online gaming, 5G (The 5th Generation Mobile Communication) technology has emerged. ICN (Information-Centric Networking), as a new network architecture, has attracted growing attention from academia and industry, and in-network caching, one of the key technologies of ICN, is also an important potential technology for 5G. In-network caching deploys caches inside the network to shorten the distance between users and content, reducing the response delay of user requests and improving the user's QoE (Quality of Experience). Caches deployed in a 5G network are generally divided into two kinds according to the deployment position of the cache: EPC (Evolved Packet Core) caching and RAN (Radio Access Network) caching, where EPC caching is also referred to as core network caching and RAN caching is also referred to as access network caching.
Core network caching has been studied in depth. One existing core network cache resource allocation method deploys CDN (Content Delivery Network) nodes within the resources of the EPC, i.e. in an overlay manner in the core network: a network element LGW (Local Gateway) is added to the standard EPC network and directly connected to the eNodeB (Evolved Node B); the MME (Mobility Management Entity) evaluates user requests and, according to the result, diverts them to the LGW, thereby offloading data traffic from the EPC, and cache resources are allocated according to the maximum demand of users. However, this existing method yields a low cache resource allocation rate and low resource utilization.
In another existing cache resource allocation method, each routing node in ICN integrates cache memory resources. Content frequently used by users is stored in the CS (Content Store), and cache resources are allocated by the LCE (Leave Copy Everywhere, cache at every hop) and LCD (Leave Copy Down, cache at the next hop) strategies. The cache resource allocation method adopted in NDN (Named Data Networking) is the LCE strategy: when a user's request for some content hits a cache or reaches the content distribution server, a copy of the content is cached at every node on the content's return path. Other methods adopt LCD: whenever content is hit, the content is copied once to the next-hop node on its return path. This method of allocating cache resources per routing node in existing ICN likewise results in a low cache resource allocation rate and low resource utilization.
In summary, cache resource allocation methods in the prior art share the following problems: the cache resource allocation rate is low, and the resource utilization rate is low.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a cache resource allocation method and device, so as to improve the cache resource allocation rate and the resource utilization rate. The concrete technical scheme is as follows:
In one aspect, an embodiment of the present invention provides a cache resource allocation method, including:
slicing, by virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices;
selecting a predetermined number of cache nodes from the plurality of cache nodes, and generating an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes;
according to the indication matrix, randomly carrying out cache resource allocation on the selected cache nodes, to obtain a set of current cache resource allocation matrices;
randomly selecting a current cache resource allocation matrix from the set, and optimizing the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix;
obtaining, according to a first cache resource allocation amount in the selected current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, by a preset algorithm, the initial revenue of the network operator and the current revenue of the network operator respectively;
when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtaining the finally optimized cache resource allocation matrix, and carrying out cache resource allocation according to it.
Preferably, after randomly selecting the current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix to obtain the optimized current cache resource allocation matrix, the cache resource allocation method further includes:
adding the optimized current cache resource allocation matrix to the set.
Preferably, randomly selecting the current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix includes:
randomly selecting at least one current cache resource allocation matrix from the set, and reallocating cache resource data to an arbitrary row of the selected at least one current cache resource allocation matrix.
Preferably, randomly selecting the current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix includes:
randomly selecting one current cache resource allocation matrix from the set;
randomly selecting a first matrix element of the one current cache resource allocation matrix, assigning it to a first original idle matrix, and randomly generating cache resource data at the other matrix positions of the first original idle matrix except the matrix position occupied by the first matrix element;
randomly selecting a second matrix element of the one current cache resource allocation matrix other than the first matrix element, assigning it to a second original idle matrix, and randomly generating cache resource data at the other matrix positions of the second original idle matrix except the matrix position occupied by the second matrix element, wherein the second original idle matrix, the first original idle matrix and the one current cache resource allocation matrix have the same matrix size.
Preferably, randomly selecting the current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix includes:
randomly selecting two current cache resource allocation matrices from the set, randomly selecting cache resource data from each of the two current cache resource allocation matrices, and producing a new matrix according to the cache resource data, wherein the size of the new matrix is identical to the size of the two current cache resource allocation matrices.
Preferably, obtaining, according to the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, by a preset algorithm, the initial revenue of the network operator and the current revenue of the network operator respectively includes:
according to the first cache resource allocation amount, obtaining a first initial energy consumption cost of the cache nodes allocating cache resources, a second initial energy consumption cost of the network slices responding to requests, and the initial fee collected;
taking the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;
according to the second cache resource allocation amount, obtaining a first current energy consumption cost of the cache nodes allocating cache resources, a second current energy consumption cost of the network slices responding to requests, and the current fee collected;
taking the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
Preferably, before obtaining the finally optimized cache resource allocation matrix when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, the cache resource allocation method further includes:
judging whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix;
if not, continuing to randomly select a current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix;
if so, obtaining the finally optimized cache resource allocation matrix.
In another aspect, an embodiment of the present invention further discloses a cache resource allocation device, including:
a slicing module, configured to slice, by virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices;
an indication matrix generation module, configured to select a predetermined number of cache nodes from the plurality of cache nodes and generate an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes;
a set generation module, configured to randomly carry out cache resource allocation on the selected cache nodes according to the indication matrix, to obtain a set of current cache resource allocation matrices;
an optimization module, configured to randomly select a current cache resource allocation matrix from the set and optimize the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix;
a revenue calculation module, configured to obtain, according to the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, by a preset algorithm, the initial revenue of the network operator and the current revenue of the network operator respectively;
a cache resource allocation module, configured to, when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtain the finally optimized cache resource allocation matrix and carry out cache resource allocation according to it.
Preferably, the cache resource allocation device further includes:
an adding module, configured to add the optimized current cache resource allocation matrix to the set.
Preferably, the optimization module is further configured to randomly select at least one current cache resource allocation matrix from the set, and reallocate cache resource data to an arbitrary row of the selected at least one current cache resource allocation matrix.
In the cache resource allocation method and device provided by the embodiments of the present invention, a plurality of cache nodes in a core network are sliced into a plurality of network slices; cache resource allocation is first carried out on the cache nodes at random, to obtain a set of current cache resource allocation matrices; a current cache resource allocation matrix in the set is selected and optimized according to the different elementary reactions of the chemical reaction optimization algorithm, until the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix; and cache resource allocation is carried out according to the finally optimized cache resource allocation matrix. By combining in-network caching technology with network slicing technology, the cache resources of the cache nodes are allocated to a plurality of network slices, and the cache resource allocation of the cache nodes is iteratively optimized, improving the cache resource allocation rate and the resource utilization rate.
Brief description
In order to be illustrated more clearly that the embodiment of the present invention or technical scheme of the prior art, below will be to embodiment or existing
Have technology description in required use accompanying drawing be briefly described it should be apparent that, drawings in the following description be only this
Some embodiments of invention, for those of ordinary skill in the art, on the premise of not paying creative work, acceptable
Other accompanying drawings are obtained according to these accompanying drawings.
Fig. 1 is a schematic flow chart of the cache resource allocation method of an embodiment of the present invention;
Fig. 2 is a network diagram integrating in-network caching and network slicing technology in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the cache resource allocation device of an embodiment of the present invention.
Specific embodiment
Below in conjunction with the accompanying drawing in the embodiment of the present invention, the technical scheme in the embodiment of the present invention is carried out clear, complete
Site preparation description is it is clear that described embodiment is only a part of embodiment of the present invention, rather than whole embodiments.It is based on
Embodiment in the present invention, it is every other that those of ordinary skill in the art are obtained under the premise of not making creative work
Embodiment, broadly falls into the scope of protection of the invention.
An embodiment of the present invention discloses a cache resource allocation method, described in detail below with reference to Fig. 1, including:
Step 101: slicing, by virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices.
It should be noted that the cache resource allocation method of the present invention is based on a core network model integrating network slicing technology and in-network caching technology, which may specifically be a 5G core network. In-network caching deploys cache nodes inside the network to shorten the distance over which users obtain cached resources; applying in-network caching, distributed cache nodes are deployed in the core network. As shown in Fig. 2, 8 cache nodes 202 are deployed in the core network 201; in practical applications, the number of deployed cache nodes 202 can be determined according to actual demands. Each cache node has cache resources of a certain capacity C, where C is determined in practical applications according to actual demands.
Network slicing technology abstracts and slices a single physical network architecture into individual virtual networks, providing users with end-to-end differentiated services on demand; virtualization technology transfers the software and hardware functions of the dedicated devices in the core network onto virtual hosts.
In the network infrastructure, the cache resources of each cache node can be dynamically allocated and split among the different network slices corresponding to different services, and the network slice corresponding to each service can occupy the cache resources of different cache nodes. The allocation of a cache node's resources to the network slices corresponding to the services obeys a conservation law. Specifically, each cache node has cache resources of a certain capacity, and during actual cache resource allocation, the cache resources each cache node distributes to the network slices corresponding to the different services must not exceed the capacity of that cache node.
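By way of illustration only (not part of the claimed method), the conservation constraint above can be sketched as a simple check; the capacity value and the per-slice allocation figures below are hypothetical:

```python
# Minimal sketch of the capacity constraint: the cache resources a node
# distributes across all network slices must not exceed its capacity C.
# All numbers here are illustrative, not taken from the patent.

def allocation_is_valid(allocation_row, capacity):
    """allocation_row[m] = cache resource the node gives to slice m."""
    return sum(allocation_row) <= capacity

C = 100                        # hypothetical node capacity
row = [30, 25, 20, 15]         # hypothetical allocation to 4 slices
print(allocation_is_valid(row, C))   # True, since 90 <= 100
```

In practice each row of the allocation matrix introduced in Step 103 would be checked against this constraint.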
In practical applications, as shown in Fig. 2, the cache nodes 202 allocate cache resources and supply them via backhaul links 203 to different services, such as the automatic driving service formed by automobiles 205 through base station 204 and the smartphone service formed by mobile phones 206 and 207 through base station 204. In certain practical applications, besides the automatic driving service and the smartphone service shown in Fig. 2, there are also HD video, virtual reality, online gaming, Internet-of-Things services and so on; end users 208 enjoy the convenience these services bring.
In the embodiment of the present invention, the plurality of cache nodes deployed in the core network are virtualized and sliced by virtualization technology into M network slices, where M is determined in practical applications according to actual demands. The cache resources of the cache nodes are allocated to the M network slices, and thereby to the different network slices corresponding to the different services in the network infrastructure shown in Fig. 2.
Step 102: selecting a predetermined number of cache nodes from the plurality of cache nodes, and generating an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes.
In the indication matrix, occupancy is marked by preset identifiers, which may specifically be numerals, letters or symbols. Preferably, the embodiment of the present invention uses the numerals 0 and 1, where 1 indicates that the cache node is selected, i.e. the plurality of network slices occupy the cache resources of this cache node, and 0 indicates that the cache node is not selected, i.e. the plurality of network slices do not occupy the cache resources of this cache node.
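One plausible reading of the 0/1 indication matrix described above can be sketched as follows. This is an illustration under assumptions: the helper `make_indication_matrix` is hypothetical, and it treats selection as per node (a selected node's whole row is 1), which the patent does not fix precisely:

```python
import random

def make_indication_matrix(n_nodes, n_slices, n_selected, seed=0):
    """Build an N x M 0/1 indication matrix: rows of the selected cache
    nodes are all 1 (their resources may be occupied by the slices),
    rows of unselected nodes are all 0."""
    rng = random.Random(seed)
    selected = set(rng.sample(range(n_nodes), n_selected))
    return [[1 if i in selected else 0 for _ in range(n_slices)]
            for i in range(n_nodes)]

# 8 cache nodes, 4 network slices, 5 nodes selected at random
D = make_indication_matrix(n_nodes=8, n_slices=4, n_selected=5)
```

Any other identifier pair (letters, symbols) would work the same way, as the description notes.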
Step 103: according to the indication matrix, randomly carrying out cache resource allocation on the selected cache nodes, to obtain a set of current cache resource allocation matrices.
According to the indication matrix, cache resource allocation is carried out on the cache nodes whose indicator is 1, obtaining a current cache resource allocation matrix of size N × M, where N denotes the number of cache nodes in the core network and M denotes the number of network slices into which the cache nodes in the core network are sliced. The process of allocating the cache nodes' resources is repeated, to obtain a set composed of multiple current cache resource allocation matrices.
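The random allocation and set construction in Step 103 might be sketched as below. This is a hedged illustration: the function name, the even capacity across nodes, and the use of normalized random weights are assumptions, not the patent's prescribed procedure:

```python
import random

def random_allocation(indication, capacity, seed=0):
    """Randomly split each selected node's capacity among the M slices.
    Rows whose indicator is 0 stay all-zero, so every row obeys the
    capacity conservation constraint described earlier."""
    rng = random.Random(seed)
    alloc = []
    for row in indication:
        if row[0] == 0:                  # node not selected in the indication matrix
            alloc.append([0.0] * len(row))
            continue
        weights = [rng.random() for _ in row]
        total = sum(weights)
        alloc.append([capacity * w / total for w in weights])
    return alloc

# The set of current allocation matrices: repeat the random allocation.
indication = [[1, 1, 1], [0, 0, 0], [1, 1, 1]]
pool = [random_allocation(indication, capacity=100.0, seed=s) for s in range(5)]
```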
Step 104: randomly selecting a current cache resource allocation matrix from the set, and optimizing the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix.
The optimization of the current cache resource allocation matrix can be any method capable of realizing optimization; considering the characteristics of CRO (Chemical Reaction Optimization), the embodiment of the present invention optimizes the current cache resource allocation matrix according to the CRO algorithm.
CRO is a heuristic algorithm whose main objects of study are the molecular structure, the molecular potential energy and the molecular kinetic energy, among others. The molecular structure represents a solution of the optimization problem; the molecular potential energy represents the objective function value of the optimization problem; the molecular kinetic energy represents the molecule's tolerance for accepting a worse solution, i.e. the ability to escape a local optimum. CRO includes four types of molecular reactions: the on-wall collision reaction, the decomposition reaction, the inter-molecular collision reaction and the synthesis reaction. These four reactions affect the molecular structure to different degrees, and thus affect the solution of the optimization problem to different degrees. According to the molecular reactions of CRO, the selection among the four elementary reactions obeys a uniform distribution; cache resources are reallocated to the current cache resource allocation matrix, and depending on which elementary reaction is selected, the current cache resource allocation matrix is optimized by a different method, to obtain the optimized current cache resource allocation matrix.
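The uniform selection among the four elementary reactions described above can be sketched in a few lines (an illustration only; the reaction names are the standard CRO ones used in this description):

```python
import random

def pick_reaction(rng):
    """The choice among CRO's four elementary reactions obeys a
    uniform distribution, as stated in the description above."""
    return rng.choice(["on_wall_collision", "decomposition",
                       "inter_molecular_collision", "synthesis"])

rng = random.Random(42)
counts = {}
for _ in range(4000):
    r = pick_reaction(rng)
    counts[r] = counts.get(r, 0) + 1
# each of the four reactions is drawn roughly 1000 times
```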
Step 105: obtaining, according to the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, by a preset algorithm, the initial revenue of the network operator and the current revenue of the network operator respectively.
The network operator obtains revenue by providing cache resources. The revenue obtained by the network operator is determined according to the importance of the cache nodes and the allocated cache resources, where the importance of a cache node is determined according to the amount of cache resources the cache node allocates to the plurality of network slices.
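Following the preset algorithm set out in the preferred embodiment above (revenue = fee collected minus the two energy consumption costs), the revenue comparison could be sketched as below; all figures are hypothetical:

```python
def operator_revenue(fee, node_energy_cost, slice_energy_cost):
    """Revenue = fee collected minus the sum of the cache nodes' energy
    consumption cost and the network slices' response energy cost,
    per the preset algorithm described in the preferred embodiment."""
    return fee - (node_energy_cost + slice_energy_cost)

# Hypothetical figures: the optimized allocation lowers both energy costs.
initial_revenue = operator_revenue(fee=500.0, node_energy_cost=120.0,
                                   slice_energy_cost=80.0)
current_revenue = operator_revenue(fee=500.0, node_energy_cost=100.0,
                                   slice_energy_cost=70.0)
print(initial_revenue, current_revenue)  # 300.0 330.0
```

Here the optimized matrix would be accepted, since 330.0 > 300.0.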
Step 106: when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtaining the finally optimized cache resource allocation matrix, and carrying out cache resource allocation according to it.
A current cache resource allocation matrix in the set of current cache resource allocation matrices is selected and iteratively optimized until the iterative optimization termination condition is met; the finally optimized current cache resource allocation matrix is obtained, and cache resource allocation is carried out according to it. The iterative optimization termination condition is that the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix.
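The iterate-until-improved loop of Step 106 might look like the following sketch. The loop skeleton, the toy 1×1 "matrices" and the lambda operators are all illustrative assumptions, standing in for the real CRO operators and revenue algorithm:

```python
import random

def iterate_until_improved(pool, optimize, revenue, rng):
    """Sketch of Step 106: keep picking a matrix from the set and
    optimizing it until the optimized matrix's revenue exceeds the
    selected matrix's initial revenue (the termination condition)."""
    while True:
        chosen = rng.choice(pool)
        candidate = optimize(chosen, rng)
        pool.append(candidate)            # optimized matrix joins the set
        if revenue(candidate) > revenue(chosen):
            return candidate              # finally optimized matrix

# Toy demonstration with 1x1 "matrices"; revenue is just the single value.
rng = random.Random(0)
pool = [[[10.0]], [[12.0]]]
best = iterate_until_improved(
    pool,
    optimize=lambda m, r: [[m[0][0] + r.uniform(-1.0, 2.0)]],
    revenue=lambda m: m[0][0],
    rng=rng,
)
```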
In the cache resource allocation method of the embodiment of the present invention, a plurality of cache nodes in a core network are sliced into a plurality of network slices; cache resource allocation is first carried out on the cache nodes at random, to obtain a set of current cache resource allocation matrices; a current cache resource allocation matrix in the set is selected and optimized, until the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix; and cache resource allocation is carried out according to the finally optimized cache resource allocation matrix. By combining in-network caching technology with network slicing technology, the cache resources of the cache nodes are allocated to a plurality of network slices, and the cache resource allocation of the cache nodes is iteratively optimized, improving the cache resource allocation rate and the resource utilization rate.
Preferably, after randomly selecting a current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix to obtain the optimized current cache resource allocation matrix, the cache resource allocation method further includes:
adding the optimized current cache resource allocation matrix to the set.
In the cache resource allocation method of the embodiment of the present invention, the optimization of the current cache resource allocation matrix is an iterative process, so the optimized current cache resource allocation matrix is added to the set of current cache resource allocation matrices. In this way, the optimized current cache resource allocation matrix can serve as a current cache resource allocation matrix requiring further optimization, and the optimization can continue on the basis of at least one previous optimization, until the finally optimized current cache resource allocation matrix is obtained.
Preferably, randomly selecting a current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix includes:
randomly selecting at least one current cache resource allocation matrix from the set, and reallocating cache resource data to an arbitrary row of the selected at least one current cache resource allocation matrix.
The current cache resource allocation matrix is optimized according to the elementary reactions of CRO; different elementary reactions affect the solution of the optimization problem to different degrees, and the current cache resource allocation matrix can be optimized according to each reaction respectively. Because the on-wall collision reaction affects the solution of the optimization problem to a small degree, when the on-wall collision reaction is selected and the current cache resource allocation matrix is optimized according to it, one current cache resource allocation matrix is randomly selected from the set of current cache resource allocation matrices, and an arbitrary row of this current cache resource allocation matrix is selected for reallocation of cache resource data. In the current cache resource allocation matrix, the elements of any one row represent the allocation of the cache resources of one of the cache nodes; reallocating data to an arbitrary row of the selected current cache resource allocation matrix thus means selecting one of the cache nodes and reallocating its cache resources to the different network slices. Which network slices the cache resources of this cache node are specifically allocated to, and how much cache resource is allocated to a specific network slice, is decided at random in practical applications.
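The on-wall collision operator described above, i.e. randomly redistributing one node's row, might be sketched as follows. The function name and the normalized-weights redistribution are assumptions; the patent only requires that the row be reallocated at random:

```python
import random

def on_wall_collision(matrix, capacity, rng):
    """On-wall collision: pick one row (one cache node) of the allocation
    matrix and randomly redistribute that node's cache resources among
    the M network slices, keeping the total within the node capacity."""
    new_matrix = [row[:] for row in matrix]     # leave the input matrix intact
    i = rng.randrange(len(new_matrix))          # the arbitrary row / cache node
    m = len(new_matrix[i])
    weights = [rng.random() for _ in range(m)]
    total = sum(weights)
    new_matrix[i] = [capacity * w / total for w in weights]
    return new_matrix

rng = random.Random(7)
A = [[40.0, 30.0, 30.0], [20.0, 50.0, 30.0]]
B = on_wall_collision(A, capacity=100.0, rng=rng)
# exactly one row differs from A; every row still sums to the capacity
```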
Because the inter-molecular collision reaction affects the solution of the optimization problem to a small degree, when the inter-molecular collision reaction is selected and the current cache resource allocation matrices are optimized according to it, two current cache resource allocation matrices are randomly selected from the set of current cache resource allocation matrices, and data is reallocated to an arbitrary row of each of the two selected current cache resource allocation matrices. The specific allocation process is similar to the above process of reallocating cache resource data to a current cache resource allocation matrix according to the on-wall collision reaction, and is not repeated here.
Preferably, randomly selecting a current cache resource allocation matrix from the set and optimizing the selected current cache resource allocation matrix includes:
first, randomly selecting one current cache resource allocation matrix from the set;
second, randomly selecting a first matrix element of the one current cache resource allocation matrix, assigning it to a first original idle matrix, and randomly generating cache resource data at the other matrix positions of the first original idle matrix except the matrix position occupied by the first matrix element;
third, randomly selecting a second matrix element of the one current cache resource allocation matrix other than the first matrix element, assigning it to a second original idle matrix, and randomly generating cache resource data at the other matrix positions of the second original idle matrix except the matrix position occupied by the second matrix element, wherein the second original idle matrix, the first original idle matrix and the one current cache resource allocation matrix have the same matrix size.
Because a decomposition reaction has a larger influence on the solution of the optimization problem, when a decomposition reaction is selected and a current cache resource allocation matrix is optimized by reassigning cache resource data according to the decomposition reaction, one current cache resource allocation matrix is randomly selected from the set of current cache resource allocation matrices, and the elements of the chosen matrix are randomly distributed to two original empty matrices. To which of the two original empty matrices an element of the current cache resource allocation matrix is distributed, and to which matrix position in that original empty matrix, is decided at random according to actual demand.

The elements of a current cache resource allocation matrix of size N×M are distributed to two original empty matrices of size N×M, where N is the number of all cache nodes of the core network and M is the number of network slices into which the cache nodes of the core network are sliced. Because the two original empty matrices have the same size as the current cache resource allocation matrix, each of them has matrix positions that obtain no cache resource data from the current cache resource allocation matrix; the matrix elements at these positions are randomly generated in practical applications.
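A minimal sketch of the decomposition reaction, under the assumption (introduced here, not stated in the patent) that the randomly generated filler values are drawn uniformly from `[0, max_unit)`:

```python
import numpy as np

rng = np.random.default_rng(1)

def decompose(alloc, max_unit):
    # Decomposition: one N x M allocation matrix is split into two new
    # matrices of the same size.  Each parent element is handed to
    # exactly one of the two children at random; child positions that
    # received nothing from the parent are filled with freshly
    # generated random cache-resource data.
    mask = rng.integers(0, 2, size=alloc.shape).astype(bool)
    fill1 = rng.uniform(0.0, max_unit, size=alloc.shape)
    fill2 = rng.uniform(0.0, max_unit, size=alloc.shape)
    child1 = np.where(mask, alloc, fill1)
    child2 = np.where(~mask, alloc, fill2)
    return child1, child2
```

Since roughly half of each child's positions are newly generated, decomposition moves much further from the parent than a collision does, matching the "larger influence" noted above.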
Preferably, randomly selecting a current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix includes:

randomly selecting two current cache resource allocation matrices in the set, randomly selecting cache resource data from each of the two current cache resource allocation matrices, and producing a new matrix according to the selected cache resource data, where the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
Different elementary reactions affect the solution of the optimization problem to different degrees. Because a synthesis reaction has a large influence on the solution of the optimization problem, when a synthesis reaction is selected and the current cache resource allocation matrices are optimized by reallocation according to the synthesis reaction, two current cache resource allocation matrices are randomly selected from the set of current cache resource allocation matrices, and cache resource data is selected from the two matrices to form a new matrix. The new matrix has the same size as the current cache resource allocation matrices, namely N×M, where N is the number of all cache nodes of the core network and M is the number of network slices into which the cache nodes of the core network are sliced. How much data, and which data, is selected from each of the two current cache resource allocation matrices is chosen at random according to actual demand in practical applications.
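The synthesis reaction can likewise be sketched as an element-wise random merge; taking each element from one of the two parents is one possible reading of "selecting cache resource data from the two matrices", assumed here for concreteness:

```python
import numpy as np

rng = np.random.default_rng(2)

def synthesize(alloc1, alloc2):
    # Synthesis: merge two N x M allocation matrices into one new
    # matrix of the same size, taking each element at random from
    # either parent matrix.
    take_first = rng.integers(0, 2, size=alloc1.shape).astype(bool)
    return np.where(take_first, alloc1, alloc2)
```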
Preferably, obtaining the initial revenue of the network operator and the current revenue of the network operator correspondingly, by a preset algorithm, according to a first cache resource allocation amount in the chosen current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, includes:

First step: according to the first cache resource allocation amount, obtaining a first initial energy consumption cost of the cache nodes allocating cache resources, a second initial energy consumption cost of the network slices responding to requests, and an initial fee charged.

Second step: taking the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue.

Third step: according to the second cache resource allocation amount, obtaining a first current energy consumption cost of the cache nodes allocating cache resources, a second current energy consumption cost of the network slices responding to requests, and a current fee charged.

Fourth step: taking the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
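The four steps above amount to a single profit formula, revenue = fee − (allocation energy cost + response energy cost). A sketch follows, in which the linear fee/cost rates (`fee_rate`, `e_cache`, `e_response`) and the per-node `price_weight` vector are illustrative assumptions consistent with the linearity described below:

```python
import numpy as np

def operator_revenue(alloc, price_weight, fee_rate, e_cache, e_response):
    # Fee charged to users: linear in the cache resources the slices
    # occupy, weighted by the per-node price weight.
    fee = fee_rate * float(price_weight @ alloc.sum(axis=1))
    # Energy cost of the cache nodes allocating cache resources, and
    # of the network slices responding to requests -- both taken
    # linear in the allocated amount.
    cost_allocate = e_cache * alloc.sum()
    cost_respond = e_response * alloc.sum()
    return fee - (cost_allocate + cost_respond)
```

Evaluating this function on the chosen matrix gives the initial revenue, and on the optimized matrix gives the current revenue.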
The network operator provides services in the expectation of obtaining revenue, which includes the initial revenue and the current revenue. The network operator charges a fee when providing users with services that use cache resources; the fee includes the initial fee and the current fee. In practical applications the cache nodes also incur costs when allocating cache resources to the network slices; the embodiment of the present invention considers only the energy consumption cost produced by the cache nodes in the course of allocating cache resources, which includes the energy consumption cost of the cache nodes allocating cache resources and the energy consumption cost of the network slices responding to requests. The energy consumption cost of the cache nodes allocating cache resources includes the first initial energy consumption cost and the first current energy consumption cost of the cache nodes allocating cache resources; the energy consumption cost of the network slices responding to requests includes the second initial energy consumption cost and the second current energy consumption cost of the network slices responding to requests. Other energy consumption costs certainly exist in practical applications, but the present invention does not consider them. The revenue of the network operator is therefore the difference between the fee charged when the network operator provides users with network slices that use cache resources and the energy consumption cost produced in the course of the cache nodes allocating cache resources.
The revenue of the network operator is linear in the amount of cache resources occupied by the network slices: the more cache resources a network slice occupies, the higher the fee the network operator charges when providing the user with a network slice that uses cache resources. Meanwhile, this fee is also related to the importance of the cache nodes: factors such as the position and the market of a cache node make the importance of cache nodes differ in application, and the importance of a cache node is represented by a set price weight of the cache node.
For the energy consumption cost produced by the cache nodes in the course of allocating cache resources, mainly the energy consumption cost of the cache nodes allocating cache resources and the energy consumption cost of the network slices responding to requests are considered. The energy consumption cost of a cache node allocating cache resources, that is, the energy consumption cost brought by content caching or cache replacement and updating, is directly related to the cache resource allocation amount that the cache node allocates to the network slices: the more cache resources the cache node allocates to the network slices, that is, the more of the cache node's cache resources the network slices occupy, the higher the energy consumption cost of the cache node allocating cache resources. Meanwhile, in practical applications, the higher the frequency of content caching or cache replacement and updating per unit time, the higher this energy consumption cost; but compared with the influence of the cache resource allocation amount on the energy consumption cost brought by content caching or cache replacement and updating, the influence of this frequency on the energy consumption cost is smaller, so the present invention does not consider the impact brought by this factor.
The energy consumption cost of the network slices responding to requests, that is, the energy consumption cost of responding to content requests, is likewise directly related to the cache resource allocation amount that the cache nodes allocate to the network slices: the more cache resources a cache node allocates to a network slice, that is, the more of the cache node's cache resources the network slice occupies, the higher the energy consumption cost of the network slice responding to requests. Meanwhile, in practical applications, the higher the frequency with which a network slice responds to requests, the higher this energy consumption cost; but, similarly, compared with the influence of the cache resource allocation amount on this energy consumption cost, the influence of the response frequency is smaller, so the embodiment of the present invention does not consider the impact that the response frequency brings to this energy consumption cost.
From the chosen current cache resource allocation matrix, the cache resource allocation amounts that the cache nodes allocate to the network slices are obtained. From these allocation amounts are obtained the initial fee charged when the network operator provides users with network slices that use cache resources, the first initial energy consumption cost of the cache nodes allocating cache resources, and the second initial energy consumption cost of the network slices responding to requests. The difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost is taken as the initial revenue of the network operator.

According to the cache resource allocation amounts in the optimized current cache resource allocation matrix, the current revenue of the network operator is obtained; the computation of the current revenue is identical to that of the initial revenue and is not repeated here.
Preferably, before obtaining the finally optimized cache resource allocation matrix when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, the method of cache resource allocation further includes:

judging whether the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix; if not, continuing to randomly select a current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix; if so, obtaining the finally optimized cache resource allocation matrix.

The allocation of cache resources is optimized iteratively. During the iterative optimization it is judged whether the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, that is, whether the termination condition of the iterative optimization is met. If it is not met, the optimization continues; if it is met, the iterative optimization ends and the finally optimized cache resource allocation matrix is obtained.
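The iterative loop above can be sketched as follows. The `optimize` callback stands for any of the reactions described earlier, `revenue` for the preset revenue algorithm, and the `max_iter` safeguard is an assumption added here so the sketch always terminates:

```python
import numpy as np

rng = np.random.default_rng(3)

def iterate_allocation(matrices, revenue, optimize, max_iter=1000):
    # Repeatedly pick a matrix from the set and optimize it; the
    # iteration terminates as soon as the optimized matrix yields a
    # higher operator revenue than the chosen one.
    chosen = matrices[0]
    for _ in range(max_iter):
        chosen = matrices[rng.integers(len(matrices))]
        candidate = optimize(chosen)
        if revenue(candidate) > revenue(chosen):  # termination condition
            return candidate                      # finally optimized matrix
        matrices.append(candidate)                # keep it for later rounds
    return chosen
```

Appending every candidate back into the set mirrors the "adding the optimized matrix to the set" step of the method.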
The embodiment of the present invention optimizes the current cache resource allocation matrices according to the different elementary reactions of CRO and judges whether the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix. If not, the optimization continues; if so, the finally optimized cache resource allocation matrix is obtained and cache resources are allocated according to the finally optimized cache resource allocation matrix. Iteratively optimizing the cache resource allocation of the cache nodes improves the cache resource allocation rate and the resource utilization rate.
The embodiment of the present invention also discloses an apparatus for cache resource allocation, described in detail with reference to Fig. 3, including:

a slicing module 301, configured to slice, by virtualization technology, the plurality of cache nodes in the core network into a plurality of network slices;

an indication matrix generation module 302, configured to select a predetermined number of cache nodes among the plurality of cache nodes and generate an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes;

a set generation module 303, configured to randomly allocate cache resources to the selected cache nodes according to the indication matrix, obtaining a set of current cache resource allocation matrices;

an optimization module 304, configured to randomly select a current cache resource allocation matrix in the set and optimize the chosen current cache resource allocation matrix, obtaining an optimized current cache resource allocation matrix;

a revenue calculation module 305, configured to obtain a first cache resource allocation amount in the chosen current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, and, by a preset algorithm, correspondingly obtain the initial revenue of the network operator and the current revenue of the network operator; and

a cache resource allocation module 306, configured to, when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, obtain the finally optimized cache resource allocation matrix and allocate cache resources according to the finally optimized cache resource allocation matrix.
Through the slicing module, the indication matrix generation module, the set generation module, the optimization module, the revenue calculation module and the cache resource allocation module, the apparatus for cache resource allocation of the embodiment of the present invention randomly allocates cache resources to the cache nodes to obtain a set of current cache resource allocation matrices, and optimizes current cache resource allocation matrices chosen from the set until the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, whereupon the finally optimized cache resource allocation matrix is obtained and cache resources are allocated according to it. Combining in-network caching technology with network slicing technology, and iteratively optimizing the cache resource allocation of the cache nodes, improves the cache resource allocation rate and the resource utilization rate.
Preferably, the apparatus for cache resource allocation of the embodiment of the present invention further includes:

an adding module, configured to add the optimized current cache resource allocation matrix to the set.
Preferably, in the apparatus for cache resource allocation of the embodiment of the present invention, the optimization module is further configured to randomly select at least one current cache resource allocation matrix in the set and reassign cache resource data to any one row of the chosen at least one current cache resource allocation matrix.
Preferably, in the apparatus for cache resource allocation of the embodiment of the present invention, the optimization module includes:

a choosing submodule, configured to randomly select one current cache resource allocation matrix in the set;

a data assignment submodule, configured to randomly select a first matrix element of the one current cache resource allocation matrix, assign it to a first original empty matrix, and randomly generate cache resource data for the other matrix positions of the first original empty matrix other than the matrix position occupied by the first matrix element; and

a matrix generation submodule, configured to randomly select a second matrix element of the one current cache resource allocation matrix other than the first matrix element, assign it to a second original empty matrix, and randomly generate cache resource data for the other matrix positions of the second original empty matrix other than the matrix position occupied by the second matrix element, where the second original empty matrix, the first original empty matrix and the one current cache resource allocation matrix have the same matrix size.
Preferably, in the apparatus for cache resource allocation of the embodiment of the present invention, the optimization module is further configured to randomly select two current cache resource allocation matrices in the set, randomly select cache resource data from each of the two current cache resource allocation matrices, and produce a new matrix according to the cache resource data, where the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
Preferably, in the apparatus for cache resource allocation of the embodiment of the present invention, the revenue calculation module includes:

a first fee and cost calculation submodule, configured to obtain, according to the first cache resource allocation amount, the first initial energy consumption cost of the cache nodes allocating cache resources, the second initial energy consumption cost of the network slices responding to requests, and the initial fee charged;

an initial revenue calculation submodule, configured to take the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;

a second fee and cost calculation submodule, configured to obtain, according to the second cache resource allocation amount, the first current energy consumption cost of the cache nodes allocating cache resources, the second current energy consumption cost of the network slices responding to requests, and the current fee charged; and

a current revenue calculation submodule, configured to take the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
The apparatus for cache resource allocation of the embodiment of the present invention further includes:

a judgment module, configured to judge whether the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix;

a continued optimization module, configured to, when the result of the judgment module is no, continue to randomly select a current cache resource allocation matrix in the set and optimize the chosen current cache resource allocation matrix; and

a target module, configured to, when the result of the judgment module is yes, obtain the finally optimized cache resource allocation matrix.
It should be noted that the apparatus of the embodiment of the present invention is an apparatus applying the above method of cache resource allocation; all embodiments of the above method of cache resource allocation are applicable to this apparatus, and all achieve the same or similar beneficial effects.

It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.

The embodiments in this specification are described in a related manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment; for relevant parts, refer to the description of the method embodiment.

The foregoing is merely the preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A method of cache resource allocation, comprising:
slicing, by virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices;
selecting a predetermined number of cache nodes among the plurality of cache nodes, and generating an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes;
randomly allocating cache resources to the selected cache nodes according to the indication matrix, obtaining a set of current cache resource allocation matrices;
randomly selecting a current cache resource allocation matrix in the set, and optimizing the chosen current cache resource allocation matrix to obtain an optimized current cache resource allocation matrix;
obtaining a first cache resource allocation amount in the chosen current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, and, by a preset algorithm, correspondingly obtaining an initial revenue of a network operator and a current revenue of the network operator; and
when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, obtaining a finally optimized cache resource allocation matrix and allocating cache resources according to the finally optimized cache resource allocation matrix.
2. The method of cache resource allocation according to claim 1, wherein after randomly selecting the current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix to obtain the optimized current cache resource allocation matrix, the method of cache resource allocation further comprises:
adding the optimized current cache resource allocation matrix to the set.
3. The method of cache resource allocation according to claim 1, wherein randomly selecting the current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix comprises:
randomly selecting at least one current cache resource allocation matrix in the set, and reassigning cache resource data to any one row of the chosen at least one current cache resource allocation matrix.
4. The method of cache resource allocation according to claim 1, wherein randomly selecting the current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix comprises:
randomly selecting one current cache resource allocation matrix in the set;
randomly selecting a first matrix element in the one current cache resource allocation matrix, assigning it to a first original empty matrix, and randomly generating cache resource data for the other matrix positions of the first original empty matrix other than the matrix position occupied by the first matrix element; and
randomly selecting a second matrix element of the one current cache resource allocation matrix other than the first matrix element, assigning it to a second original empty matrix, and randomly generating cache resource data for the other matrix positions of the second original empty matrix other than the matrix position occupied by the second matrix element, wherein the second original empty matrix, the first original empty matrix and the one current cache resource allocation matrix have the same matrix size.
5. The method of cache resource allocation according to claim 1, wherein randomly selecting the current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix comprises:
randomly selecting two current cache resource allocation matrices in the set, randomly selecting cache resource data from each of the two current cache resource allocation matrices, and producing a new matrix according to the cache resource data, wherein the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
6. The method of cache resource allocation according to claim 1, wherein obtaining the initial revenue of the network operator and the current revenue of the network operator correspondingly, by the preset algorithm, according to the first cache resource allocation amount in the chosen current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, comprises:
according to the first cache resource allocation amount, obtaining a first initial energy consumption cost of the cache nodes allocating cache resources, a second initial energy consumption cost of the network slices responding to requests, and an initial fee charged;
taking the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;
according to the second cache resource allocation amount, obtaining a first current energy consumption cost of the cache nodes allocating cache resources, a second current energy consumption cost of the network slices responding to requests, and a current fee charged; and
taking the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
7. The method of cache resource allocation according to claim 1, wherein before obtaining the finally optimized cache resource allocation matrix when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, the method of cache resource allocation further comprises:
judging whether the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix;
if not, continuing to randomly select a current cache resource allocation matrix in the set and optimizing the chosen current cache resource allocation matrix; and
if so, obtaining the finally optimized cache resource allocation matrix.
8. An apparatus for cache resource allocation, comprising:
a slicing module, configured to slice, by virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices;
an indication matrix generation module, configured to select a predetermined number of cache nodes among the plurality of cache nodes and generate an indication matrix indicating whether the plurality of network slices occupy the cache resources of the plurality of cache nodes;
a set generation module, configured to randomly allocate cache resources to the selected cache nodes according to the indication matrix, obtaining a set of current cache resource allocation matrices;
an optimization module, configured to randomly select a current cache resource allocation matrix in the set and optimize the chosen current cache resource allocation matrix, obtaining an optimized current cache resource allocation matrix;
a revenue calculation module, configured to obtain a first cache resource allocation amount in the chosen current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, and, by a preset algorithm, correspondingly obtain an initial revenue of a network operator and a current revenue of the network operator; and
a cache resource allocation module, configured to, when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the chosen current cache resource allocation matrix, obtain a finally optimized cache resource allocation matrix and allocate cache resources according to the finally optimized cache resource allocation matrix.
9. The apparatus for cache resource allocation according to claim 8, further comprising:
an adding module, configured to add the optimized current cache resource allocation matrix to the set.
10. The apparatus for cache resource allocation according to claim 8, wherein the optimization module is further configured to randomly select at least one current cache resource allocation matrix in the set and reassign cache resource data to any one row of the chosen at least one current cache resource allocation matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610832372.XA CN106412040B (en) | 2016-09-19 | 2016-09-19 | A kind of method and device of cache resource allocation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106412040A true CN106412040A (en) | 2017-02-15 |
CN106412040B CN106412040B (en) | 2019-09-06 |
Family
ID=57997743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610832372.XA Active CN106412040B (en) | 2016-09-19 | 2016-09-19 | A kind of method and device of cache resource allocation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106412040B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106922002A (en) * | 2017-04-26 | 2017-07-04 | 重庆邮电大学 | Network slice virtual resource allocation method based on internal auction mechanism |
CN106954267A (en) * | 2017-04-14 | 2017-07-14 | 北京邮电大学 | Resource management method based on wireless network slicing |
CN107820321A (en) * | 2017-10-31 | 2018-03-20 | 北京邮电大学 | Large-scale user intelligent access method in narrowband Internet of Things based on cellular network |
CN108093482A (en) * | 2017-12-11 | 2018-05-29 | 北京科技大学 | Optimization method for wireless information-centric network resource allocation |
CN108111931A (en) * | 2017-12-15 | 2018-06-01 | 国网辽宁省电力有限公司 | Virtual resource slice management method and device for power optical fiber access network |
WO2018177310A1 (en) * | 2017-03-28 | 2018-10-04 | Huawei Technologies Co., Ltd. | Architecture for integrating service, network and domain management subsystems |
CN109391913A (en) * | 2017-08-08 | 2019-02-26 | 北京亿阳信通科技有限公司 | NB-IoT network resource slice management method and system |
CN109600798A (en) * | 2018-11-15 | 2019-04-09 | 北京邮电大学 | Multi-domain resource allocation method and device in network slices |
CN109951849A (en) * | 2019-02-25 | 2019-06-28 | 重庆邮电大学 | Method for joint resource allocation and content caching in an F-RAN architecture |
CN110138575A (en) * | 2018-02-02 | 2019-08-16 | 中兴通讯股份有限公司 | Network slice creation method, system, network device and storage medium |
CN110167045A (en) * | 2019-04-17 | 2019-08-23 | 北京科技大学 | Heterogeneous network efficiency optimization method |
CN110944335A (en) * | 2019-12-12 | 2020-03-31 | 北京邮电大学 | Resource allocation method and device for virtual reality service |
CN112835716A (en) * | 2021-02-02 | 2021-05-25 | 深圳震有科技股份有限公司 | CPU cache allocation method and terminal for a 5G communication virtualization network element |
CN113364686A (en) * | 2017-06-30 | 2021-09-07 | 华为技术有限公司 | Method for generating forwarding table entries, controller and network device |
CN114630441A (en) * | 2022-05-16 | 2022-06-14 | 网络通信与安全紫金山实验室 | Resource scheduling method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130166724A1 (en) * | 2011-12-22 | 2013-06-27 | Lakshmi Narayanan Bairavasundaram | Dynamic Instantiation and Management of Virtual Caching Appliances |
JP2014186590A (en) * | 2013-03-25 | 2014-10-02 | Nec Corp | Resource allocation system and resource allocation method |
CN104822150A (en) * | 2015-05-13 | 2015-08-05 | 北京工业大学 | Spectrum management method for information proactive caching in center multi-hop cognitive cellular network |
CN105471954A (en) * | 2014-09-11 | 2016-04-06 | 北京智梵网络科技有限公司 | SDN based distributed control system and user flow optimization method |
CN105812217A (en) * | 2014-12-29 | 2016-07-27 | 中国移动通信集团公司 | Virtual network division method and multi-controller agent device |
- 2016-09-19: Application CN201610832372.XA (CN), granted as CN106412040B (en), status Active
Non-Patent Citations (2)
Title |
---|
TRICCI SO, 袁知贵, et al.: "Research on network slicing technology supporting multiple services", 《邮电设计技术》 * |
许阳, et al.: "Analysis of 5G mobile network slicing technology", 《邮电设计技术》 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10841184B2 (en) | 2017-03-28 | 2020-11-17 | Huawei Technologies Co., Ltd. | Architecture for integrating service, network and domain management subsystems |
WO2018177310A1 (en) * | 2017-03-28 | 2018-10-04 | Huawei Technologies Co., Ltd. | Architecture for integrating service, network and domain management subsystems |
CN106954267B (en) * | 2017-04-14 | 2019-11-22 | 北京邮电大学 | Resource management method based on wireless network slicing |
CN106954267A (en) * | 2017-04-14 | 2017-07-14 | 北京邮电大学 | Resource management method based on wireless network slicing |
CN106922002A (en) * | 2017-04-26 | 2017-07-04 | 重庆邮电大学 | Network slice virtual resource allocation method based on internal auction mechanism |
CN106922002B (en) * | 2017-04-26 | 2020-02-07 | 重庆邮电大学 | Network slice virtual resource allocation method based on internal auction mechanism |
CN113364686A (en) * | 2017-06-30 | 2021-09-07 | 华为技术有限公司 | Method for generating forwarding table entries, controller and network device |
US11665595B2 (en) | 2017-06-30 | 2023-05-30 | Huawei Technologies Co., Ltd. | Forwarding entry generation method, controller, and network device |
CN113364686B (en) * | 2017-06-30 | 2022-10-04 | 华为技术有限公司 | Method for generating forwarding table entries, controller and network device |
CN109391913B (en) * | 2017-08-08 | 2021-03-09 | 北京亿阳信通科技有限公司 | NB-IoT network resource slice management method and system |
CN109391913A (en) * | 2017-08-08 | 2019-02-26 | 北京亿阳信通科技有限公司 | NB-IoT network resource slice management method and system |
CN107820321A (en) * | 2017-10-31 | 2018-03-20 | 北京邮电大学 | Large-scale user intelligent access method in narrowband Internet of Things based on cellular network |
CN107820321B (en) * | 2017-10-31 | 2020-01-10 | 北京邮电大学 | Large-scale user intelligent access method in narrowband Internet of Things based on cellular network |
CN108093482A (en) * | 2017-12-11 | 2018-05-29 | 北京科技大学 | Optimization method for wireless information-centric network resource allocation |
CN108093482B (en) * | 2017-12-11 | 2020-01-21 | 北京科技大学 | Optimization method for wireless information-centric network resource allocation |
CN108111931B (en) * | 2017-12-15 | 2021-07-16 | 国网辽宁省电力有限公司 | Virtual resource slice management method and device for power optical fiber access network |
CN108111931A (en) * | 2017-12-15 | 2018-06-01 | 国网辽宁省电力有限公司 | Virtual resource slice management method and device for power optical fiber access network |
CN110138575A (en) * | 2018-02-02 | 2019-08-16 | 中兴通讯股份有限公司 | Network slice creation method, system, network device and storage medium |
CN110138575B (en) * | 2018-02-02 | 2021-10-08 | 中兴通讯股份有限公司 | Network slice creation method, system, network device and storage medium |
CN109600798A (en) * | 2018-11-15 | 2019-04-09 | 北京邮电大学 | Multi-domain resource allocation method and device in network slices |
CN109600798B (en) * | 2018-11-15 | 2020-08-28 | 北京邮电大学 | Multi-domain resource allocation method and device in network slices |
CN109951849B (en) * | 2019-02-25 | 2023-02-17 | 重庆邮电大学 | Method for joint resource allocation and content caching in an F-RAN architecture |
CN109951849A (en) * | 2019-02-25 | 2019-06-28 | 重庆邮电大学 | Method for joint resource allocation and content caching in an F-RAN architecture |
CN110167045A (en) * | 2019-04-17 | 2019-08-23 | 北京科技大学 | Heterogeneous network efficiency optimization method |
CN110944335B (en) * | 2019-12-12 | 2022-04-12 | 北京邮电大学 | Resource allocation method and device for virtual reality service |
CN110944335A (en) * | 2019-12-12 | 2020-03-31 | 北京邮电大学 | Resource allocation method and device for virtual reality service |
CN112835716A (en) * | 2021-02-02 | 2021-05-25 | 深圳震有科技股份有限公司 | CPU cache allocation method and terminal for a 5G communication virtualization network element |
CN112835716B (en) * | 2021-02-02 | 2023-12-01 | 深圳震有科技股份有限公司 | CPU cache allocation method and terminal for a 5G communication virtualization network element |
CN114630441A (en) * | 2022-05-16 | 2022-06-14 | 网络通信与安全紫金山实验室 | Resource scheduling method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106412040B (en) | 2019-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106412040B (en) | Cache resource allocation method and apparatus | |
CN101500022B (en) | Data access resource allocation method, system and equipment therefor | |
CN106230953B (en) | D2D communication method and device based on distributed storage | |
CN107466016B (en) | Cell cache device allocation method based on user mobility | |
CN104022911A (en) | Content routing management method for a converged content distribution network | |
CN103001870A (en) | Collaborative caching method and system for content-centric networks | |
CN102546435B (en) | Spectrum resource allocation method and device | |
CN101719148B (en) | Three-dimensional spatial information storage method, device, system and query system | |
CN110336885A (en) | Edge node allocation method, device, scheduling server and storage medium | |
CN107040931A (en) | Joint wireless and caching resource allocation method for fog radio access networks | |
CN112492687B (en) | Adaptive resource allocation method and system based on wireless network slicing | |
CN104679594A (en) | Middleware distributed computing method | |
CN109343945A (en) | Multitask dynamic allocation method based on the contract net algorithm | |
CN106254561A (en) | Real-time offline download method and system for network resource files | |
CN108055701A (en) | Resource scheduling method and base station | |
CN107105043A (en) | Content-centric network caching method based on software-defined networking | |
CN104426953A (en) | Method and apparatus for allocating computing resources | |
CN101507336A (en) | Method and arrangement for access selection in a multi-access communications system | |
CN105227396B (en) | Content recommendation and distribution system and method for mobile communication networks | |
CN101651600B (en) | Method and device for updating link cost in a grid network | |
CN112887943B (en) | Cache resource allocation method and system based on centrality | |
CN106686112A (en) | Cloud file transmission system and method | |
CN109600780A (en) | Document replica caching method based on base station clustering | |
CN107733998A (en) | Cache content placement and provisioning method for heterogeneous hierarchical cellular networks | |
CN103269519B (en) | Processing resource allocation method and system in a centralized base station architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2022-06-22
Address after: No. 466 Changhe Road, Binjiang District, Hangzhou, Zhejiang Province, 310052
Patentee after: NEW H3C TECHNOLOGIES Co.,Ltd.
Address before: No. 10 Xitucheng Road, Haidian District, Beijing, 100876
Patentee before: Beijing University of Posts and Telecommunications |