CN106412040B - Method and device for cache resource allocation - Google Patents

Method and device for cache resource allocation

Info

Publication number
CN106412040B
CN106412040B (application CN201610832372.XA)
Authority
CN
China
Prior art keywords
cache
matrix
current
cache resource
resource assignment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610832372.XA
Other languages
Chinese (zh)
Other versions
CN106412040A (en)
Inventor
谢人超
贾庆民
黄韬
刘韵洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Technologies Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201610832372.XA
Publication of CN106412040A
Application granted
Publication of CN106412040B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

Embodiments of the invention provide a method and device for cache resource allocation, applied in the technical field of resource utilization. In the method, multiple cache nodes in a core network are sliced into multiple network slices; cache resources are first allocated to the cache nodes at random, yielding a set of current cache resource allocation matrices; a current cache resource allocation matrix chosen from the set is optimized until the current revenue of the network operator corresponding to the optimized matrix is greater than the initial revenue of the network operator corresponding to the chosen matrix; the finally optimized cache resource allocation matrix is then obtained and cache resources are allocated according to it. By combining in-network caching with network slicing, the cache resources of the cache nodes are allocated to multiple network slices and the allocation is iteratively optimized, improving the cache resource allocation rate and resource utilization.

Description

Method and device for cache resource allocation
Technical field
The present invention relates to the technical field of resource utilization, and in particular to a method and device for cache resource allocation.
Background art
To cope with the rapid development of mobile network applications such as high-definition video, virtual reality, and online gaming, 5G (the 5th Generation of mobile communication) technology has emerged. ICN (Information-Centric Networking), a novel network architecture, has drawn increasing attention from academia and industry, and in-network caching is one of the key enabling technologies of both ICN and 5G. In-network caching deploys caches in the network to shorten the distance between users and content, reduce the response delay of user requests, and improve the users' QoE (Quality of Experience). Caches deployed in a 5G network generally fall into two categories according to their location: EPC (Evolved Packet Core) caching, also called core network caching, and RAN (Radio Access Network) caching, also called access network caching.
Core network caching has been studied extensively. One existing core network cache resource allocation method deploys CDN (Content Delivery Network) nodes on EPC resources, i.e., in an Overlay manner in the core network: a network element LGW (Local Gateway) is added to the standard EPC network and connected directly to the eNodeB (Evolved Node B); the MME (Mobility Management Entity) inspects user requests and diverts them to the LGW according to the result, thereby offloading data traffic from the EPC, and cache resources are allocated according to the users' peak demand. This method, however, yields a low cache resource allocation rate and low resource utilization.
In another existing cache resource allocation method, used in ICN, each routing node integrates cache memory. Content commonly requested by users is stored in a CS (Content Store), and cache resources are distributed by the LCE (Leave Copy Everywhere) and LCD (Leave Copy Down) strategies. NDN (Named Data Networking) uses the LCE strategy: when a user's request for a content is hit at some cache or reaches the content server, a copy of the content is cached at every node on the return path. Other schemes use the LCD strategy: whenever a content is hit, it is copied once to the next-hop node on the return path. This way of allocating cache resources at the routing nodes of an ICN also results in a low cache resource allocation rate and low resource utilization.
In short, the cache resource allocation methods of the prior art suffer from a low cache resource allocation rate and low resource utilization.
Summary of the invention
The purpose of embodiments of the present invention is to provide a method and device for cache resource allocation, so as to improve the cache resource allocation rate and resource utilization. The specific technical solution is as follows:
In one aspect, an embodiment of the invention provides a method of cache resource allocation, comprising:
slicing multiple cache nodes in a core network into multiple network slices through virtualization;
selecting a predetermined number of cache nodes from the multiple cache nodes, and generating an indicator matrix indicating whether the multiple network slices occupy the cache resources of the multiple cache nodes;
randomly allocating cache resources to the selected cache nodes according to the indicator matrix, to obtain a set of current cache resource allocation matrices;
randomly selecting a current cache resource allocation matrix in the set, and optimizing the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix;
obtaining, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm;
when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtaining the finally optimized cache resource allocation matrix and allocating cache resources according to it.
Preferably, after randomly selecting the current cache resource allocation matrix in the set, optimizing the selected current cache resource allocation matrix, and obtaining the optimized current cache resource allocation matrix, the cache resource allocation method further comprises:
adding the optimized current cache resource allocation matrix to the set.
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
randomly selecting at least one current cache resource allocation matrix in the set, and reallocating cache resource data to any row of the selected at least one current cache resource allocation matrix.
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
randomly selecting one current cache resource allocation matrix in the set;
randomly selecting a first matrix element of the current cache resource allocation matrix, assigning it to a first initially empty matrix, and randomly generating cache resource data at the other matrix positions of the first initially empty matrix, i.e., at the positions other than the one occupied by the first matrix element;
randomly selecting a second matrix element of the current cache resource allocation matrix other than the first matrix element, assigning it to a second initially empty matrix, and randomly generating cache resource data at the other matrix positions of the second initially empty matrix, i.e., at the positions other than the one occupied by the second matrix element, wherein the second initially empty matrix, the first initially empty matrix, and the current cache resource allocation matrix have the same matrix size.
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
randomly selecting two current cache resource allocation matrices in the set, randomly selecting cache resource data from each of the two current cache resource allocation matrices, and generating a new matrix from the selected cache resource data, wherein the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
Preferably, obtaining, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm, comprises:
obtaining, from the first cache resource allocation amount, the first initial energy consumption cost of the cache nodes allocating cache resources, the second initial energy consumption cost of the network slices responding to requests, and the initial fee charged;
taking the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;
obtaining, from the second cache resource allocation amount, the first current energy consumption cost of the cache nodes allocating cache resources, the second current energy consumption cost of the network slices responding to requests, and the current fee charged;
taking the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
Preferably, before obtaining the finally optimized cache resource allocation matrix when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, the cache resource allocation method further comprises:
judging whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix;
if not, continuing to randomly select a current cache resource allocation matrix in the set and optimize the selected current cache resource allocation matrix;
if so, obtaining the finally optimized cache resource allocation matrix.
In another aspect, an embodiment of the invention also discloses a device for cache resource allocation, comprising:
a slicing module, configured to slice multiple cache nodes in a core network into multiple network slices through virtualization;
an indicator matrix generation module, configured to select a predetermined number of cache nodes from the multiple cache nodes and generate an indicator matrix indicating whether the multiple network slices occupy the cache resources of the multiple cache nodes;
a set generation module, configured to randomly allocate cache resources to the selected cache nodes according to the indicator matrix, to obtain a set of current cache resource allocation matrices;
an optimization module, configured to randomly select a current cache resource allocation matrix in the set and optimize the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix;
a revenue calculation module, configured to obtain, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm;
a cache resource allocation module, configured to, when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtain the finally optimized cache resource allocation matrix and allocate cache resources according to it.
Preferably, the device for cache resource allocation further comprises:
an adding module, configured to add the optimized current cache resource allocation matrix to the set.
Preferably, the optimization module is further configured to randomly select at least one current cache resource allocation matrix in the set and reallocate cache resource data to any row of the selected at least one current cache resource allocation matrix.
In the cache resource allocation method and device provided by the embodiments of the invention, multiple cache nodes in the core network are sliced into multiple network slices; cache resources are first allocated to the cache nodes at random, yielding a set of current cache resource allocation matrices; a current cache resource allocation matrix chosen from the set is optimized according to the different elementary reactions of the chemical reaction optimization algorithm, until the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix; the finally optimized cache resource allocation matrix is then obtained and cache resources are allocated according to it. By combining in-network caching with network slicing, the cache resources of the cache nodes are allocated to the multiple network slices and the allocation is iteratively optimized, which improves the cache resource allocation rate and resource utilization.
Brief description of the drawings
In order to explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the cache resource allocation method of an embodiment of the invention;
Fig. 2 is a network diagram of an embodiment of the invention integrating in-network caching and network slicing;
Fig. 3 is a schematic diagram of the cache resource allocation device of an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
An embodiment of the invention discloses a method of cache resource allocation, described in detail with reference to Fig. 1, comprising:
Step 101: slicing multiple cache nodes in the core network into multiple network slices through virtualization.
It should be noted that the cache resource allocation method of the invention is based on a core network model that integrates network slicing and in-network caching, which may specifically be a 5G core network. In-network caching shortens the distance over which users obtain cached resources by deploying cache nodes in the network; here, in-network caching is applied to deploy distributed cache nodes in the core network. As shown in Fig. 2, eight cache nodes 202 are deployed in the core network 201; in practice the number of deployed cache nodes 202 can be determined according to actual needs, and each cache node has a cache resource of a certain capacity C, where C is determined by actual demand in practical applications.
Network slicing abstracts a single physical network architecture and slices it into individual virtual networks, providing users with end-to-end differentiated services on demand; virtualization transfers the software and hardware functions of dedicated equipment in the core network onto virtual hosts.
In the network infrastructure, the cache resources of each cache node can be dynamically allocated and split among the different network slices corresponding to different services; the network slice corresponding to each service can occupy cache resources of different cache nodes. While allocating cache resources to the network slices corresponding to the services, a cache node obeys a conservation law: each cache node has a cache resource of a certain capacity, and during actual cache resource allocation the cache resources that a cache node allocates to the network slices of the different services sum to no more than the capacity of that cache node.
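The conservation law just described can be stated compactly. The following is a minimal sketch (not taken from the patent; NumPy and the variable names are assumptions for illustration) of checking that no cache node hands out more than its capacity C across all network slices:

```python
import numpy as np

def satisfies_capacity(allocation: np.ndarray, capacity: np.ndarray) -> bool:
    """allocation[i, j]: cache resource of node i given to slice j (N x M);
    capacity[i]: total cache capacity C of node i."""
    return bool(np.all(allocation.sum(axis=1) <= capacity))

if __name__ == "__main__":
    capacity = np.array([10.0, 10.0, 8.0])           # three cache nodes
    allocation = np.array([[4.0, 3.0],                # node 0 -> slice 0, slice 1
                           [5.0, 5.0],
                           [2.0, 6.0]])
    print(satisfies_capacity(allocation, capacity))   # True: no node exceeds its capacity
```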
In a practical application, as shown in Fig. 2, the cache nodes 202 allocate cache resources and provide them over backhaul links 203 to different services, for example an autonomous driving service formed by automobiles 205 via base stations 204 and a smartphone service formed by mobile phones 206 and 207 via the base stations 204. In practice, besides the autonomous driving and smartphone services shown in Fig. 2, there are also high-definition video, virtual reality, online gaming, Internet-of-Things services and so on, bringing convenience to the end users 208.
In the embodiment of the invention, the multiple cache nodes deployed in the core network are virtualized and sliced into M network slices, where M is determined according to actual requirements in practical applications. The cache resources of the cache nodes are allocated to the M network slices, and thus, in the network infrastructure shown in Fig. 2, to the different network slices corresponding to the different services.
Step 102: selecting a predetermined number of cache nodes from the multiple cache nodes, and generating an indicator matrix indicating whether the multiple network slices occupy the cache resources of the multiple cache nodes.
The indication is made with preset identifiers in the indicator matrix. A preset identifier may be a number, a letter, or a symbol. Preferably, the embodiment of the invention uses the digits 0 and 1: a 1 indicates that the cache node is selected, i.e., the multiple network slices occupy the cache resources of that cache node; a 0 indicates that the cache node is not selected, i.e., the multiple network slices do not occupy the cache resources of that cache node.
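As an illustration of the indicator matrix described in step 102, the sketch below builds an N x M 0/1 matrix in which the rows of a randomly chosen, predetermined number of cache nodes are marked with 1. The function name, the use of NumPy, and the choice to mark whole rows are assumptions made for the example, not part of the patent text:

```python
import numpy as np

def make_indicator(n_nodes, n_slices, n_selected, seed=None):
    """Rows are cache nodes, columns are network slices; 1 marks a selected node whose
    cache resources the slices occupy, 0 marks an unselected node."""
    rng = np.random.default_rng(seed)
    indicator = np.zeros((n_nodes, n_slices), dtype=int)
    selected = rng.choice(n_nodes, size=n_selected, replace=False)  # the predetermined number of nodes
    indicator[selected, :] = 1
    return indicator

if __name__ == "__main__":
    print(make_indicator(n_nodes=8, n_slices=3, n_selected=4, seed=0))
```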
Step 103: randomly allocating cache resources to the selected cache nodes according to the indicator matrix, to obtain a set of current cache resource allocation matrices.
According to the indicator matrix, cache resources are allocated to the cache nodes whose indicator is 1, yielding a current cache resource allocation matrix. A current cache resource allocation matrix is a matrix of size N x M, where N is the number of cache nodes in the core network and M is the number of network slices into which the cache nodes of the core network are sliced. Repeating the process of allocating cache resources to the cache nodes yields a set consisting of multiple current cache resource allocation matrices.
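A hedged sketch of step 103 follows: random N x M allocation matrices are generated only at the positions marked in the indicator matrix, each row is scaled so that a node stays within its capacity, and the process is repeated to build the set. The names and the exact scaling rule are assumptions for illustration:

```python
import numpy as np

def random_allocation(indicator, capacity, rng):
    """One random N x M current cache resource allocation matrix respecting the indicator
    matrix and the per-node capacities."""
    n_nodes, n_slices = indicator.shape
    alloc = rng.random((n_nodes, n_slices)) * indicator     # only indicated positions get resources
    row_sum = alloc.sum(axis=1, keepdims=True)
    row_sum = np.where(row_sum > 0, row_sum, 1.0)           # avoid dividing by zero on empty rows
    # scale each row so a node hands out at most a random fraction of its capacity C
    return alloc / row_sum * capacity[:, None] * rng.random((n_nodes, 1))

def initial_population(indicator, capacity, pop_size, rng):
    """Repeat the random allocation to build the set of current allocation matrices."""
    return [random_allocation(indicator, capacity, rng) for _ in range(pop_size)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    indicator = np.array([[1, 1], [1, 0], [0, 1]])
    capacity = np.array([10.0, 8.0, 6.0])
    population = initial_population(indicator, capacity, pop_size=5, rng=rng)
    print(len(population))
    print(population[0].round(2))
```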
Step 104: randomly selecting a current cache resource allocation matrix in the set, and optimizing the selected current cache resource allocation matrix to obtain an optimized current cache resource allocation matrix.
Any method capable of optimization can be used to optimize the current cache resource allocation matrix. Considering the characteristics of CRO (Chemical Reaction Optimization), the embodiment of the invention optimizes the current cache resource allocation matrix according to the CRO algorithm.
CRO is a heuristic algorithm whose main objects of study are the molecular structure, the molecular potential energy, and the molecular kinetic energy. The molecular structure represents a solution of the optimization problem, the molecular potential energy represents the objective function value, and the molecular kinetic energy represents the degree to which a molecule tolerates a worse solution, i.e., its ability to escape a local optimum. CRO includes four types of molecular reaction: the on-wall ineffective collision, the decomposition, the inter-molecular ineffective collision, and the synthesis. These reactions change the molecular structure to different degrees and therefore affect the solution of the optimization problem to different degrees. According to the molecular reactions of CRO, where the choice among the four elementary reactions obeys a uniform distribution, cache resources are reallocated in the current cache resource allocation matrix; depending on the chosen elementary reaction, the current cache resource allocation matrix is optimized with a different method, yielding the optimized current cache resource allocation matrix.
Step 105: obtaining, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm.
The network operator earns revenue by providing cache resources. The revenue obtained by the network operator is determined by the importance of the cache nodes and by the cache resource allocation, where the importance of a cache node is determined by the cache resource allocation amounts that the cache node distributes to the multiple network slices.
Step 106: when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtaining the finally optimized cache resource allocation matrix and allocating cache resources according to it.
The current cache resource allocation matrices chosen from the set are optimized iteratively until the termination condition of the iterative optimization is met, yielding the finally optimized current cache resource allocation matrix; cache resources are then allocated according to this finally optimized matrix. The termination condition of the iterative optimization is that the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix.
In the cache resource allocation method of the embodiment of the invention, the multiple cache nodes in the core network are sliced into multiple network slices; cache resources are first allocated to the cache nodes at random, yielding a set of current cache resource allocation matrices; a current cache resource allocation matrix chosen from the set is optimized until the current revenue of the network operator corresponding to the optimized matrix is greater than the initial revenue of the network operator corresponding to the chosen matrix; the finally optimized cache resource allocation matrix is then obtained and cache resources are allocated according to it. By combining in-network caching with network slicing, the cache resources of the cache nodes are allocated to the multiple network slices and the allocation is iteratively optimized, improving the cache resource allocation rate and resource utilization.
Preferably, after randomly selecting the current cache resource allocation matrix in the set, optimizing the selected current cache resource allocation matrix, and obtaining the optimized current cache resource allocation matrix, the cache resource allocation method further comprises:
adding the optimized current cache resource allocation matrix to the set.
In the cache resource allocation method of the embodiment of the invention, the optimization of the current cache resource allocation matrix is an iterative process, so the optimized current cache resource allocation matrix is added to the set of current cache resource allocation matrices. In this way the optimized matrix can serve as a current cache resource allocation matrix that needs further optimization, and the optimization can continue on the basis of at least one prior optimization, until the finally optimized current cache resource allocation matrix is obtained.
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
randomly selecting at least one current cache resource allocation matrix in the set, and reallocating cache resource data to any row of the selected at least one current cache resource allocation matrix.
The current cache resource allocation matrix is optimized according to the elementary reactions of CRO; different elementary reactions affect the solution of the optimization problem to different degrees, and the current cache resource allocation matrix can therefore be optimized with a different method for each reaction. Because the on-wall ineffective collision affects the solution only slightly, when the on-wall ineffective collision is chosen, one current cache resource allocation matrix is randomly selected from the set and the cache resource data of any one row of that matrix are reallocated. In a current cache resource allocation matrix, the elements of a row represent the allocation of the cache resources of one of the cache nodes; reallocating data to a row of the selected matrix therefore means selecting one of the cache nodes and redistributing its cache resources to the different network slices. Which network slices the cache resources of that cache node are allocated to, and how much is allocated to each, is decided at random in practical applications.
Because the inter-molecular ineffective collision also affects the solution only slightly, when it is chosen, two current cache resource allocation matrices are randomly selected from the set, and a row of each of the two selected matrices is reallocated. The reallocation process is similar to the one described above for the on-wall ineffective collision and is not repeated here.
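The two row-reassignment reactions described above (on-wall ineffective collision on one matrix, inter-molecular ineffective collision on two) might look as follows. The matrices are assumed to have already been drawn from the set; the function names and the way the new row is kept within the node's capacity are illustrative assumptions:

```python
import numpy as np

def reassign_row(matrix, capacity, rng):
    """Pick one cache node (one row) at random and redistribute its cache resources
    among the network slices, staying within that node's capacity."""
    new = matrix.copy()
    i = rng.integers(matrix.shape[0])
    shares = rng.random(matrix.shape[1])
    new[i] = capacity[i] * rng.random() * shares / shares.sum()
    return new

def on_wall(matrix, capacity, rng):
    """On-wall ineffective collision: one matrix, one row reassigned."""
    return reassign_row(matrix, capacity, rng)

def inter_molecular(matrix_a, matrix_b, capacity, rng):
    """Inter-molecular ineffective collision: two matrices, one row reassigned in each."""
    return reassign_row(matrix_a, capacity, rng), reassign_row(matrix_b, capacity, rng)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    matrix = np.zeros((3, 2))
    capacity = np.array([10.0, 8.0, 6.0])
    print(on_wall(matrix, capacity, rng).round(2))
```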
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
first, randomly selecting one current cache resource allocation matrix in the set;
second, randomly selecting a first matrix element of the current cache resource allocation matrix, assigning it to a first initially empty matrix, and randomly generating cache resource data at the other matrix positions of the first initially empty matrix, i.e., at the positions other than the one occupied by the first matrix element;
third, randomly selecting a second matrix element of the current cache resource allocation matrix other than the first matrix element, assigning it to a second initially empty matrix, and randomly generating cache resource data at the other matrix positions of the second initially empty matrix, i.e., at the positions other than the one occupied by the second matrix element, wherein the second initially empty matrix, the first initially empty matrix, and the current cache resource allocation matrix have the same matrix size.
Because the decomposition reaction affects the solution of the optimization problem to a larger degree, when the decomposition reaction is chosen, the current cache resource allocation matrix is optimized by reallocating cache resource data as follows: one current cache resource allocation matrix is randomly selected from the set, and its elements are randomly distributed over two initially empty matrices. Which of the two initially empty matrices an element of the current cache resource allocation matrix goes to, and which matrix position it is assigned to, is decided at random according to actual demand.
The elements of the current cache resource allocation matrix of size N x M are distributed over two initially empty matrices of size N x M, where N is the number of cache nodes of the core network and M is the number of network slices into which the cache nodes of the core network are sliced. Because the two initially empty matrices have the same size as the current cache resource allocation matrix, each of them has matrix positions that receive no cache resource data from the current cache resource allocation matrix; the matrix elements at these positions are randomly generated in practical applications.
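One possible reading of the decomposition reaction, sketched below, scatters the elements of the selected matrix at random over two new matrices of the same size and fills the remaining positions with randomly generated data. The unscaled random fill and the element-wise split are assumptions for illustration:

```python
import numpy as np

def decomposition(matrix, rng):
    """Scatter the elements of one N x M matrix over two new N x M matrices; positions
    that receive no element from the parent are filled with random cache resource data."""
    goes_to_a = rng.integers(0, 2, size=matrix.shape).astype(bool)   # random split of the elements
    child_a = np.where(goes_to_a, matrix, rng.random(matrix.shape))  # keep the element or generate data
    child_b = np.where(~goes_to_a, matrix, rng.random(matrix.shape))
    return child_a, child_b

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    parent = np.arange(6, dtype=float).reshape(3, 2)
    child_a, child_b = decomposition(parent, rng)
    print(child_a.round(2))
    print(child_b.round(2))
```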
Preferably, randomly selecting the current cache resource allocation matrix in the set and optimizing the selected current cache resource allocation matrix comprises:
randomly selecting two current cache resource allocation matrices in the set, randomly selecting cache resource data from each of the two current cache resource allocation matrices, and generating a new matrix from the selected cache resource data, wherein the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
Different molecular reactions affect the solution of the optimization problem to different degrees. Because the synthesis reaction affects the solution to a large degree, when the synthesis reaction is chosen, the current cache resource allocation matrices are optimized by reallocation as follows: two current cache resource allocation matrices are randomly selected from the set, and cache resource data are selected from both of them to form a new matrix. The new matrix has the same size as the current cache resource allocation matrices, namely N x M, where N is the number of cache nodes of the core network and M is the number of network slices into which the cache nodes of the core network are sliced. How many data items are selected from each of the two current cache resource allocation matrices, and which data are selected, is decided at random according to actual demand in practical applications.
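The synthesis reaction can be sketched analogously: each element of the new matrix is taken at random from one of the two selected matrices. The element-wise choice is an assumption; the patent only states that cache resource data are selected from both matrices to form a new matrix of the same size:

```python
import numpy as np

def synthesis(matrix_a, matrix_b, rng):
    """Merge two equally sized allocation matrices into one new matrix of the same size,
    taking each element from one parent or the other at random."""
    take_from_a = rng.integers(0, 2, size=matrix_a.shape).astype(bool)
    return np.where(take_from_a, matrix_a, matrix_b)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    print(synthesis(np.full((3, 2), 1.0), np.full((3, 2), 9.0), rng))
```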
Preferably, obtaining, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm, comprises:
first, obtaining, from the first cache resource allocation amount, the first initial energy consumption cost of the cache nodes allocating cache resources, the second initial energy consumption cost of the network slices responding to requests, and the initial fee charged;
second, taking the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;
third, obtaining, from the second cache resource allocation amount, the first current energy consumption cost of the cache nodes allocating cache resources, the second current energy consumption cost of the network slices responding to requests, and the current fee charged;
fourth, taking the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
The network operator provides services in the expectation of earning revenue, where the revenue includes the initial revenue and the current revenue. The network operator charges a fee when providing users with services that use cache resources, where the fee includes the initial fee and the current fee. In practice, however, a cache node also incurs cost when allocating cache resources to the network slices. The embodiment of the invention considers only the energy consumption cost generated while the cache nodes allocate cache resources, which includes the energy consumption cost of the cache nodes allocating cache resources and the energy consumption cost of the network slices responding to requests. The energy consumption cost of the cache nodes allocating cache resources comprises the first initial energy consumption cost and the first current energy consumption cost of the cache nodes allocating cache resources; the energy consumption cost of the network slices responding to requests comprises the second initial energy consumption cost and the second current energy consumption cost of the network slices responding to requests. Of course, in practice there are also energy consumption costs of other kinds, which the invention does not consider. The revenue of the network operator is therefore the difference between the fee the operator charges users when the network slices use cache resources and the energy consumption cost generated while the cache nodes allocate cache resources.
The revenue of the network operator is linear in the amount of cache resources occupied by the network slices: the more cache resources a network slice occupies, the higher the fee the operator charges for providing that slice with cache resources. At the same time, the fee charged when the operator provides a network slice with cache resources is related to the importance of the cache nodes: factors such as a cache node's location and market demand make the importance of cache nodes differ in practice, and the importance of a cache node is expressed by a price weight set for that node.
The energy consumption cost generated while a cache node allocates cache resources mainly comprises the energy consumption cost of the cache node allocating cache resources and the energy consumption cost of the network slices responding to requests. The energy consumption cost of allocating cache resources, i.e., the cost of caching content or of cache replacement and update, is directly related to the cache resource allocation amount that the cache node distributes to the network slices: the larger the amount a cache node allocates to a network slice, i.e., the more of the cache node's resources the slice occupies, the higher the resulting energy consumption cost. In practice, the higher the frequency of content caching or cache replacement per unit time, the higher this energy consumption cost; however, compared with the influence of the cache resource allocation amount, the influence of the caching or replacement frequency on this cost is small, so the invention does not take that factor into account.
The energy consumption cost of the network slices responding to requests, i.e., the energy consumption cost of responding to content requests, is likewise directly related to the cache resource allocation amount that the cache nodes distribute to the network slices: the larger the amount a cache node allocates to a network slice, i.e., the more of the cache node's resources the slice occupies, the higher the energy consumption cost of the network slice responding to requests. In practice, the higher the frequency of requests to which a network slice responds, the higher this cost; again, compared with the influence of the cache resource allocation amount, the influence of the response frequency on this cost is small, so the embodiment of the invention does not take it into account.
From the selected current cache resource allocation matrix, the cache resource allocation amounts that the cache nodes distribute to the network slices are obtained. From these allocation amounts, the initial fee charged by the network operator for providing users with network slices that use cache resources, the first initial energy consumption cost of the cache nodes allocating cache resources, and the second initial energy consumption cost of the network slices responding to requests are obtained. The difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost is taken as the initial revenue of the network operator.
From the cache resource allocation amounts in the optimized current cache resource allocation matrix, the current revenue of the network operator is obtained; the current revenue is calculated in the same way as the initial revenue, and the calculation is not repeated here.
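Putting the preceding paragraphs together, the operator's revenue for a given allocation matrix might be computed as sketched below: a fee that is linear in the occupied cache resources and weighted by per-node price weights, minus energy consumption costs that are likewise taken as linear in the allocated amounts. The coefficient names and the purely linear cost model are assumptions for illustration:

```python
import numpy as np

def operator_revenue(allocation, price_weight, alloc_energy_cost, response_energy_cost):
    """allocation: N x M matrix; price_weight: length-N importance weights of the nodes;
    the two *_cost arguments are assumed per-unit energy consumption cost coefficients."""
    per_node = allocation.sum(axis=1)                   # cache resources each node hands out
    fee = float(np.dot(price_weight, per_node))         # the fee is linear in the occupied resources
    energy = (alloc_energy_cost + response_energy_cost) * float(per_node.sum())
    return fee - energy                                  # revenue = fee - energy consumption costs

if __name__ == "__main__":
    allocation = np.array([[4.0, 3.0], [5.0, 5.0], [2.0, 6.0]])
    print(operator_revenue(allocation,
                           price_weight=np.array([1.2, 1.0, 0.8]),
                           alloc_energy_cost=0.1,
                           response_energy_cost=0.2))
```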
Preferably, before obtaining the finally optimized cache resource allocation matrix when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, the cache resource allocation method further comprises:
judging whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix; if not, continuing to randomly select a current cache resource allocation matrix in the set and optimize the selected current cache resource allocation matrix; if so, obtaining the finally optimized cache resource allocation matrix.
The allocation of cache resources is optimized iteratively. During the iterative optimization it is judged whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, i.e., whether the termination condition of the iterative optimization is met. If not, the optimization continues; if so, the iterative optimization ends and the finally optimized cache resource allocation matrix is obtained.
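The overall iterative optimization with its termination condition might be organized as in the sketch below; the reaction operator and the revenue function are passed in as callables (see the earlier sketches), and the cap on the number of iterations is an added assumption, since the patent only states the revenue-based termination condition:

```python
import numpy as np

def optimize(population, react, revenue, rng, max_iters=10_000):
    """population: list of N x M allocation matrices; react(matrix, rng) -> new matrix
    (any elementary reaction); revenue(matrix) -> network operator revenue."""
    for _ in range(max_iters):
        current = population[rng.integers(len(population))]   # randomly select a matrix from the set
        candidate = react(current, rng)                        # apply one elementary reaction
        population.append(candidate)                           # the optimized matrix joins the set
        if revenue(candidate) > revenue(current):              # termination: current > initial revenue
            return candidate
    return max(population, key=revenue)                        # iteration-cap fallback (an assumption)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    population = [rng.random((3, 2)) for _ in range(4)]
    best = optimize(population,
                    react=lambda m, r: m + r.normal(scale=0.1, size=m.shape),
                    revenue=lambda m: float(m.sum()),
                    rng=rng)
    print(best.round(2))
```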
The embodiment of the invention optimizes the current cache resource allocation matrix according to the different elementary reactions of CRO and judges whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix; if not, the optimization continues; if so, the finally optimized cache resource allocation matrix is obtained and cache resources are allocated according to it. Iteratively optimizing the cache resource allocation of the cache nodes improves the cache resource allocation rate and resource utilization.
An embodiment of the invention also discloses a device for cache resource allocation, described in detail with reference to Fig. 3, comprising:
a slicing module 301, configured to slice multiple cache nodes in the core network into multiple network slices through virtualization;
an indicator matrix generation module 302, configured to select a predetermined number of cache nodes from the multiple cache nodes and generate an indicator matrix indicating whether the multiple network slices occupy the cache resources of the multiple cache nodes;
a set generation module 303, configured to randomly allocate cache resources to the selected cache nodes according to the indicator matrix, to obtain a set of current cache resource allocation matrices;
an optimization module 304, configured to randomly select a current cache resource allocation matrix in the set and optimize the selected current cache resource allocation matrix, to obtain an optimized current cache resource allocation matrix;
a revenue calculation module 305, configured to obtain, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm;
a cache resource allocation module 306, configured to, when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtain the finally optimized cache resource allocation matrix and allocate cache resources according to it.
Through the slicing module, the indicator matrix generation module, the set generation module, the optimization module, the revenue calculation module, and the cache resource allocation module, the cache resource allocation device of the embodiment of the invention first allocates cache resources to the cache nodes at random to obtain a set of current cache resource allocation matrices, and then optimizes a current cache resource allocation matrix chosen from the set until the current revenue of the network operator corresponding to the optimized matrix is greater than the initial revenue of the network operator corresponding to the selected matrix, whereupon the finally optimized cache resource allocation matrix is obtained and cache resources are allocated according to it. By combining in-network caching with network slicing and iteratively optimizing the cache resource allocation of the cache nodes, the cache resource allocation rate and resource utilization are improved.
Preferably, the cache resource allocation device of the embodiment of the invention further comprises:
an adding module, configured to add the optimized current cache resource allocation matrix to the set.
Preferably, the optimization module of the cache resource allocation device of the embodiment of the invention is further configured to randomly select at least one current cache resource allocation matrix in the set and reallocate cache resource data to any row of the selected at least one current cache resource allocation matrix.
Preferably, the optimization module of the cache resource allocation device of the embodiment of the invention comprises:
a selection submodule, configured to randomly select one current cache resource allocation matrix in the set;
a data allocation submodule, configured to randomly select a first matrix element of the current cache resource allocation matrix, assign it to a first initially empty matrix, and randomly generate cache resource data at the other matrix positions of the first initially empty matrix, i.e., at the positions other than the one occupied by the first matrix element;
a matrix generation submodule, configured to randomly select a second matrix element of the current cache resource allocation matrix other than the first matrix element, assign it to a second initially empty matrix, and randomly generate cache resource data at the other matrix positions of the second initially empty matrix, i.e., at the positions other than the one occupied by the second matrix element, wherein the second initially empty matrix, the first initially empty matrix, and the current cache resource allocation matrix have the same matrix size.
Preferably, the optimization module of the cache resource allocation device of the embodiment of the invention is further configured to randomly select two current cache resource allocation matrices in the set, randomly select cache resource data from each of the two current cache resource allocation matrices, and generate a new matrix from the selected cache resource data, wherein the size of the new matrix is the same as the size of the two current cache resource allocation matrices.
Preferably, the revenue calculation module of the cache resource allocation device of the embodiment of the invention comprises:
a first fee and cost calculation submodule, configured to obtain, from the first cache resource allocation amount, the first initial energy consumption cost of the cache nodes allocating cache resources, the second initial energy consumption cost of the network slices responding to requests, and the initial fee charged;
an initial revenue calculation submodule, configured to take the difference between the initial fee and the sum of the first initial energy consumption cost and the second initial energy consumption cost as the initial revenue;
a second fee and cost calculation submodule, configured to obtain, from the second cache resource allocation amount, the first current energy consumption cost of the cache nodes allocating cache resources, the second current energy consumption cost of the network slices responding to requests, and the current fee charged;
a current revenue calculation submodule, configured to take the difference between the current fee and the sum of the first current energy consumption cost and the second current energy consumption cost as the current revenue.
The cache resource allocation device of the embodiment of the invention further comprises:
a judgment module, configured to judge whether the current revenue of the network operator corresponding to the optimized current resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix;
a continued optimization module, configured to, when the result of the judgment module is negative, continue to randomly select a current cache resource allocation matrix in the set and optimize the selected current cache resource allocation matrix;
a target module, configured to, when the result of the judgment module is positive, obtain the finally optimized cache resource allocation matrix.
It should be noted that the device of the embodiment of the invention is a device applying the above cache resource allocation method; all embodiments of the above cache resource allocation method are applicable to the device, and the same or similar beneficial effects can be achieved.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The embodiments in this specification are described in a related manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple, and reference may be made to the description of the method embodiment for relevant points.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A method of cache resource allocation, characterized by comprising:
slicing multiple cache nodes in a core network into multiple network slices through virtualization;
selecting a predetermined number of cache nodes from the multiple cache nodes, and generating an indicator matrix indicating whether the multiple network slices occupy the cache resources of the multiple cache nodes;
randomly allocating cache resources to the selected cache nodes according to the indicator matrix, to obtain a set of current cache resource allocation matrices;
randomly selecting a current cache resource allocation matrix in the set, and optimizing the selected current cache resource allocation matrix according to the chemical reaction optimization algorithm CRO, to obtain an optimized current cache resource allocation matrix;
obtaining, from the first cache resource allocation amount in the selected current cache resource allocation matrix and the second cache resource allocation amount in the optimized current cache resource allocation matrix, the initial revenue of the network operator and the current revenue of the network operator, respectively, by a preset algorithm;
when the current revenue of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial revenue of the network operator corresponding to the selected current cache resource allocation matrix, obtaining the finally optimized cache resource allocation matrix and allocating cache resources according to it.
2. the method for cache resource allocation according to claim 1, which is characterized in that randomly select the set described In current cache resource assignment matrix, and the current cache resource assignment matrix of selection is optimized, after being optimized After current cache resource assignment matrix, the method for the cache resource allocation further include:
Current cache resource assignment matrix after the optimization is added in the set.
3. the method for cache resource allocation according to claim 1, which is characterized in that described to randomly select in the set Current cache resource assignment matrix, and the current cache resource assignment matrix of selection is optimized, comprising:
At least one current cache resource assignment matrix in the set is randomly selected, and at least one to selection is current slow It deposits any row in resource assignment matrix and redistributes cache resources data.
4. the method for cache resource allocation according to claim 1, which is characterized in that described to randomly select in the set Current cache resource assignment matrix, and the current cache resource assignment matrix of selection is optimized, comprising:
Randomly select a current cache resource assignment matrix in the set;
Randomly select the first matrix element in one current cache resource assignment matrix, distribution to the first original idle square Battle array, and other matrixes in addition to the matrix position occupied except first matrix element in the described first original idle matrix Cache resources data are randomly generated in position;
Randomly select the second matrix element in addition to first matrix element of one current cache resource assignment matrix Element, distribution to the second original idle matrix, and second matrix element that removes in the described second original idle matrix occupies Matrix position other than other matrix positions, cache resources data are randomly generated, wherein the second original idle matrix, The matrix size of described first original idle matrix and one current cache resource assignment matrix is identical.
5. the method for cache resource allocation according to claim 1, which is characterized in that described to randomly select in the set Current cache resource assignment matrix, and the current cache resource assignment matrix of selection is optimized, comprising:
Two current cache resource assignment matrix in the set are randomly selected, are provided respectively from described two current caches at random Cache resources data are selected in the allocation matrix of source, and according to the cache resources data, generate new matrix, wherein described new Matrix size it is identical as the size of described two current cache resource assignment matrix.
6. the method for cache resource allocation according to claim 1, which is characterized in that the acquisition and working as according to selection In the first cache resource allocation amount in preceding cache resource allocation matrix and the current cache resource assignment matrix after the optimization The second cache resource allocation amount the initial return for obtaining network operator and network operation are respectively corresponded by preset algorithm The current income of quotient, comprising:
According to the first cache resource allocation amount, obtain the first primary power consumption of cache node distribution cache resources at Originally, the second primary power consuming cost of network slice respond request and the initial cost collected;
By the initial cost, with the sum of the first primary power consuming cost and the second primary power consuming cost, Difference, as the initial return;
According to the second cache resource allocation amount, obtain the first present energy consumption of cache node distribution cache resources at Originally, the second present energy consuming cost of network slice respond request and the current expense collected;
By the current expense, with the sum of the first present energy consuming cost and the second present energy consuming cost, Difference, as the current income.
7. the method for cache resource allocation according to claim 1, which is characterized in that positioned at described after the optimization The current income of the corresponding network operator of current cache resource assignment matrix is greater than the current cache resource allocation of the selection It is described slow before the cache resource allocation matrix after obtaining final optimization pass when the initial return of the corresponding network operator of matrix The method for depositing resource allocation further include:
Whether the current income of the corresponding network operator of Current resource allocation matrix after judging the optimization is greater than the choosing The initial return of the corresponding network operator of current cache resource assignment matrix taken;
If not, continuing to randomly select the current cache resource assignment matrix in the set, and the current cache of selection is provided Source allocation matrix optimizes;
If so, obtaining the cache resource allocation matrix after final optimization pass.
8. A cache resource allocation device, comprising:
a slicing module, configured to slice, by means of virtualization technology, a plurality of cache nodes in a core network into a plurality of network slices;
an indication matrix generation module, configured to select a preset number of cache nodes from the plurality of cache nodes and generate an indication matrix that indicates whether the plurality of network slices occupy cache resources of the plurality of cache nodes;
a set generation module, configured to randomly allocate cache resources to the selected cache nodes according to the indication matrix, to obtain a set of current cache resource allocation matrices;
an optimization module, configured to randomly select a current cache resource allocation matrix from the set and optimize the selected current cache resource allocation matrix according to the chemical reaction optimization (CRO) algorithm, to obtain an optimized current cache resource allocation matrix;
an income calculation module, configured to obtain a first cache resource allocation amount in the selected current cache resource allocation matrix and a second cache resource allocation amount in the optimized current cache resource allocation matrix, and, by a preset algorithm, correspondingly obtain an initial income of a network operator and a current income of the network operator;
a cache resource allocation module, configured to, when the current income of the network operator corresponding to the optimized current cache resource allocation matrix is greater than the initial income of the network operator corresponding to the selected current cache resource allocation matrix, obtain a finally optimized cache resource allocation matrix and perform cache resource allocation according to the finally optimized cache resource allocation matrix.
9. The cache resource allocation device according to claim 8, further comprising:
an adding module, configured to add the optimized current cache resource allocation matrix to the set.
10. The cache resource allocation device according to claim 8, wherein the optimization module is further configured to randomly select at least one current cache resource allocation matrix from the set, and to reallocate cache resource data to any row of the selected at least one current cache resource allocation matrix.
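
For readers who want a concrete picture of the three CRO neighbourhood operators recited in claims 3-5, the following Python sketch is offered as an illustration only, not as the patented implementation. It assumes the allocation matrix is a NumPy array of shape (num_slices, num_nodes) whose entry (i, j) is the amount of cache resource of node j allocated to slice i; the function and variable names (random_allocation, on_wall_collision, decomposition, synthesis, capacity) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def random_allocation(indication, capacity):
    """Randomly split each node's cache capacity among the slices that the
    indication matrix marks as allowed to occupy that node (claim 1)."""
    num_slices, num_nodes = indication.shape
    alloc = np.zeros((num_slices, num_nodes))
    for j in range(num_nodes):
        slices = np.flatnonzero(indication[:, j])
        if slices.size == 0:
            continue
        alloc[slices, j] = rng.dirichlet(np.ones(slices.size)) * capacity[j]
    return alloc

def on_wall_collision(alloc, capacity):
    """Claim 3: take one matrix and re-allocate the cache resource data of one
    randomly chosen row (one network slice)."""
    new = alloc.copy()
    i = rng.integers(new.shape[0])
    new[i, :] = rng.random(new.shape[1]) * capacity  # any re-draw rule would do here
    return new

def decomposition(alloc, capacity):
    """Claim 4: split one matrix into two matrices of the same size; each child
    keeps one randomly chosen element of the parent and fills the remaining
    positions with randomly generated cache resource data."""
    num_slices, num_nodes = alloc.shape
    children, taken = [], set()
    for _ in range(2):
        while True:
            pos = (rng.integers(num_slices), rng.integers(num_nodes))
            if pos not in taken:
                taken.add(pos)
                break
        child = rng.random((num_slices, num_nodes)) * capacity  # random fill
        child[pos] = alloc[pos]                                  # inherited element
        children.append(child)
    return children

def synthesis(alloc_a, alloc_b):
    """Claim 5: combine two matrices into one new matrix of the same size by
    picking each entry at random from one of the two parents."""
    mask = rng.random(alloc_a.shape) < 0.5
    return np.where(mask, alloc_a, alloc_b)
```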
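Claim 6 defines the operator income as the fee charged minus the two energy consumption costs, and claims 1, 2 and 7 keep an optimized matrix only when that income improves. The sketch below, under the same assumptions as above, shows how such an acceptance loop could be wired together; the cost coefficients price, node_power_cost and request_power_cost are hypothetical placeholders for the preset algorithm, which the claims do not spell out.

```python
def operator_income(alloc, price, node_power_cost, request_power_cost):
    """Claim 6 (sketched): income = fee charged minus (energy cost of allocating
    cache at the nodes + energy cost of the slices serving requests)."""
    allocated = alloc.sum()
    fee = price * allocated
    alloc_energy = node_power_cost * allocated
    request_energy = request_power_cost * allocated  # placeholder request model
    return fee - (alloc_energy + request_energy)

def cro_allocate(indication, capacity, iterations=1000,
                 price=1.0, node_power_cost=0.3, request_power_cost=0.2):
    """Claims 1 and 7 (sketched): keep a pool of candidate allocation matrices,
    apply a random CRO operator, and accept the optimized matrix only when the
    operator income improves; otherwise keep iterating."""
    income = lambda m: operator_income(m, price, node_power_cost, request_power_cost)
    pool = [random_allocation(indication, capacity) for _ in range(8)]
    for _ in range(iterations):
        op = rng.integers(3)
        if op == 0:                                   # on-wall collision (claim 3)
            old = pool[rng.integers(len(pool))]
            candidates = [on_wall_collision(old, capacity)]
        elif op == 1:                                 # decomposition (claim 4)
            old = pool[rng.integers(len(pool))]
            candidates = decomposition(old, capacity)
        else:                                         # synthesis (claim 5)
            a, b = (pool[rng.integers(len(pool))] for _ in range(2))
            old, candidates = a, [synthesis(a, b)]
        best = max(candidates, key=income)
        if income(best) > income(old):                # claim 7: accept only on improvement
            pool.append(best)                         # claim 2: add back to the set
    return max(pool, key=income)                      # finally optimized matrix

# Hypothetical usage: 3 slices, 5 nodes, every slice may occupy every node.
indication = np.ones((3, 5), dtype=int)
capacity = np.array([10.0, 8.0, 12.0, 6.0, 9.0])
final_alloc = cro_allocate(indication, capacity)
```

With the placeholder coefficients chosen here the income grows with the total allocation, so the loop simply favours fuller matrices; a real fee and energy model would make the trade-off non-trivial.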
CN201610832372.XA 2016-09-19 2016-09-19 A kind of method and device of cache resource allocation Active CN106412040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610832372.XA CN106412040B (en) 2016-09-19 2016-09-19 A kind of method and device of cache resource allocation

Publications (2)

Publication Number Publication Date
CN106412040A CN106412040A (en) 2017-02-15
CN106412040B true CN106412040B (en) 2019-09-06

Family

ID=57997743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610832372.XA Active CN106412040B (en) 2016-09-19 2016-09-19 A kind of method and device of cache resource allocation

Country Status (1)

Country Link
CN (1) CN106412040B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10841184B2 (en) * 2017-03-28 2020-11-17 Huawei Technologies Co., Ltd. Architecture for integrating service, network and domain management subsystems
CN106954267B (en) * 2017-04-14 2019-11-22 北京邮电大学 A kind of method for managing resource based on wireless network slice
CN106922002B (en) * 2017-04-26 2020-02-07 重庆邮电大学 Network slice virtual resource allocation method based on internal auction mechanism
CN113364686B (en) * 2017-06-30 2022-10-04 华为技术有限公司 Method for generating forwarding table item, controller and network equipment
CN109391913B (en) * 2017-08-08 2021-03-09 北京亿阳信通科技有限公司 NB-IoT (NB-IoT) network resource slice management method and system
CN107820321B (en) * 2017-10-31 2020-01-10 北京邮电大学 Large-scale user intelligent access method in narrow-band Internet of things based on cellular network
CN108093482B (en) * 2017-12-11 2020-01-21 北京科技大学 Optimization method for wireless information center network resource allocation
CN108111931B (en) * 2017-12-15 2021-07-16 国网辽宁省电力有限公司 Virtual resource slice management method and device for power optical fiber access network
CN110138575B (en) * 2018-02-02 2021-10-08 中兴通讯股份有限公司 Network slice creating method, system, network device and storage medium
CN109600798B (en) * 2018-11-15 2020-08-28 北京邮电大学 Multi-domain resource allocation method and device in network slice
CN109951849B (en) * 2019-02-25 2023-02-17 重庆邮电大学 Method for combining resource allocation and content caching in F-RAN architecture
CN110167045B (en) * 2019-04-17 2020-06-05 北京科技大学 Heterogeneous network energy efficiency optimization method
CN110944335B (en) * 2019-12-12 2022-04-12 北京邮电大学 Resource allocation method and device for virtual reality service
CN112835716B (en) * 2021-02-02 2023-12-01 深圳震有科技股份有限公司 CPU buffer allocation method and terminal of 5G communication virtualization network element
CN114630441B (en) * 2022-05-16 2022-08-02 网络通信与安全紫金山实验室 Resource scheduling method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9274838B2 (en) * 2011-12-22 2016-03-01 Netapp, Inc. Dynamic instantiation and management of virtual caching appliances

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014186590A (en) * 2013-03-25 2014-10-02 Nec Corp Resource allocation system and resource allocation method
CN105471954A (en) * 2014-09-11 2016-04-06 北京智梵网络科技有限公司 SDN based distributed control system and user flow optimization method
CN105812217A (en) * 2014-12-29 2016-07-27 中国移动通信集团公司 Virtual network division method and multi-controller agent device
CN104822150A (en) * 2015-05-13 2015-08-05 北京工业大学 Spectrum management method for information proactive caching in center multi-hop cognitive cellular network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Brief analysis of 5G mobile network slicing technology; Xu Yang, et al.; Designing Techniques of Posts and Telecommunications; 2016-07-31; full text
Research on network slicing technology supporting multiple services; Tricci So, Yuan Zhigui, et al.; Designing Techniques of Posts and Telecommunications; 2016-07-31; full text

Also Published As

Publication number Publication date
CN106412040A (en) 2017-02-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 2022-06-22

Address after: No. 466 Changhe Road, Binjiang District, Hangzhou, Zhejiang Province, 310052

Patentee after: NEW H3C TECHNOLOGIES Co.,Ltd.

Address before: No. 10 Xitucheng Road, Haidian District, Beijing, 100876

Patentee before: Beijing University of Posts and Telecommunications

TR01 Transfer of patent right