CN111225023A - Caching method and device - Google Patents

Caching method and device

Info

Publication number
CN111225023A
Authority
CN
China
Prior art keywords
popularity
cache
target
parameter
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911135894.4A
Other languages
Chinese (zh)
Other versions
CN111225023B (en)
Inventor
王蕴实
张曼君
马铮
赵晨斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN201911135894.4A priority Critical patent/CN111225023B/en
Publication of CN111225023A publication Critical patent/CN111225023A/en
Application granted granted Critical
Publication of CN111225023B publication Critical patent/CN111225023B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/10 Flow control between communication endpoints
    • H04W 28/14 Flow control between communication endpoints using intermediate storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a caching method and device, relating to the field of communications and used to solve the problem that the local cache region of a mobile edge computing (MEC) node cannot store new cache content in time. The method includes the following steps: obtaining cache region parameters of the MEC node, where the cache region parameters include a first popularity parameter of the cache content in the local cache region of the MEC node and a second popularity parameter of the cache content in the global cache region of the MEC node; determining a first adjustment parameter according to the first popularity parameter and the second popularity parameter, the first adjustment parameter being used for indicating a specific adjustment amount for adjusting the local cache region in the MEC node; and adjusting the size of the cache space of the local cache region according to the first adjustment parameter. Embodiments of the application apply to MEC networks.

Description

Caching method and device
Technical Field
The present invention relates to the field of communications, and in particular, to a caching method and apparatus.
Background
With the rapid development of mobile multimedia services such as high-definition video streaming, the traffic flow in the network increases rapidly. In order to cope with the impact of a large amount of data traffic on a mobile communication network, currently, a Mobile Edge Computing (MEC) node is mainly deployed for offloading. In order to meet the access requirements of different users, in the conventional technology, the data cache region of the MEC node includes a local cache region and a global cache region.
However, in the conventional technology, the local cache area and the global cache area in the MEC node are divided according to a fixed ratio. Therefore, when the popularity of the cache content in the local cache region increases, the MEC node caches more content, the free space of the local cache region shrinks, and part of the new content may not be stored in the MEC node in time.
Disclosure of Invention
The embodiment of the application provides a caching method and device, which are used for solving the problem that a local cache region of an MEC node cannot timely store new cache contents.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a caching method is provided. The method includes: obtaining cache region parameters of the MEC node, wherein the cache region parameters include: a first popularity parameter of the cache content in the local cache region of the MEC node and a second popularity parameter of the cache content in the global cache region of the MEC node; determining a first adjustment parameter according to the first popularity parameter and the second popularity parameter, the first adjustment parameter being used for indicating a specific adjustment amount for adjusting the local cache region in the MEC node; and adjusting the size of the cache space of the local cache region according to the first adjustment parameter.
In a second aspect, a cache apparatus is provided, which includes an obtaining unit, a determining unit, and an executing unit. The obtaining unit is configured to obtain cache region parameters of the MEC node, where the cache region parameters include: a first popularity parameter of the cache content in the local cache region of the MEC node and a second popularity parameter of the cache content in the global cache region of the MEC node. The determining unit is configured to determine a first adjustment parameter according to the first popularity parameter and the second popularity parameter acquired by the obtaining unit, the first adjustment parameter being used for indicating a specific adjustment amount for adjusting the local cache region in the MEC node. The executing unit is configured to adjust the size of the cache space of the local cache region according to the first adjustment parameter determined by the determining unit.
According to the caching method and device provided in the embodiments of the application, the first adjustment parameter is determined according to the first popularity parameter of the cache content of the local cache region in the MEC node and the second popularity parameter of the cache content of the global cache region, and the space sizes of the local cache region and the global cache region in the MEC node are adjusted according to the first adjustment parameter. The size of the cache space of the local cache region can thus be adjusted in real time to cache more popular content, which alleviates, to a certain extent, the increase in user access delay caused by the local cache region being unable to cache more popular content, and improves user experience.
Drawings
Fig. 1 is a schematic diagram of an MEC network architecture provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a caching method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another caching method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a cache apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another cache apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another cache apparatus according to an embodiment of the present application.
Detailed Description
In the following, some concepts related to the embodiments of the present application are briefly introduced, and the technical solutions in the embodiments of the present application will be clearly and completely described with reference to the drawings in the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of this application, "/" means "or" unless otherwise stated; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists. Further, "at least one" means one or more, and "a plurality of" means two or more. The terms "first", "second", and the like are only used to distinguish different objects and do not limit the number or the execution order of the objects.
The technical solution provided in the embodiments of the present application may be applied to various communication systems, for example, a new radio (NR) communication system adopting the fifth generation (5G) communication technology, a future evolved system, or a system in which multiple communication technologies converge. The technical solution provided in this application may be applied to various application scenarios, for example, machine-to-machine (M2M) communication, enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC).
In the embodiments of the present application, the access network device may be a base station, a base station controller for wireless communication, or the like. The base station may be a base transceiver station (BTS) in a global system for mobile communications (GSM) or code division multiple access (CDMA) network, a NodeB in wideband code division multiple access (WCDMA), an evolved NodeB (eNB), an eNB in the internet of things (IoT) or narrowband internet of things (NB-IoT), or a base station in a future 5G mobile communication network or a future evolved public land mobile network (PLMN), which is not limited in the embodiments of this application.
Terminals are used to provide voice and/or data connectivity services to users. The terminal may be referred to by different names, such as User Equipment (UE), access terminal, terminal unit, terminal station, mobile station, remote terminal, mobile device, wireless communication device, vehicular user equipment, terminal agent or terminal device, and the like. Optionally, the terminal may be various handheld devices, vehicle-mounted devices, wearable devices, and computers with communication functions, which is not limited in this embodiment of the present application. For example, the handheld device may be a smartphone. The in-vehicle device may be an in-vehicle navigation system. The wearable device may be a smart bracelet. The computer may be a Personal Digital Assistant (PDA) computer, a tablet computer, and a laptop computer.
In order to cope with the impact of mass data traffic on the mobile communication network, the access traffic of the mobile communication network can be reduced by caching the content required by the user to the MEC node.
Fig. 1 provides a schematic diagram of an MEC network architecture, as shown in fig. 1, comprising: remote server R, cooperation domain A, cooperation domain B, base station C, and user terminal U1 and user terminal U2. The cooperation domain A includes: MEC node a1, MEC node a2, MEC node a3, and MEC node a4. The cooperation domain B includes: MEC node b1, MEC node b2, MEC node b3, and MEC node b4.
Illustratively, when a video access request is initiated by the user terminal U1 or the user terminal U2, the video access request is sent to the MEC node b3 in the cooperation domain B through the base station C. If the cache of the MEC node b3 does not hold the video resource requested by the user terminal U1, the MEC node b3 requests the video resource from the other neighboring MEC nodes in the cooperation domain B (e.g., MEC node b1, MEC node b2, and MEC node b4); if the video resource is cached by one of these neighboring MEC nodes, it is sent to the user terminal U1 through the MEC node b3.
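For illustration, a minimal Python sketch of this lookup chain follows; the function name, the cache attribute, and the fallback to the remote server R are assumptions (the passage above only describes the neighbor lookup explicitly).

```python
def handle_request(content_id: str, node, neighbors, remote_server):
    """Resolve a content request at an MEC node: serve it from the node's own
    cache, otherwise ask the neighboring MEC nodes in the cooperation domain,
    otherwise fall back to the remote server (assumed behavior)."""
    if content_id in node.cache:
        return node.cache[content_id]
    for neighbor in neighbors:                  # e.g. b1, b2 and b4 for node b3
        if content_id in neighbor.cache:
            return neighbor.cache[content_id]   # relayed to the user terminal via this node
    return remote_server.fetch(content_id)      # not cached anywhere in the cooperation domain
```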
For example, the MEC node cache region may be divided into a local cache region b3-1 and a global cache region b3-2. The content cached in the local cache region is mainly content in high demand by the local users of the MEC node, while the global cache region can provide a content caching service for other user terminals in the cooperation domain or for user terminals in adjacent cooperation domains.
In the related art, the local cache area and the global cache area of the MEC node are divided according to a fixed ratio. The user terminal U1 or the user terminal U2 belongs to the local users of the MEC node b3, so a video access request is preferentially served from the local cache region of the MEC node b3. As the average popularity of the content cached in the local cache region grows, the space occupancy of the local cache region becomes higher and higher, so new content cannot be cached, local users cannot obtain the video content from the local cache region, the access delay of the users increases, and the user experience degrades.
To solve this problem, the application provides a caching method and device. Whether the cache space of the local cache region in the MEC node needs to be adjusted is determined from the content popularity characteristics of the local cache region and of the global cache region of the MEC node, so that the local cache region of the MEC node can cache the content required by its local users in time, which reduces user delay to a certain extent and improves user experience.
The execution body of the caching method provided in the embodiments of the present invention may be a caching device, or a functional module and/or functional entity in the caching device that can implement the caching method; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention. For example, the caching device may be an MEC node.
The technical scheme provided by the application is described below with reference to the accompanying drawings.
Embodiment one:
the embodiment provides a caching method, which is applied to an MEC node, and as shown in fig. 2, the caching method includes the following steps:
s101, obtaining cache region parameters of the MEC nodes.
Wherein, the cache region parameters include: a first popularity parameter of the cache content in the local cache region of the MEC node, and a second popularity parameter of the cache content in the global cache region of the MEC node.
Illustratively, the first popularity parameter is used to indicate a popularity characteristic of the cached content in the local cache region. And the second popularity parameter is used for representing the popularity characteristics of the cache content in the global cache region.
Illustratively, the first popularity parameter includes: the first popularity mean and the first popularity standard deviation of all the cache contents in the local cache region.
Illustratively, the second popularity parameter includes: a second average of popularity for all cached content in the global cache.
It should be noted that, in the embodiments of the present application, the popularity mean and the popularity standard deviation of the cache content are used to measure the popularity characteristics of the cache content in the local cache region and the global cache region in the MEC node. The protection scope of the present application is not limited thereto, and any parameter for measuring the popularity characteristics of the cache contents in the cache area is within the protection scope of the present application.
It should be noted that the content cached in the local cache region of the MEC node mainly provides a content caching service for the local user, and the content cached in the global cache region mainly provides a caching service for the global user.
For example, before step S101 of the caching method provided in the embodiment of the present application, a cache space in an MEC node may be divided into a local cache region and a global cache region according to a preset ratio.
Illustratively, the preferences of local users are analyzed according to the content they request; modeling and machine learning are performed on these preferences, and a local content caching strategy is formulated, so that the MEC node can predict the content preferred by local users and cache popular content into the local cache region in advance. Likewise, the preferences of global users are analyzed according to the content they request, a global content caching strategy is formulated through modeling and machine learning, and popular content is periodically delivered to the global cache region.
Illustratively, this embodiment further provides a method for distinguishing local users from global users: within a preset period, a user terminal whose residence time within the MEC service range is greater than or equal to a preset time threshold is a local user, and a user terminal whose residence time within the MEC service range is smaller than the preset time threshold is a global user.
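A minimal sketch of this residence-time rule; the function name and the example window and threshold values are hypothetical, since the patent only calls them "preset".

```python
def classify_user(residence_seconds: float, threshold_seconds: float) -> str:
    """Classify a user terminal by its residence time within the MEC service
    range during the preset statistics window."""
    return "local" if residence_seconds >= threshold_seconds else "global"

# Hypothetical values: dwell times measured against a 10-minute threshold.
print(classify_user(residence_seconds=900, threshold_seconds=600))  # -> local
print(classify_user(residence_seconds=120, threshold_seconds=600))  # -> global
```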
And S102, determining a first adjusting parameter according to the first popularity parameter and the second popularity parameter.
The first adjustment parameter is used to indicate a specific adjustment amount for adjusting the local cache region in the MEC node.
S103, adjusting the size of the cache space of the local cache region according to the first adjustment parameter.
Illustratively, step S103 may include the following: calculating a first target popularity value of the local cache region according to the first popularity mean value and the first popularity standard deviation.
For example, the first popularity standard deviation may be calculated using formula 1:
σ1 = √( (1/N) · Σ_{i=1}^{N} (P_i - μ)^2 )    (formula 1)
where σ1 denotes the first popularity standard deviation, P_i denotes the popularity of the ith cache content in the local cache region, μ denotes the first popularity mean value, and N denotes the total number of cache contents in the local cache region. Illustratively, the second popularity standard deviation may also be calculated using formula 1.
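A minimal Python sketch of formula 1; the function and variable names are illustrative.

```python
import math

def popularity_mean_and_std(popularities: list[float]) -> tuple[float, float]:
    """Formula 1: return the popularity mean and the (population) standard
    deviation of the N cached contents of a cache region."""
    n = len(popularities)
    mean = sum(popularities) / n
    variance = sum((p - mean) ** 2 for p in popularities) / n
    return mean, math.sqrt(variance)

# Example: popularity values of the contents currently in the local cache region.
mu1, sigma1 = popularity_mean_and_std([0.8, 0.5, 0.9, 0.3])
```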
Illustratively, after obtaining the first target popularity value of the local cache, step 103 further includes:
and if the first popularity mean value is smaller than the second popularity mean value and the first target popularity value is smaller than the second target popularity value of the target content in the network, reducing the cache space of the local cache region according to the first space adjustment amount.
And if the first popularity mean value is larger than the second popularity mean value and the first target popularity value is larger than the second target popularity value of the target content in the network, increasing the cache space of the local cache region according to the second space adjustment amount.
Wherein the first adjustment parameter includes: a first amount of spatial adjustment and a second amount of spatial adjustment; the cache content in the local cache region belongs to the target content, and the target content is popular content with the popularity larger than a preset threshold value in the network.
Illustratively, the popularity of content in the network obeys the Pareto principle, i.e., 80% of the user access traffic in the network is concentrated on 20% of the content; therefore, the target content is the most popular 20% of the content in the network.
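As an illustration of this 80/20 selection, the sketch below picks the most popular 20% of contents as the target content; the data layout and the rounding rule are assumptions.

```python
def select_target_content(popularity_by_content: dict[str, float]) -> set[str]:
    """Return the ids of the most popular 20% of contents in the network,
    i.e. the target content under the Pareto (80/20) assumption."""
    ranked = sorted(popularity_by_content, key=popularity_by_content.get, reverse=True)
    cutoff = max(1, round(0.2 * len(ranked)))
    return set(ranked[:cutoff])

# Example with hypothetical per-content popularity values.
print(select_target_content({"v1": 0.9, "v2": 0.7, "v3": 0.2, "v4": 0.1, "v5": 0.05}))
```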
For example, the first target popularity value of the local cache region may be calculated from the first popularity mean value and the first popularity standard deviation using formula 2, where β denotes the first target popularity value, μ denotes the first popularity mean value, and σ1 denotes the first popularity standard deviation. (Formula 2 is given only as an image in the original publication and is not reproduced in this text.)
For example, the second target popularity value may also be calculated by using the third target popularity mean of the target content, the third target popularity standard deviation of the target content, and formula two.
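Formula 2 itself survives only as an image in the source text, so the sketch below assumes one plausible form, β = μ + σ (mean plus one standard deviation), purely for illustration; it is not necessarily the patent's actual formula.

```python
def target_popularity_value(mean: float, std_dev: float) -> float:
    """ASSUMED form of formula 2: beta = mu + sigma. The patent's actual
    formula is only given as an image and may differ."""
    return mean + std_dev

beta1 = target_popularity_value(mean=0.625, std_dev=0.238)  # local cache region (hypothetical values)
beta2 = target_popularity_value(mean=0.900, std_dev=0.050)  # target content (hypothetical values)
```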
Since the MEC node periodically adjusts the cache spaces of the local cache region and the global cache region, the popularity of the content cached in the local cache region and the global cache region needs to be calculated in each adjustment period.
For example, at the time t of one adjustment period, the popularity of the ith cache content in the local cache region or the global cache region of the MEC node may be calculated by the following formula 3.
P_i(t) = R_i / T_i    (formula 3)
where P_i(t) denotes the popularity of the ith cache content in the local cache region or the global cache region at time t, R_i denotes the number of user accesses to the ith cache content in that region, and T_i denotes the statistical period of the ith cache content in that region.
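A minimal sketch of formula 3; the function and parameter names are illustrative.

```python
def content_popularity(access_count: int, statistical_period: float) -> float:
    """Formula 3: P_i(t) = R_i / T_i, the number of user accesses to the ith
    cached content divided by its statistical period."""
    return access_count / statistical_period

# Example: 120 accesses over a 600-second statistical period.
print(content_popularity(access_count=120, statistical_period=600.0))  # 0.2
```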
Illustratively, a specific implementation procedure of the caching method provided in this embodiment, as shown in fig. 3, includes:
and S10, starting.
S11, obtain a first popularity mean value μ1 of the cache content in the local cache region and a second popularity mean value μ2 of the cache content in the global cache region.
S12, obtain a first target popularity value β1 of the cache content in the local cache region and a second target popularity value β2 of the target content.
S13, compare μ1 with μ2: if μ1 < μ2, execute step S14; if μ1 ≥ μ2, execute step S15.
S14, if β1 < β2, execute step S16; otherwise, execute step S11.
S15, if μ1 > μ2 and β1 > β2, execute step S16; otherwise, execute step S11.
S16, adjust the cache space of the MEC node according to the first adjustment parameter, and return to step S11. This includes adjusting the size of the local cache space of the MEC node according to the first space adjustment amount and adjusting the size of the global cache space of the MEC node according to the second space adjustment amount.
For example, when β1 and β2 are compared in the flow shown in fig. 3, a static margin β′ may be set, i.e., the condition in step S14 becomes β1 < β2 - β′ and the condition in step S15 becomes β1 > β2 + β′, where β′ = β2 × 5%.
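Putting steps S11 to S16 together, the sketch below shows one adjustment decision per period; the 5% margin follows the note above, while the function name, sign convention, and return value are illustrative assumptions.

```python
def adjust_once(mu1: float, mu2: float, beta1: float, beta2: float,
                first_adjustment: float, second_adjustment: float) -> float:
    """One adjustment period of the flow in fig. 3 (steps S13-S16).
    Returns the signed change to apply to the local cache space:
    negative = shrink by the first space adjustment amount,
    positive = enlarge by the second space adjustment amount,
    0.0 = no adjustment (return to S11)."""
    margin = 0.05 * beta2                       # static margin beta' = beta2 * 5%
    if mu1 < mu2 and beta1 < beta2 - margin:    # S13 -> S14 -> S16
        return -first_adjustment
    if mu1 > mu2 and beta1 > beta2 + margin:    # S13 -> S15 -> S16
        return second_adjustment
    return 0.0
```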
It should be noted that the obtaining of the specific parameter in the step shown in fig. 3 has been described in detail in the above method, and is not described herein again.
Illustratively, when the cache space sizes of the local cache region and the global cache region of the MEC node are adjusted, a popularity-priority principle may be followed: when the cache space of the local cache region or the global cache region is reduced, cache contents with low popularity are preferentially deleted; when the cache space of the local cache region or the global cache region is increased, cache contents with high popularity are preferentially cached.
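A sketch of this popularity-priority rule for the shrinking case; the content-record layout (content id to popularity, content id to size in bytes) is an assumption.

```python
def shrink_region(popularity_of: dict[str, float], size_of: dict[str, int],
                  bytes_to_free: int) -> dict[str, float]:
    """When a cache region is reduced, evict the least popular contents first
    until enough space has been freed (popularity-priority principle)."""
    freed = 0
    for cid in sorted(popularity_of, key=popularity_of.get):  # ascending popularity
        if freed >= bytes_to_free:
            break
        freed += size_of[cid]
        del popularity_of[cid]
    return popularity_of
```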
According to the caching method provided in the embodiments of the application, the first adjustment parameter is determined by obtaining the first popularity parameter of the local cache region and the second popularity parameter of the global cache region of the MEC node, and the cache spaces of the local cache region and the global cache region are adjusted according to the first adjustment parameter. This solves the technical problem that new content required by users cannot be cached in time when the cache region of the MEC node is divided according to a fixed ratio, avoids an increase in user access delay to a certain extent, and improves user experience.
Embodiment two:
the present embodiment provides a cache apparatus, as shown in fig. 4, including: an acquisition unit 201, a determination unit 202, and an execution unit 203, wherein:
an obtaining unit 201, configured to obtain a cache region parameter of an MEC node, where the cache region parameter includes: a first popularity parameter of the cache content in the local cache region of the MEC node, and a second popularity parameter of the cache content in the global cache region of the MEC node.
A determining unit 202, configured to determine a first adjustment parameter according to the first popularity parameter and the second popularity parameter acquired by the acquiring unit 201; the first adjustment parameter is used for indicating a specific adjustment amount for adjusting the local cache region in the MEC node.
The execution unit 203 is configured to adjust the size of the cache space of the local cache region according to the first adjustment parameter determined by the determination unit 202.
Optionally, the first popularity parameter includes: the method comprises the steps that a first popularity mean value and a first popularity standard deviation of all cache contents in a local cache region are obtained; the second popularity parameter includes: a second average of popularity for all cached content in the global cache.
Optionally, the obtaining unit 201 is further configured to calculate a first target popularity value of the local cache area according to the first popularity mean and the first popularity standard deviation.
The executing unit 203 is configured to reduce the cache space of the local cache area according to the first space adjustment amount when the first popularity average is smaller than the second popularity average and the first target popularity value acquired by the acquiring unit 201 is smaller than the second target popularity value of the target content in the network.
The executing unit 203 is further configured to increase the cache space of the local cache area according to the second space adjustment amount when the first popularity average is greater than the second popularity average and the first target popularity value acquired by the acquiring unit 201 is greater than the second target popularity value.
Wherein the first adjusting parameter comprises: a first amount of spatial adjustment and a second amount of spatial adjustment; the cached content in the local cache region belongs to target content, and the target content is popular content with the popularity larger than a preset threshold value in the network.
Optionally, the second target popularity value is determined based on a third popularity mean and a third popularity standard deviation of the target content.
Optionally, the obtaining unit 201 is specifically configured to calculate the first target popularity value of the local cache region according to the first popularity mean value, the first popularity standard deviation, and formula 2, where β denotes the first target popularity value, μ denotes the first popularity mean value, and σ1 denotes the first popularity standard deviation.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the embodiment of the present application, the cache device may be divided into the functional modules or the functional units according to the above method example, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Fig. 5 shows a schematic diagram of a possible structure of the above-described cache device in the case of an integrated unit. The cache apparatus 50 includes: a storage unit 501, a processing unit 502, and an interface unit 503. The processing unit 502 is configured to control and manage the actions of the cache apparatus 50. The storage unit 501 is configured to store the program code and data of the cache apparatus 50. The interface unit 503 is configured to connect to other external devices to receive input content.
For example, the processing unit is a processor, the storage unit is a memory, and the interface unit is a transceiver. The cache apparatus may be as shown in fig. 6 with reference to the apparatus 60, and includes a transceiver 603, a processor 602, a memory 601, and a bus 604, where the transceiver 603 and the processor 602 are connected to the memory 601 through the bus 604.
Processor 602 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to control the execution of programs in accordance with the teachings of the present disclosure.
The Memory 601 may be a Read-Only Memory (ROM) or other types of static storage devices that can store static information and instructions, a Random Access Memory (RAM) or other types of dynamic storage devices that can store information and instructions, an electrically erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these. The memory may be self-contained and coupled to the processor via a bus. The memory may also be integral to the processor.
The memory 601 is used for storing application program codes for executing the scheme of the application, and the processor 602 controls the execution. The transceiver 603 is configured to receive content input from an external device, and the processor 602 is configured to execute the application program code stored in the memory 601, so as to implement the caching method in the embodiment of the present application.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are all or partially generated upon loading and execution of computer program instructions on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or can comprise one or more data storage devices, such as a server, a data center, etc., that can be integrated with the medium. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A caching method, comprising:
obtaining cache region parameters of a mobile edge computing MEC node, wherein the cache region parameters comprise: a first popularity parameter of the cache content in the local cache region of the MEC node and a second popularity parameter of the cache content in the global cache region of the MEC node;
determining a first adjusting parameter according to the first popularity parameter and the second popularity parameter; the first adjustment parameter is used for indicating a specific adjustment amount for adjusting a local cache region in the MEC node;
and adjusting the size of the cache space of the local cache region according to the first adjustment parameter.
2. The caching method of claim 1, wherein the first popularity parameter comprises: the first popularity mean value and the first popularity standard deviation of all cache contents in the local cache region; the second popularity parameter comprises: a second popularity average of all cached content in the global cache region.
3. The caching method according to claim 2, wherein the adjusting the size of the space of the local cache according to the first adjustment parameter comprises:
calculating a first target popularity value of the local cache area according to the first popularity mean value and the first popularity standard deviation;
if the first popularity mean value is smaller than the second popularity mean value and the first target popularity value is smaller than a second target popularity value of target content in the network, reducing the cache space of the local cache area according to a first space adjustment amount;
if the first popularity mean value is larger than the second popularity mean value and the first target popularity value is larger than the second target popularity value, increasing the cache space of the local cache area according to a second space adjustment amount;
wherein the first adjustment parameter comprises: a first amount of spatial adjustment and the second amount of spatial adjustment; and the cache content in the local cache region belongs to the target content, and the target content is popular content with the popularity larger than a preset threshold value in the network.
4. The caching method of claim 3, wherein the second target popularity value is determined based on a third popularity mean and a third popularity standard deviation of the target content.
5. The caching method according to claim 3 or 4, wherein the calculating a first target popularity value of the local cache according to the first popularity mean and the first popularity standard deviation comprises:
calculating a first target popularity value of the local cache region according to the first popularity mean value, the first popularity standard deviation and a formula I;
wherein β represents the first target popularity value, μ represents the first popularity mean value, and σ1 represents the first popularity standard deviation; formula I is set out as an image in the original publication and is not reproduced in this text.
6. The cache device is characterized by comprising an acquisition unit, a determination unit and an execution unit;
the obtaining unit is configured to obtain a cache region parameter of the MEC node, where the cache region parameter includes: a first popularity parameter of the cache content in the local cache region of the MEC node and a second popularity parameter of the cache content in the global cache region of the MEC node;
the determining unit is configured to determine a first adjustment parameter according to the first popularity parameter and the second popularity parameter acquired by the acquiring unit; the first adjustment parameter is used for indicating a specific adjustment amount for adjusting a local cache region in the MEC node;
the execution unit is configured to adjust the size of the cache space of the local cache region according to the first adjustment parameter determined by the determination unit.
7. The caching apparatus of claim 6, wherein the first popularity parameter comprises: the first popularity mean value and the first popularity standard deviation of all cache contents in the local cache region; the second popularity parameter comprises: a second popularity average of all cached content in the global cache region.
8. The cache apparatus according to claim 7, wherein the executing unit is configured to adjust a cache space size of the local cache according to the first adjustment parameter determined by the determining unit, and includes:
the obtaining unit is further configured to calculate a first target popularity value of the local cache area according to the first popularity mean value and the first popularity standard deviation;
the execution unit is configured to reduce the cache space of the local cache area according to a first space adjustment amount when the first popularity average is smaller than the second popularity average and the first target popularity value is smaller than a second target popularity value of target content in the network;
the execution unit is further configured to increase the cache space of the local cache area according to a second space adjustment amount when the first popularity average is greater than the second popularity average and the first target popularity value is greater than the second target popularity value;
wherein the first adjustment parameter comprises: a first amount of spatial adjustment and the second amount of spatial adjustment; and the cache content in the local cache region belongs to the target content, and the target content is popular content with the popularity larger than a preset threshold value in the network.
9. The caching apparatus of claim 8, wherein the second target popularity value is determined based on a third popularity mean and a third popularity standard deviation of the target content.
10. The cache device according to claim 8 or 9, wherein the obtaining unit is specifically configured to calculate a first target popularity value of the local cache area according to the first popularity mean, the first popularity standard deviation, and a formula one;
wherein β represents the first target popularity value, μ represents the first popularity mean value, and σ1 represents the first popularity standard deviation; formula one is set out as an image in the original publication and is not reproduced in this text.
CN201911135894.4A 2019-11-19 2019-11-19 Caching method and device Active CN111225023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911135894.4A CN111225023B (en) 2019-11-19 2019-11-19 Caching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911135894.4A CN111225023B (en) 2019-11-19 2019-11-19 Caching method and device

Publications (2)

Publication Number Publication Date
CN111225023A (en) 2020-06-02
CN111225023B CN111225023B (en) 2022-02-25

Family

ID=70829410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911135894.4A Active CN111225023B (en) 2019-11-19 2019-11-19 Caching method and device

Country Status (1)

Country Link
CN (1) CN111225023B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2249549A1 (en) * 2009-05-06 2010-11-10 Alcatel Lucent Method for optimizing delivery of content from cache regarding cost
CN102137138A (en) * 2010-09-28 2011-07-27 华为技术有限公司 Method, device and system for cache collaboration
CN102204218A (en) * 2011-05-31 2011-09-28 华为技术有限公司 Data processing method, buffer node, collaboration controller, and system
CN102523256A (en) * 2011-11-30 2012-06-27 华为技术有限公司 Content management method, device and system
US20140215156A1 (en) * 2013-01-30 2014-07-31 Electronics And Telecommunications Research Institute Prioritized dual caching method and apparatus
CN103778071A (en) * 2014-01-20 2014-05-07 华为技术有限公司 Cache space distribution method and device
US20150370490A1 (en) * 2014-06-24 2015-12-24 Nec Europe Ltd. Optimizing ssd-based content caches in content delivery networks
CN104284201A (en) * 2014-09-26 2015-01-14 北京奇艺世纪科技有限公司 Video content processing method and device
CN105022700A (en) * 2015-07-17 2015-11-04 哈尔滨工程大学 Named data network cache management system based on cache space division and content similarity and management method
CN106502576A (en) * 2015-09-06 2017-03-15 中兴通讯股份有限公司 Migration strategy method of adjustment, capacity change suggesting method and device
CN105491156A (en) * 2016-01-08 2016-04-13 华中科技大学 SD-RAN-based whole network collaborative content caching management system and method
US20170264702A1 (en) * 2016-03-08 2017-09-14 Huawei Technologies Co., Ltd. Distributed hierarchial cache management system and method
CN109688171A (en) * 2017-10-18 2019-04-26 中国电信股份有限公司 Spatial cache dispatching method, device and system
CN107911711A (en) * 2017-10-24 2018-04-13 北京邮电大学 A kind of edge cache for considering subregion replaces improved method
CN110213627A (en) * 2019-04-23 2019-09-06 武汉理工大学 Flow medium buffer distributor and its working method based on multiple cell user mobility

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜微: "Efficient content distribution mechanisms and strategies for mobile communication networks", Doctoral dissertation, University of Electronic Science and Technology of China *

Also Published As

Publication number Publication date
CN111225023B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
KR102382913B1 (en) Method and Apparatus for Wireless Resource Scheduling with Guaranteed QoS in Mobile Communication System
CN106713028B (en) Service degradation method and device and distributed task scheduling system
CN111147395B (en) Network resource adjusting method and device
CN111949409B (en) Method and system for unloading computing task in power wireless heterogeneous network
US20140140329A1 (en) Network and user behavior based time-shifted mobile data transmission
KR102219015B1 (en) Use of network support protocols to improve network usage
CN111147327B (en) Network quality evaluation method and device
CN108834216B (en) Resource scheduling method and device
US20140192662A1 (en) Network and user behavior based time-shifted mobile data transmission
CN115473841B (en) Network path determining method, device and storage medium
CN114614989A (en) Feasibility verification method and device of network service based on digital twin technology
CN110839166B (en) Data sharing method and device
CN112887905A (en) Task unloading method based on periodic resource scheduling in Internet of vehicles
CN111511028A (en) Multi-user resource allocation method, device, system and storage medium
CN112905110B (en) Data storage method and device, storage medium, user equipment and network side equipment
CN112291796B (en) Cell network capacity expansion method, device, equipment and storage medium
US20230224693A1 (en) Communication method and device, and electronic device and computer-readable storage medium
CN111225023B (en) Caching method and device
CN111278039B (en) User perception suppression identification method, device, equipment and medium
CN114158104B (en) Network selection method, device, terminal and storage medium
CN111465052A (en) Core network mapping and mapping table generating method, device, equipment and medium
CN112738815B (en) Method and device for evaluating number of accessible users
CN112291754B (en) QOS parameter calculation method and access network equipment
CN113613184B (en) Flow package determining method and device
CN114006874B (en) Resource block scheduling method, device, storage medium and base station

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant