CN113709853B - Network content transmission method and device oriented to cloud edge collaboration and storage medium - Google Patents


Publication number: CN113709853B
Authority: CN (China)
Prior art keywords: energy consumption, MBS, network, SBS, cloud
Legal status: Active
Application number: CN202110836601.6A
Other languages: Chinese (zh)
Other versions: CN113709853A
Inventors: 方超, 黄潇洁, 徐航, 王朱伟, 陈华敏, 孙阳
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by: Beijing University of Technology
Events:
    • Priority to CN202110836601.6A
    • Publication of CN113709853A
    • Application granted
    • Publication of CN113709853B
    • Anticipated expiration

Classifications

    • H04W52/0203: Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W24/06: Testing, supervising or monitoring using simulated traffic
    • H04W52/241: TPC performed according to specific parameters using SIR or other wireless path parameters, taking into account channel quality metrics, e.g. SIR, SNR, CIR, Eb/Io
    • H04W52/265: TPC performed according to specific parameters using transmission rate, taking into account the quality of service QoS
    • Y02D30/70: Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention relates to a cloud-edge-collaboration-oriented network content transmission method, device, and storage medium. The method comprises the following steps: analyzing and constructing a network model according to the SBS and MBS connection conditions in a heterogeneous network and the manner in which users access base stations; analyzing and constructing energy consumption models for each part of the system based on the network model, and constructing a system energy consumption model from them; and solving the system energy consumption model under a caching strategy and optimizing the network system based on the solution to reduce energy consumption. By reasonably deploying in-network caches under SBS and MBS cooperation and by adopting request aggregation and the cooperative allocation of computing, caching, and communication resources, the method reduces redundant transmission of network content and improves the content transmission efficiency, resource utilization, and energy efficiency of the heterogeneous network.

Description

Network content transmission method and device oriented to cloud edge collaboration and storage medium
Technical Field
The present invention relates to the field of wireless communications technologies, and in particular, to a method and an apparatus for transmitting network content for cloud-edge collaboration, and a storage medium.
Background
With the rapid growth of Internet traffic driven by multimedia services (e.g., YouTube, Netflix) and smart mobile terminals (e.g., smartphones, tablet computers), the ultra-dense network, a key technology of the fifth-generation (5G) mobile communication system, can effectively improve resource utilization and content transmission efficiency. However, with the dense deployment of base stations and the growing quality-of-service (QoS) demands driven by increasing numbers of users and communication data volumes, the energy consumption problem has become increasingly prominent. Therefore, improving network energy efficiency while ensuring content transmission efficiency is an urgent and challenging task.
Cloud computing, the first to appear, improves content delivery by shortening the distance between content requesters and providers, but incurs additional deployment and operating costs as well as significant energy consumption. As a lightweight extension of cloud computing, mobile edge computing (MEC) can further reduce transmission delay and network energy consumption by placing caching and computing capabilities at the edge of the mobile network (e.g., base stations, access routers, and switches). Given the significant advantages of mobile edge computing over cloud computing, together with its limited service capability, cloud-edge collaboration has recently attracted great attention in both academia and industry. Cloud-edge collaborative schemes were initially applied in two-layer networks of edge devices and clouds, ignoring the computing and caching capabilities of base stations. Researchers have made many attempts: the problems of energy-efficient wireless backhaul bandwidth allocation and power allocation have been studied in heterogeneous small-cell networks, and an iteratively solvable optimal energy-efficiency model has been proposed. It has also been proposed to combine a distributed framework with centralized processing capability to realize a green network, in particular reducing energy consumption through a base-station sleep strategy. Other scholars have established a joint energy and user allocation method based on energy sharing between macro base stations (MBSs) and small base stations (SBSs) in heterogeneous networks. Researchers have further improved mobile network performance by jointly managing and allocating cache, computing, and communication resources. However, current work focuses primarily on the joint optimization of only two of these resources.
Scholars have proposed a software-defined-networking-based IC-IoT network architecture and used deep Q-networks (DQN) to optimize the allocation of computing and cache resources. Others have formulated the joint offloading and resource allocation problem as a Markov decision process (MDP) to maximize the number of offloaded tasks while reducing energy consumption.
Although cloud-edge collaboration can improve energy efficiency and content delivery, most existing work assumes a three-tier topology of mobile users (MUs), base stations, and the cloud, without considering the effects of in-network caching, request aggregation, and network heterogeneity, and mainly focuses on the joint optimal configuration of only two resources.
Disclosure of Invention
The invention aims to provide a cloud-edge-collaboration-oriented network content transmission method, device, and storage medium for solving the above problems in the prior art.
In a first aspect, the present invention provides a network content transmission method facing cloud edge collaboration, including:
analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system;
and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
Further, the network model comprises the cloud, the MBSs, the SBSs, and the mobile user terminals, wherein the MBSs and SBSs have caching and computing capabilities, and the cloud stores the content requested by the mobile user terminals.
Further, the energy consumption model of each part of the system comprises: the energy consumption model of the mobile user terminal, the energy consumption model of the base station, the energy consumption model of the cloud and the energy consumption model of the link.
Further, the energy consumption model of the mobile user terminal comprises the energy consumption of the mobile user terminal directly connecting the SBS and the mobile user terminal directly connecting the MBS;
the base station energy consumption model comprises MBS energy consumption and SBS energy consumption;
the cloud energy consumption model comprises the static energy consumption of the cloud and the energy consumption generated by satisfying requests that cannot be handled by the MBS;
the link energy consumption model comprises energy consumption between the MBS and the SBS and between the MBS and the cloud.
Further, the system energy consumption model includes link energy consumption and base station energy consumption for processing and transmitting requests in the network.
Further, the caching strategies include: a non-caching strategy without considering aggregation, an offline caching strategy considering aggregation, and an online caching strategy considering aggregation.
Further, aggregation refers to processing identical requests arriving within one time slot as a single request, and caching refers to caching appropriate content at the MBSs and SBSs.
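As a minimal illustration of the aggregation idea (the function and variable names below are my own, not from the patent), identical requests arriving within one time slot can be collapsed before any of them is forwarded upstream:

```python
from collections import Counter

def aggregate_requests(requests):
    """Collapse duplicate content requests arriving in one time slot.

    `requests` is a list of content IDs requested by users in the slot;
    identical requests are served once and the response is shared.
    Returns (unique_requests, savings), where savings is the number of
    redundant upstream transmissions avoided. Illustrative sketch only.
    """
    counts = Counter(requests)
    unique = list(counts)               # one entry per distinct content ID
    savings = len(requests) - len(unique)
    return unique, savings

# In one slot, three users ask for content 7 and one for content 2:
unique, saved = aggregate_requests([7, 7, 2, 7])
print(unique, saved)  # → [7, 2] 2
```

Two of the four requests are duplicates, so only two requests travel toward the MBS or cloud; this is the redundancy reduction the claim refers to.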
In a second aspect, the present invention provides a network content transmission device facing cloud edge collaboration, including:
the network model building module is used for analyzing and building a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module is used for analyzing and building energy consumption models of all parts of the system based on the network model and building energy consumption models of the system based on the energy consumption models of all parts of the system;
and the system energy consumption optimization module is used for solving the system energy consumption model based on a cache strategy and optimizing the network system based on a solving result so as to reduce energy consumption.
In a third aspect, the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the cloud edge collaboration-oriented network content transmission method according to the first aspect when executing the program.
In a fourth aspect, the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the cloud edge collaboration oriented network content transmission method according to the first aspect.
As can be seen from the above technical solutions, the cloud-edge-collaboration-oriented network content transmission method, apparatus, and storage medium provided by the present invention reduce redundant transmission of network content and improve the content transmission efficiency, resource utilization, and energy efficiency of the heterogeneous network by reasonably deploying in-network caches under SBS and MBS cooperation and by adopting request aggregation and the cooperative allocation of computing, caching, and communication (3C) resources.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the embodiments or technical solutions in the prior art are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a network content transmission method for cloud edge collaboration according to an embodiment of the present invention;
FIG. 2 is a network topology model diagram depicting a heterogeneous network system, according to an embodiment of the invention;
FIG. 3 illustrates the core parameter notation and significance employed by the model according to an embodiment of the present invention;
FIG. 4 is a flow diagram of an LRU policy according to an embodiment of the invention;
FIG. 5 is a diagram of a performance description of a caching policy according to an embodiment of the invention;
fig. 6 is a schematic structural diagram of a network content transmission apparatus facing cloud edge collaboration according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention designs an energy-efficient, cloud-edge-collaboration-oriented network content transmission algorithm that minimizes the energy consumption of the whole network by optimizing the cooperative allocation of 3C resources and transmission over optimal routing paths. The invention focuses on the energy consumption optimization problem under MBS and SBS cooperation.
Fig. 1 is a flowchart of a network content transmission method facing cloud edge collaboration according to an embodiment of the present invention, and referring to fig. 1, the network content transmission method facing cloud edge collaboration according to the embodiment of the present invention includes:
step 110: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
step 120: analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing the energy consumption models of the system based on the energy consumption models of all parts of the system;
step 130: and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In the embodiment of the present invention, it should be noted that the present invention first introduces the topology of the network according to the SBS and MBS connection conditions in the heterogeneous network and the manner in which users access base stations.
To effectively analyze the energy consumption of base stations in a heterogeneous network, Fig. 2 shows a model describing the network system infrastructure. From top to bottom, the top layer is the cloud, the middle layer consists of MBSs, and the layer closest to users consists of SBSs. Each MBS or SBS has caching and computing capabilities, and the content requests of mobile user terminals can be satisfied either by a base station or by the cloud. The cloud stores all content and resources requested by users, but because of its high energy consumption and long distance from users, users send their requests directly to an MBS or SBS instead.
The cloud is connected to the N_M MBSs through the Internet, and the ith MBS is connected to N_{i,s} SBSs; base stations are interconnected by wired links. Both MBSs and SBSs are edge nodes close to the mobile user terminals, with caching and computing capabilities, and can respond to and process part of the requests from mobile user terminals. An MBS has a larger cache than an SBS but incurs higher energy consumption; an SBS has limited capacity for processing user requests but consumes less power. According to distance, each MBS is directly connected to N_{i,m} mobile user terminals within its coverage, and each SBS is directly connected to N_{ij,m} mobile user terminals within its coverage. Mobile user terminals connect to base stations wirelessly, and a terminal can only send a content request to a base station it is directly connected to.
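The three-tier structure described above can be sketched as a simple lookup cascade; this is an illustrative model only, and all class and function names are assumptions rather than anything defined in the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SBS:
    cache: set = field(default_factory=set)    # content IDs cached at this SBS
    users: list = field(default_factory=list)  # N_ij,m directly connected users

@dataclass
class MBS:
    cache: set = field(default_factory=set)    # content IDs cached at this MBS
    sbss: list = field(default_factory=list)   # N_i,s SBSs wired to this MBS
    users: list = field(default_factory=list)  # N_i,m directly connected users

def serve(request, sbs, mbs):
    """Resolve a request along the SBS -> MBS -> cloud cascade."""
    if sbs is not None and request in sbs.cache:
        return "sbs"
    if request in mbs.cache:
        return "mbs"
    return "cloud"  # the cloud stores all content

mbs = MBS(cache={1}, sbss=[SBS(cache={2})])
print(serve(2, mbs.sbss[0], mbs))  # → sbs   (hit at the SBS cache)
print(serve(1, mbs.sbss[0], mbs))  # → mbs   (falls back to the MBS)
print(serve(9, mbs.sbss[0], mbs))  # → cloud (only the cloud has it)
```

The cascade order mirrors the energy argument in the text: the closer the hit, the lower the transmission energy and delay.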
In the embodiment of the present invention, it should be noted that the energy consumption models cover, respectively, the mobile user terminals, the base stations (including MBS and SBS models), the cloud, and the links. The invention also gives the system model and its optimization objective. Fig. 3 summarizes the core parameter symbols and their meanings.
The energy consumption models of the mobile user terminals are constructed for two cases: terminals directly connected to an SBS and terminals directly connected to an MBS.
Suppose there are N_M MBSs in the network, and the ith MBS is directly linked to N_{i,s} SBSs; requests can be forwarded over these links to the directly connected base stations to search for content. A mobile user terminal accesses an MBS or SBS directly through a wireless link. In the system model, N_{i,m} mobile user terminals directly access the ith MBS, and N_{ij,m} mobile user terminals are connected to the jth SBS under the ith MBS. Assume the number of distinct content items available in the network system of this model is F.
The energy consumption of the mobile user terminals can be expressed as the sum over users directly connected to SBSs and users directly connected to MBSs. Therefore, the energy consumption $E_{MU}$ of the mobile user terminals can be expressed as:

$$E_{MU}=\sum_{i=1}^{N_M}\sum_{j=1}^{N_{i,s}}\sum_{m=1}^{N_{ij,m}}\sum_{k=1}^{F}E_{ij,m}^{k,s}+\sum_{i=1}^{N_M}\sum_{m=1}^{N_{i,m}}\sum_{k=1}^{F}E_{i,m}^{k,m}\qquad(1)$$

where $E_{ij,m}^{k,s}$ denotes the energy consumption generated by the mth user sending request k to the jth SBS under the ith MBS, and $E_{i,m}^{k,m}$ denotes the energy consumption generated by the mth user sending request k to the ith MBS. The two access cases, wireless direct connection to an SBS and wireless direct connection to an MBS, are described in detail below.
For SBS-direct users:
the energy consumption generated by the user directly connecting with the lowest SBS can be expressed as the product of the power consumption and the time delay generated during the period that the user sends a request and receives the content returned by the network. In the present model, the energy consumption generated by the mth user sending the request k to the jth SBS connected under the ith MBS is represented as
Figure BDA0003177393540000071
The product of the time delay between sending this request and receiving the corresponding data can be expanded to be written as:
Figure BDA0003177393540000072
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003177393540000073
indicating the power consumption generated by the mth user sending a request k to the jth SBS connected below the ith MBS,
Figure BDA0003177393540000074
indicating the time delay generated by the mth user sending the request k to the jth SBS connected under the ith MBS.
The delay $t_{ij,m}^{k,s}$ can be further expressed as:

$$t_{ij,m}^{k,s}=x_{ij}^{k}T_{ij}^{k,s}+(1-x_{ij}^{k})\,x_{i}^{k}T_{i}^{k,m}+(1-x_{ij}^{k})(1-x_{i}^{k})\,T^{k,c}\qquad(3)$$

where $T_{ij}^{k,s}$ denotes the delay when request k is satisfied at the jth SBS under the ith MBS, $T_{i}^{k,m}$ the delay when it is satisfied at the ith MBS, and $T^{k,c}$ the delay when it is satisfied at the cloud. $x_{ij}^{k}$ and $x_{i}^{k}$ are Boolean variables indicating whether request k can be satisfied at the jth SBS under the ith MBS and at the ith MBS, respectively: $x_{ij}^{k}=1$ if content k is cached at the jth SBS under the ith MBS and 0 otherwise; likewise, $x_{i}^{k}=1$ if content k is cached at the ith MBS and 0 otherwise.
The system delay consists of three parts: response delay, processing delay, and link delay. To describe the delays in equation (3) in detail, we use queuing theory. The queuing model considers the aggregation mechanism and assumes that requests at each base station follow an M/M/k queue. Let $\lambda_{ij}$, $k_{ij}$, and $\mu_{ij}$ respectively denote the arrival rate of request traffic at the jth SBS under the ith MBS after aggregation, the number of servers of that SBS, and its service rate. The utilization of the jth SBS under the ith MBS can then be expressed as:

$$\rho_{ij}=\frac{\lambda_{ij}}{k_{ij}\mu_{ij}}\qquad(4)$$

The base station is not always idle when a request arrives: when all servers are occupied, a request from a mobile user terminal of the jth SBS under the ith MBS must queue at that SBS and wait until a server becomes free. This waiting probability can be written in the Erlang-C form:

$$P_{ij}^{w}=\frac{(k_{ij}\rho_{ij})^{k_{ij}}}{k_{ij}!\,(1-\rho_{ij})}\,\pi_{0ij}\qquad(5)$$

where $\pi_{0ij}$ denotes the steady-state probability that no request is present at the jth SBS under the ith MBS. Based on equations (4) and (5), the response delay of content request k satisfied at the jth SBS under the ith MBS can be written as:

$$T_{ij}^{k,res}=\frac{P_{ij}^{w}}{k_{ij}\mu_{ij}(1-\rho_{ij})}+\frac{1}{\mu_{ij}}\qquad(6)$$
when the request k is in the buffer of the jth SBS connected under the ith MBS, the total delay generated by the mobile user equipment includes the uplink and downlink transmission delay between the jth SBS connected under the ith MBS and the mobile user equipment, and the processing delay and the response delay in the jth SBS connected under the ith MBS, which can be expressed as:
Figure BDA0003177393540000083
wherein l ij,up And l ij,down Respectively representing the uplink and downlink transmission rates, v, of the mobile user terminal to the jth SBS connected under the ith MBS ij CPU clock speed, s, representing the SBS process request content k k Which represents the size of the content k and,
Figure BDA0003177393540000084
indicating the number of CPU revolutions, alpha, required to process the request c Indicating the proportion of requested data in the total data generated by a requested task.
Let $\lambda_{i}$, $k_{i}$, and $\mu_{i}$ be the request arrival rate, the number of servers, and the service rate at the ith MBS after aggregation, and likewise let $\lambda_{C}$, $k_{C}$, and $\mu_{C}$ be those of the cloud. The total delays of request k satisfied at the ith MBS and at the cloud are then:

$$T_{i}^{k,m}=\frac{\alpha_{c}s_{k}}{l_{ij,up}}+\frac{s_{k}}{l_{ij,down}}+\frac{\alpha_{c}s_{k}}{l_{i,up}}+\frac{s_{k}}{l_{i,down}}+\frac{c_{k}}{v_{i}}+T_{i}^{k,res}\qquad(8)$$

$$T^{k,c}=\frac{\alpha_{c}s_{k}}{l_{ij,up}}+\frac{s_{k}}{l_{ij,down}}+\frac{\alpha_{c}s_{k}}{l_{i,up}}+\frac{s_{k}}{l_{i,down}}+\frac{\alpha_{c}s_{k}}{l_{C,up}}+\frac{s_{k}}{l_{C,down}}+\frac{c_{k}}{v_{c}}+T_{C}^{k,res}\qquad(9)$$

where $l_{i,up}$ and $l_{i,down}$ are the uplink and downlink transmission rates between the ith MBS and the jth SBS under it, $l_{C,up}$ and $l_{C,down}$ are the uplink and downlink transmission rates between the MBS and the cloud, $v_{i}$ and $v_{c}$ are the CPU clock speeds of the ith MBS and of the cloud, and $T_{i}^{k,res}$ and $T_{C}^{k,res}$ are the response delays when content request k is satisfied at the ith MBS and at the cloud, respectively.
For MBS-direct users:
based on the above analysis, the m-th user sends a request k to the i-th MBS to generate energy consumption
Figure BDA0003177393540000093
Can be written as:
Figure BDA0003177393540000094
wherein the content of the first and second substances,
Figure BDA0003177393540000095
for the power consumed by the user at the ith MBS access request k,
Figure BDA0003177393540000096
the time delay generated for the user in the ith MBS access request k. The time delay can be expanded as follows:
Figure BDA0003177393540000097
wherein the content of the first and second substances,
Figure BDA0003177393540000098
indicating the delay incurred when request k is satisfied by the ith MBS,
Figure BDA0003177393540000099
represents the delay incurred when request k is fulfilled to the jth SBS connected under the ith MBS,
Figure BDA00031773935400000910
representing the delay incurred when request k is satisfied to the cloud.
Figure BDA00031773935400000911
And
Figure BDA00031773935400000912
is a boolean variable indicating whether the request k is satisfied in the jth SBS and ith MBS connected under the ith MBS, respectively. If the content k is cached in the jth SBS connected under the ith MBS
Figure BDA00031773935400000913
Equal to 1, otherwise equal to 0. As can be seen from the same process, if the content k is cached in the ith MBS,
Figure BDA00031773935400000914
equal to 1, otherwise equal to 0.
In a mobile network, the energy consumption of the base stations can be written as the sum of the MBS and SBS energy consumption, where each term is the product of power consumption and time. The total base-station energy consumption $E_{BS}$ can therefore be expressed as:

$$E_{BS}=T_{s}\left(\sum_{i=1}^{N_M}\sum_{j=1}^{N_{i,s}}P_{ij}+\sum_{i=1}^{N_M}P_{i}\right)\qquad(12)$$

where $T_{s}$ is the running time of the system, and $P_{ij}$ and $P_{i}$ are the power consumption of the jth SBS under the ith MBS and of the ith MBS, respectively.
SBS energy consumption model:
the total power consumption of the jth SBS connected under the ith MBS may be expressed as the sum of the conventional power consumption and the buffer power consumption:
Figure BDA0003177393540000101
wherein the content of the first and second substances,
Figure BDA0003177393540000102
and
Figure BDA0003177393540000103
respectively representing the traditional power consumption and the cache power consumption of the jth SBS connected under the ith MBS. Conventional power consumption of jth SBS connected under ith MBS
Figure BDA0003177393540000104
And can be represented as:
Figure BDA0003177393540000105
at this time, the process of the present invention,
Figure BDA0003177393540000106
γ ij ,B ij respectively representing the signal-to-noise ratio and the maximum transmission rate of j SBS connected under i MBS,
Figure BDA0003177393540000107
indicating the amount of requested data on the jth SBS that requests k to connect under the ith MBS. P 0ij And Δ P ij Respectively representing the static power consumption of the jth SBS connected under the ith MBS and a slope parameter representing the influence of the base station load on the power consumption of the jth SBS.
The cache power consumption $P_{ij}^{ca}$ in equation (13) comprises two parts, cache retrieval power consumption and content caching power consumption, which are related to the amount of data requested by users and to the content size, respectively. The cache power consumption of the jth SBS under the ith MBS can therefore be defined as:

$$P_{ij}^{ca}=\sum_{k=1}^{F}x_{ij}^{k}\left(p_{re}^{k}+w_{ca}\,s_{k}\right)\qquad(15)$$

where $p_{re}^{k}$ denotes the average retrieval power consumption of request k in the cache, and $w_{ca}$ is a power-efficiency parameter that depends on the storage hardware technology.
MBS energy consumption model:
Similarly, the total power consumption of the ith MBS can be expressed as:

$$P_{i}=P_{i}^{tr}+P_{i}^{ca}\qquad(16)$$

where $P_{i}^{tr}$ and $P_{i}^{ca}$ respectively denote the conventional power consumption and the cache power consumption of the ith MBS. As for the SBS, the conventional power consumption $P_{i}^{tr}$ and the cache power consumption $P_{i}^{ca}$ of the ith MBS can be expanded as:

$$P_{i}^{tr}=P_{0i}+\Delta P_{i}\,\frac{\sum_{k=1}^{F}d_{i}^{k}}{B_{i}\log_{2}(1+\gamma_{i})}\qquad(17)$$

$$P_{i}^{ca}=\sum_{k=1}^{F}x_{i}^{k}\left(p_{re}^{k}+w_{ca}\,s_{k}\right)\qquad(18)$$

where $\gamma_{i}$ and $B_{i}$ respectively denote the signal-to-noise ratio and the bandwidth of the ith MBS, $d_{i}^{k}$ denotes the amount of data requested by request k on the ith MBS, $P_{0i}$ and $\Delta P_{i}$ respectively denote the static power consumption of the ith MBS and the slope parameter capturing the influence of its load on power consumption, $p_{re}^{k}$ denotes the average retrieval power consumption of request k in the cache, and $w_{ca}$ is a power-efficiency parameter.
The energy consumption of the cloud, denoted P_c, is composed of the static energy consumption of the cloud, P_s, and the energy consumption for satisfying the requests that cannot be handled by the MBS:

P_c = P_s + Σ_k λ_k,c · p_k^re,c

where λ_k,c represents the amount of data of request k forwarded to the cloud and p_k^re,c represents the average retrieval power consumption of request k at the cloud.
The MBS is connected with the SBS, and the MBS is connected with the cloud, through wired links. As illustrated in fig. 2, the total energy consumption of the wired links can be expressed as:

E_l = T_s · (Σ_i Σ_j Σ_k p_k^l,ij + Σ_i Σ_k p_k^l,i)

where p_k^l,ij represents the transmission power consumption of request k on the link between the ith MBS and its jth connected SBS, p_k^l,i represents the transmission power consumption of request k on the link from the ith MBS to the cloud, and T_s represents the running time of the system.
The minimum energy consumption problem is formulated as a cloud-edge cooperative resource allocation model for the content delivery service. The model is then analyzed theoretically, and a method for obtaining the optimal solution is given. The total energy consumption of the system is the sum of the link energy consumption and the base station energy consumption for processing and transmitting requests in the network, and the energy efficiency model may be expressed as:

min P_total = Σ_i P_i + Σ_i Σ_j P_ij + P_c + P_l

C_1: Σ_k x_k,ij · s_k ≤ C_ij
C_2: Σ_k x_k,i · s_k ≤ C_i
C_3: ρ_ij ≤ 1
C_4: ρ_i ≤ 1
C_5: ρ_c ≤ 1
C_6: x_k,ij, x_k,i ∈ {0, 1}
C_7: α_c ∈ (0, 1)

where P_l denotes the total power consumption of the wired links. C_1-C_2 are the maximum cache size constraints for the SBS and the MBS: x_k,ij and x_k,i respectively indicate whether request k hits the cache of the jth SBS connected under the ith MBS and of the ith MBS, a hit being 1 and a miss being 0; s_k indicates the requested file size; C_ij indicates the cache space of the jth SBS connected under the ith MBS, and C_i indicates the cache space of the ith MBS. C_3-C_5 are utilization constraints that limit the utilization of the base stations and of the cloud to at most 1. C_6 states that the Boolean variables associated with the caching decision can only be 0 or 1, and C_7 limits the proportion of request data in the total traffic generated by the request tasks to between 0 and 1.
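A feasibility check of constraints C_1-C_7 can be sketched as follows; this is a simplified, single-base-station illustration with hypothetical values:

```python
def is_feasible(cache_decisions, file_sizes, cache_capacity,
                utilizations, alpha_c):
    """Verify the constraints of the energy efficiency model for one
    base station: cache capacity (C1/C2), utilizations at most 1
    (C3-C5), Boolean caching decisions (C6), alpha_c in (0, 1) (C7)."""
    used = sum(x * s for x, s in zip(cache_decisions, file_sizes))
    return (used <= cache_capacity                         # C1 / C2
            and all(u <= 1 for u in utilizations)          # C3 - C5
            and all(x in (0, 1) for x in cache_decisions)  # C6
            and 0 < alpha_c < 1)                           # C7

# Caching files of size 50 and 40 fits in a capacity-100 cache ...
print(is_feasible([1, 0, 1], [50, 80, 40], 100, [0.7, 0.9, 0.4], 0.3))  # True
# ... but caching all three files (total 170) violates C1/C2
print(is_feasible([1, 1, 1], [50, 80, 40], 100, [0.7, 0.9, 0.4], 0.3))  # False
```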
The designed model is solved under four strategies to evaluate the system performance. The four strategies are: a no-cache scheme without considering aggregation, a no-cache scheme considering aggregation, an offline caching scheme, and an online caching scheme.
No-cache scheme without considering aggregation:

A no-cache scheme means that neither the MBS nor the SBS has caching capability, so the requests of mobile users can only be transmitted to the cloud for processing. Aggregation means that, within one time slot, identical requests queued at a base station are processed as a single request. In the no-cache scheme without aggregation, the base stations have no caching capability and identical requests are not merged during transmission, so every request must be transmitted over the links to the cloud.
No-cache scheme considering aggregation:

In the no-cache scheme considering aggregation, the base stations still have no caching capability, but identical requests are aggregated during transmission: if the same request arrives several times in the queue of one base station within one time slot, it is counted once and transmitted to the next node for processing. Compared with the no-cache scheme without aggregation, this strategy reduces the response delay and the processing load at the base stations, and therefore reduces the energy consumption.
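The merging of identical requests within one time slot can be sketched as follows; the content names are hypothetical:

```python
from collections import Counter

def aggregate_slot(queue):
    """Merge identical requests arriving at a base station within one
    time slot: each distinct content is forwarded exactly once, in the
    order of its first arrival."""
    return list(Counter(queue))  # Counter keeps first-insertion order

print(aggregate_slot(["a", "b", "a", "c", "b"]))  # ['a', 'b', 'c']
```

Five arriving requests are reduced to three forwarded ones, which is the mechanism by which aggregation lowers link and processing energy.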
Offline caching scheme:

The offline caching scheme, also called the optimal caching scheme, stores the contents with the highest popularity in the cache of each base station according to the popularity ranking; no replacement is performed in the cache, regardless of whether requests hit or miss.
Because the theoretically optimal content is stored at the base stations, the hit probability at the bottom-layer base stations increases, the transmission delay decreases significantly, and the theoretically optimal model solution can be obtained. Therefore, the optimal solution of the hierarchical energy consumption problem based on the content popularity distribution, as proposed herein, can be derived, and it provides a benchmark against which the online caching scheme in the heterogeneous network can be shown to achieve near-optimal results.
Assuming that the popularity of network contents obeys a Zipf distribution, each base station caches contents according to the popularity ranking. Under this distribution, the popularity of the content of rank f is f^(-α) / Σ_{m=1}^{F} m^(-α), so caching the most popular contents yields a cache hit ratio equal to the summed popularity of the cached ranks, and the utilizations of the jth SBS connected under the ith MBS and of the ith MBS under optimal caching can be rewritten as the residual (missed) traffic divided by the respective maximum transmission rates B_ij and B_i. Here α_ij and α_i are the skewness factors of the content popularity at the jth SBS connected under the ith MBS and at the ith MBS; the share of popular content reaching a base station increases as the skewness factor increases. According to the rewritten utilization expressions, the minimum delay can be obtained under optimal caching.
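The Zipf hit ratio underlying the optimal-caching utilization can be sketched as follows; the function name and parameter values are illustrative:

```python
def zipf_hit_ratio(alpha, n_cached, n_total):
    """Hit ratio when the n_cached most popular of n_total contents are
    stored and popularity follows a Zipf distribution with skewness
    factor alpha: p_f = f^(-alpha) / sum_m m^(-alpha)."""
    weights = [f ** -alpha for f in range(1, n_total + 1)]
    return sum(weights[:n_cached]) / sum(weights)

# A larger skewness factor concentrates requests on popular content,
# so the same cache size captures a larger share of the traffic:
print(zipf_hit_ratio(0.8, 100, 1000) < zipf_hit_ratio(1.2, 100, 1000))  # True
```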
Online caching scheme:

The online caching scheme is implemented with the Least Recently Used (LRU) algorithm: when a new request enters a full cache, the content that has not been used for the longest time is replaced, and the incoming content is set as the most recently stored, as shown in fig. 4. This replacement ensures that the cache always holds the most recently requested contents. It is a cache replacement method that reflects real-life scenarios: content requested recently is likely to be requested again soon, while content that has not been requested for a long time has a low probability of being requested again.
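The LRU replacement described above can be sketched with a small class; the capacity and content names are hypothetical:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used replacement: a hit moves the content to the
    most-recent position; a miss inserts it, evicting the least
    recently used entry when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, content):
        if content in self.store:
            self.store.move_to_end(content)  # refresh recency
            return True                      # cache hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the LRU entry
        self.store[content] = True
        return False                         # cache miss

cache = LRUCache(2)
print([cache.request(c) for c in ["a", "b", "a", "c", "b"]])
# [False, False, True, False, False]
```

In the trace, the repeated request for "a" hits; requesting "c" then evicts "b" (the least recently used content), so the final request for "b" misses again.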
The performance of the method of the present invention is then analyzed and compared with reference to simulation results.

According to the system model established herein, the network has a three-layer topology: the top layer is the cloud; the middle layer is the MBS layer; the bottom layer is the SBS layer. The number of SBSs is set to 4, the number of MBSs to 2, and the number of clouds to 1. Requests are transmitted between connected nodes over the connecting links: uplink transmissions carry computation requests, which are small files, while the content requested by users is transmitted downlink and consists of large files. Users are allocated to access the MBS and SBS layers at a ratio of 3:1 when sending requests. In this topology, the mobile user terminals have no caching capability, while the SBS, the MBS, and the cloud have computing and caching capabilities.
To demonstrate the system performance of the invention, including the case in which no edge cache is deployed in the wireless network, the system power consumption model is solved and its optimal solution discussed under the no-cache scheme without aggregation, the no-cache scheme with aggregation, the online caching scheme, and the offline caching scheme. The characteristics of the four strategies are shown in fig. 5.
Regarding the network energy consumption of the different strategies under different cache sizes, the macroscopic trend is that the no-cache scheme without aggregation performs worst, the no-cache scheme with aggregation performs second worst, and the offline caching scheme performs best. The energy consumption of the no-cache scheme without aggregation remains unchanged, because no cache is deployed in the access network. Aggregating identical content requests at the base stations makes the no-cache scheme with aggregation outperform the one without. As the cache size increases, more popular content is cached at the network edge, narrowing the performance gap between the offline and online caching schemes. With further increases in cache size, the energy consumption of the base stations converges to a steady state in the model, since caching ever less popular content has a diminishing effect on reducing energy consumption.
Regarding the network energy consumption of the different strategies under different content popularity: as the popularity skewness increases, the no-cache scheme without aggregation is unaffected, since each request must still be routed to the cloud to obtain the corresponding content. The performance of the no-cache scheme with aggregation improves as content popularity increases, because more identical content requests are aggregated in the queues of the base stations. The energy consumption of the offline and online caching schemes decreases rapidly: the larger the Zipf skewness parameter, the more popular the content requested by the mobile users, so most requests are satisfied directly by the caches in the SBSs and MBSs.
Regarding the network energy consumption of the different strategies under different request arrival rates: the energy efficiency of all four schemes tends to decrease, because longer queuing delays increase the energy consumption. The performance gap between the offline caching scheme and the other schemes widens as the request arrival rate increases, since popular content is always cached at the network edge under offline caching. The gap between the no-cache schemes with and without aggregation also widens, because the effect of request aggregation grows with the arrival rate.
Regarding the network energy consumption under different numbers of content types: the energy consumption of all strategies increases with the number of distinct contents, because as the mobile users request more unpopular content, more requests cannot be satisfied by the capacity-limited base station caches and must be served from the cloud. The performance gap between the offline and online caching schemes shrinks, since the cache hit rate for popular content drops in both as more requests target unpopular content. Meanwhile, the energy consumption of the no-cache scheme with aggregation grows only slowly, because the growing diversity of content has a limited impact on request aggregation compared with in-network caching.
To improve network energy efficiency and content transmission, the invention provides a novel energy-saving hierarchical cooperation scheme that considers the joint allocation of caching, computing, and communication resources in a hierarchical heterogeneous network comprising mobile user terminals, small base stations, macro base stations, and the cloud. A centralized model of the energy consumption problem is established, and its optimal solution is analyzed according to the distribution of content popularity across the base stations. Finally, the designed joint energy consumption model is evaluated by simulation under the various strategies, and the factors that jointly influence energy consumption are discussed with a view to achieving minimum energy consumption. Simulation results show that the proposed model clearly outperforms existing cloud-edge cooperation solutions that do not consider the deployment of cache resources. It thus provides a feasible answer to the high energy consumption brought by dense base station deployment and by the growth in user numbers and communication data volume.
The energy efficiency problem is formulated as a queuing-based centralized model in the cloud-edge cooperative network, in which in-network caching is considered and identical content requests can be aggregated in the queue of each base station. The allocation of the 3C (caching, computing, and communication) resources is jointly optimized while guaranteeing the quality of service of content delivery. The invention minimizes energy consumption on the basis of ensuring quality of service and has broad research and development prospects. An energy efficiency model is established to solve the energy consumption problem in the heterogeneous network context. Under the cooperation of the SBS and MBS base stations, the proposed scheme deploys in-network caches rationally and uses request aggregation together with cooperative allocation of the 3C resources to reduce redundant transmission of network content, improving the content transmission efficiency, resource utilization, and network energy efficiency of the heterogeneous network; this conserves energy and cost, accords with the green energy-saving concept, and has broad development prospects.
Fig. 6 is a schematic diagram of a network content transmission apparatus facing cloud edge collaboration provided in an embodiment of the present invention, and as shown in fig. 6, the network content transmission apparatus facing cloud edge collaboration provided in the embodiment of the present invention includes:
the network model building module 610 is configured to analyze and build a network model according to SBS and MBS connection conditions in the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module 620 is used for analyzing and building energy consumption models of all parts of the system based on the network model and building an energy consumption model of the system based on the energy consumption models of all parts of the system;
and the system energy consumption optimization module 630 is configured to solve the system energy consumption model based on a cache policy, and optimize the network system based on a solution result to reduce energy consumption.
Fig. 7 illustrates a physical structure diagram of an electronic device, and as shown in fig. 7, the electronic device may include: a processor (processor) 710, a communication Interface (Communications Interface) 720, a memory (memory) 730, and a communication bus 740, wherein the processor 710, the communication Interface 720, and the memory 730 communicate with each other via the communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a cloud edge collaboration oriented network content delivery method comprising: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In addition, the logic instructions in the memory 730 can be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the cloud edge collaboration-oriented network content transmission method provided by the above methods, the method including: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to perform the cloud edge collaboration-oriented network content transmission method provided in the foregoing, the method including: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. A network content transmission method facing cloud edge collaboration is characterized by comprising the following steps:
analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system;
solving the system energy consumption model based on a cache strategy, and optimizing a network system based on a solving result to reduce energy consumption;
the network model comprises a cloud, an MBS, an SBS and a mobile user terminal, wherein the MBS and the SBS have caching and calculating capabilities, and the cloud stores the content requested by the mobile user terminal;
the energy consumption model of each part of the system comprises: the energy consumption model of the mobile user terminal, the energy consumption model of the base station, the energy consumption model of the cloud and the energy consumption model of the link;
the mobile user terminal energy consumption model comprises the energy consumption of the mobile user terminal directly connecting the SBS and the mobile user terminal directly connecting the MBS;
the base station energy consumption model comprises MBS energy consumption and SBS energy consumption;
the cloud energy consumption model comprises static energy consumption of a cloud end and energy consumption generated by meeting a request which cannot be processed by the MBS;
the link energy consumption model comprises energy consumption between MBS and SBS and between MBS and cloud end;
the system energy consumption model comprises link energy consumption and base station energy consumption for processing and transmitting requests in the network;
the caching strategy comprises: a no-cache strategy without considering aggregation, a no-cache strategy considering aggregation, an offline cache strategy, and an online cache strategy;

the aggregation refers to processing identical requests as one request within one time slot, and the caching refers to caching appropriate contents at the MBS and the SBS.
2. A network content transmission device facing cloud edge collaboration is characterized by comprising:
the network model building module is used for analyzing and building a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module is used for analyzing and building energy consumption models of all parts of the system based on the network model and building a system energy consumption model based on the energy consumption models of all parts of the system;
the system energy consumption optimization module is used for solving the system energy consumption model based on a cache strategy and optimizing a network system based on a solving result so as to reduce energy consumption;
the network model comprises a cloud, an MBS, an SBS and a mobile user terminal, wherein the MBS and the SBS have caching and calculating capabilities, and the cloud stores the content requested by the mobile user terminal;
the energy consumption model of each part of the system comprises: the energy consumption model of the mobile user terminal, the energy consumption model of the base station, the energy consumption model of the cloud and the energy consumption model of the link;
the mobile user terminal energy consumption model comprises the energy consumption of the direct connection of the mobile user terminal to the SBS and the direct connection of the mobile user terminal to the MBS;
the base station energy consumption model comprises MBS energy consumption and SBS energy consumption;
the cloud energy consumption model comprises static energy consumption of a cloud end and energy consumption generated by meeting a request which cannot be processed by MBS;
the link energy consumption model comprises energy consumption between MBS and SBS and between MBS and cloud end;
the system energy consumption model comprises link energy consumption and base station energy consumption for processing and transmitting requests in the network;
the caching strategy comprises: a no-cache strategy without considering aggregation, a no-cache strategy considering aggregation, an offline cache strategy, and an online cache strategy;

the aggregation refers to processing identical requests as one request within one time slot, and the caching refers to caching appropriate contents at the MBS and the SBS.
3. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the cloud-edge-oriented collaborative network content transmission method according to claim 1 when executing the program.
4. A non-transitory computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the cloud edge collaboration oriented network content delivery method as claimed in claim 1.
CN202110836601.6A 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium Active CN113709853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110836601.6A CN113709853B (en) 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110836601.6A CN113709853B (en) 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium

Publications (2)

Publication Number Publication Date
CN113709853A CN113709853A (en) 2021-11-26
CN113709853B true CN113709853B (en) 2022-11-15

Family

ID=78650361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110836601.6A Active CN113709853B (en) 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium

Country Status (1)

Country Link
CN (1) CN113709853B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134382A (en) * 2022-06-06 2022-09-30 北京航空航天大学 Airport transport capacity flexible scheduling method based on cloud edge cooperation

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107659946A (en) * 2017-09-19 2018-02-02 北京工业大学 A kind of mobile communications network model building method based on edge cache
CN111124439A (en) * 2019-12-16 2020-05-08 华侨大学 Intelligent dynamic unloading algorithm with cloud edge cooperation
CN111885648A (en) * 2020-07-22 2020-11-03 北京工业大学 Energy-efficient network content distribution mechanism construction method based on edge cache
CN112020103A (en) * 2020-08-06 2020-12-01 暨南大学 Content cache deployment method in mobile edge cloud
AU2020103384A4 (en) * 2020-11-11 2021-01-28 Beijing University Of Technology Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20140101312A1 (en) * 2012-10-09 2014-04-10 Transpacific Ip Management Group Ltd. Access allocation in heterogeneous networks

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN107659946A (en) * 2017-09-19 2018-02-02 北京工业大学 A kind of mobile communications network model building method based on edge cache
CN111124439A (en) * 2019-12-16 2020-05-08 华侨大学 Intelligent dynamic unloading algorithm with cloud edge cooperation
CN111885648A (en) * 2020-07-22 2020-11-03 北京工业大学 Energy-efficient network content distribution mechanism construction method based on edge cache
CN112020103A (en) * 2020-08-06 2020-12-01 暨南大学 Content cache deployment method in mobile edge cloud
AU2020103384A4 (en) * 2020-11-11 2021-01-28 Beijing University Of Technology Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches

Non-Patent Citations (3)

Title
An Edge Cache-Based Content Delivery Scheme in Green Wireless Networks;Chao Fang;《2019 IEEE Global Communications Conference (GLOBECOM)》;20200227;full text *
An Edge Cache-based Power-Efficient Content Delivery Scheme in Mobile Wireless Networks;Chao Fang et al.;《2019 19th International Symposium on Communications and Information Technologies (ISCIT)》;20190927;full text *
Optimal multilevel media stream caching in cloud-edge environment;Hengliang Tang et al.;《The Journal of Supercomputing》;20210304;full text *

Also Published As

Publication number Publication date
CN113709853A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN112995950B (en) Resource joint allocation method based on deep reinforcement learning in Internet of vehicles
CN111475274B (en) Cloud collaborative multi-task scheduling method and device
CN112218337A (en) Cache strategy decision method in mobile edge calculation
CN113810931B (en) Self-adaptive video caching method for mobile edge computing network
WO2023108718A1 (en) Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network
Li et al. Distributed task offloading strategy to low load base stations in mobile edge computing environment
Zheng et al. 5G network-oriented hierarchical distributed cloud computing system resource optimization scheduling and allocation
CN114205782B (en) Optimal time delay caching and routing method, device and system based on cloud edge cooperation
Ren et al. Multi-objective optimization for task offloading based on network calculus in fog environments
Li et al. An optimized content caching strategy for video stream in edge-cloud environment
Li et al. DQN-enabled content caching and quantum ant colony-based computation offloading in MEC
CN113709853B (en) Network content transmission method and device oriented to cloud edge collaboration and storage medium
Fang et al. Drl-driven joint task offloading and resource allocation for energy-efficient content delivery in cloud-edge cooperation networks
CN116963182A (en) Time delay optimal task unloading method and device, electronic equipment and storage medium
Kabir et al. Energy-aware caching and collaboration for green communication systems.
Chen et al. Twin delayed deep deterministic policy gradient-based intelligent computation offloading for IoT
Fang et al. AI-driven energy-efficient content task offloading in cloud-edge-end cooperation networks
Peng et al. Value‐aware cache replacement in edge networks for Internet of Things
CN113766540B (en) Low-delay network content transmission method, device, electronic equipment and medium
Fang et al. Offloading strategy for edge computing tasks based on cache mechanism
Tang et al. Optimal multilevel media stream caching in cloud-edge environment
CN108429919B (en) Caching and transmission optimization method of multi-rate video in wireless network
Oualil et al. A personalized learning scheme for internet of vehicles caching
CN115051999B (en) Energy consumption optimal task unloading method, device and system based on cloud edge cooperation
Li et al. Data & computation-intensive service re-scheduling in edge networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant