CN112862060B - Content caching method based on deep learning - Google Patents

Content caching method based on deep learning

Info

Publication number
CN112862060B
CN112862060B CN201911195411.XA
Authority
CN
China
Prior art keywords
content
popularity
data
time
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911195411.XA
Other languages
Chinese (zh)
Other versions
CN112862060A (en)
Inventor
张旭
漆政南
马展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201911195411.XA priority Critical patent/CN112862060B/en
Publication of CN112862060A publication Critical patent/CN112862060A/en
Application granted granted Critical
Publication of CN112862060B publication Critical patent/CN112862060B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a content caching method based on deep learning, which comprises the following steps: (1) collecting user request information at edge nodes to construct a time-ordered request sequence; (2) calculating content popularity from the request sequence data; (3) performing max-min normalization; (4) converting the time-series prediction problem into a supervised learning problem; (5) training a popularity prediction model based on a time convolution network offline; (6) invoking the popularity prediction model to predict popularity, carrying out an exponentially weighted summation of the predicted popularity data and the historical popularity data, and calculating content value; (7) making the caching decision with LRU. The method can predict the popularity distribution of the corresponding content purely from the characteristics of the content request sequence, and by combining historical content popularity information it balances long-term memory against short-term bursts, achieving better prediction accuracy and an improved cache hit rate.

Description

Content caching method based on deep learning
Technical Field
The invention relates to the technical field of edge caching in mobile communication, in particular to a content caching method based on deep learning.
Background
With the rapid proliferation of smart devices and advanced mobile application services, wireless networks have in recent years been subjected to unprecedented data traffic pressure. The growing mobile data traffic puts tremendous stress on limited-capacity backhaul links, especially during peak traffic periods. By placing the most popular content closer to the requesting users or the base station, edge content caching can effectively save the time and resources required to request and transmit content from an upstream or origin content server and can reduce data traffic. However, due to the limited cache capacity of the nodes and the temporal and spatial variation of content popularity, content caching faces various challenges, such as deciding which content to cache at a given point in time and how to improve the cache hit rate of the nodes.
At present, the field of content caching mainly relies on traditional cache replacement algorithms: the first-in-first-out algorithm (FIFO), the least recently used algorithm (LRU), the least frequently used algorithm (LFU), and their variants. These algorithms follow simple rules and are easy to implement, but fixed rules struggle to adapt to complex, dynamic requests. To address this, prediction-based algorithms have received much attention in recent years: one line of work extracts hand-crafted features from historical data to estimate the popularity of each content and replaces the least popular cached content with more popular content; more recent work predicts popularity with long short-term memory networks (LSTM) based on deep learning. However, these methods suffer from insufficiently accurate popularity prediction, caching decisions that depend too heavily on the predicted popularity ranking, and model parameters that are only updated periodically on a coarse time scale, so they cannot cope with or adapt to sudden events (such as the emergence of new hot content); this easily leads to low cache hit rates, increased network latency, and poor utilization of cache space.
Therefore, how to improve the accuracy and precision of popularity prediction, make better caching decisions, and respond to dynamic requests in various situations is an important current research direction.
Disclosure of Invention
In view of the defects of existing content caching methods, the invention aims to provide a high-speed content caching method based on deep learning.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a content caching method based on deep learning comprises the following steps:
step 1, collecting user request information of edge nodes in an area, wherein the user request information mainly comprises the ID of the content requested by the user, the ID of the user, and the timestamp of the requested content, and sorting by timestamp to construct a time sequence;
step 2, constructing a probability sliding window based on the fixed request length N, denoting the time of the current window as t, and counting the number of occurrences O_i(t) of content C_i (i = 1, 2, 3, ..., n) in the probability window at time t; from the obtained count, the popularity of content C_i at time t is calculated as:
P_i(t) = O_i(t) / N
so the popularity vector at the current time t can be expressed as:
P(t) = [P_1(t), P_2(t), P_3(t), ..., P_n(t)]
where the data set contains n content types, and N is a constant reflecting the statistical width of the popularity calculation;
step 3, in order to avoid effects caused by the different ranges of content popularity values in different time periods, performing max-min normalization on the calculated content popularity data (i.e., the popularity vectors);
step 4, taking the historical popularity data as the input observations and the popularity data at the current moment as the output, combining the input and output parts to create a data list (DataFrame), and thereby converting the time-series prediction into a supervised learning problem; to better utilize the time convolution network, providing multiple historical observations as input and constructing a three-dimensional input tensor with dimensions (samples, w, n), where samples is the total length of the input time series, w is the time step, and n is the total number of contents; dividing the data list into a training set and a test set;
step 5, constructing a content popularity prediction model based on a time convolution network: inputting the training set and test set obtained in step 4 into the time convolution network, defining the mean square error as the loss function and the Adam algorithm as the optimizer; the input data is propagated forward through a deep network formed by stacked residual modules (one-dimensional causal convolutions and dilated convolutions form the standard convolution layers, and every two standard convolution layers together with an identity mapping form a residual module); forward propagation through the last convolution layers yields the output value; the error between the output value and the target value of the network is calculated and back-propagated, the error of each layer and the error gradient are computed, the weights are updated, and forward propagation is repeated until the network fits best, at which point training ends. The predicted data is de-normalized, the root mean square error (RMSE) is used as the model evaluation index, and the structure and weights of the trained popularity prediction model are saved to separate files;
step 6, calling the content popularity prediction model trained in step 5, inputting the historical content popularity data, and predicting the content popularity at the next moment; using an exponential averaging method, carrying out a weighted summation of the content popularity predicted for the next moment and the real historical content popularity data to obtain the content value popularity data; the resource value of content i is calculated by exponential averaging, giving the value V_i(j) of content i at time T_j, which is taken as the content value popularity at the next moment,
where T_j is the j-th probability sliding window, F_{i,n} is the number of requests for content i in the n-th probability sliding window at time T_n, E_{i,j} is the popularity of content i at time T_j predicted by calling the content popularity prediction model, and λ is a constant (0 < λ < 1) used to adjust the weight between the historical data and the latest data;
step 7, generating a pseudo content request sequence according to the content value popularity data obtained in step 6, passing the generated request sequence to the LRU policy for decision, and prefetching the content objects in the request sequence, thereby updating the edge cache node.
The invention has the following advantages:
(1) The method constructs a content popularity sequence from users' content request information after data processing, and trains a content popularity prediction model based on deep learning, so that popularity is predicted and tracked in real time.
(2) Compared with previous research that uses RNNs and LSTMs for content popularity prediction, the invention adopts a Time Convolution Network (TCN), which achieves better prediction accuracy; unlike the RNN structure, the TCN can be processed with large-scale parallelism, so the network is faster during training and validation and occupies less memory.
(3) In particular, for cases in which content popularity may change sharply, the invention uses an exponential averaging method to weight historical content popularity data and compute the value of the content at the current moment, thereby balancing long-term memory against short-term bursts.
(4) The caching decision applies an LRU caching strategy to the pseudo request sequence generated by the model. Compared with prior strategies that directly cache the content with the highest predicted popularity, this avoids the prediction errors caused by over-reliance on the prediction.
(5) Compared with the prior art, the method achieves good accuracy in content popularity prediction and a good cache hit rate; by also incorporating historical content popularity data, it effectively withstands the challenges posed by continuously changing popularity and sudden events.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a graph of popularity predictions;
FIG. 3 is a graph of the cache hit rate as a function of cache space size.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, a method for caching content based on deep learning in this embodiment specifically includes the following steps:
Step 1, collect user request information of edge nodes in an area; the request information mainly comprises the ID of the content requested by the user, the ID of the user, and the timestamp of the requested content, and the requests are sorted by timestamp to construct a time sequence.
Step 2, based on the content request sequence collected in step 1 and ordered by timestamp, construct a probability sliding window of fixed request length N with a moving time step of m (m < N); denote the time of the current window as t and count the number of occurrences O_i(t) of content C_i (i = 1, 2, 3, ..., n) in the probability window at the current time t; the probability-window content popularity of content C_i at the current time t is then defined as:
P_i(t) = O_i(t) / N
so the popularity vector at the current time t can be expressed as:
P(t) = [P_1(t), P_2(t), P_3(t), ..., P_n(t)]
where the data set contains n content types, N is a constant that reflects the statistical width of the popularity calculation, and popularity refers to the probability that a content object is requested by users at a given moment, describing how popular the content is.
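As an illustration of step 2 only, the following Python sketch counts occurrences inside a probability window of length N and forms the popularity vector P(t); the function name, the toy request sequence, and the chosen step value are assumptions for demonstration, not part of the invention.

```python
from collections import Counter

def popularity_vector(requests, t, N, n_contents):
    """Popularity P_i(t) = O_i(t) / N over the last N requests ending at index t.

    requests   : list of content IDs (0 .. n_contents-1) ordered by timestamp
    t          : index marking the current end of the probability window
    N          : fixed request length of the probability window
    n_contents : total number of distinct contents n
    """
    window = requests[max(0, t - N):t]            # last N requests at time t
    counts = Counter(window)                      # O_i(t) for each content i
    return [counts.get(i, 0) / N for i in range(n_contents)]

# Toy example: 8 requests over 4 contents, window length N = 5, moving step m = 2
requests = [0, 1, 1, 2, 0, 3, 1, 0]
for t in range(5, len(requests) + 1, 2):
    print(t, popularity_vector(requests, t, N=5, n_contents=4))
```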
In step 3, to avoid effects caused by the different ranges of content popularity values in different time periods, the calculated content popularity data is max-min normalized. Because the TCN expects data roughly in the output range of a hyperbolic tangent function, i.e., between -1 and 1, which is the preferred range for time-series data, max-min normalization linearly transforms the raw content popularity data and maps it to between 0 and 1. Let k be the popularity of a certain content at a certain moment; the max-min normalization is then:
k* = (k - k_min) / (k_max - k_min)
where k* is the normalized data, k_min is the minimum value of the data in the popularity sequence at the current moment, and k_max is the maximum value of the data in the popularity sequence at the current moment.
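A minimal sketch of the step-3 max-min normalization, assuming NumPy arrays; whether k_min and k_max are taken over the whole series or per time slot is an implementation choice left open by the description, and the global variant is shown here. The stored bounds allow the inverse transform used after prediction in step 5.

```python
import numpy as np

def minmax_normalize(popularity):
    """popularity: array (timesteps, n_contents); returns k* = (k - k_min) / (k_max - k_min)."""
    k_min = popularity.min()                      # global min/max; per-slot is also possible
    k_max = popularity.max()
    scaled = (popularity - k_min) / (k_max - k_min)
    return scaled, k_min, k_max

def minmax_denormalize(scaled, k_min, k_max):
    """Inverse transform applied to the model output in step 5."""
    return scaled * (k_max - k_min) + k_min
```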
Step 4: using Pandas (a Python data-analysis package), take the historical data as the input observations and the data at the current time t as the output, combine the two parts to create a data list (DataFrame), and thereby convert the time-series prediction into a supervised learning problem. To better utilize the time convolution network, multiple historical observations are provided as input, and a three-dimensional input tensor with dimensions (samples, w, n) is constructed, where samples is the total length of the input time series, w is the time step, and n is the total number of contents. The data set is further divided into a training set and a test set, the latest part of the data set being retained as the test set.
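The step-4 conversion can be sketched with Pandas shift() as below; the helper name series_to_supervised, the random toy data, and the 80/20 split that keeps the latest part as the test set are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def series_to_supervised(popularity, w):
    """popularity: array (timesteps, n); returns X of shape (samples, w, n) and y of shape (samples, n)."""
    df = pd.DataFrame(popularity)
    cols = [df.shift(i) for i in range(w, 0, -1)]    # past w observations as input
    cols.append(df)                                  # current moment as output
    data = pd.concat(cols, axis=1).dropna().to_numpy()
    n = popularity.shape[1]
    X = data[:, : w * n].reshape(-1, w, n)
    y = data[:, w * n:]
    return X, y

# Hold out the most recent part of the series as the test set (illustrative 80/20 split).
X, y = series_to_supervised(np.random.rand(500, 50), w=10)
split = int(0.8 * len(X))
X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]
```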
Step 5: input the training set and test set obtained in step 4 into the time convolution network, define the mean square error as the loss function and the Adam algorithm as the optimizer, and propagate the input data forward through a deep network formed by stacked residual modules (one-dimensional causal convolutions and dilated convolutions form the standard convolution layers, and every two standard convolution layers together with an identity mapping form a residual module); forward propagation through the last convolution layers yields the output value; the error between the output value and the target value of the network is calculated and back-propagated, the error of each layer and the error gradient are computed, the weights are updated, and forward propagation is repeated until the network fits best, at which point training ends. The predicted data is de-normalized, the root mean square error (RMSE) is used as the model evaluation index to measure the error between the actual content popularity and the model prediction, and the network and its parameters are adjusted and optimized according to the RMSE value until the best effect is achieved. Since the input is the result of the data processing, inverse normalization is performed at the end of the model, restoring the output predictions to the scale before the data processing. The structure and the weights of the model are saved to separate files. Compared with previous prediction of content popularity based on LSTM networks, this method achieves a lower RMSE, and both the model fit and the prediction effect are relatively better.
The causal convolution is defined as follows: given a filter f = (f_1, f_2, ..., f_k) and a content popularity sequence x = (x_0, x_1, ..., x_T), the causal convolution at x_t is:
F(x_t) = Σ_{i=1}^{k} f_i x_{t-k+i}
and the dilated convolution of the content popularity sequence x = (x_0, x_1, ..., x_T) at x_t with dilation factor d is:
F_d(x_t) = Σ_{i=1}^{k} f_i x_{t-(k-i)d}
root Mean Square Error (RMSE) is defined as:
wherein Y is obs,i For actual value of content popularity, Y pre,i For the model predictive value, n is the number of video content categories.
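A sketch of the step-5 network, assuming TensorFlow/Keras as the deep-learning framework (an assumption; the description does not name one): residual modules of two causal dilated one-dimensional convolutions plus an identity (1x1) shortcut are stacked, the model is trained with the MSE loss and the Adam optimizer, RMSE is computed after de-normalization, and the structure and weights are saved to separate files. Layer widths, dilation rates, epoch count, and the toy data standing in for the step-4 output are illustrative choices.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters, kernel_size, dilation):
    """Two causal dilated 1-D convolutions plus an identity (1x1) shortcut."""
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="causal",
                      dilation_rate=dilation, activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="causal",
                      dilation_rate=dilation, activation="relu")(y)
    if shortcut.shape[-1] != filters:                  # match channels for the identity mapping
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Add()([shortcut, y])

def build_tcn(w, n, filters=64, kernel_size=3, dilations=(1, 2, 4, 8)):
    inp = layers.Input(shape=(w, n))                   # (samples, w, n) input tensor
    x = inp
    for d in dilations:                                # stacked residual modules
        x = residual_block(x, filters, kernel_size, d)
    x = layers.Lambda(lambda t: t[:, -1, :])(x)        # keep the last time step
    out = layers.Dense(n)(x)                           # popularity of all n contents
    model = models.Model(inp, out)
    model.compile(loss="mse", optimizer="adam")        # MSE loss, Adam optimizer
    return model

# Toy data standing in for the step-4 output (shape (samples, w, n)) and the
# normalization bounds from step 3; replace with the real pipeline outputs.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((400, 10, 50)), rng.random((400, 50))
X_test, y_test = rng.random((100, 10, 50)), rng.random((100, 50))
k_min, k_max = 0.0, 1.0

model = build_tcn(w=10, n=50)
model.fit(X_train, y_train, epochs=5, batch_size=32,
          validation_data=(X_test, y_test), verbose=0)
pred = model.predict(X_test) * (k_max - k_min) + k_min     # de-normalize
actual = y_test * (k_max - k_min) + k_min
rmse = float(np.sqrt(np.mean((actual - pred) ** 2)))       # RMSE evaluation index
with open("tcn_structure.json", "w") as fp:                # save structure and weights separately
    fp.write(model.to_json())
model.save_weights("tcn.weights.h5")
```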
Step 6: combine the popularity of the past w-1 time slots with the popularity at the current moment as the input and call the content popularity prediction model of step 5 to predict the content popularity at the next moment; then, using an exponential averaging method, carry out a weighted summation of the content popularity predicted for the next moment and the real historical content popularity data to obtain the content value popularity data. The resource value of content i is calculated by exponential averaging, giving the value V_i(j) of content i at time T_j, which is taken as the content value popularity at the next moment,
where T_j is the j-th probability sliding window, F_{i,n} is the number of requests for content i in the n-th probability sliding window at time T_n, E_{i,j} is the popularity of content i at time T_j predicted by calling the content popularity prediction model, and λ is a constant (0 < λ < 1) used to adjust the weight between the historical data and the latest data.
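The following sketch illustrates one possible reading of the exponential averaging in step 6: past per-window request counts F_{i,n} are discounted by powers of λ and blended with the predicted popularity E_{i,j}. The exact weighting is an assumption, and the invention's formula may differ.

```python
import numpy as np

def content_value(F_hist, E_pred, lam):
    """Illustrative exponential averaging of history with the predicted popularity.

    F_hist : array (j-1, n) with request counts F_{i,n} of each content i in the
             past probability sliding windows T_1 .. T_{j-1}
    E_pred : array (n,) with the predicted popularity E_{i,j} at window T_j
    lam    : constant 0 < lam < 1 balancing historical data against the latest data
    """
    j_minus_1 = F_hist.shape[0]
    weights = lam ** np.arange(j_minus_1, 0, -1)        # older windows get smaller weights (assumed)
    history_term = (weights[:, None] * F_hist).sum(axis=0)
    history_term = history_term / history_term.sum()    # normalize counts to a distribution
    return lam * history_term + (1 - lam) * E_pred      # blend history and prediction (assumed form)

# Toy usage: 4 past windows over 5 contents, plus a predicted popularity vector.
F_hist = np.array([[3, 1, 0, 4, 2],
                   [2, 2, 1, 3, 2],
                   [1, 3, 2, 2, 2],
                   [0, 4, 3, 1, 2]], dtype=float)
E_pred = np.array([0.05, 0.40, 0.30, 0.10, 0.15])
print(content_value(F_hist, E_pred, lam=0.6))
```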
Step 7: from the content value popularity data for the next moment generated in step 6, prefetch the M objects with the highest value at the next moment to generate a pseudo content request sequence, pass the generated pseudo content sequence to the LRU cache decision, and determine the content objects to be cached in and evicted from the cache space. Compared with other existing deep-learning methods that simply cache the most popular content, this better improves the content popularity prediction accuracy and the cache hit rate, i.e., the caching performance.
The cache hit rate refers to the ratio of the total number of hits to the total number of accesses, and M is the maximum number of content items that can be cached in the cache space.
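A minimal sketch of step 7, assuming a simple OrderedDict-based LRU cache; the LRUCache class, the prefetch helper, and the toy numbers are illustrative. The M highest-valued contents form a pseudo request sequence that is replayed through the LRU policy, which then decides what is cached and evicted; the last lines show how the cache hit rate (hits over accesses) would be measured on real requests.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache holding at most capacity content IDs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, content_id):
        """Request content_id; returns True on a cache hit, False on a miss."""
        if content_id in self.store:
            self.store.move_to_end(content_id)      # refresh recency
            return True
        self.store[content_id] = True               # fetch and cache the object
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)          # evict the least recently used
        return False

def update_edge_cache(value, M, cache):
    """Prefetch the M highest-value contents as a pseudo request sequence (step 7)."""
    pseudo_requests = sorted(range(len(value)), key=lambda i: value[i], reverse=True)[:M]
    for content_id in pseudo_requests:
        cache.access(content_id)
    return cache

# Toy usage with the value vector from step 6 and a cache of capacity M = 3.
cache = LRUCache(capacity=3)
update_edge_cache([0.08, 0.31, 0.22, 0.05, 0.34], M=3, cache=cache)
print(list(cache.store))                            # -> [4, 1, 2] for this toy value vector

# Cache hit rate = total hits / total accesses, measured over real user requests:
hits = sum(cache.access(cid) for cid in [4, 1, 2, 0, 1])
print(hits / 5)
```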
Fig. 2 shows the performance of the content popularity prediction model based on the time convolution network; as shown in fig. 2, the model's predicted values are very close to the original values (where predicted is the predicted value and actual is the actual value), so the model predicts well. Fig. 3 shows how the cache hit rate changes with the cache space, where Optimum is the maximum cache hit rate obtained when the content popularity at the next moment is assumed to be known in advance, TCN-LRU-EA is the method proposed by the invention, LSTM-LRU is a caching method using an LSTM algorithm, and LRU and LFU are conventional caching algorithms. As shown by the comparison in fig. 3 against the LSTM algorithm and the traditional caching algorithms, the simulation results show that the proposed method plays an important role in improving the cache hit rate.

Claims (3)

1. A content caching method based on deep learning, characterized by comprising the following steps:
step 1, collecting user request information of edge nodes in an area and the timestamps of the requested contents, and sorting by timestamp to construct a time sequence;
step 2, constructing a probability sliding window based on the fixed request length N, counting the number of occurrences O_i(t) of the requested content C_i in the probability window at time t, and calculating, from the count O_i(t), the popularity of the requested content C_i at time t:
P_i(t) = O_i(t) / N
where i = 1, 2, 3, ..., n, so that the popularity vector at time t can be expressed as:
P(t) = [P_1(t), P_2(t), P_3(t), ..., P_n(t)]
wherein the data set contains n content types, and N is a constant reflecting the statistical width of the popularity calculation;
step 3, performing max-min normalization on the calculated request content popularity data, namely the popularity vectors;
step 4, using the historical popularity data at a plurality of moments as the observed values input to the time convolution network and the popularity data at the current moment as the output of the time convolution network, combining the input part and the output part to create a data list; dividing the data list into a training set and a test set;
step 5, constructing a content popularity prediction model based on a time convolution network, inputting the training set and the test set into the time convolution network to obtain an output value, calculating the error between the output value and the target value of the network, back-propagating the error, calculating the error of each layer, obtaining the error gradient, and updating the weights; forward propagation continues until the network fits the optimal effect, ending training; performing inverse normalization on the predicted data, using the root mean square error as the model evaluation index, and saving the structure and the weights of the trained popularity prediction model to files respectively;
step 6, using the content popularity prediction model trained in step 5, inputting historical content popularity data and predicting the content popularity at the next moment; using an exponential averaging method, carrying out a weighted summation of the content popularity data predicted for the next moment and the real historical content popularity data to obtain content value popularity data;
step 7, generating a pseudo content request sequence according to the content value popularity data, passing the generated request sequence to the LRU for decision, and prefetching the content objects in the request sequence, thereby updating the edge cache node.
2. The deep learning-based content caching method according to claim 1, wherein in the step 4, a plurality of historical popularity data are used as the input to the neural network, and a three-dimensional input tensor with dimensions (samples, w, m) is constructed, wherein samples is the total length of the input time series, w is the time step, and m is the total number of contents.
3. The content caching method based on deep learning as claimed in claim 1, wherein in the step 6, an exponential averaging method is adopted to calculate the resource value of the content i, giving the value V_i(j) of content i at time T_j, which is taken as the content value popularity at the next moment,
wherein T_j is the j-th probability sliding window, F_{i,q} is the number of requests for content i in the q-th probability sliding window at time T_q, E_{i,j} is the popularity of content i at time T_j predicted by calling the content popularity prediction model, and λ is a constant with 0 < λ < 1, used to adjust the weight between the historical data and the latest data.
CN201911195411.XA 2019-11-28 2019-11-28 Content caching method based on deep learning Active CN112862060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195411.XA CN112862060B (en) 2019-11-28 2019-11-28 Content caching method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911195411.XA CN112862060B (en) 2019-11-28 2019-11-28 Content caching method based on deep learning

Publications (2)

Publication Number Publication Date
CN112862060A CN112862060A (en) 2021-05-28
CN112862060B true CN112862060B (en) 2024-02-13

Family

ID=75995958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195411.XA Active CN112862060B (en) 2019-11-28 2019-11-28 Content caching method based on deep learning

Country Status (1)

Country Link
CN (1) CN112862060B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785858B (en) * 2022-06-20 2022-09-09 武汉格蓝若智能技术有限公司 Active resource caching method and device applied to mutual inductor online monitoring system
CN115866051A (en) * 2022-11-15 2023-03-28 重庆邮电大学 Edge caching method based on content popularity

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256280A (en) * 2011-08-24 2011-11-23 华为技术有限公司 Random access method and equipment thereof
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking
CN107396124A (en) * 2017-08-29 2017-11-24 南京大学 Video-frequency compression method based on deep neural network
CN107909108A (en) * 2017-11-15 2018-04-13 东南大学 Edge cache system and method based on content popularit prediction
CN109391566A (en) * 2019-01-08 2019-02-26 广州众志诚信息科技有限公司 ETBN backbone switches core board, control method and device
CN109995851A (en) * 2019-03-05 2019-07-09 东南大学 Content popularit prediction and edge cache method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946933B2 (en) * 2016-08-18 2018-04-17 Xerox Corporation System and method for video classification using a hybrid unsupervised and supervised multi-layer architecture

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256280A (en) * 2011-08-24 2011-11-23 华为技术有限公司 Random access method and equipment thereof
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking
CN107396124A (en) * 2017-08-29 2017-11-24 南京大学 Video-frequency compression method based on deep neural network
CN107909108A (en) * 2017-11-15 2018-04-13 东南大学 Edge cache system and method based on content popularit prediction
CN109391566A (en) * 2019-01-08 2019-02-26 广州众志诚信息科技有限公司 ETBN backbone switches core board, control method and device
CN109995851A (en) * 2019-03-05 2019-07-09 东南大学 Content popularit prediction and edge cache method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Online Content Popularity Prediction Based on Graph Attention Spatio-Temporal Neural Network; Bao Peng et al.; Pattern Recognition and Artificial Intelligence; Vol. 32, No. 11; 1014-1021 *

Also Published As

Publication number Publication date
CN112862060A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN107909108B (en) Edge cache system and method based on content popularity prediction
CN109995851B (en) Content popularity prediction and edge caching method based on deep learning
CN112862060B (en) Content caching method based on deep learning
CN109143408B (en) Dynamic region combined short-time rainfall forecasting method based on MLP
CN110689183B (en) Cluster photovoltaic power probability prediction method, system, medium and electronic device
CN113852432A (en) RCS-GRU model-based spectrum prediction sensing method
CN109787821B (en) Intelligent prediction method for large-scale mobile client traffic consumption
CN114553963A (en) Multi-edge node cooperative caching method based on deep neural network in mobile edge calculation
CN108121312A (en) ARV SiteServer LBSs and method based on integrated water electricity control platform
CN113271631B (en) Novel content cache deployment scheme based on user request possibility and space-time characteristics
CN113128666A (en) Mo-S-LSTMs model-based time series multi-step prediction method
Liu et al. Forecasting of wind velocity: An improved SVM algorithm combined with simulated annealing
CN117332896A (en) New energy small time scale power prediction method and system for multilayer integrated learning
CN110363015A (en) A kind of construction method of the markov Prefetching Model based on user property classification
CN112667394B (en) Computer resource utilization rate optimization method
CN110135621A (en) A kind of Short-Term Load Forecasting Method based on PSO optimization model parameter
CN112183814A (en) Short-term wind speed prediction method
CN111242280A (en) Deep reinforcement learning model combination method and device and computer equipment
CN114386602B (en) HTM predictive analysis method for multi-path server load data
CN117150231B (en) Measurement data filling method and system based on correlation and generation countermeasure network
CN113572848B (en) Online service placement method with data refreshing based on value space estimation
CN116737607B (en) Sample data caching method, system, computer device and storage medium
CN115866051A (en) Edge caching method based on content popularity
Xu et al. Multi-level cache system of small spatio-temporal data files based on cloud storage in smart city
CN114676824A (en) Grey correlation analysis and power distribution network line loss prediction method under PSO-RBF

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant