CN105657054B - Content-centric network caching method based on the K-means algorithm - Google Patents

Content-centric network caching method based on the K-means algorithm Download PDF

Info

Publication number
CN105657054B
CN105657054B CN201610125100.6A CN201610125100A CN 105657054 B
Authority
CN
China
Prior art keywords
content
cache
centroid
controller
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610125100.6A
Other languages
Chinese (zh)
Other versions
CN105657054A (en)
Inventor
蔡岳平
刘军
樊欣唯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201610125100.6A priority Critical patent/CN105657054B/en
Publication of CN105657054A publication Critical patent/CN105657054A/en
Application granted granted Critical
Publication of CN105657054B publication Critical patent/CN105657054B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1014Server selection for load balancing based on the content of a request
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The present invention relates to a content-centric network caching method based on the K-means algorithm, belonging to the field of communication technology. The caching method uses the controller of a software-defined network to count the number of requests for a given content across the content switches; a threshold is set at the controller, and all switches whose request count for the content exceeds the threshold are selected as candidate cache nodes. At the controller, a K-means caching algorithm then selects several preferred cache nodes from the candidates, and the controller issues active-caching instructions to the selected nodes over OpenFlow channels. While the content travels from the content service node back to the requester, the caching instructions issued by the controller are executed, so that the content is cached at the nodes the controller selected. The present invention effectively alleviates the homogeneous-caching and low cache-hit-rate problems of existing content-centric network caching mechanisms.

Description

Content-centric network caching method based on the K-means algorithm
Technical field
The invention belongs to the field of communication technology and relates to a content-centric network caching method based on the K-means algorithm.
Background technology
In-network caching is one of the core technologies of content-centric networking (Content Centric Network, CCN). By caching part of the content on network nodes, content requests can be served from the nearest cached copy instead of locating the origin host by addressing and fetching the content again. This effectively reduces content-retrieval latency and, at the same time, reduces the volume of duplicate-content traffic in the network, thereby improving overall network performance.
CCN caching is transparent to applications and ubiquitous. In the traditional caching scheme, content is cached by every node on the path as it is returned from the provider. This "cache everywhere" policy creates redundant data among cache nodes, reduces the diversity of cached content, and lowers the utilization of cache resources. Research on CCN caching technology is devoted to proposing specific new technical schemes and cache policies to improve the overall performance of the caching system. To address problems such as the resource waste caused by CCN's cache-everywhere mechanism, scholars at home and abroad have carried out extensive studies. Current cache policies are mainly divided into two aspects: cache sharing and cache decision.
Cache sharing: different types of traffic and applications have different characteristics, and how to provide differentiated caching services for different traffic is an urgent problem. Cache-sharing techniques are essential to realizing differentiated caching services. They currently fall into two kinds: fixed-partition cache sharing and dynamically partitioned cache sharing. Fixed-partition sharing divides the cache space into fixed parts so that each class of application uses a cache that cannot be occupied by other traffic. This scheme has two problems: first, when some type of traffic does not arrive while other traffic is heavy, cache misses and resource waste occur; second, it is difficult to guarantee different caching quality for different types of traffic. Dynamically partitioned sharing allows a traffic type to use unoccupied cache space, and again includes two strategies: priority-based sharing and weight-balanced sharing. Priority-based sharing lets certain applications have higher priority than others and makes room for high-priority content by evicting low-priority content; its problem is that when data arrives at high speed, repeatedly comparing priorities can seriously degrade performance. Weight-balanced sharing presets weights but can still use unused space; the difficulty lies in how to optimize the weights.
Cache decision: the cache-decision mechanism determines which content needs to be stored on which node, and is divided into two broad classes: non-cooperative cache decision and cooperative cache decision. Non-cooperative cache decision does not require knowing the status of other cache nodes in the network in advance. Typical non-cooperative strategies include LCE (Leave Copy Everywhere), LCD (Leave Copy Down), MCD (Move Copy Down), Prob (Copy with Probability), and ProbCache (Probabilistic Cache). LCE is the default cache-decision policy in CCN; it requires every routing node on the data packet's return path to cache the content object, which produces a large amount of cache redundancy in the network and reduces the diversity of cached content. LCD caches the content object only at the node one hop below the node where it currently resides, so the object reaches the network edge only after repeated requests, and substantial cache redundancy is still generated along the path. MCD moves the content one hop downstream on a cache hit (except at the source server), which reduces cache redundancy on the path from the requester to the content server; but when requesters come from different edge networks, the cache location oscillates, and this dynamic produces extra network overhead. Prob requires all routing nodes on the return path to cache the object with a fixed probability P, whose value can be adjusted according to caching conditions. In ProbCache, each node stores the requested object with a probability that differs per node and is inversely proportional to the distance to the requesting node: the closer a node is, the larger its caching probability. This strategy quickly pushes copies toward the network edge while reducing the number of copies. In cooperative cache decision, the network topology and node states are known in advance, and this information is used as input to compute the final cache location. According to the range of nodes participating in the decision, it can be divided into global coordination, path coordination, and neighborhood coordination. Global coordination considers all cache nodes in the network, so the topology of the entire network must be known in advance. Path coordination involves only the cache nodes along the path from the requester to the server. Neighborhood coordination occurs only between a node and its adjacent nodes. In-network coordination based on a hash function also falls under neighborhood coordination: a hash function decides which neighbor caches which file blocks.
To summarize, current content-centric network cache policies still suffer from the following problems. Homogeneous caching: in non-cooperative cache decision, each node caches and evicts independently, leading every node to cache the same content; content that is too concentrated or too dispersed in its spatial distribution forces requesters to fetch from overly concentrated or overly dispersed nodes, producing unreasonable traffic; and the distribution is unreasonable in time, because during the popularity window every node caches the same content, and once the window passes the content disappears from all nodes almost simultaneously. Low cache hit rate: in non-cooperative cache decision, nodes do not know what the others have cached; and even in cooperative caching, although nodes know each other's cached content, there is no commitment over time, since each node replaces content independently and content may be evicted at any moment. This gives caching a degree of randomness and contingency, and the forwarding efficiency of Interest packets remains low.
Summary of the invention
In view of this, the purpose of the present invention is to provide a content-centric network caching method based on the K-means algorithm. The method solves the homogeneous-caching and low cache-hit-rate problems of existing content-centric network caching mechanisms: it computes several preferred cache locations at the controller using the K-means algorithm and issues caching commands to the cache nodes over OpenFlow channels.
To achieve the above objectives, the present invention provides the following technical solution:
A content-centric network caching method based on the K-means algorithm: the caching method uses the controller of a software-defined network to count the number of requests for a given content across the content switches; a threshold is set at the controller, and all switches whose content-request count exceeds the threshold are selected as candidate cache nodes; at the controller, a K-means caching algorithm selects several preferred cache nodes from the candidates, and the controller issues active-caching instructions to the selected cache nodes over OpenFlow channels. While the content travels from the content service node back to the requester, the caching instructions issued by the controller are executed, so that the content is cached at the cache nodes selected by the controller.
Further, the K-means algorithm described in this method is a partitional clustering algorithm. Given the distribution of the input samples and the required number of clusters K, it divides the samples into K clusters through successive iterations. The specific algorithm flow is:
1) Randomly initialize K centroids;
2) For each remaining sample, measure its distance to each centroid and assign the sample to the nearest centroid;
3) For each centroid, take the mean of the coordinates of all its samples as the centroid's new position;
4) Repeat steps 2)-3) until the sum of distances from each sample to its centroid converges.
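The four-step flow above can be sketched in Python. This is an illustrative sketch, not code from the patent; the `init` parameter (for reproducible initialization) and the 2-D tuple samples are assumptions for demonstration.

```python
import random

def k_means(samples, k, init=None, max_iter=100):
    """Plain K-means on 2-D points: returns (centroids, clusters)."""
    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Step 1: initialize K centroids (randomly, unless given).
    centroids = list(init) if init is not None else random.sample(samples, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        # Step 2: assign each sample to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for s in samples:
            nearest = min(range(k), key=lambda i: d2(s, centroids[i]))
            clusters[nearest].append(s)
        # Step 3: move each centroid to the mean of its cluster.
        new_centroids = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append((sum(p[0] for p in cluster) / len(cluster),
                                      sum(p[1] for p in cluster) / len(cluster)))
            else:
                new_centroids.append(centroids[i])  # keep an empty cluster's centroid
        # Step 4: stop once the centroids no longer move (convergence).
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters
```

With two well-separated groups of points and one initial centroid in each, the loop converges after a single reassignment.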
Further, the method specifically includes the following steps:
S1: each switch in the network reports the request-count field for a given content A to the controller;
S2: the controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: the controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: the required value of K is determined according to business, traffic, and cost factors;
S5: K distinct switches are randomly selected from the sample points as the initial centroids;
S6: for each remaining sample, its distance to the centroids is measured and the sample is assigned to the class of the nearest centroid;
S7: for the class of each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: the switch at each cluster centroid serves as a cache location for content A, and the controller issues an active-caching instruction to it over the OpenFlow channel.
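The key difference from plain K-means is step S7: the updated centroid is snapped to the switch nearest the cluster mean, so every final centroid is an actual switch that can serve as a cache location. A minimal sketch, with function and parameter names chosen for illustration:

```python
import random

def k_means_switches(switch_coords, k, init=None, max_iter=100):
    """K-means variant of steps S5-S9: the new centroid is not the raw
    mean but the sample switch closest to the mean, so every centroid
    is always a real switch usable as a cache location."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # S5: pick K distinct switches as initial centroids.
    centroids = list(init) if init is not None else random.sample(switch_coords, k)
    for _ in range(max_iter):
        # S6: assign each switch to the class of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for s in switch_coords:
            clusters[min(range(k), key=lambda i: d2(s, centroids[i]))].append(s)
        # S7: take each cluster's mean, then snap to the nearest switch.
        new_centroids = []
        for i, cluster in enumerate(clusters):
            if not cluster:
                new_centroids.append(centroids[i])
                continue
            mean = (sum(p[0] for p in cluster) / len(cluster),
                    sum(p[1] for p in cluster) / len(cluster))
            new_centroids.append(min(cluster, key=lambda s: d2(s, mean)))
        # S8: repeat until the centroids stop moving.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids  # S9: these switches become the cache locations
```

Because the result is a list of switch coordinates rather than abstract means, the controller can issue an active-caching instruction directly to each returned switch.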
The beneficial effect of the present invention is that the method effectively alleviates the homogeneous-caching and low cache-hit-rate problems of existing content-centric network caching mechanisms.
Description of the drawings
To make the purpose, technical solution, and beneficial effects of the present invention clearer, the following drawings are provided for illustration:
Fig. 1 is a simulation diagram of the content-request counters of each switch;
Fig. 2 is a schematic diagram of the selected cache locations;
Fig. 3 is a flow chart of an embodiment of the present invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The object of the invention is to solve the homogeneous-caching and low cache-hit-rate problems of existing content-centric network caching mechanisms: several preferred cache locations are computed at the controller using the K-means algorithm, and caching commands are issued to the cache nodes over OpenFlow channels.
In this method, the K-means algorithm is a partitional clustering algorithm: given the distribution of the input samples and the required number of clusters K, it divides the samples into K clusters through successive iterations. The specific algorithm flow is: 1) randomly initialize K centroids; 2) for each remaining sample, measure its distance to each centroid and assign the sample to the nearest centroid; 3) for each centroid, take the mean of the coordinates of all its samples as the centroid's new position; 4) repeat steps 2)-3) until the sum of distances from each sample to its centroid converges.
In the K-means-based content-centric network cache policy of the present invention, the controller's global view of the network and its centralized control over the switches are the key factors that make the policy feasible. By recognizing the content-request pattern and partitioning it into clusters, optimal cache locations can be obtained effectively.
The cache policy based on the K-means algorithm is as follows:
S1: each switch in the network reports the request-count field for a given content A to the controller;
S2: the controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: the controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: the required value of K is determined according to business, traffic, and cost factors;
S5: K distinct switches are randomly selected from the sample points as the initial centroids;
S6: for each remaining sample, its distance to the centroids is measured and the sample is assigned to the class of the nearest centroid;
S7: for the class of each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: the switch at each cluster centroid serves as a cache location for content A, and the controller issues an active-caching instruction to it over the OpenFlow channel.
Cache headroom has a vital influence on the cache decision. When a switch's storage space is insufficient, forcibly caching content on that switch is likely to cause data loss. Cache headroom therefore needs to be factored into the algorithm. Let the cache idleness ratio of switch n be V_n, where 0 < V_n < 1; when V_n approaches 1 the cache is almost completely unused, and when V_n approaches 0 the cache is almost fully occupied. The specific optimization is: when the request-count field for a given content A is uploaded, V_n is multiplied by the value of the switch's content counter to form the new counter value, which is then uploaded to the controller. In this way, if the switch's cache is heavily occupied, the counter value is sharply reduced and may fall below the threshold T set by the controller, so the switch is not considered as a candidate cache location. When the switch's cache is idle, the counter value is reduced only slightly, and the execution of the algorithm is not affected.
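The headroom-weighted counter described above can be sketched as follows. Names are illustrative; for convenience the sketch accepts the endpoint values V_n = 0 (full) and V_n = 1 (empty) as well as the open interval the text specifies.

```python
def reported_count(raw_count: float, idleness: float) -> float:
    """Scale a switch's content-request counter by its cache idleness
    ratio V_n before uploading it to the controller, so that nearly
    full switches fall below the candidate threshold T."""
    if not 0.0 <= idleness <= 1.0:
        raise ValueError("idleness must lie between 0 and 1")
    return raw_count * idleness
```

For example, with T = 70, a raw count of 90 on a 90%-full switch (V_n = 0.1) is reported as 9 and the switch is excluded from the candidates, while the same count on a mostly idle switch (V_n = 0.9) is reported as 81 and the switch remains a candidate.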
Fig. 3 is a flow chart of an embodiment of the present invention. In the following example, control information is transmitted over a selected out-of-band link, which may be an Ethernet link or an IP link channel.
As shown in the figure, the method of the invention includes the following steps:
Step 301: every switch in the network uploads the request count of content A at that switch to the controller through the control channel.
Step 302: the controller collects the count values uploaded by all switches and, according to the preset threshold T, selects the switches whose count value exceeds T as candidate cache nodes and sample points. The connectivity of the sample points is abstracted into two-dimensional coordinates according to the network topology. The required number of cache points K is determined according to factors such as business, traffic, and cost; here K is set to 2. K distinct switches are randomly selected from the sample points as initial centroids; for each remaining sample, its distance to the centroids is measured and the sample is assigned to the class of the nearest centroid; for the class of each centroid, the mean of all sample coordinates is taken and the switch closest to that mean becomes the centroid's new position, until the sum of distances from each sample to its centroid converges.
Step 303: after convergence, the switch at each cluster centroid becomes a cache location for content A. Here the selected cache nodes are switch 4 and switch 13, and the controller issues active-caching instructions for content A to them over OpenFlow channels (3.1 and 3.2).
Step 304: the requester sends an Interest message requesting content A (3.3). When a switch receives the message, it first checks whether the content is in its CS (Content Store) and returns the data if so. Otherwise it checks whether the PIT (Pending Interest Table) already has a record for this Interest; if so, the Interest's input port is recorded in the corresponding PIT entry, and if not, a new entry is created recording the current input port. After the PIT lookup, the switch looks up the FIB (Forwarding Information Base) and forwards the message to the matching port if one exists; otherwise it forwards the message to all ports except the input port. Here the Interest message obtains matches at switches 1, 4, 8, 12, and 13, and is finally forwarded to the provider (3.4, 3.5, 3.6, 3.7, and 3.8).
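The CS → PIT → FIB lookup sequence of step 304 can be sketched as a single function. This is a simplified model with exact-match dictionary lookups; real CCN forwarding uses longest-prefix matching on hierarchical names.

```python
def process_interest(name, in_port, cs, pit, fib, all_ports):
    """One switch's handling of an Interest, following step 304.

    Returns ('data', content) on a CS hit, otherwise
    ('forward', ports) listing where the Interest goes next.
    """
    if name in cs:                       # CS hit: return the cached data
        return ('data', cs[name])
    if name in pit:                      # PIT hit: aggregate this request
        pit[name].add(in_port)
        return ('forward', [])           # already forwarded upstream
    pit[name] = {in_port}                # PIT miss: create a new entry
    if name in fib:                      # FIB hit: forward to matching port
        return ('forward', [fib[name]])
    # No FIB match: forward to every port except the input port
    return ('forward', [p for p in all_ports if p != in_port])
```

A second Interest for the same name arriving on a different port only extends the existing PIT entry, which is what suppresses duplicate upstream traffic.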
Step 305: after receiving the Interest message, the provider sends out content A (3.9). Content A returns to the requester along the PIT path through switches 13, 12, 8, 4, and 1 (3.10, 3.11, 3.12, 3.13, and 3.14). During the return, switch 4 and switch 13 also execute the pending active-caching order and store content A in their CS.
Fig. 1 is a simulation diagram of the content-request counters of each switch. In this embodiment, a network of 81 switches is converted into coordinates according to the switches' actual geographic locations, reducing the network topology to a 9 x 9 switch matrix. The content-request counter of each switch is a random integer between 0 and 100, and the counter threshold T is set to 70. The resulting counter simulation is shown in Fig. 1, from which the distribution of content requests can be clearly seen. Fig. 2 shows the final cache locations obtained when K is 2, 3, and 4, where light gray indicates the selected cache nodes.
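The simulation setup just described — a 9 x 9 switch matrix with random request counters between 0 and 100 and threshold T = 70 — can be reproduced with a short script. The seed is arbitrary, so the exact values of the original figure are not reproduced, only the setup.

```python
import random

random.seed(7)  # arbitrary seed, for reproducibility only

GRID = 9          # 81 switches arranged as a 9 x 9 matrix
THRESHOLD_T = 70  # content-request counter threshold from the embodiment

# Random request counter (0-100) per switch position.
counters = {(x, y): random.randint(0, 100)
            for x in range(GRID) for y in range(GRID)}

# Switches above the threshold become the K-means sample points (step S2).
sample_points = [pos for pos, c in counters.items() if c > THRESHOLD_T]

print(f"{len(sample_points)} of {GRID * GRID} switches are candidate cache nodes")
```

The `sample_points` list is exactly the input that the K-means clustering of steps S5-S9 would run on to pick the final cache locations.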
Finally, it should be noted that the preferred embodiments above merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail through the preferred embodiments above, those skilled in the art should understand that various changes in form and detail can be made without departing from the scope defined by the claims of the present invention.

Claims (1)

1. A content-centric network caching method based on the K-means algorithm, characterized in that: the caching method uses the controller of a software-defined network to count the number of requests for a given content across the content switches; a threshold is set at the controller, and all switches whose request count for the content exceeds the threshold are selected as candidate cache nodes; at the controller, a K-means caching algorithm selects several preferred cache nodes from the candidates, and the controller issues active-caching instructions to the selected cache nodes over OpenFlow channels; while the content travels from the content service node back to the requester, the caching instructions issued by the controller are executed, so that the content is cached at the cache nodes selected by the controller;
The K-means algorithm is a partitional clustering algorithm: given the distribution of the input samples and the required number of clusters K, it divides the samples into K clusters through successive iterations; the specific algorithm flow is:
1) Randomly initialize K centroids;
2) For each remaining sample, measure its distance to each centroid and assign the sample to the nearest centroid;
3) For each centroid, take the mean of the coordinates of all its samples as the centroid's new position;
4) Repeat steps 2)-3) until the sum of distances from each sample to its centroid converges;
The method specifically includes the following steps:
S1: each switch in the network reports the request-count field for a given content A to the controller;
S2: the controller selects the switches whose request count for content A exceeds a preset threshold T as sample points;
S3: the controller abstracts the network topology into two-dimensional coordinates according to the required metric;
S4: the required value of K is determined according to business, traffic, and cost factors;
S5: K distinct switches are randomly selected from the sample points as the initial centroids;
S6: for each remaining sample, its distance to the centroids is measured and the sample is assigned to the class of the nearest centroid;
S7: for the class of each centroid, the mean of the coordinates of all its samples is taken, and the switch closest to that mean becomes the centroid's new position;
S8: steps S6-S7 are repeated until the sum of distances from each sample to its centroid converges;
S9: the switch at each cluster centroid serves as a cache location for content A, and the controller issues an active-caching instruction to it over the OpenFlow channel.
CN201610125100.6A 2016-03-04 2016-03-04 A kind of content center network caching method based on K mean algorithms Expired - Fee Related CN105657054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610125100.6A CN105657054B (en) 2016-03-04 2016-03-04 A kind of content center network caching method based on K mean algorithms

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610125100.6A CN105657054B (en) 2016-03-04 2016-03-04 A kind of content center network caching method based on K mean algorithms

Publications (2)

Publication Number Publication Date
CN105657054A CN105657054A (en) 2016-06-08
CN105657054B true CN105657054B (en) 2018-10-12

Family

ID=56493180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610125100.6A Expired - Fee Related CN105657054B (en) 2016-03-04 2016-03-04 A kind of content center network caching method based on K mean algorithms

Country Status (1)

Country Link
CN (1) CN105657054B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454430B (en) * 2016-10-13 2019-06-04 重庆邮电大学 For the preparatory dissemination method of video traffic in Autonomous Domain in NDN/CCN
CN106888265B (en) * 2017-03-21 2019-08-27 浙江万里学院 Caching method for Internet of Things
CN107835129B (en) * 2017-10-24 2020-06-02 重庆大学 Content center network edge node potential energy enhanced routing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103607386A (en) * 2013-11-15 2014-02-26 南京云川信息技术有限公司 A cooperative caching method in a P2P Cache system
CN103716254A (en) * 2013-12-27 2014-04-09 中国科学院声学研究所 Self-aggregation cooperative caching method in CCN
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9137326B2 (en) * 2012-08-14 2015-09-15 Calix, Inc. Distributed cache system for optical networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103607386A (en) * 2013-11-15 2014-02-26 南京云川信息技术有限公司 A cooperative caching method in a P2P Cache system
CN103716254A (en) * 2013-12-27 2014-04-09 中国科学院声学研究所 Self-aggregation cooperative caching method in CCN
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking

Also Published As

Publication number Publication date
CN105657054A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
CN105721600B (en) A kind of content center network caching method based on complex network measurement
Fang et al. A survey of energy-efficient caching in information-centric networking
CN106789648B (en) Software defined network route decision method based on content storage and Network status
CN102710489B (en) Dynamic shunt dispatching patcher and method
US7263099B1 (en) Multicast packet replication
US9088584B2 (en) System and method for non-disruptive management of servers in a network environment
CN105657054B (en) A kind of content center network caching method based on K mean algorithms
Nour et al. A distributed cache placement scheme for large-scale information-centric networking
CN104519125B (en) Distributed load distribution in order flexible for change in topology
WO2017101230A1 (en) Routing selection method for data centre network and network manager
CN108366089B (en) CCN caching method based on content popularity and node importance
Wang et al. Effects of cooperation policy and network topology on performance of in-network caching
CN106533733A (en) CCN collaborative cache method and device based on network clustering and Hash routing
CN105656788A (en) CCN (Content Centric Network) content caching method based on popularity statistics
CN108234310A (en) Multi-level interference networks, adaptive routing method and routing device
CN108769252A (en) A kind of ICN network pre-cache methods based on request content relevance
CN108173903B (en) Application method of autonomous system cooperation caching strategy in CCN
CN109617811A (en) The quick migration method of mobile application in a kind of cloud network
Gui et al. A cache placement strategy based on entropy weighting method and TOPSIS in named data networking
CN103401951B (en) Based on the elastic cloud distribution method of peer-to-peer architecture
CN103067294B (en) Based on the method for the data flow equilibrium treatment of stream order-preserving in multi-next-hop forwarding router
WO2017084228A1 (en) Method for managing traffic item in software-defined networking
CN108809829B (en) SDN rule deployment method
Liao et al. An energy-efficient sdn-based data collection strategy for wireless sensor networks
Zhang et al. A cooperation-driven ICN-based caching scheme for mobile content chunk delivery at RAN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181012

CF01 Termination of patent right due to non-payment of annual fee