CN103905539A - Optimal cache storing method based on popularity of content in content center network - Google Patents

Optimal cache storing method based on popularity of content in content center network

Info

Publication number
CN103905539A
Authority
CN
China
Prior art keywords: content, node, request, cache, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410108365.6A
Other languages
Chinese (zh)
Inventor
张国印
唐滨
邢志静
吴艳霞
王向辉
高伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201410108365.6A
Publication of CN103905539A
Legal status: Pending

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to an optimal cache placement method based on content popularity in a content-centric network. The method comprises: step 1, when a node receives new content C_i to be cached, judging whether the node's CS table is full; if not, executing step 3 directly; if so, executing step 2 to judge the importance C_h(i) of the new content; step 2, judging whether the importance C_h(i) of the new content ranks among the top N important contents; if so, executing step 3; if not, executing step 4 directly; step 3, storing the new content C_i in the node's CS table using the LRU cache replacement algorithm; step 4, updating the RRT table according to the new content, the RRT table being used to record the M most recent requests received by the node.

Description

Optimal cache placement method based on content popularity in a content-centric network
Technical field
The present invention relates to an optimal cache placement method based on content popularity in a content-centric network.
Background art
With the development of the Internet and the growth of users' demand for large-scale data acquisition and distribution, the conventional host-to-host Internet architecture has exposed a series of problems, such as scalability, security, and mobility. To address them, research institutions at home and abroad have proposed a series of incremental and clean-slate designs, forming new approaches to the Future Internet architecture, including representative designs such as NDN, MobilityFirst, NEBULA, and XIA. Content-centric networking (CCN), one of the more mature Future Internet architectures, has received wide attention. CCN retrieves data by content name, resolving the mismatch between user demand and the communication pattern; in addition, every network node embeds a caching function, improving content distribution performance. CCN caching is also called in-network caching and is one of CCN's distinguishing features.
In the prior art, CCN generally adopts the LRU (Least Recently Used) cache replacement policy. The algorithm is simple, easy to implement, and convenient to deploy. Its shortcoming is that it caches content indiscriminately, so cached content is heavily redundant across nodes; constrained by limited per-node cache space, this reduces the variety of cached content in the network and leaves cache space poorly utilized.
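For orientation (not part of the patent), a minimal sketch of the LRU replacement policy the background refers to, in Python; the class and method names are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: on overflow, evict the entry that
    has gone unused the longest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)        # mark as most recently used
        return self.store[key]

    def put(self, key, value) -> None:
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
```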
Summary of the invention
The object of the invention is to provide an optimal cache placement method based on content popularity in a content-centric network that effectively avoids wasted cache space and improves cache space utilization.
The technical scheme realizing the object of the invention is as follows:
An optimal cache placement method based on content popularity in a content-centric network, characterized in that:
Step 1: when a node receives new content C_i to be cached, first judge whether the node's CS table is full; if not, enter step 3 directly; if full, enter step 2 and judge the importance C_h(i) of the new content;
Step 2: judge whether the importance C_h(i) of the new content C_i ranks among the top n important contents; if so, enter step 3; otherwise enter step 4 directly;
Step 3: using the LRU cache replacement algorithm, store the new content C_i in the node's CS table;
Step 4: update the RRT table according to the new content, where the RRT table records the M most recent requests received by the node.
In step 2, the importance C_h(i) of the new content C_i is judged by the following formula:

C_h(i) = ( Σ_{j=1}^{N} δ_cout(i, j) ) / N

where C_h(i) is the proportion of the most recent N requests in which the new content C_i was requested, and Σ_{j=1}^{N} δ_cout(i, j) is the number of records of C_i in the RRT table among those N requests.
The RRT table maintains the content request records with a FIFO algorithm.
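The RRT-based popularity calculation can be sketched as follows (an illustrative reading of the formula above, not code from the patent; the class and method names are assumptions):

```python
from collections import deque

class RRT:
    """Request-record table: FIFO log of the M most recent content
    requests seen by a node (field and method names are assumptions)."""

    def __init__(self, m: int):
        self.records = deque(maxlen=m)   # oldest record drops automatically

    def log(self, content_name: str) -> None:
        self.records.append(content_name)

    def popularity(self, content_name: str, n: int) -> float:
        """C_h(i) = (sum_{j=1..N} delta_cout(i, j)) / N over the most
        recent n logged requests."""
        recent = list(self.records)[-n:]
        return sum(1 for r in recent if r == content_name) / n if n else 0.0

rrt = RRT(m=1000)
for req in ["a", "b", "a", "a"]:
    rrt.log(req)
print(rrt.popularity("a", n=4))   # 3 of the last 4 requests -> 0.75
```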
The beneficial effects of the present invention:
The algorithm of the present invention is simple and easy to implement. According to the importance C_h(i) of content, it preferentially caches highly important content, achieving near-optimal placement of cached content between the requesting node and the data source. It effectively resolves problems in content-centric networks (CCN) such as heavy redundancy of cached data and erroneous cache replacement caused by the limitations of the cache algorithm, and markedly improves cache space utilization. Nodes on the same link cache contents of different request frequencies, which reduces cache redundancy in the network, increases the diversity of cached content, and raises the in-network cache hit rate. Experiments show that the present invention effectively improves the resource request success rate and cache hit rate, reduces data request delay, lowers server load, and improves the data distribution performance of the content-centric network.
Brief description of the drawings
Fig. 1 is the general flow chart of the present invention;
Fig. 2 is a schematic diagram of the workflow of the present invention;
Fig. 3 is a schematic diagram of the topology for optimal allocation of cached content;
Fig. 4 is a schematic diagram of the single-link network topology;
Fig. 5 is a schematic diagram of the scale-free network topology;
Fig. 6 is a schematic diagram of the experimental configuration parameters;
Fig. 7 compares the cached-content distribution of the present invention with the theoretical optimal distribution;
Fig. 8 compares cache hit rates on the single-link topology;
Fig. 9 compares cache hit rates on the scale-free network topology;
Fig. 10 shows network content request delay;
Fig. 11 compares average content request delay;
Fig. 12 compares cached-content swap-out rates.
Embodiment
As shown in Fig. 1 and Fig. 2, the optimal cache placement method based on content popularity (the OCPCP strategy) in the content-centric network of the present invention comprises three parts: content popularity calculation, cached-content replacement, and RRT table maintenance. It is realized by the following steps:
Step 1: when node i receives new content C_i to be cached, first judge whether the node's CS table (content store) is full, i.e. whether node i has sufficient space to cache the data. If not full, enter step 3 and cache the new content directly in the CS table; if full, enter step 2 and judge the importance C_h(i) of the new content C_i.
Step 2: judge whether the importance C_h(i) of the new content C_i ranks among the top n important contents. If so, execute step 3; otherwise enter step 4. The present invention adds an RRT table (content request record table) to the original CCN data structures; the RRT table records the M most recent requests received by the node. Based on the RRT records, the present invention proposes a content popularity algorithm (CPA) for calculating the importance C_h(i) of content at the present node, taking the number of requests for a content as the measure of its popularity. The calculation formula is:

C_h(i) = ( Σ_{j=1}^{N} δ_cout(i, j) ) / N

C_h(i) is the proportion of the most recent N requests in which the new content C_i was requested, i.e. the importance of C_i. When the new content C_i is identical to the j-th entry in the RRT table, the count δ_cout(i, j) is incremented by 1; Σ_{j=1}^{N} δ_cout(i, j) is thus the number of records of C_i in the RRT table among the most recent N requests.
Step 3: according to the LRU caching scheme, store the content in the node's CS table.
Step 4: update the RRT table according to the new content; the RRT table maintains the content request records with a FIFO (first-in, first-out) algorithm.
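Steps 1 through 4 can be pulled together in a short sketch. This is one possible reading of the method, not the patent's reference implementation: in particular, the "top n important contents" test is interpreted here as "ranks among the n most-requested names in the RRT", and the RRT is updated once per content arrival, standing in for the corresponding request.

```python
from collections import Counter, OrderedDict, deque

class OCPCPNode:
    """Sketch of the OCPCP decision flow (steps 1-4); the top-n test
    is an interpretation, since the text does not pin the candidate
    set down."""

    def __init__(self, cache_size: int, rrt_size: int):
        self.n = cache_size
        self.cs = OrderedDict()            # CS table, kept in LRU order
        self.rrt = deque(maxlen=rrt_size)  # FIFO request-record table

    def on_new_content(self, name: str, data: bytes) -> None:
        # Step 1: if the CS table is not full, go straight to step 3.
        if len(self.cs) < self.n:
            self._lru_store(name, data)
        else:
            # Step 2: with a full CS table, admit the content only if
            # it ranks among the top n names by request count in the RRT.
            top_n = {c for c, _ in Counter(self.rrt).most_common(self.n)}
            if name in top_n:
                self._lru_store(name, data)   # step 3: LRU replacement
        self.rrt.append(name)                 # step 4: update the RRT (FIFO)

    def _lru_store(self, name: str, data: bytes) -> None:
        if name in self.cs:
            self.cs.move_to_end(name)          # refresh recency
        self.cs[name] = data
        if len(self.cs) > self.n:
            self.cs.popitem(last=False)        # evict least recently used

node = OCPCPNode(cache_size=2, rrt_size=10)
for name in ["a", "b", "a", "c", "a", "b"]:
    node.on_new_content(name, b"...")
print(list(node.cs))   # the frequently requested names survive: ['a', 'b']
```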
Fig. 3 is a schematic diagram of the topology for optimal allocation of cached content. Taking the topology of Fig. 3 as an example, at time t = 0 all node caches are initially empty; a content request sent by user A is routed to the data source S, and the responding content from S is routed back to user A through N3 → N2 → N1. While the number of content types requested by users is smaller than the node cache capacity n, the distribution of content on the link is identical under OCPCP and LRU. As the variety and number of user requests grow, OCPCP learns from the user request records and preferentially stores frequently requested content on the nodes closest to the requester. When the network reaches steady state, N1 caches the n contents with the highest user request frequency, CH_(1,n); since requests for CH_(1,n) are answered directly at N1, node N2 never receives them and instead caches the next n contents of interest, CH_(n+1,2n). Likewise, N3 caches CH_(2n+1,3n), the n contents that remain most interesting to users after filtering by N1 and N2. Like the LFU strategy, OCPCP caches the content with the highest user request frequency locally, but by mining a large number of user requests from the request record table, OCPCP reflects user request behavior more objectively and comprehensively.
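The filtering effect described above can be reproduced with a toy simulation (again a sketch: a simplified count-based filter stands in for full OCPCP, and the catalog size, cache size, and α values are arbitrary). Run long enough, N1 ends up holding roughly ranks 1..n, N2 the next band, and so on:

```python
import random

random.seed(7)
CATALOG, CACHE_N, ALPHA = 300, 10, 0.9

# Zipf-like request popularity, p(rank) ~ 1/rank**ALPHA (the same
# distribution family the experiments below use).
weights = [1.0 / (rank ** ALPHA) for rank in range(1, CATALOG + 1)]

class FilterCache:
    """Toy node: admits a content only when it is among the node's
    n most-requested names so far (a stand-in for OCPCP)."""

    def __init__(self, n: int):
        self.n, self.counts, self.cs = n, {}, set()

    def request(self, item: int) -> bool:
        """Record the request; return True on a cache hit at this node."""
        self.counts[item] = self.counts.get(item, 0) + 1
        if item in self.cs:
            return True
        top = sorted(self.counts, key=self.counts.get, reverse=True)[:self.n]
        if item in top:
            self.cs.add(item)
            if len(self.cs) > self.n:
                # evict the cached item with the fewest requests
                self.cs.remove(min(self.cs, key=self.counts.get))
        return False

chain = [FilterCache(CACHE_N) for _ in range(3)]        # N1 -> N2 -> N3
for _ in range(20000):
    item = random.choices(range(1, CATALOG + 1), weights)[0]
    for node in chain:                                  # walk toward source S
        if node.request(item):
            break                                       # served from this cache
for i, node in enumerate(chain, 1):
    print(f"N{i} caches ranks: {sorted(node.cs)}")      # stratified bands
```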
The beneficial effects of the present invention are further illustrated below in conjunction with experiments.
On the ndnSIM platform based on NS-3, four groups of simulation experiments were carried out on the OCPCP strategy of the present invention: a cached-content distribution experiment, an in-network cache hit rate experiment, a content request delay experiment, and a cached-content swap-out rate experiment.
Fig. 4 and Fig. 5 are schematic diagrams of the single-link network topology and the scale-free network topology, respectively. The experiments adopt a single-link topology (SPN) and a scale-free network topology (SFN) that simulates a real network. The SPN link comprises one requesting node, one data-source node, and 10 intermediate CCN nodes, as shown in Fig. 4. The NetworkX tool was used to generate the 50-node SFN topology shown in Fig. 5, in which node 0 and node 1 are resource nodes, the leaf nodes are user nodes, and the remaining nodes are CCN nodes.
Fig. 6 is a schematic diagram of the experimental configuration parameters. Packet routing adopts the BestRoute strategy provided by ndnSIM, and user content requests follow a Zipf-like distribution. All nodes have the same cache size (i.e. the same number of cacheable content blocks). The main configuration parameters are shown in Fig. 6.
Fig. 7 compares the cached-content distribution of the present invention with the theoretical optimal distribution. The present invention uses the mean rank of the content cached at a node at a given moment to characterize the distribution of content in the network. According to the optimal cache placement model, the mean rank C_Avg(k) of the content cached at node k is:

C_Avg(k) = ( Σ_{i=(k-1)n+1}^{kn} i ) / n,  1 ≤ k ≤ N

where n is the cache size of a node (the maximum number of cached content blocks) and N is the number of nodes on the link.
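As a consistency check of the formula (a worked step added here; the cache size n = 100 is inferred from the 150.5 figure quoted below, not stated in this excerpt): the sum is an arithmetic series of n terms, so the mean reduces to the midpoint of node k's band of ranks:

```latex
C_{\mathrm{Avg}}(k) \;=\; \frac{1}{n}\sum_{i=(k-1)n+1}^{kn} i
\;=\; \frac{\bigl((k-1)n+1\bigr) + kn}{2},
\qquad
C_{\mathrm{Avg}}(2) \;=\; \frac{101 + 200}{2} \;=\; 150.5 \quad (n = 100).
```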
The distribution of cached content in the network directly reflects the quality of a caching strategy. The network topology is the single-link topology (SPN) of Fig. 4, with α set to 0.3 and 0.9; a larger α means fewer contents are frequently requested, while the popularities of the rarely requested contents are closer together. The simulation time is 300 s, with the other parameters configured as in Fig. 6. After the caching strategies stabilize, the mean rank of the content cached at each node is measured and compared with the theoretically optimal placement. Fig. 7 shows that the OCPCP strategy brings the cached content on the link closest to the theoretically optimal allocation. For example, with α = 0.9 the theoretical value at node 2 is 150.5, while the mean cached ranks under LRU (least recently used), LFU (least frequently used), and OCPCP are 363, 207, and 184, respectively. The LFU strategy is slow to respond to changes in content popularity and therefore keeps some higher-rank contents cached for a long time. LRU reflects local content changes through the entries stored in the CS table, so the LRU distribution is mainly influenced by the content cached at the moment of the request. OCPCP at α = 0.9 comes closer to the theoretical value than at α = 0.3, because content requests are more concentrated when α is larger.
Fig. 8 and Fig. 9 compare cache hit rates on the single-link topology (SPN) and the scale-free network topology (SFN), respectively. The in-network cache hit rate proposed by the present invention counts a hit whenever an interest packet, after being sent, obtains a content response before reaching the data source. It can be understood as the hit rate of cached content across the whole network: the higher the cache hit rate, the earlier user requests are answered before reaching the data source, which also relieves the pressure on the data source. The in-network cache hit rate Hit_avg is computed as:

Hit_avg = Σ_{i=1}^{N_u} Σ_{j=1}^{C_n} ( req(i,j) − serv(i,j) ) / Σ_{i=1}^{N_u} Σ_{j=1}^{C_n} req(i,j)

where req(i, j) is the number of requests by user node i for content j, serv(i, j) is the number of responses by the data source to node i's requests for content j, N_u is the total number of user nodes, and C_n is the total number of content types in the network.
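In code form, the hit-rate formula is a straightforward ratio (a sketch; the function name and the toy numbers are illustrative):

```python
def network_hit_rate(req, serv):
    """Hit_avg = sum_ij (req(i,j) - serv(i,j)) / sum_ij req(i,j): the
    share of requests answered by an in-network cache rather than the
    data source. req and serv are N_u x C_n matrices of counts."""
    total = sum(sum(row) for row in req)
    from_source = sum(sum(row) for row in serv)
    return (total - from_source) / total if total else 0.0

# toy numbers: 2 user nodes x 2 contents, 100 requests, 35 served by the source
req  = [[40, 20], [25, 15]]
serv = [[10,  8], [ 9,  8]]
print(network_hit_rate(req, serv))   # 0.65
```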
As shown in Fig. 8, OCPCP improves the network hit rate over the LRU and LFU strategies; for example, at 100 s the hit rates of LRU, LFU, and OCPCP (α = 0.9) are 43%, 57%, and 72%, respectively. In the first 20 s, the hit rates of OCPCP, LFU, and LRU are close, because the RRT table in the OCPCP strategy must learn from a large number of requests; as running time increases, the advantage of OCPCP becomes more evident. The SPN topology used in this experiment has a network radius of at most 9 hops, and each node's RRT table records 3000 requests, so reaching steady state requires at least 27000 requests; OCPCP therefore stabilizes gradually after 135 s. Fig. 9 shows that OCPCP likewise achieves a higher in-network cache hit rate than the other three caching strategies on the SFN topology, because OCPCP achieves near-optimal placement of cached content according to content popularity, reducing cache redundancy so that a greater variety of content is cached in the network. However, because the SFN topology is more complex than SPN, OCPCP needs a longer learning time.
Fig. 10 and Fig. 11 show network content request delay and compare average content request delay, respectively. The delay of user content requests most directly reflects network performance; shorter request delays give users a better service experience. Suppose the single-link topology has N nodes in total, the data transmission delay between two adjacent nodes is a fixed R, and the probability that content k hits at node i is p_k(i). The content request delay is defined as:

RTT_k = Σ_{i=1}^{N} R · (i − 1) · p_k(i) · Π_{j=1}^{i−1} ( 1 − p_k(j) )

According to the optimal content placement model, frequently requested content is placed on nodes close to the requester, so by the above formula its access delay is small. Hence, the closer the cached-content distribution is to the optimal placement model, the smaller the average content request delay across the whole network.
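A small helper makes the delay model concrete (a sketch; like the formula in the text, it omits the residual case where the request misses every cache and must travel to the source):

```python
def expected_rtt(p, R):
    """RTT_k = sum_i R*(i-1) * p[i-1] * prod_{j<i} (1 - p[j-1]) for a
    chain of N nodes: the expected delay when content k hits at node i
    with probability p[i-1], at delay R per hop, nodes counted outward
    from the requester."""
    rtt, miss_so_far = 0.0, 1.0
    for i, p_i in enumerate(p, start=1):
        rtt += R * (i - 1) * p_i * miss_so_far
        miss_so_far *= 1.0 - p_i
    return rtt

print(expected_rtt([0.9, 0.05, 0.05], R=10))   # content near the requester: ~0.15
print(expected_rtt([0.05, 0.05, 0.9], R=10))   # content far away: ~16.7
```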
The content request delay experiment adopts the B-A topology simulating a real network, with α = 0.9. Fig. 10 shows the average request delay for different contents under the various caching strategies. Under OCPCP, the request delay for contents ranked below 100 is around 30 s, indicating that users obtain these contents within one hop. As the content distribution experiment shows, the OCPCP strategy caches frequently accessed content on nodes nearer the requester, so the access delay is slightly smaller. For low-rank contents, the request delay of OCPCP is significantly lower than that of LFU and LRU, while for higher-rank contents LRU can outperform OCPCP, because the locality of the LRU algorithm caches some higher-rank contents on nodes close to the requester. Although LFU also caches content according to access frequency, OCPCP's statistics, drawn from a large number of request records, are more comprehensive, and its dynamic update mechanism for request records adapts to contents of different popularity better than LFU. As Fig. 11 shows, OCPCP has a clear advantage in the average content request delay across the whole network. As simulation time increases, nodes running OCPCP keep learning from content request records, and the request delay decreases and gradually stabilizes.
Fig. 12 compares cached-content swap-out rates. The cached-content swap-out rate is the proportion of arrivals of new content that replace old content out of the cache when the node's cache space is full. The swap-out rate does not directly measure the quality of a caching strategy, but a higher swap-out rate entails higher maintenance overhead.
Frequent replacement of cached content degrades content retrieval efficiency and also indicates that an insufficiently optimized caching algorithm is causing frequent replacement. The cached-content swap-out rate r_avg is computed as:

r_avg = Σ_{i=1}^{N} rmDate(i) / Σ_{i=1}^{N} onDate(i)

where rmDate(i) is the number of times node i replaces content out of its cache, and onDate(i) is the number of content requests node i receives. According to the optimal cache placement model proposed above, if the caching algorithm follows the optimal placement principle, then once steady state is reached, erroneous cache replacements decrease, and the swap-out rate is correspondingly lower. Although the swap-out rate cannot capture the full quality of a replacement algorithm, a lower swap-out frequency reduces the resource overhead and energy consumption of cache maintenance. The experiment adopts the B-A network topology with α = 0.9; after the network reaches steady state, the total number of swapped-out contents per unit time is counted. As Fig. 12 shows, the LRU algorithm has the highest swap-out rate, and LFU's is slightly higher than OCPCP's. In the first 30 s of operation the three caching strategies have similar swap-out rates, because all node caches start empty. After 100 s, the swap-out rate of OCPCP stabilizes, indicating that OCPCP has completed a sufficient learning process.
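The swap-out rate is again a simple ratio over per-node counters (a sketch; function name and example counts are illustrative):

```python
def swap_out_rate(rm_date, on_date):
    """r_avg = sum_i rmDate(i) / sum_i onDate(i): cache replacements
    per received request, summed over all nodes."""
    return sum(rm_date) / sum(on_date)

print(swap_out_rate([120, 95, 60], [1000, 900, 800]))   # 275/2700 ~ 0.102
```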

Claims (3)

1. An optimal cache placement method based on content popularity in a content-centric network, characterized in that:
Step 1: when a node receives new content C_i to be cached, first judge whether the node's CS table is full; if not, enter step 3 directly; if full, enter step 2 and judge the importance C_h(i) of the new content;
Step 2: judge whether the importance C_h(i) of the new content C_i ranks among the top n important contents; if so, enter step 3; otherwise enter step 4 directly;
Step 3: using the LRU cache replacement algorithm, store the new content C_i in the node's CS table;
Step 4: update the RRT table according to the new content, where the RRT table records the M most recent requests received by the node.
2. The optimal cache placement method based on content popularity in a content-centric network according to claim 1, characterized in that: in step 2, the importance C_h(i) of the new content C_i is judged by the following formula:

C_h(i) = ( Σ_{j=1}^{N} δ_cout(i, j) ) / N

where the importance C_h(i) is the proportion of the most recent N requests in which the new content C_i was requested, and Σ_{j=1}^{N} δ_cout(i, j) is the number of records of C_i in the RRT table among those N requests.
3. The optimal cache placement method based on content popularity in a content-centric network according to claim 2, characterized in that: the RRT table maintains the content request records with a FIFO algorithm.
CN201410108365.6A 2014-03-22 2014-03-22 Optimal cache storing method based on popularity of content in content center network Pending CN103905539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410108365.6A CN103905539A (en) 2014-03-22 2014-03-22 Optimal cache storing method based on popularity of content in content center network


Publications (1)

Publication Number Publication Date
CN103905539A true CN103905539A (en) 2014-07-02

Family

ID=50996694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410108365.6A Pending CN103905539A (en) 2014-03-22 2014-03-22 Optimal cache storing method based on popularity of content in content center network

Country Status (1)

Country Link
CN (1) CN103905539A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120239823A1 (en) * 2007-11-19 2012-09-20 ARRIS Group Inc. Apparatus, system and method for selecting a stream server to which to direct a content title
CN101184021A (en) * 2007-12-14 2008-05-21 华为技术有限公司 Method, equipment and system for implementing stream media caching replacement
CN103501315A (en) * 2013-09-06 2014-01-08 西安交通大学 Cache method based on relative content aggregation in content-oriented network
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104253855A (en) * 2014-08-07 2014-12-31 哈尔滨工程大学 Content classification based category popularity cache replacement method in oriented content-centric networking
CN104253855B (en) * 2014-08-07 2018-04-24 哈尔滨工程大学 Classification popularity buffer replacing method based on classifying content in a kind of content oriented central site network
CN106572501A (en) * 2015-10-09 2017-04-19 中国科学院信息工程研究所 Content center mobile self-organizing network caching method based on dual threshold decision
CN106572501B (en) * 2015-10-09 2019-12-10 中国科学院信息工程研究所 Content center mobile self-organizing network caching method based on double threshold judgment
CN106790421A (en) * 2016-12-01 2017-05-31 广东技术师范学院 A kind of step caching methods of ICN bis- based on corporations
CN106790421B (en) * 2016-12-01 2020-11-24 广东技术师范大学 ICN two-step caching method based on community
CN107135271A (en) * 2017-06-12 2017-09-05 浙江万里学院 A kind of content center network caching method of Energy Efficient
CN110245095A (en) * 2019-06-20 2019-09-17 华中科技大学 A kind of solid-state disk cache optimization method and system based on data block map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140702