CN103052114B - Data cache placement system and data caching method - Google Patents

Data cache placement system and data caching method


Publication number
CN103052114B
Authority
CN
China
Prior art keywords
node
candidate
data
cache
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210562480.1A
Other languages
Chinese (zh)
Other versions
CN103052114A (en)
Inventor
范小朋
毛海霞
须成忠
张帆
Current Assignee
HANGZHOU ZHONGKE ADVANCED TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd.
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201210562480.1A priority Critical patent/CN103052114B/en
Publication of CN103052114A publication Critical patent/CN103052114A/en
Application granted granted Critical
Publication of CN103052114B publication Critical patent/CN103052114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a data cache placement system for a wireless network comprising a plurality of nodes. The cache placement system comprises a calculation module, a judgment module, a selection module and a continuation module. The calculation module calculates a hedging data flow for each node in the wireless network, where a node's hedging data flow is the sum of the data flows added and removed when that node serves as a cache node. The judgment module judges whether the distance from a node to another node is shorter than the distance from that other node to its currently nearest cache node; if so, the calculation module increases the hedging data flow of the node, and the judgment module judges whether the node's hedging data flow exceeds a first threshold. If it does, the selection module selects the node as a candidate node, the calculation module calculates the candidate node's competition coefficient in the wireless network from the candidate node's hedging data flow and its distance to the cache node, and the judgment module judges whether the competition coefficient exceeds a second threshold. If it does, the selection module selects the candidate node as a cache node.

Description

Data cache placement system and data caching method
Technical field
The present invention relates to network technology, and in particular to a data cache placement system and a data caching method.
Background technology
An Internet-based Mobile Ad Hoc Network (IMANET) allows mobile users to access resources on the Internet over multiple hops, which makes it a very convenient mode of data access. In addition, wireless sensor networks (WSNs) have found increasingly wide application in recent years, in military and civil areas such as environmental monitoring, mobile multimedia, logistics management, traffic control, target tracking and smart homes. Today's wireless networks suffer from scarce wireless bandwidth, limited storage capacity on mobile devices, unstable mobile links, and the limited energy of the mobile devices themselves. To improve the efficiency of data access, it is therefore extremely important to find a way for nodes to share data.
Data caching can effectively improve data access efficiency. The approach places copies of the data from the data source on cache nodes, so that other nodes can access a copy instead of travelling to the data source, saving both time and access overhead. The core problem is how to select the cache nodes in the network on which to place the data copies: placing a copy means the data source must transfer the latest version of the data to the cache node in a timely fashion, which itself produces new overhead. Moreover, traditional methods all assume that the access delay of every hop is the same. In a real wireless network, however, all nodes share the air as a single transmission medium when wireless signals propagate, so nodes transmitting at the same time compete for the channel, and common wireless protocols use backoff mechanisms to regulate such transmissions. Because existing methods do not account for this wireless channel contention when measuring cache data access overhead, they fail to achieve the effect predicted by theoretical analysis and can even reduce the efficiency of data transfer.
The most critical question in data caching is the cache placement problem (Cache Placement Problem): in an arbitrary network topology, how should cached copies be placed so that the total overhead of data updates and user data accesses is minimized? This problem is equivalent to a classical problem in graph theory, the facility location problem (The Facility Location Problem). As the key issue in caching technology, the cache placement problem has attracted considerable research work in both wired and wireless networks.
In mobile wireless networks, existing work on data caching mainly transforms the problem itself, converting the cache placement problem into a rent-or-buy problem and then proposing a corresponding algorithm based on the greedy principle. This technique has two main problems: 1) the algorithm's computation is rather complex, obtaining the final cache placement scheme through a large number of set transformations and a decomposition into tree structures, and the construction process is too intricate to implement easily; 2) the results are only checked by numerical analysis, without any experiment or simulation in a wireless network environment.
Therefore, in view of the above problems, it is necessary to propose a data caching method that reduces the overhead of the caching system.
Summary of the invention
In view of this, it is necessary to provide a data cache placement system and a data caching method.
The data cache placement system provided by the invention is used in a wireless network comprising a plurality of nodes, and comprises a calculation module, a judgment module, a selection module and a continuation module. The calculation module calculates the hedging data flow of each node in the wireless network, where a node's hedging data flow is the sum of the data flows added and removed when the node serves as a cache node. The judgment module judges whether the distance from a node to some other node is smaller than that other node's distance to its nearest cache node; if so, the calculation module increases the node's hedging data flow, and the judgment module judges whether the node's hedging data flow exceeds a first threshold. When it does, the selection module selects the node as a candidate node and adds it to the candidate node set. The calculation module then calculates the candidate node's competition coefficient in the wireless network from the candidate node's hedging data flow and the distance between the candidate node and the cache node, the judgment module judges whether the competition coefficient exceeds a second threshold, and when it does, the selection module selects the candidate node as a cache node and adds it to the cache node set.
The present invention also provides a data caching method for a wireless network comprising a plurality of nodes, the method comprising the following steps: calculating the hedging data flow of each node in the wireless network, where a node's hedging data flow is the sum of the data flows added and removed when the node serves as a cache node; judging whether the distance from a node to some other node is smaller than that other node's distance to its nearest cache node; if so, increasing the node's hedging data flow; judging whether the node's hedging data flow exceeds a first threshold; if so, selecting the node as a candidate node and adding it to the candidate node set; calculating the candidate node's competition coefficient in the wireless network from its hedging data flow and the distance between the candidate node and the cache node; judging whether the competition coefficient exceeds a second threshold; and if so, selecting the candidate node as a cache node and adding it to the cache node set.
The data cache placement system and the data caching method of the present invention decide whether a node is a candidate node by judging whether the node's hedging data flow exceeds a first threshold, then calculate the candidate node's competition coefficient and decide whether the node becomes a cache node by judging whether the competition coefficient exceeds a second threshold, thereby forming the cache node set. This reduces the amount of data flow analysis and the access delay to cached data, while reducing the overhead of data access in the network.
Brief description of the drawings
Fig. 1 is a block diagram of the data cache placement system in an embodiment of the present invention;
Fig. 2 illustrates an example calculation of the hedging data flow;
Fig. 3 is a flowchart of the data caching method performed by the data cache placement system shown in Fig. 1 in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, with examples shown in the drawings, where the same or similar reference numbers denote the same or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.
In the description of the present invention, terms indicating orientation or positional relationships, such as "inner", "outer", "longitudinal", "transverse", "upper", "lower", "top" and "bottom", are based on the orientations or positional relationships shown in the drawings. They are used only for convenience of description and do not require that the present invention be constructed and operated in a particular orientation, and therefore cannot be construed as limiting the present invention.
Referring to Fig. 1, Fig. 1 is a block diagram of the data cache placement system 10 in an embodiment of the present invention.
In the present embodiment, the data cache placement system 10 comprises a calculation module 102, a judgment module 104, a selection module 106, a continuation module 108, a memory 110 and a processor 112. The calculation module 102, judgment module 104, selection module 106 and continuation module 108 are stored in the memory 110, and the processor 112 executes the functional modules stored in the memory 110.
In the present embodiment, the data cache placement system 10 is used in a wireless network, where the wireless network comprises a plurality of nodes.
In the present embodiment, the memory 110 stores a cache node set C, a candidate node set H, a set NC recording the nearest cache node of every node, and a competition coefficient set CC. Initially, the data source node belongs to the cache node set C and to the nearest cache node set NC.
In the present embodiment, the calculation module 102 calculates the hedging data flow of each node in the wireless network.
The judgment module 104 judges whether the distance from a node to some other node is smaller than that other node's distance to its nearest cache node.
In the present embodiment, the calculation module 102 also increases the node's hedging data flow when the distance from the node to the other node is smaller than that other node's distance to its nearest cache node.
In the present embodiment, the hedging data flow of a node is the sum of the data flows added and removed when the node serves as a cache node.
In the present embodiment, the concept of the hedging data flow (Hedging Data Flow) has something in common with hedging in the financial industry. Three types of data flow exist in a caching system: the first is the accessing data flow (Accessing Data Flow, ADF), which describes a node's requests for data in the network; the second is the reply data flow (Reply Data Flow, RDF), which describes a data source node or cache node replying to the data requests of other nodes; the third is the update data flow (Update Data Flow, UDF), which is the data flow produced when the data source updates the data copies on the cache nodes.
Considering these three kinds of data flow together: if a cache node is to be added, the accessing data flow and the reply data flow should be reduced as much as possible, while the update data flow introduced by adding the cache node should also be kept as small as possible.
In the present embodiment, the hedging data flow is related to the other three data flows as follows:
HDF(i)=ΔADF(i)+ΔRDF(i)-UDF(i)
Referring also to Fig. 2, Fig. 2 shows an example of the calculation module 102 calculating the hedging data flow. In this example, the path of a data access is assumed to be the same as the path along which the data returns, so the accessing data flow and the reply data flow can be combined: wherever there is an accessing data flow there is also a reply data flow.
In the upper-left diagram of Fig. 2, node N1 is the data source.
If, as in the upper-right diagram of Fig. 2, node N3 is to be selected as a cache node, the accessing data flow that is saved is:
ΔADF(3) = (f_a(4) + f_a(5) + f_a(6) + f_a(7) + f_a(8)) × (s_r + s_d) × w(3,1)    (1)
In formula (1), f_a(4) denotes the frequency with which node N4 accesses the data, s_r and s_d denote the size of a data request and the size of the data itself, respectively, and w(3,1) denotes the weight of the path from node N3 to the data source N1.
Formula (1) expresses that the other nodes, which originally had to go through node N3 to access the data at the data source N1, can now obtain the data at node N3 itself. For each of the nodes N4 to N8 this saves two hops (2 hops). By the same reasoning, node N3 originally carried no update data flow, but once it acts as a cache node an update data flow is added, and the added flow can be expressed by the following formula:
UDF(3) = f_U × s_d × w(1,3)    (2)
In formula (2), f_U denotes the frequency with which the data source updates the data, s_d denotes the size of the data itself, and w(1,3) denotes the weight of the path from the data source N1 to node N3.
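As an illustration, the savings and costs of making N3 a cache node in the Fig. 2 example can be computed directly from formulas (1) and (2). The numeric access frequencies, sizes and path weights below are made-up assumptions for the sketch, not values from the patent; since access and reply share the same path here, the (s_r + s_d) factor in formula (1) is read as already folding the reply flow into the access flow.

```python
# Sketch of the Fig. 2 hedging-data-flow calculation.
# All numeric parameters are illustrative assumptions.

def delta_adf(freqs, s_r, s_d, weight):
    # Formula (1): saved access (and reply) flow when the candidate
    # becomes a cache: sum of the served nodes' access frequencies,
    # times the request-plus-data size, times the saved path weight.
    return sum(freqs) * (s_r + s_d) * weight

def udf(f_u, s_d, weight):
    # Formula (2): update flow added because the data source must now
    # push each new version of the data to the cache node.
    return f_u * s_d * weight

# Hypothetical parameters for candidate cache node N3:
f_a = [2.0, 1.0, 1.5, 0.5, 1.0]  # access frequencies of N4..N8
s_r, s_d = 0.1, 1.0              # request size, data size
w_31 = 2.0                       # path weight between N3 and source N1
f_u = 0.5                        # update frequency at the data source

saved = delta_adf(f_a, s_r, s_d, w_31)   # combined dADF(3) + dRDF(3)
cost = udf(f_u, s_d, w_31)               # UDF(3)
hdf_3 = saved - cost                     # HDF(3) per the relation above
print(round(hdf_3, 3))
```

Under these assumed numbers the saved flow outweighs the added update flow, so N3 would be worth considering as a cache node.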
In the present embodiment, the judgment module 104 also judges whether the node's hedging data flow exceeds a first threshold.
The selection module 106 selects the node as a candidate node when its hedging data flow exceeds the first threshold, and adds the node to the candidate node set.
In the present embodiment, the calculation module 102 also calculates the candidate node's competition coefficient in the wireless network from the candidate node's hedging data flow and the distance between the candidate node and the cache node.
In the present embodiment, the competition coefficient of node Ni is defined as follows:
CC(i) = d(i, c) × (f_a(i) + Σ_{j∈F(i)} f_a(j))    (3)
In the present embodiment, the competition coefficient of node Ni in formula (3) depends mainly on the distance from node Ni to its nearest cache node, and on the amount of data accessed by node Ni itself together with the data flow accessed through node Ni.
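A minimal sketch of formula (3) follows. F(i) is read here as the set of nodes whose data accesses pass through node Ni, which matches the surrounding description but is an assumption, as the excerpt does not define F(i) formally; all numbers are illustrative.

```python
def competition_coefficient(d_ic, f_a_i, forwarded_freqs):
    # Formula (3): CC(i) = d(i, c) * (f_a(i) + sum_{j in F(i)} f_a(j)),
    # where d(i, c) is node i's distance to its nearest cache node and
    # forwarded_freqs are the access frequencies routed through node i.
    return d_ic * (f_a_i + sum(forwarded_freqs))

# Hypothetical values: 3 hops to the nearest cache node, the node
# itself accesses at rate 2.0, and it forwards accesses at 1.0 and 0.5.
cc = competition_coefficient(d_ic=3, f_a_i=2.0, forwarded_freqs=[1.0, 0.5])
print(cc)  # 3 * 3.5 = 10.5
```

A node far from every existing cache that also relays heavy access traffic thus gets a large coefficient, which is exactly the kind of node worth promoting to a cache.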
The judgment module 104 also judges whether the competition coefficient exceeds a second threshold.
The selection module 106 also selects the candidate node as a cache node when the competition coefficient exceeds the second threshold, and adds the candidate node to the cache node set.
In the present embodiment, when the distance from a node to some other node is greater than or equal to that other node's distance to its nearest cache node, the judgment module 104 also judges whether the candidate node set is empty, and when the candidate node set is not empty it judges whether the node is the last node.
In the present embodiment, the judgment module 104 also judges whether the candidate node set is empty when the node's hedging data flow is less than or equal to the first threshold, and when the candidate node set is not empty it judges whether the node is the last node.
In the present embodiment, the judgment module 104 also judges whether the candidate node set is empty when the competition coefficient is less than or equal to the second threshold, and when the candidate node set is not empty it judges whether the node is the last node.
In the present embodiment, the continuation module 108 continues to process the next node when the current node is not the last node.
Referring to Fig. 3, Fig. 3 is a flowchart of the data caching method performed by the data cache placement system 10 shown in Fig. 1 in an embodiment of the present invention.
In the present embodiment, the method for data buffer storage is used in wireless network, and wherein wireless network comprises multiple node.
In the present embodiment, cache node set C, candidate's node set H, the nearest cache node set NC of all nodes and coefficient of competition set CC is provided with in memory 110.In the present embodiment, under initial situation, data source node belongs to cache node set C and nearest cache node set NC.
In step S200, the calculation module 102 calculates the hedging data flow of each node in the wireless network.
In the present embodiment, the hedging data flow of a node is the sum of the data flows added and removed when the node serves as a cache node.
In the present embodiment, the calculation module 102 calculates a node's hedging data flow according to the following formula:
HDF(i)=ΔADF(i)+ΔRDF(i)-UDF(i)
In step S202, the judgment module 104 judges whether the distance from a node to some other node is smaller than that other node's distance to its nearest cache node.
If it is, then in step S204 the calculation module 102 increases the node's hedging data flow.
In the present embodiment, the calculation module 102 increases the node's hedging data flow according to the following formula:
HDF(x) += (w(y, NC[y]) − w(y, x)) × (s_d + s_r) × f_a(y)
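The increment rule of step S204 reads as follows: node y saves the weight difference between its path to its current nearest cache node NC[y] and its path to candidate x, on every request and reply of total size s_d + s_r, at its access frequency f_a(y). A direct transcription, with illustrative numbers, might look like:

```python
def hdf_increment(w_y_nc, w_y_x, s_d, s_r, f_a_y):
    # Contribution of node y to HDF(x): the path-weight saving, times
    # the request-plus-data size, times y's access frequency.
    return (w_y_nc - w_y_x) * (s_d + s_r) * f_a_y

# Hypothetical: y is 4 hops from its current cache but only 1 from x.
inc = hdf_increment(w_y_nc=4.0, w_y_x=1.0, s_d=1.0, s_r=0.1, f_a_y=2.0)
print(round(inc, 3))
```

Summing this increment over every node y that is closer to x than to its current cache yields HDF(x) for the threshold test of step S206.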
In step S206, the judgment module 104 judges whether the node's hedging data flow exceeds the first threshold.
If it does, then in step S208 the selection module 106 selects the node as a candidate node and adds it to the candidate node set.
In step S210, the calculation module 102 calculates the candidate node's competition coefficient in the wireless network from the candidate node's hedging data flow and the distance between the candidate node and the cache node.
In the present embodiment, the competition coefficient of node Ni is defined as:
CC(i) = d(i, c) × (f_a(i) + Σ_{j∈F(i)} f_a(j))
In the present embodiment, the competition coefficient of node Ni depends mainly on the distance from node Ni to its nearest cache node, and on the amount of data accessed by node Ni itself together with the data flow accessed through node Ni.
In step S212, the judgment module 104 judges whether the competition coefficient exceeds the second threshold.
If it does, then in step S214 the selection module 106 selects the candidate node as a cache node and adds it to the cache node set.
If the judgment in step S202 finds that the distance from the node to the other node is greater than or equal to that other node's distance to its nearest cache node, then in step S216 the judgment module 104 judges whether the candidate node set is empty.
If the judgment in step S206 finds that the node's hedging data flow is less than or equal to the first threshold, then in step S216 the judgment module 104 judges whether the candidate node set is empty.
If the candidate node set is not empty, then in step S218 the judgment module 104 judges whether the node is the last node.
If the judgment in step S212 finds that the competition coefficient is less than or equal to the second threshold, then in step S216 the judgment module 104 judges whether the candidate node set is empty.
If the candidate node set is not empty, then in step S218 the judgment module 104 judges whether the node is the last node.
If the judgment in step S218 finds that the node is not the last node, the continuation module 108 continues to process the next node.
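Steps S200 through S218 can be collected into a single selection loop. The sketch below makes several simplifying assumptions not stated in the patent: path weights are symmetric hop counts, there is a single data item, the incremental rule of step S204 is used as the hedging data flow (so the update-flow term is omitted, as it is in that formula), and F(i) in the competition coefficient is taken to be the nodes the candidate would serve. All names, including select_caches itself, are illustrative.

```python
def select_caches(nodes, dist, f_a, s_d, s_r, source,
                  first_threshold, second_threshold):
    # dist[x][y]: symmetric path weight; f_a[y]: access frequency of y.
    # Returns the cache node set C; the data source is cached initially.
    C = {source}                      # cache node set
    H = set()                         # candidate node set
    nc = {y: source for y in nodes}   # nearest cache node of each node

    for x in nodes:                   # S216/S218: walk until the last node
        if x in C:
            continue
        # S202/S204: accumulate x's hedging data flow over every node y
        # that would be closer to x than to its current nearest cache.
        served = [y for y in nodes if dist[y][x] < dist[y][nc[y]]]
        hdf = sum((dist[y][nc[y]] - dist[y][x]) * (s_d + s_r) * f_a[y]
                  for y in served)
        # S206/S208: candidacy test against the first threshold.
        if hdf <= first_threshold:
            continue
        H.add(x)
        # S210-S214: competition coefficient against the second threshold.
        cc = dist[x][nc[x]] * (f_a[x] + sum(f_a[y] for y in served
                                            if y != x))
        if cc > second_threshold:
            C.add(x)
            for y in served:          # update nearest-cache bookkeeping
                nc[y] = x
    return C

# Five-node line topology, N1 is the data source, hop counts as weights.
nodes = [1, 2, 3, 4, 5]
dist = {x: {y: abs(x - y) for y in nodes} for x in nodes}
f_a = {y: 1.0 for y in nodes}
C = select_caches(nodes, dist, f_a, s_d=1.0, s_r=0.0, source=1,
                  first_threshold=2.0, second_threshold=2.0)
print(sorted(C))  # [1, 2, 3]
```

On this assumed topology the loop caches at N2 and N3 in addition to the source: the far end of the line generates enough saved flow to clear both thresholds, while N4 and N5 do not.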
The data cache placement system 10 and the data caching method of embodiments of the present invention decide whether a node is a candidate node by judging whether the node's hedging data flow exceeds a first threshold, then calculate the candidate node's competition coefficient and decide whether the node becomes a cache node by judging whether the competition coefficient exceeds a second threshold, thereby forming the cache node set. This reduces the amount of data flow analysis and the access delay to cached data, while reducing the overhead of data access in the network.
Although the present invention has been described with reference to the presently preferred embodiments, those skilled in the art will understand that the above embodiments are only illustrative and are not intended to limit the scope of protection of the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope.

Claims (10)

1. A data cache placement system for use in a wireless network, wherein the wireless network comprises a plurality of nodes, characterized in that the cache placement system comprises:
a calculation module for calculating the hedging data flow of each node in the wireless network, wherein the hedging data flow of a node is the sum of the data flows added and removed when the node serves as a cache node;
a judgment module for judging whether the distance from a node to some other node is smaller than that other node's distance to its nearest cache node, wherein the calculation module is further configured to increase the node's hedging data flow when the distance from the node to the other node is smaller than that other node's distance to its nearest cache node, and the judgment module is further configured to judge whether the node's hedging data flow exceeds a first threshold;
a selection module for selecting the node as a candidate node when the node's hedging data flow exceeds the first threshold and adding the node to a candidate node set, wherein the calculation module is further configured to calculate the candidate node's competition coefficient in the wireless network from the candidate node's hedging data flow and the distance between the candidate node and the cache node, the judgment module is further configured to judge whether the competition coefficient exceeds a second threshold, and the selection module is further configured to select the candidate node as a cache node and add it to a cache node set when the competition coefficient exceeds the second threshold.
2. The data cache placement system as claimed in claim 1, characterized in that, when the distance from a node to some other node is greater than or equal to that other node's distance to its nearest cache node, the judgment module further judges whether the candidate node set is empty, and when the candidate node set is not empty it judges whether the node is the last node.
3. The data cache placement system as claimed in claim 1, characterized in that the judgment module further judges whether the candidate node set is empty when the node's hedging data flow is less than or equal to the first threshold, and when the candidate node set is not empty it judges whether the node is the last node.
4. The data cache placement system as claimed in claim 1, characterized in that the judgment module further judges whether the candidate node set is empty when the competition coefficient is less than or equal to the second threshold, and when the candidate node set is not empty it judges whether the node is the last node.
5. The data cache placement system as claimed in any one of claims 2 to 4, characterized by further comprising a continuation module for continuing to process the next node when the node is not the last node.
6. a method for data buffer storage, in wireless network, wherein wireless network comprises multiple node, it is characterized in that, said method comprising the steps of:
Calculate the data flow that liquidates of each node in described wireless network, wherein, described in the data flow that liquidates be node as increasing during cache node and the data flow sum reduced;
Judge whether a node is less than the distance of a certain node to nearest cache node to the distance of a certain node;
If a node is less than the distance of a certain node to nearest cache node to the distance of a certain node, then increase the data flow that liquidates of this node;
Judge whether the data flow that liquidates of this node is greater than first threshold;
If the data flow that liquidates of this node is greater than first threshold, then elect this node as candidate node, and by this node join to candidate's node set;
Liquidate data flow and the distance between this candidate's node and cache node according to this candidate's node calculate this candidate's node coefficient of competition in the wireless network;
Judge whether described coefficient of competition is greater than Second Threshold;
If described coefficient of competition is greater than Second Threshold, then using this candidate's node as cache node, and by this candidate's node join to cache node set.
7. the method for data buffer storage as claimed in claim 6, is characterized in that, further comprising the steps of:
If a node is more than or equal to the distance of a certain node to nearest cache node to the distance of a certain node, then judge whether described candidate's nodal set is empty;
If described candidate's nodal set is not empty, then judge whether this node is last node.
8. the method for data buffer storage as claimed in claim 6, is characterized in that, further comprising the steps of:
If the data flow that liquidates of this node is less than or equal to first threshold, then judge whether described candidate's nodal set is empty;
If described candidate's nodal set is not empty, then judge whether this node is last node.
9. the method for data buffer storage as claimed in claim 6, is characterized in that, further comprising the steps of:
If described coefficient of competition is less than or equal to Second Threshold, then judge whether described candidate's nodal set is empty;
If described candidate's nodal set is not empty, then judge whether this node is last node.
10. the method for the data buffer storage as described in claim 7 or 8 or 9, is characterized in that, further comprising the steps of:
If described node is not last node, then continue to process next node.
CN201210562480.1A 2012-12-21 2012-12-21 Data cache placement system and data caching method Active CN103052114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210562480.1A CN103052114B (en) 2012-12-21 2012-12-21 Data cache placement system and data caching method


Publications (2)

Publication Number Publication Date
CN103052114A CN103052114A (en) 2013-04-17
CN103052114B true CN103052114B (en) 2015-04-22

Family

ID=48064584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210562480.1A Active CN103052114B (en) 2012-12-21 2012-12-21 Data cache placement system and data caching method

Country Status (1)

Country Link
CN (1) CN103052114B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105743975B (en) * 2016-01-28 2019-03-05 深圳先进技术研究院 Caching laying method and system based on data access distribution
CN108900618A (en) * 2018-07-04 2018-11-27 重庆邮电大学 Content buffering method in a kind of information centre's network virtualization
CN109803245B (en) * 2019-03-12 2022-01-28 南京邮电大学 Cache node selection method based on D2D communication
CN110505277B (en) * 2019-07-18 2022-04-26 北京奇艺世纪科技有限公司 Data caching method and device and client
CN117220902A (en) * 2023-07-24 2023-12-12 达州市斑马工业设计有限公司 Data attack processing method and server applied to intelligent cloud

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131612B1 (en) * 2005-05-09 2012-03-06 Genesis Financial Products, Inc. Program generator for hedging the guaranteed benefits of a set of variable annuity contracts
CN102497646A (en) * 2011-12-08 2012-06-13 中山大学 Low-overhead cache data discovery mechanism used for wireless network
CN102571913A (en) * 2010-12-08 2012-07-11 中国科学院声学研究所 Network-transmission-overhead-based data migration method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131612B1 (en) * 2005-05-09 2012-03-06 Genesis Financial Products, Inc. Program generator for hedging the guaranteed benefits of a set of variable annuity contracts
CN102571913A (en) * 2010-12-08 2012-07-11 中国科学院声学研究所 Network-transmission-overhead-based data migration method
CN102497646A (en) * 2011-12-08 2012-06-13 中山大学 Low-overhead cache data discovery mechanism used for wireless network

Also Published As

Publication number Publication date
CN103052114A (en) 2013-04-17

Similar Documents

Publication Publication Date Title
CN103052114B (en) Data cache placement system and data caching method
CN101977226B (en) Novel opportunity network data transmission method
Farsi et al. A congestion-aware clustering and routing (CCR) protocol for mitigating congestion in WSN
Peiravi et al. An optimal energy‐efficient clustering method in wireless sensor networks using multi‐objective genetic algorithm
CN112104558B (en) Method, system, terminal and medium for implementing block chain distribution network
CN111538570B (en) Energy-saving and QoS guarantee-oriented VNF deployment method and device
CN105743980A (en) Constructing method of self-organized cloud resource sharing distributed peer-to-peer network model
CN102749084A (en) Path selecting method oriented to massive traffic information
Shanmugam et al. An energy‐efficient clustering and cross‐layer‐based opportunistic routing protocol (CORP) for wireless sensor network
CN113382059B (en) Collaborative caching method based on federal reinforcement learning in fog wireless access network
Jiang et al. Cooperative caching in fog radio access networks: A graph‐based approach
CN111526208A (en) High-concurrency cloud platform file transmission optimization method based on micro-service
CN110225493A (en) Based on D2D route selection method, system, equipment and the medium for improving ant colony
CN102982395A (en) Rapid bus transfer method based on space node clustering method
CN109327340B (en) Mobile wireless network virtual network mapping method based on dynamic migration
Dong et al. A survey on the network models applied in the industrial network optimization
CN110765319B (en) Method for improving Janusgraph path exploration performance
CN116489668A (en) Edge computing task unloading method based on high-altitude communication platform assistance
CN112116081B (en) Optimization method and device for deep learning network
Jeevanantham et al. Energy-aware neuro-fuzzy routing model for WSN based-IoT
CN103491128A (en) Optimal placement method for popular resource duplicates in peer-to-peer network
CN109756908B (en) Method/system for optimizing wireless network cache strategy, storage medium and equipment
Wang et al. A Routing Algorithm Based on the Prediction of Node Meeting Location in Opportunistic Networks
CN112261628A (en) Content edge cache architecture method applied to D2D equipment
Gitzenis et al. Joint transmitter power control and mobile cache management in wireless computing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200131

Address after: Room A-207, Office Building, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, No. 1068, Shenzhen University Town, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen advanced science and technology Cci Capital Ltd

Address before: No. 1068 Xueyuan Avenue, University Town, Xili, Nanshan District, Shenzhen, Guangdong 518055

Patentee before: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY CHINESE ACADEMY OF SCIENCES

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200706

Address after: 12/F, Building 5, Haiju Center, Qiantang New District, Hangzhou, Zhejiang Province

Patentee after: HANGZHOU ZHONGKE ADVANCED TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd.

Address before: Room A-207, Office Building, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, No. 1068, Shenzhen University Town, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen advanced science and technology Cci Capital Ltd.

TR01 Transfer of patent right