CN113065032A - Self-adaptive data sensing and collaborative caching method for edge network ensemble learning - Google Patents
Self-adaptive data sensing and collaborative caching method for edge network ensemble learning
- Publication number
- CN113065032A (application CN202110297787.2A)
- Authority
- CN
- China
- Prior art keywords
- data
- combinable
- counting bloom
- bloom filter
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/901—Indexing; Data structures therefor; Storage structures
- G06F16/9014—Indexing; Data structures therefor; Storage structures hash tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/901—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
An edge computing node uses the g bit arrays in a Combinable Counting Bloom Filter (CCBF) to efficiently record cached data. Adjacent nodes exchange the combinable counting bloom filters of their cached data and delete redundant data by clearing bit-array cells, then merge all the combinable counting bloom filters of the node to generate a global view CCBF_g of the cached data. According to the aggregation bit array in CCBF_g, data that neither this node nor other nodes have cached is determined and cached, ensuring that neighboring nodes cache different data. By caching different data between neighboring nodes and generating different sub-models from differentiated data, the invention can improve the performance of neural network models in applications such as robot sensor image recognition while reducing transmission overhead.
Description
Technical Field
The invention belongs to the technical field of computers, relates to edge network architectures, and particularly relates to an Adaptive Data-aware and Collaborative caching (ADC) method for edge network ensemble learning.
Background
With the rise of big data and artificial intelligence technologies, deep learning is increasingly widely applied in the field of edge computing. Neural network models are deployed at edge computing nodes, and a high-quality neural network model can be obtained through ensemble learning. However, the prior art has the following disadvantage: combining similar individual sub-models does not improve the performance of the neural network model. For example, in robot sensor image recognition, the performance of the neural network model directly determines the efficiency and effect of image recognition.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide an adaptive data sensing and collaborative caching method for edge network ensemble learning, which caches different data among neighboring nodes to ensure that different sub-models are generated from differentiated data, thereby improving the performance of neural network models in applications such as robot sensor image recognition and reducing transmission overhead.
In order to achieve the purpose, the invention adopts the technical scheme that:
an edge network ensemble learning-oriented adaptive data sensing and collaborative caching method comprises the following steps:
step 1, an edge computing node efficiently records cache data by using g bit arrays in a Combinable Counting Bloom Filter (CCBF), wherein the combinable counting bloom filter consists of g bit arrays, k hash functions, 1 pseudorandom number generator and 1 aggregation bit array (orbar);
step 2, adjacent nodes exchange the combinable counting bloom filters of their cached data, delete redundant data by setting the corresponding cells of the available bit arrays to 0, and then merge all the combinable counting bloom filters of the node to generate a global view CCBF_g of the cached data;
step 3, according to the aggregation bit array in CCBF_g, determine data that neither this node nor other nodes have cached, and cache it, thereby ensuring that different data are cached between adjacent nodes.
In step 1, the data of each edge compute node is inserted into a combinable count bloom filter.
When data is cached for the first time and inserted into the combinable counting bloom filter, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells are set to 1, completing the insertion, where a corresponding cell is the cell mapped by a hash function;
when data that has been cached and inserted into the combinable counting bloom filter needs to be deleted, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells are set to 0, completing the deletion.
The method for setting the corresponding cell to 1 is as follows: determine the next available bit array by using the pseudorandom number generator according to the number of bit arrays whose corresponding cells are already 1; after the determination, set the corresponding cells of the available bit array to 1, perform a bitwise OR operation over the g bit arrays, update the aggregation bit array, and complete the insertion operation;
the method for setting the corresponding cell to 0 is as follows: determine the deletable bit array by using the pseudorandom number generator and a counter to locate the bit array of the last add operation, namely the bit array containing the cell of each hash function operation; after the determination, set the corresponding cell to 0, perform a bitwise OR operation over the g bit arrays, update the aggregation bit array, and complete the deletion operation.
In step 2, a combinable counting bloom filter is exchanged by using a network interface.
In step 2, the node first determines whether redundant data exists in the combinable counting bloom filter requested from the neighbor node. If data to be exchanged in the combinable counting bloom filter of the edge computing node already exists in the neighbor node, i.e., redundant data exists, the duplicated data in the edge computing node is deleted by setting the corresponding bit-array cells to 0; then the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added, and if the sum does not exceed the capacity n of the combinable counting bloom filter, all combinable counting bloom filters are merged by performing pairwise bitwise OR operations on their corresponding bit arrays while updating the aggregation bit array, generating the global view CCBF_g of the cached data. If no redundant data exists, the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added directly.
The method for determining whether redundant data exists in the combinable counting bloom filter requested by the node from the neighbor node is as follows:
hash the input data to k bit-array cells using the k hash functions and query whether the corresponding aggregation bit-array cells are all 1; if they are all 1, the input data already exists, i.e., redundant data exists; otherwise, the input data does not exist, i.e., there is no redundant data.
Query whether the corresponding aggregation bit-array cells in CCBF_g are all 1 according to the hash result of the data field requested to be cached; if they are all 1, the data exists and is ignored rather than cached again; if they are not all 1, the data does not exist, so the data is cached and the corresponding entry is inserted into CCBF_g. This guarantees that differentiated data generate different sub-models and that local knowledge is learned by performing ensemble learning over the different sub-models, yielding a high-performance ensemble result.
Compared with the prior art, the method performs sub-model training for ensemble learning by fully exploiting the differences in cached data between nodes, thereby ensuring the ensemble performance of the neural network model and reducing transmission overhead. The method can be used to obtain neural network models in fields such as robot sensor image recognition, greatly improving image recognition efficiency.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
FIG. 2 is a schematic flow chart of an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
The invention relates to an adaptive data sensing and collaborative caching method for edge network ensemble learning, which uses bloom filters to efficiently sense peripheral data and collaboratively cache relevant data, providing effective support for training differentiated sub-models and thereby improving ensemble learning performance.
As shown in fig. 1, the present invention mainly comprises the following steps:
Step 1: the edge computing node uses the g bit arrays in a Combinable Counting Bloom Filter (CCBF) to efficiently record cached data. The combinable counting bloom filter is composed of g bit arrays, k hash functions, 1 pseudorandom number generator, and 1 aggregation bit array (orbar). It supports adding, querying, deleting, and merging data, and can be used to efficiently record and sense peripheral differentiated data and to collaboratively cache relevant data, so that differentiated sub-models are trained and ensemble learning performance is improved.
Specifically, the data of each edge computing node is inserted into the combinable counting bloom filter:
When data is cached for the first time and inserted into the combinable counting bloom filter, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells (the cells mapped by the hash functions) are set to 1, completing the insertion.
When data that has been cached and inserted into the combinable counting bloom filter needs to be deleted, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells are set to 0, completing the deletion.
The method for setting the corresponding cell to 1 is as follows: determine the next available bit array by using the pseudorandom number generator according to the number of bit arrays whose corresponding cells are already 1; after the determination, set the corresponding cells of the available bit array to 1, perform a bitwise OR operation over the g bit arrays, and update the aggregation bit array.
The method for setting the corresponding cell to 0 is as follows: determine the deletable bit array by using the pseudorandom number generator and a counter to locate the bit array of the last add operation (the bit array containing the cell of each hash function operation); after the determination, set the corresponding cell to 0, perform a bitwise OR operation over the g bit arrays, and update the aggregation bit array.
In conclusion, the edge computing node can efficiently record the cache data by adopting the CCBF, and avoids the repeated addition of the data.
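As a concrete illustration, the insertion and deletion operations above can be sketched in Python. This is a minimal sketch, not the patent's implementation: the k hash functions are assumed to be salted SHA-256 digests, and picking the lowest free bit array stands in for the patent's pseudorandom selection; all class and parameter names are illustrative.

```python
import hashlib

class CCBF:
    """Minimal sketch of a combinable counting bloom filter:
    g bit arrays, k hash functions, one aggregation bit array (orbar)."""

    def __init__(self, g=4, k=3, m=64):
        self.g, self.k, self.m = g, k, m
        self.arrays = [[0] * m for _ in range(g)]   # g bit arrays
        self.orbar = [0] * m                        # bitwise OR of all g arrays

    def _cells(self, item):
        # k hash functions realised by salting one digest (an assumption;
        # the patent only requires k independent hash functions)
        return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def insert(self, item):
        for c in self._cells(item):
            # count how many bit arrays already hold a 1 at this cell,
            # then set that cell in the next available bit array
            used = sum(arr[c] for arr in self.arrays)
            if used < self.g:
                self.arrays[used][c] = 1
        self._update_orbar()

    def delete(self, item):
        for c in self._cells(item):
            # clear the cell in the bit array of the last add operation
            for arr in reversed(self.arrays):
                if arr[c] == 1:
                    arr[c] = 0
                    break
        self._update_orbar()

    def query(self, item):
        # an item is (probably) present iff all k aggregated cells are 1
        return all(self.orbar[c] == 1 for c in self._cells(item))

    def _update_orbar(self):
        # refresh the aggregation bit array as the OR of the g bit arrays
        self.orbar = [1 if any(arr[j] for arr in self.arrays) else 0
                      for j in range(self.m)]
```

For example, `CCBF(g=4, k=3, m=64)` tolerates up to g overlapping insertions per cell, and `query` consults only the aggregation bit array, matching the membership test described in the text.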
Step 2: adjacent nodes exchange the combinable counting bloom filters of their cached data, delete redundant data by clearing bit-array cells, and merge all the combinable counting bloom filters of the node to generate a global view CCBF_g of the cached data.
The combinable counting bloom filters are exchanged over a network interface. To realize the exchange, the node first determines whether redundant data exists in the combinable counting bloom filter requested from the neighbor node: the input data is hashed to k bit-array cells using the k hash functions, and the corresponding aggregation bit-array cells are queried; if they are all 1, the input data already exists, i.e., redundant data exists; otherwise, the input data does not exist, i.e., there is no redundant data.
If data to be exchanged in the combinable counting bloom filter of the edge computing node already exists in the neighbor node, i.e., redundant data exists, the duplicated data in the edge computing node is deleted by setting the corresponding bit-array cells to 0. Then the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added; if the sum does not exceed the capacity n of the combinable counting bloom filter, all combinable counting bloom filters are merged by performing bitwise OR operations on their corresponding bit arrays while updating the aggregation bit array, generating the global view CCBF_g of the cached data. If no redundant data exists, the deletion step is skipped: the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added directly, and all CCBFs of the node are merged.
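The merge itself reduces to pairwise bitwise OR over corresponding bit arrays followed by a refresh of the aggregation bit array. A minimal sketch, assuming both filters share the same g and m, with illustrative function and variable names; the redundant-data deletion and the capacity check against n are assumed to have been done beforehand:

```python
def merge_ccbf(arrays_a, arrays_b):
    """Merge two CCBFs of identical shape: OR the corresponding bit arrays
    pairwise, then recompute the aggregation bit array (orbar)."""
    g, m = len(arrays_a), len(arrays_a[0])
    # pairwise bitwise OR of the i-th bit arrays of the two filters
    merged = [[a | b for a, b in zip(arrays_a[i], arrays_b[i])]
              for i in range(g)]
    # aggregation bit array: OR over the g merged bit arrays
    orbar = [1 if any(merged[i][j] for i in range(g)) else 0
             for j in range(m)]
    return merged, orbar
```

Because OR is associative, merging the filters of all neighbors in any order yields the same global view.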
Step 3: according to the aggregation bit array in CCBF_g, determine data that neither this node nor other nodes have cached, and cache it, so that different data are cached between adjacent nodes.
Specifically, whether the corresponding aggregation bit-array cells in CCBF_g are all 1 is queried according to the hash result of the data field requested to be cached. If they are all 1, the data exists, so it is ignored and not cached; if they are not all 1, the data does not exist, so it is cached and the corresponding entry is inserted into CCBF_g, with the insertion operation the same as in step 1. This guarantees that differentiated data generate different sub-models, and local knowledge is learned by performing ensemble learning over the different sub-models, yielding a high-performance ensemble result.
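The caching decision of step 3 can be sketched as a query against the global aggregation bit array; the salted-hash scheme and all names here are illustrative assumptions, not the patent's exact construction:

```python
import hashlib

def cells(item, k, m):
    # k salted hash positions for the requested data field (illustrative)
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % m
            for i in range(k)]

def should_cache(item, orbar, k=3):
    """Cache only when the item's cells in the global aggregation bit array
    of CCBF_g are not all 1, i.e. neither this node nor a neighbor holds it."""
    return not all(orbar[c] == 1 for c in cells(item, k, len(orbar)))
```

A node would call `should_cache` for each incoming data item, cache and insert the item into its CCBF when it returns True, and skip the item otherwise.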
The implementation of the above method is illustrated below by a specific example. Embodiments include a neural network model and a data set:
a neural network model:
Visual Geometry Group (VGG) model. VGG is a deep convolutional neural network for computer vision. In this embodiment, 5 convolution blocks are designed; each block consists of 2-4 convolutional layers, and the convolutional layers of successive blocks contain 64, 128, 256, and 512 convolution kernels, respectively.
Data set:
Cover type data set (D1): the D1 data set consists of forest cover data from the Resource Information System (RIS) area of the United States Forest Service and includes six main tree species; 581,012 data items for these tree species are distributed across four wilderness areas.
As shown in FIG. 2, A and B are edge computing nodes used to efficiently record cached data and train neural networks; Ci is the cloud, used to periodically exchange and aggregate compressed records; C is the data center used for ensemble learning.
First, edge computing node terminal device B collects the D1 data set. When data in D1 is cached for the first time and inserted into the CCBF, the D1 data is hashed to k bit-array cells using the k hash functions, and the corresponding cells are set to 1. To set a corresponding cell to 1, the available bit array must be determined: the pseudorandom number generator determines the next bit array to use according to the number of bit arrays whose corresponding cell (the cell mapped by the hash function) is already 1. After the determination, the corresponding cell is set to 1, a bitwise OR operation is performed over the g bit arrays, and the orbar is updated, completing the insertion, i.e., the D1 data cached by the edge computing node is inserted into the CCBF.
When data in D1 has been cached by terminal device B and inserted into the CCBF but needs to be deleted, the D1 data is hashed to k bit-array cells using the k hash functions and the corresponding cells are set to 0, completing the deletion. To set a corresponding cell to 0, the deletable bit array must be determined: the pseudorandom number generator and a counter locate the last add operation (the bit array containing the cell of each hash function operation). After the determination, the corresponding cell is set to 0, a bitwise OR operation is performed over the g bit arrays, and the orbar is updated, completing the deletion.
Next, the CCBFs in edge computing node terminal devices A and B are exchanged, and the exchanged CCBFs are checked for redundant D1 data. Redundant data would incur additional transmission overhead, so the CCBF merge should be performed only after redundant data is deleted. First, the CCBFs are exchanged over the network interface. To perform the exchange, the node first determines whether redundant D1 data exists in the CCBF requested from the neighbor node: the D1 data is hashed to k bit-array cells with the k hash functions, and the corresponding orbar cells are queried; if they are all 1, the D1 data already exists; otherwise, it does not. If the data already exists, the redundant data must be deleted by setting the corresponding cells to 0: the deletable bit array is determined with the pseudorandom number generator and counter by locating the last add operation (the bit array containing the cell of each hash function operation), the corresponding cell is set to 0, a bitwise OR operation is performed over the g bit arrays, and the orbar is updated. After the redundant D1 data is deleted, all CCBFs of the node are merged at edge computing node terminal device A: a bitwise OR operation is performed on the bit arrays corresponding to the two CCBFs while the orbar is updated, generating the global view of the CCBF. If the data does not exist, the redundant-data deletion step is skipped and all CCBFs of the node are merged directly in the same way.
In this embodiment, deleting redundant data before uploading to the neural network reduces transmission overhead by 2/3 compared with traditional network-center caching.
Finally, when the merged data still require additional cached data to support neural network training convergence, D1 data not cached by this node or other nodes are determined according to the global view and cached: if the corresponding orbar cells are all 1, the data exists and is ignored rather than cached; if they are not all 1, the data does not exist, so it is cached in edge computing node terminal device A and the corresponding D1 data is inserted into the global view of the CCBF, with the insertion operation the same as in step 1. The merged data and the cached data are then uploaded to the neural network for training, and ensemble learning is performed in data center C. In this embodiment, the training accuracy is essentially consistent with the central caching method (0.848 vs. 0.847). The invention thus further reduces transmission overhead while maintaining accuracy.
In conclusion, according to the technical scheme of the invention, different data are cached between neighboring nodes and different sub-models are generated from differentiated data; this improves the ensemble learning performance of the neural network models relied upon in fields such as robot sensor image recognition, while collaborative caching reduces communication overhead.
Claims (9)
1. An edge network ensemble learning-oriented adaptive data sensing and collaborative caching method is characterized by comprising the following steps:
step 1, an edge computing node efficiently records cache data by using g bit arrays in a Combinable Counting Bloom Filter (CCBF), wherein the combinable counting bloom filter consists of g bit arrays, k hash functions, 1 pseudorandom number generator and 1 aggregation bit array (orbar);
step 2, adjacent nodes exchange the combinable counting bloom filters of their cached data, delete redundant data by clearing bit-array cells, and merge all the combinable counting bloom filters of the node to generate a global view CCBF_g of the cached data;
step 3, according to the aggregation bit array in CCBF_g, determine data that neither this node nor other nodes have cached, and cache it, thereby ensuring that different data are cached between adjacent nodes.
2. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 1, wherein in step 1, the data of each edge computing node is inserted into a combinable counting bloom filter.
3. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 1, wherein in step 1:
when data is cached for the first time and inserted into the combinable counting bloom filter, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells are set to 1, completing the insertion, where a corresponding cell is the cell mapped by a hash function;
when data that has been cached and inserted into the combinable counting bloom filter needs to be deleted, the input data is hashed to k bit arrays using the k hash functions, and the corresponding cells are set to 0, completing the deletion.
4. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 3, wherein the method for setting the corresponding cell to 1 is as follows: determine the next available bit array by using the pseudorandom number generator according to the number of bit arrays whose corresponding cells are already 1; after the determination, set the corresponding cells of the available bit array to 1, perform a bitwise OR operation over the g bit arrays, update the aggregation bit array, and complete the insertion operation;
the method for setting the corresponding cell to 0 is as follows: determine the deletable bit array by using the pseudorandom number generator and a counter to locate the bit array of the last add operation, namely the bit array containing the cell of each hash function operation; after the determination, set the corresponding cell to 0, perform a bitwise OR operation over the g bit arrays, update the aggregation bit array, and complete the deletion operation.
5. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 1, wherein in step 2, the combinable counting bloom filters are exchanged over a network interface.
6. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 1, wherein in step 2 it is first determined whether redundant data exists in the combinable counting bloom filter requested by the node from the neighbor node; if data to be exchanged in the combinable counting bloom filter of the edge computing node already exists in the neighbor node, i.e., redundant data exists, the duplicated data in the edge computing node is deleted by setting the corresponding bit-array cells to 0, and then the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added; if the sum does not exceed the capacity n of the combinable counting bloom filter, all combinable counting bloom filters are merged by performing bitwise OR operations on their corresponding bit arrays while updating the aggregation bit array, generating the global view CCBF_g of the cached data; and if no redundant data exists, the data amounts in the combinable counting bloom filters of the edge computing node and the neighbor node are added directly.
7. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 6, wherein the method for determining whether redundant data exists in the combinable counting bloom filter requested by the node from the neighbor node is as follows:
hash the input data to k bit-array cells using the k hash functions and query whether the corresponding aggregation bit-array cells are all 1; if they are all 1, the input data already exists, i.e., redundant data exists; otherwise, the input data does not exist, i.e., there is no redundant data.
8. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 1, wherein in step 3, whether the corresponding aggregation bit-array cells in CCBF_g are all 1 is queried according to the hash result of the data field requested to be cached; if they are all 1, the data exists, is ignored, and is not cached; if they are not all 1, the data does not exist, so the data is cached and the corresponding entry is inserted into CCBF_g, thereby ensuring that differentiated data generate different sub-models.
9. The adaptive data sensing and collaborative caching method for edge network ensemble learning according to claim 8, wherein a high-performance ensemble result is obtained by performing ensemble learning on the different sub-models.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110297787.2A CN113065032A (en) | 2021-03-19 | 2021-03-19 | Self-adaptive data sensing and collaborative caching method for edge network ensemble learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110297787.2A CN113065032A (en) | 2021-03-19 | 2021-03-19 | Self-adaptive data sensing and collaborative caching method for edge network ensemble learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113065032A true CN113065032A (en) | 2021-07-02 |
Family
ID=76562538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110297787.2A Pending CN113065032A (en) | 2021-03-19 | 2021-03-19 | Self-adaptive data sensing and collaborative caching method for edge network ensemble learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113065032A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030005036A1 (en) * | 2001-04-06 | 2003-01-02 | Michael Mitzenmacher | Distributed, compressed Bloom filter Web cache server |
CN106982248A (en) * | 2017-03-01 | 2017-07-25 | 中国科学院深圳先进技术研究院 | The caching method and device of a kind of content center network |
CN109167840A (en) * | 2018-10-19 | 2019-01-08 | 网宿科技股份有限公司 | A kind of task method for pushing, Site autonomy server and edge cache server |
CN110991630A (en) * | 2019-11-10 | 2020-04-10 | 天津大学 | Convolutional neural network processor for edge calculation |
US20200412635A1 (en) * | 2019-06-27 | 2020-12-31 | Intel Corporation | Routing updates in icn based networks |
Non-Patent Citations (4)
Title |
---|
QIN Y et al.: "Adaptive In-network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge", arXiv preprint arXiv:2010.12899 * |
乐光学 et al.: "Trusted collaborative service strategy modeling for edge computing" (in Chinese), Journal of Computer Research and Development * |
许志伟 et al.: "An aggregation mechanism for hierarchical name-based routing" (in Chinese), Journal of Software * |
黄胜 et al.: "A popularity-based neighbor cooperative caching strategy in CCN" (in Chinese), Journal of Chinese Computer Systems * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI735545B (en) | Model training method and device | |
US11093466B2 (en) | Incremental out-of-place updates for index structures | |
US20180088996A1 (en) | Systems and Methods of Memory Allocation for Neural Networks | |
CN110188080B (en) | Remote file data access performance optimization method based on client-side efficient cache | |
CN103488709B (en) | A kind of index establishing method and system, search method and system | |
WO2019024780A1 (en) | Light-weight processing method for blockchain, and blockchain node and storage medium | |
KR20120027132A (en) | Differential file and system restores from peers and the cloud | |
CN110998660A (en) | Method, system and apparatus for optimizing pipeline execution | |
CN104820708A (en) | Cloud computing platform based big data clustering method and device | |
CN110287201A (en) | Data access method, device, equipment and storage medium | |
CN107368608A (en) | The HDFS small documents buffer memory management methods of algorithm are replaced based on ARC | |
CN104809244A (en) | Data mining method and device in big data environment | |
CN115358487A (en) | Federal learning aggregation optimization system and method for power data sharing | |
CN110018997B (en) | Mass small file storage optimization method based on HDFS | |
CN109597903A (en) | Image file processing apparatus and method, document storage system and storage medium | |
CN112966807A (en) | Convolutional neural network implementation method based on storage resource limited FPGA | |
CN112200310B (en) | Intelligent processor, data processing method and storage medium | |
CN115544029A (en) | Data processing method and related device | |
CN113065032A (en) | Self-adaptive data sensing and collaborative caching method for edge network ensemble learning | |
JP2018511131A (en) | Hierarchical cost-based caching for online media | |
US11899625B2 (en) | Systems and methods for replication time estimation in a data deduplication system | |
EP3207457B1 (en) | Hierarchical caching for online media | |
CN114399124A (en) | Path data processing method, path planning method, path data processing device, path planning device and computer equipment | |
CN113392280A (en) | Cross-region-oriented multi-master model distributed graph calculation method | |
CN111652346A (en) | Large-scale map deep learning calculation framework based on hierarchical optimization paradigm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210702 |