US11943114B1 - Active edge caching method based on community discovery and weighted federated learning - Google Patents

Info

Publication number
US11943114B1
Authority
US
United States
Prior art keywords
user
content
caching
users
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/327,574
Other languages
English (en)
Inventor
Haixia Zhang
Dongyang Li
Dongfeng Yuan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Assigned to SHANDONG UNIVERSITY reassignment SHANDONG UNIVERSITY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, DONGYANG, YUAN, Dongfeng, ZHANG, HAIXIA
Application granted granted Critical
Publication of US11943114B1 publication Critical patent/US11943114B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/147Network analysis or design for predicting network behaviour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/16Threshold monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/10Connection setup
    • H04W76/14Direct-mode setup
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852Delays
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present disclosure belongs to the technical fields of wireless communication and artificial intelligence, and specifically relates to an active edge caching method based on community discovery and weighted federated learning, which may be used for intelligent management and planning of caching resources in device-to-device (D2D) assisted wireless communication networks.
  • D2D device-to-device
  • edge caching may store hot content that users are interested in at the edge of a network in advance, thereby reducing load pressure of a communication link and greatly reducing transmission latency of content.
  • existing edge caching schemes may be divided into base station caching and user caching.
  • the user caching may store hot content in a terminal closer to a user, and the content may be transmitted through direct D2D communication, thereby further reducing the transmission latency of the content. Therefore, user caching is regarded by industry and academia as one of the important technical means to meet the low-latency requirements of services.
  • the present disclosure provides an active edge caching method based on community discovery and weighted federated learning, which is used for selecting a best caching user and developing an optimal user caching strategy, so as to achieve an optimal compromise between the operation cost and the transmission latency of the content.
  • the present disclosure first provides a user grouping method based on community discovery, in which users are divided into different user groups according to users' mobility and social attributes, then degrees of importance of different users are computed in each user group, and the most important user is selected as a caching node to provide content distribution services.
  • the present disclosure provides a content popularity prediction framework based on attention weighted federated learning, which combines an attention weighted federated learning mechanism with a deep learning (DL) model to predict future user preferences for different content. This framework not only improves accuracy of content popularity prediction, but also solves problems of user privacy disclosure.
  • an optimal caching strategy is developed based on caching user selection and content popularity prediction to reduce network transmission latency and network operation cost.
  • the present disclosure provides an active edge caching method based on community discovery and weighted federated learning.
  • Users are aggregated into different user groups in a service scope of a base station by using a community discovery algorithm, and a most important user is selected from each user group as a caching node to provide content distribution services.
  • a content popularity prediction framework based on attention weighted federated learning is designed to train the DL model. Then, users' content preferences at the next moment are predicted by using the trained DL model to cache hot content on a selected user.
  • the present disclosure caches the hottest content on the optimally selected users, which can greatly reduce network transmission latency and network operation cost.
  • An active edge caching method based on community discovery and weighted federated learning includes:
  • the aggregating users into different user groups in a service scope of a base station by using a community discovery algorithm includes:
  • dividing users into different user groups by using a Louvain community discovery algorithm includes:
  • the selecting a most important user from each user group as a caching node to provide content distribution services includes:
  • the training of a content popularity deep learning prediction model, namely a DL model, with an attention weighted federated learning framework includes:
  • Θ_r^u represents the quantity of requests for different content by the selected terminal u between time windows [r_1, r_2];
  • q_{r+1}^u represents the computing capability of the selected user terminal u in the (r+1)-th federated training process,
  • e_{r+1}^u represents the number of local training iterations that the selected terminal u can perform with its computing capability in the (r+1)-th federated process, log(·) is the logarithmic operation, and maxfeg is the maximum number of local training iterations;
  • the caching user selected in step (4) uses the obtained content popularity deep learning prediction model to predict user preferences for different content at the next moment to cache hot content, including:
  • Σ_{f=1}^{F} Ŷ_{r+1}^f represents the sum of user preferences for all content
  • the content popularity deep learning prediction model is a bidirectional long short-term memory network model.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the active edge caching method based on community discovery and weighted federated learning when executing the computer program.
  • a computer-readable storage medium stores a computer program, and the computer program implements the steps of the active edge caching method based on community discovery and weighted federated learning when executed by a processor.
  • a weighted federated learning framework is used for content popularity prediction to effectively solve problems of user privacy disclosure.
  • the present disclosure may be applied to intelligent management and planning of storage resources in communication scenarios of cellular networks, Internet of vehicles, industrial Internet, and the like to meet low latency communication requirements of various novel service applications in different vertical fields.
  • FIG. 1 is a block diagram of an operating system for an active edge caching method according to the present disclosure
  • FIG. 2 is a schematic flowchart of constructing a D2D content sharing graph according to the present disclosure
  • FIG. 3 is a schematic flowchart of grouping users using a Louvain community discovery algorithm according to the present disclosure
  • FIG. 4 is a schematic diagram of a content popularity prediction model trained based on a weighted federated learning framework according to the present disclosure
  • FIG. 5 is a schematic flowchart of weighted aggregation of different local models at a base station according to the present disclosure
  • FIG. 6 is a block diagram of a BiLSTM-based content popularity deep learning prediction model used in the present disclosure
  • FIG. 7 is a performance analysis diagram of a content popularity deep learning prediction model based on a weighted federated learning framework according to the present disclosure
  • FIG. 8 A is a latency performance analysis diagram of an active edge caching method based on community discovery and weighted federated learning according to the present disclosure under different caching capabilities
  • FIG. 8 B is an analysis diagram of system benefit per unit cost of an active edge caching method based on community discovery and weighted federated learning according to the present disclosure under different caching capabilities.
  • An active edge caching method based on community discovery and weighted federated learning includes:
  • the present disclosure provides an active edge caching method to reduce network transmission latency and network operation cost.
  • the aggregating users into different user groups in a service scope of a base station by using a community discovery algorithm includes:
  • the communication distance threshold is generally determined by the transmitting power of a user terminal, and higher transmitting power allows a longer transmission distance.
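As an illustrative sketch of the D2D content sharing graph construction shown in FIG. 2 (not the patented formulation itself), two terminals can be connected whenever their distance falls within the communication distance threshold. The function and variable names here (`build_d2d_graph`, `d_max`) are hypothetical:

```python
from itertools import combinations
from math import hypot

def build_d2d_graph(positions, d_max):
    """Build an undirected D2D content sharing graph: two user terminals
    are connected iff their Euclidean distance is within d_max, the
    communication distance threshold implied by transmitting power.

    positions: dict user_id -> (x, y) coordinates.
    Returns an adjacency dict user_id -> set of neighbour ids.
    """
    adj = {u: set() for u in positions}
    for u, v in combinations(positions, 2):
        (xu, yu), (xv, yv) = positions[u], positions[v]
        if hypot(xu - xv, yu - yv) <= d_max:
            adj[u].add(v)
            adj[v].add(u)
    return adj
```

In a fuller model the edge set could also be weighted by social attributes, as the disclosure combines mobility and social ties.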
  • the dividing the users into different user groups by using a Louvain community discovery algorithm includes:
  • repeating step B until the communities of all nodes no longer change;
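The repeated local-moving step above can be sketched as the first phase of the standard Louvain algorithm: each node is greedily moved to the neighbouring community with the largest modularity gain until no node changes community. This is a minimal unweighted sketch, not the exact patented procedure:

```python
def louvain_local_moving(adj, max_sweeps=100):
    """Louvain phase 1 (local moving): repeatedly move each node to the
    neighbouring community with the largest modularity gain, stopping
    when a full sweep changes no community assignment.

    adj: dict node -> set of neighbours (undirected, unweighted).
    Returns dict node -> community label.
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    if m == 0:
        return {n: n for n in adj}
    deg = {n: len(adj[n]) for n in adj}
    community = {n: n for n in adj}  # start with one community per node

    def gain(n, c):
        # Modularity gain of placing (currently removed) node n into c.
        k_in = sum(1 for v in adj[n] if community[v] == c)
        sigma_tot = sum(deg[v] for v in adj if community[v] == c)
        return k_in / m - sigma_tot * deg[n] / (2 * m * m)

    for _ in range(max_sweeps):
        moved = False
        for n in adj:
            cur = community[n]
            community[n] = None  # temporarily remove n from its community
            candidates = {community[v] for v in adj[n]
                          if community[v] is not None}
            candidates.add(cur)
            best = max(candidates, key=lambda c: gain(n, c))
            community[n] = best
            if best != cur:
                moved = True
        if not moved:  # step B converged: no community changed
            break
    return community
```

On a graph of two triangles joined by a single edge, this pass recovers the two triangles as separate user groups.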
  • the selecting a most important user from each user group as a caching node to provide content distribution services includes:
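One simple way to realise the selection above is to score each group member by its in-group degree centrality (the number of D2D neighbours inside its own group) and pick the maximum; degree centrality is an assumed proxy here, as the disclosure does not commit to a specific importance measure in this passage:

```python
def select_caching_users(adj, communities):
    """For each user group, select the member with the most D2D
    neighbours inside its own group as the caching node.

    adj: dict node -> set of neighbours; communities: dict node -> group.
    Returns dict group_label -> selected caching user.
    """
    groups = {}
    for node, c in communities.items():
        groups.setdefault(c, []).append(node)
    caching = {}
    for c, members in groups.items():
        in_group = set(members)
        # Importance = in-group degree centrality (illustrative choice).
        caching[c] = max(
            members,
            key=lambda n: sum(1 for v in adj[n] if v in in_group),
        )
    return caching
```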
  • the training of a content popularity deep learning prediction model, namely a DL model, with an attention weighted federated learning framework includes:
  • Θ_r^u represents the quantity of requests for different content by the selected terminal u between time windows [r_1, r_2];
  • q_{r+1}^u represents the computing capability of the selected user terminal u in the (r+1)-th federated training process,
  • e_{r+1}^u represents the number of local training iterations that the selected terminal u can perform with its computing capability in the (r+1)-th federated process, log(·) is the logarithmic operation, and maxfeg is the maximum number of local training iterations;
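The attention-weighted aggregation at the base station (FIG. 5) can be sketched as a softmax-weighted average of local model parameters, where each terminal's attention score may reflect its request volume and computing capability; the scoring rule and names (`attention_weighted_aggregate`, `scores`) are illustrative assumptions, not the patented formula:

```python
from math import exp

def attention_weighted_aggregate(local_models, scores):
    """Aggregate local model parameter vectors into a global model
    using attention weights alpha_u = softmax(score_u).

    local_models: list of equal-length parameter lists, one per terminal.
    scores: one attention score per terminal (e.g. derived from request
    quantity and computing capability).
    """
    z = sum(exp(s) for s in scores)
    alphas = [exp(s) / z for s in scores]  # softmax attention weights
    dim = len(local_models[0])
    # Weighted element-wise average of the local parameter vectors.
    return [
        sum(a * w[i] for a, w in zip(alphas, local_models))
        for i in range(dim)
    ]
```

With equal scores this reduces to plain FedAvg; unequal scores bias the global model toward more informative or more capable terminals.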
  • the caching user selected in step (4) uses the obtained content popularity deep learning prediction model to predict user preferences for different content at the next moment to cache hot content, including:
  • Σ_{f=1}^{F} Ŷ_{r+1}^f represents the sum of user preferences for all content
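Given the predicted preferences Ŷ_{r+1}^f, the caching decision itself reduces to normalising by the sum over all F contents and filling the selected user's cache with the top-ranked items; this sketch assumes a simple capacity-limited cache (`cache_hot_content` and `capacity` are hypothetical names):

```python
def cache_hot_content(pred_pref, capacity):
    """Rank contents by normalised predicted preference and return the
    top-`capacity` content ids to cache on the selected user.

    pred_pref: dict content_id -> predicted preference for round r+1.
    """
    total = sum(pred_pref.values())  # the sum over all F contents
    popularity = {f: y / total for f, y in pred_pref.items()}
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    return ranked[:capacity]
```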
  • the content popularity deep learning prediction model used in the present disclosure is a bidirectional long short-term memory (BiLSTM) network model, with a structure shown in FIG. 6 .
  • the prediction model is not limited to the use of a bidirectional long short-term memory network, but may be a deep learning network model such as a convolutional neural network model or a graph neural network model.
  • FIG. 7 is a performance analysis diagram of the content popularity deep learning prediction model based on a weighted federated learning framework in this embodiment, where horizontal coordinates represent indexes of different request content, and vertical coordinates represent the number of times the user has requested different content.
  • AWFL is a predicted value of the content popularity model based on a weighted federated learning framework
  • Group True is the true value. It may be seen that the AWFL method of the present disclosure can accurately predict users' future requests for different content.
  • the combination of weighted federated learning and a bidirectional long short-term memory network, provided in this embodiment, can well fit users' preferences for different content.
  • FIG. 8 A is a latency performance analysis diagram of the active edge caching method based on community discovery and weighted federated learning in this embodiment under different caching capabilities, where horizontal coordinates represent quantities of content that may be cached by different user terminals, vertical coordinates represent content downloading latency, and CAFLPC is the active edge caching method based on community discovery and weighted federated learning provided in the present disclosure.
  • FIG. 8 A can demonstrate that the provided CAFLPC method can well reduce content downloading latency and obtain approximately optimal policy performance under different caching capabilities compared with other methods.
  • FIG. 8 B is an analysis diagram of system benefit per unit cost of the active edge caching method based on community discovery and weighted federated learning in this embodiment under different caching capabilities. Horizontal coordinates represent quantities of content that may be cached by different user terminals, and vertical coordinates represent system benefit per unit cost. FIG. 8 B can prove that the provided CAFLPC method can reduce more content downloading latency per unit cost compared with other methods, that is, the provided method can achieve goals of reducing network transmission latency and network operation cost.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the active edge caching method based on community discovery and weighted federated learning when executing the computer program.
  • a computer-readable storage medium stores a computer program, and the computer program implements the steps of the active edge caching method based on community discovery and weighted federated learning when executed by a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)
US18/327,574 2022-10-25 2023-06-01 Active edge caching method based on community discovery and weighted federated learning Active US11943114B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211310238.5A CN115696296B (zh) 2022-10-25 2022-10-25 一种基于社区发现和加权联邦学习的主动边缘缓存方法
CN202211310238.5 2022-10-25

Publications (1)

Publication Number Publication Date
US11943114B1 true US11943114B1 (en) 2024-03-26

Family

ID=85099619

Family Applications (1)

Application Number Title Priority Date Filing Date
US18/327,574 Active US11943114B1 (en) 2022-10-25 2023-06-01 Active edge caching method based on community discovery and weighted federated learning

Country Status (2)

Country Link
US (1) US11943114B1 (zh)
CN (1) CN115696296B (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150026289A1 (en) * 2013-07-19 2015-01-22 Opanga Networks, Inc. Content source discovery
US20160294971A1 (en) * 2015-03-30 2016-10-06 Huawei Technologies Co., Ltd. Distributed Content Discovery for In-Network Caching
CN111865826A (zh) 2020-07-02 2020-10-30 大连理工大学 一种基于联邦学习的主动内容缓存方法
US20210144202A1 (en) * 2020-11-13 2021-05-13 Christian Maciocco Extended peer-to-peer (p2p) with edge networking
CN113315978A (zh) 2021-05-13 2021-08-27 江南大学 一种基于联邦学习的协作式在线视频边缘缓存方法
CN114205791A (zh) 2021-12-13 2022-03-18 西安电子科技大学 一种基于深度q学习的社交感知d2d协同缓存方法
CN114595632A (zh) 2022-03-07 2022-06-07 北京工业大学 一种基于联邦学习的移动边缘缓存优化方法

Also Published As

Publication number Publication date
CN115696296A (zh) 2023-02-03
CN115696296B (zh) 2023-07-07

Similar Documents

Publication Publication Date Title
Lin et al. Resource management for pervasive-edge-computing-assisted wireless VR streaming in industrial Internet of Things
Khan et al. Self organizing federated learning over wireless networks: A socially aware clustering approach
Yang et al. Learning automata based Q-learning for content placement in cooperative caching
CN104995870B (zh) 多目标服务器布局确定方法和装置
Singh et al. [Retracted] Energy‐Efficient Clustering and Routing Algorithm Using Hybrid Fuzzy with Grey Wolf Optimization in Wireless Sensor Networks
Wu et al. A reputation value‐based task‐sharing strategy in opportunistic complex social networks
Li et al. Learning-based delay-aware caching in wireless D2D caching networks
Li et al. An optimized content caching strategy for video stream in edge-cloud environment
Bai et al. A deep-reinforcement-learning-based social-aware cooperative caching scheme in D2D communication networks
Zhang et al. Two time-scale caching placement and user association in dynamic cellular networks
Li et al. User-preference-learning-based proactive edge caching for D2D-assisted wireless networks
Li et al. Learning-based hierarchical edge caching for cloud-aided heterogeneous networks
Qian et al. Many-to-many matching for social-aware minimized redundancy caching in D2D-enabled cellular networks
Jiang et al. Federated learning-based content popularity prediction in fog radio access networks
Wu et al. Social-aware graph-based collaborative caching in edge-user networks
Li et al. Influence maximization for emergency information diffusion in social internet of vehicles
Chang et al. Cooperative edge caching via multi agent reinforcement learning in fog radio access networks
Ren et al. Incentivized social-aware proactive device caching with user preference prediction
US11943114B1 (en) Active edge caching method based on community discovery and weighted federated learning
CN116155991B (zh) 一种基于深度强化学习的边缘内容缓存与推荐方法及系统
Zhu et al. Edge collaborative caching solution based on improved NSGA II algorithm in Internet of Vehicles
Ghosh et al. Reliable data transmission for a VANET-IoIT architecture: A DNN approach
Wu et al. Multi-Agent Federated Deep Reinforcement Learning Based Collaborative Caching Strategy for Vehicular Edge Networks
Chen et al. An edge caching strategy based on separated learning of user preference and content popularity
Shi et al. A diversified recommendation scheme for wireless content caching networks

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE