WO2023093238A1 - Method and apparatus for performing service processing by using learning model - Google Patents

Method and apparatus for performing service processing by using learning model

Info

Publication number
WO2023093238A1
Authority
WO
WIPO (PCT)
Prior art keywords
intelligent layer
intelligent
aggregation
model parameters
layer
Prior art date
Application number
PCT/CN2022/119866
Other languages
French (fr)
Chinese (zh)
Inventor
崔琪楣
梁盛源
赵博睿
任崇万
陶小峰
Original Assignee
北京邮电大学 (Beijing University of Posts and Telecommunications)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京邮电大学 (Beijing University of Posts and Telecommunications)
Publication of WO2023093238A1 publication Critical patent/WO2023093238A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition

Definitions

  • This application relates to data processing technology, and in particular to a method and apparatus for performing service processing by using a learning model.
  • The architecture of existing communication networks follows a "cloud-edge-terminal" three-layer intelligent architecture, in which edge intelligence usually refers to an edge server used to handle tasks such as computation on the user data plane. It does not, however, bring intelligence to the network control plane and management plane at the edge, and the existing network architecture does not fully exploit the intelligent capabilities of base station equipment.
  • Embodiments of the present application provide a method and apparatus for performing service processing by using a learning model. According to one aspect of the embodiments of the present application, a method for performing service processing by using a learning model is provided, applied to base station equipment, and including: obtaining a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; generating, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and performing service processing by using the hierarchical federated learning model.
  • Optionally, after obtaining the communication network architecture composed of the first intelligent layer, the second intelligent layer and the third intelligent layer, the method further includes: obtaining a fourth intelligent layer deployed in a cloud server; performing functional configuration on the fourth intelligent layer according to a preset configuration policy; and, after detecting that the functional configuration of the fourth intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer and the fourth intelligent layer.
  • Optionally, generating the hierarchical federated learning model deployed on the user equipment side or the base station equipment side by using the hierarchical federated learning algorithm and the communication network architecture includes: obtaining initial model parameters through each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training on local data; after the second intelligent layer receives the initial model parameters transmitted by each first intelligent layer, performing first-level aggregation on the initial model parameters to obtain first aggregated model parameters; and sending, by the second intelligent layer, the first aggregated model parameters to the first intelligent layer, and determining that the first-level aggregation is completed after a first number of times is reached.
  • Optionally, after the first-level aggregation is determined to be completed, the method includes: sending, by the first intelligent layer, the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or training, by the first intelligent layer, an initial learning model according to the first aggregated model parameters.
  • Optionally, after the first-level aggregation is determined to be completed, the method includes: sending, by each second intelligent layer, the first aggregated model parameters to the third intelligent layer; performing, by the third intelligent layer, second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters; and sending, by the third intelligent layer, the second aggregated model parameters to the second intelligent layer, and determining that the second-level aggregation is completed after a second number of times is reached.
  • Optionally, after the second-level aggregation is determined to be completed, the method includes: sending, by the second intelligent layer, the second aggregated model parameters to the first intelligent layer; and sending, by the first intelligent layer, the second aggregated model parameters to the user equipment, so that the user equipment trains the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or training, by the first intelligent layer, the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
  • Optionally, after the third intelligent layer sends the second aggregated model parameters to the second intelligent layer, the method includes: if it is determined that a fourth intelligent layer exists, sending, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters; delivering, by the fourth intelligent layer, the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains the initial learning model according to the first aggregated model parameters; or sending, by the first intelligent layer, the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
  • Optionally, performing service processing by using the hierarchical federated learning model includes: performing, by the user equipment, first service processing by using the hierarchical federated learning model; or performing, by the base station device, second service processing by using the hierarchical federated learning model.
  • the first intelligent layer and the second intelligent layer are deployed in the base station equipment, including:
  • the first intelligent layer is deployed in a distributed unit DU of the base station device, and the second intelligent layer is deployed in a centralized unit CU of the base station device; or,
  • the first intelligent layer is deployed in small base station equipment, and the second intelligent layer is deployed in macro base station equipment.
  • According to another aspect of the embodiments of the present application, an apparatus for performing service processing by using a learning model is provided, including:
  • an acquisition module, configured to obtain a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
  • a generation module, configured to generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
  • a processing module, configured to perform service processing by using the hierarchical federated learning model. A rough sketch of this module organization is given below.
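  • As a rough illustration only, the three modules recited above could be wired together as in the following Python sketch; the class and method names are hypothetical and are not part of the patent text.

```python
# Hypothetical sketch of the claimed apparatus: an acquisition module, a
# generation module, and a processing module. All names are illustrative only.
class ServiceProcessingApparatus:
    def __init__(self, acquisition_module, generation_module, processing_module):
        self.acquisition_module = acquisition_module  # obtains the network architecture
        self.generation_module = generation_module    # builds the hierarchical FL model
        self.processing_module = processing_module    # runs service processing with the model

    def run(self, learning_algorithm, service_request):
        # 1. Acquire the architecture formed by the first/second/third intelligent layers.
        architecture = self.acquisition_module.acquire_architecture()
        # 2. Generate the hierarchical federated learning model for the UE or base-station side.
        model = self.generation_module.generate_model(learning_algorithm, architecture)
        # 3. Use the model to perform the requested service processing.
        return self.processing_module.process(model, service_request)
```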
  • According to yet another aspect of the embodiments of the present application, an electronic device is provided, including: a memory for storing executable instructions; and a display, configured to cooperate with the memory to execute the executable instructions so as to complete the operations of any one of the above methods for performing service processing by using a learning model.
  • According to still another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions, where the instructions, when executed, perform the operations of any one of the above methods for performing service processing by using a learning model.
  • In this application, a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer can be obtained, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; a hierarchical federated learning algorithm and the communication network architecture are used to generate a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and the hierarchical federated learning model is used to perform service processing.
  • By applying the technical solution of this application, the distributed unit (DU) and the centralized unit (CU) of the base station equipment can be used, together with the edge node, to form a communication network architecture.
  • The communication network can then be used to aggregate the model parameters of the device nodes in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side, thereby achieving the purpose of performing service processing by using the hierarchical federated learning model.
  • FIG. 1 is a schematic diagram of a method for performing service processing by using a learning model proposed in this application;
  • FIG. 2 is a schematic diagram of a system architecture to which the method for performing service processing by using a learning model proposed in this application is applied;
  • FIG. 3 is a schematic structural diagram of an electronic apparatus for performing service processing by using a learning model proposed in this application;
  • FIG. 4 is a schematic structural diagram of an electronic device for performing service processing by using a learning model proposed in this application.
  • A method for performing service processing by using a learning model according to an exemplary embodiment of the present application is described below with reference to FIGS. 1-2. It should be noted that the following application scenarios are shown only to facilitate understanding of the spirit and principles of the present application, and the implementations of the present application are not limited in this regard; on the contrary, the embodiments of the present application can be applied to any applicable scenario.
  • The present application also proposes a method and apparatus for performing service processing by using a learning model.
  • FIG. 1 schematically shows a flowchart of a method for performing service processing by using a learning model according to an embodiment of the present application. As shown in FIG. 1, the method includes:
  • S101: Obtain a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node.
  • FIG. 2 is a system schematic diagram of the communication network architecture proposed in this application. The architecture includes the first intelligent layer deployed in the distributed unit (DU) of the base station equipment, the second intelligent layer deployed in the centralized unit (CU) of the base station equipment, and the third intelligent layer deployed in the edge node. In one implementation, it also includes a fourth intelligent layer deployed in a cloud server.
  • The fourth intelligent layer is a high-level management intelligent component responsible for management among the various sub-networks.
  • The third intelligent layer is a network-level intelligent orchestration component above the base stations, responsible for orchestrating and managing functions among the base stations.
  • The second intelligent layer is a centralized intelligent component inside the base station, responsible for the intelligent enhancement and realization of traditional radio resource management (RRM).
  • The first intelligent layer is a distributed intelligent component inside the base station, responsible for further optimizing parameters with short scheduling cycles.
  • The first intelligent layer and the second intelligent layer mentioned in this application can be deployed in various ways. For example, the first intelligent layer can be deployed in the distributed unit (DU) of the base station equipment, and the second intelligent layer can be deployed in the centralized unit (CU) of the base station equipment.
  • Alternatively, the first intelligent layer may be deployed in small base station equipment, and the second intelligent layer may be deployed in macro base station equipment; a configuration sketch summarizing these deployment options is given below.
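  • For orientation only, the deployment options described above can be summarized in the following configuration sketch; the class and field names are illustrative assumptions and do not come from the patent text.

```python
from dataclasses import dataclass

# Illustrative summary of the two deployment options described above: the first
# intelligent layer sits closest to the user (DU or small base station), and the
# optional fourth intelligent layer sits in a cloud server.
@dataclass
class IntelligentLayer:
    level: int   # 1 = distributed, 2 = centralized, 3 = edge orchestration, 4 = cloud management
    host: str    # network element that hosts the layer
    role: str    # responsibility of the layer

ARCHITECTURE_OPTION_A = [
    IntelligentLayer(1, "base station DU", "optimize parameters with short scheduling cycles"),
    IntelligentLayer(2, "base station CU", "AI-enhanced radio resource management (RRM)"),
    IntelligentLayer(3, "edge node", "orchestration of functions among base stations"),
    IntelligentLayer(4, "cloud server", "management among sub-networks (optional)"),
]

ARCHITECTURE_OPTION_B = [
    IntelligentLayer(1, "small base station (SBS)", "optimize parameters with short scheduling cycles"),
    IntelligentLayer(2, "macro base station (MBS)", "AI-enhanced radio resource management (RRM)"),
    IntelligentLayer(3, "edge node", "orchestration of functions among base stations"),
    IntelligentLayer(4, "cloud server", "management among sub-networks (optional)"),
]
```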
  • The process of a traditional distributed machine learning model usually includes the following steps:
  • the central (centralized) server collects the various scattered distributed data;
  • the central server distributes learning tasks (and training data) to each distributed node;
  • each distributed node receives its assigned learning tasks (and training data) and starts learning;
  • the central server merges the learning results of each node.
  • In contrast, this application uses the communication network architecture constructed from multiple intelligent layers to aggregate the model parameters uploaded by each client device node and obtain aggregated model parameters, so that the aggregated model parameters can subsequently be used for hierarchical training of the initial learning model to obtain a hierarchical federated learning model for service processing on the user equipment side or the base station side.
  • The edge node in the embodiments of the present application may be an edge server, or may be an edge device such as an edge network element.
  • The communication network architecture proposed in this application may have the function of aggregating parameters, so that the hierarchical federated learning algorithm is used to aggregate the model parameters uploaded by each node in the network architecture and send them to the user equipment.
  • The user equipment uses the aggregated model parameters to train the initial learning model deployed on itself, thereby obtaining a hierarchical federated learning model that can then be used for subsequent service processing.
  • The user equipment may use the hierarchical federated learning model to perform first service processing; and/or the base station device may use the hierarchical federated learning model to perform second service processing.
  • The first service processing may include driving route planning, face recognition, keyboard input prediction, and so on. It can be understood that, in this case, the trained hierarchical federated learning model is handed over to the user for service processing.
  • The second service processing may include traditional RRM services with AI enhancement performed by the base station, such as mobility management, load balancing, dynamic resource allocation, interference coordination, real-time MAC scheduling, beam management and so on.
  • The purpose of RRM is to improve the utilization of radio resources and meet the demand of mobile services for radio resources. It can be understood that, in this case, the trained hierarchical federated learning model is handed over to the base station for service processing.
  • In this application, a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer can be obtained, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; a hierarchical federated learning algorithm and the communication network architecture are used to generate a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and the hierarchical federated learning model is used to perform service processing.
  • By applying the technical solution of this application, the distributed unit (DU) and the centralized unit (CU) of the base station equipment can be used, together with the edge node, to form a communication network architecture.
  • The communication network can then be used to aggregate the model parameters of the device nodes in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side, thereby achieving the purpose of performing service processing by using the hierarchical federated learning model.
  • Optionally, after obtaining the communication network architecture composed of the first intelligent layer, the second intelligent layer and the third intelligent layer, the method further includes: obtaining a fourth intelligent layer deployed in a cloud server; performing functional configuration on the fourth intelligent layer according to a preset configuration policy; and, after detecting that the functional configuration of the fourth intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer and the fourth intelligent layer.
  • Optionally, generating the hierarchical federated learning model deployed on the user equipment side or the base station equipment side by using the hierarchical federated learning algorithm and the communication network architecture includes: obtaining initial model parameters through each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training on local data; after the second intelligent layer receives the initial model parameters transmitted by each first intelligent layer, performing first-level aggregation on the initial model parameters to obtain first aggregated model parameters; and sending, by the second intelligent layer, the first aggregated model parameters to the first intelligent layer, and determining that the first-level aggregation is completed after a first number of times is reached.
  • Optionally, after the first-level aggregation is determined to be completed, the method includes: sending, by the first intelligent layer, the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or training, by the first intelligent layer, an initial learning model according to the first aggregated model parameters.
  • Optionally, after the first-level aggregation is determined to be completed, the method includes: sending, by each second intelligent layer, the first aggregated model parameters to the third intelligent layer; performing, by the third intelligent layer, second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters; and sending, by the third intelligent layer, the second aggregated model parameters to the second intelligent layer, and determining that the second-level aggregation is completed after a second number of times is reached.
  • Optionally, after the second-level aggregation is determined to be completed, the method includes: sending, by the second intelligent layer, the second aggregated model parameters to the first intelligent layer; and sending, by the first intelligent layer, the second aggregated model parameters to the user equipment, so that the user equipment trains the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or training, by the first intelligent layer, the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
  • Optionally, after the third intelligent layer sends the second aggregated model parameters to the second intelligent layer, the method includes: if it is determined that a fourth intelligent layer exists, sending, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters; delivering, by the fourth intelligent layer, the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains the initial learning model according to the first aggregated model parameters; or sending, by the first intelligent layer, the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
  • Optionally, performing service processing by using the hierarchical federated learning model includes: performing, by the user equipment, first service processing by using the hierarchical federated learning model; or performing, by the base station device, second service processing by using the hierarchical federated learning model.
  • the first intelligent layer and the second intelligent layer are deployed in the base station equipment, including:
  • the first intelligent layer is deployed in a distributed unit DU of the base station device, and the second intelligent layer is deployed in a centralized unit CU of the base station device; or,
  • the first intelligent layer is deployed in small base station equipment, and the second intelligent layer is deployed in macro base station equipment.
  • In this way, the first intelligent layer deployed in the distributed unit (DU) of the base station equipment and the second intelligent layer deployed in the centralized unit (CU) of the base station equipment can be obtained; the base station equipment can contain one or more CUs and one or more DUs, and one CU can be connected to one or more DUs.
  • Alternatively, the first intelligent layer may be deployed in small base station (SBS) equipment and the second intelligent layer in macro base station (MBS) equipment; either an MBS or an SBS includes one or more CUs and DUs, and one MBS can manage one or more SBSs.
  • The embodiment of the present application is illustrated below by taking a communication network architecture including three intelligent layers as an example:
  • Step 1: The first intelligent layer obtains initial model parameters from the user equipment, which performs model training and learning on its local data to generate the initial model parameters.
  • Before this, the high-level aggregator and the low-level aggregator can be defined first, and a digital twin network can be built.
  • Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all received model parameter updates based on the aggregation criterion to obtain the first aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 3: The second intelligent layer sends the aggregated first aggregated model parameters to the first intelligent layers it manages and connects to, completing one round of low-level federated learning.
  • Step 4: The above steps are repeated for the first number of times, until the first-level aggregation is determined to be completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 5: The third intelligent layer sends the aggregated second aggregated model parameters to the second intelligent layers it manages and connects to, the second intelligent layer sends them to the first intelligent layers it manages and connects to, and the first intelligent layer selectively sends the aggregated model parameters to the user equipment, completing one round of high-level federated learning.
  • Step 6: After receiving the aggregated model parameters, the user equipment uses them to train the initial learning model until the trained service network model is determined to meet a preset condition, at which point the hierarchical federated learning model is determined to be generated. The preset condition includes one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, and the training duration reaching the maximum training time. A code sketch of this hierarchical aggregation loop is given below.
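  • The six steps above amount to a two-level hierarchical federated averaging loop. The following minimal sketch illustrates that loop, assuming a plain weighted-average (FedAvg-style) aggregation criterion and a hypothetical client interface (train_locally, set_parameters, num_samples); it is an illustrative reading of the procedure, not the patent's reference implementation.

```python
import numpy as np

def fedavg(param_list, sample_counts):
    """Weighted averaging of parameter vectors; one possible form of the
    hierarchical federated averaging criterion mentioned above."""
    w = np.asarray(sample_counts, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, param_list))

def low_level_rounds(clients, first_number):
    """Steps 1-3, repeated for the first number of times: local training at the user
    equipment / first intelligent layer, first-level aggregation at the second
    intelligent layer, and redistribution of the first aggregated parameters."""
    for _ in range(first_number):
        params = [c.train_locally() for c in clients]                 # Step 1: local training
        first_agg = fedavg(params, [c.num_samples for c in clients])  # Step 2: first-level aggregation
        for c in clients:                                             # Step 3: send back down
            c.set_parameters(first_agg)
    return first_agg

def high_level_round(groups, first_number):
    """Steps 4-6: the third intelligent layer aggregates the first aggregated
    parameters uploaded by each second intelligent layer (one group of clients per
    second layer), and the result is propagated back to the first layers / UEs."""
    uploads = [low_level_rounds(group, first_number) for group in groups]
    second_agg = fedavg(uploads, [sum(c.num_samples for c in group) for group in groups])
    for group in groups:
        for c in group:
            c.set_parameters(second_agg)
    return second_agg
```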
  • The following embodiment of the present application takes as an example a case where the communication network architecture includes four intelligent layers and the user equipment uses the hierarchical federated learning model to perform the first service processing:
  • Step 1: The first intelligent layer obtains initial model parameters from the user equipment, which performs model training and learning on its local data to generate the initial model parameters.
  • Before this, the high-level aggregator and the low-level aggregator can be defined first, and a digital twin network can be built.
  • Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all received model parameter updates based on the aggregation criterion to obtain the first aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 3: The second intelligent layer sends the aggregated first aggregated model parameters to the first intelligent layers it manages and connects to, completing one round of low-level federated learning.
  • Step 4: The above steps are repeated for the first number of times, until the first-level aggregation is determined to be completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 5: The third intelligent layer sends the aggregated second aggregated model parameters to the second intelligent layers it manages and connects to, the second intelligent layer sends them to the first intelligent layers it manages and connects to, and subsequently to the user equipment, completing one round of high-level federated learning.
  • Step 6: The above steps are repeated for the second number of times, after which the third intelligent layer uploads the aggregated second aggregated model parameters to the fourth intelligent layer, and the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain the third aggregated model parameters.
  • Step 7: The fourth intelligent layer delivers the aggregated third aggregated model parameters level by level down to the first intelligent layer; and the first intelligent layer selectively sends the third aggregated model parameters to the user equipment.
  • Step 8: After receiving the aggregated model parameters, the user equipment uses them to train the initial learning model until the trained service network model is determined to meet a preset condition, at which point the hierarchical federated learning model is determined to be generated. The preset condition includes one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, and the training duration reaching the maximum training time.
  • The model training and subsequent inference process can be performed locally or within the digital twin network. A sketch of the added cloud-level aggregation step follows below.
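  • Under the same assumptions as the earlier sketch (and reusing its fedavg and high_level_round helpers), the additional cloud-level step of this four-layer example might look roughly as follows; it remains an illustrative sketch rather than the patented procedure.

```python
def third_level_round(edge_groups, first_number, second_number):
    """Steps 4-7 of the four-layer example: each third intelligent layer (edge node)
    produces second aggregated parameters over the second number of high-level rounds,
    the fourth intelligent layer (cloud server) performs the third-level aggregation,
    and the third aggregated parameters are delivered back level by level."""
    second_aggs, edge_sizes = [], []
    for groups in edge_groups:                      # one entry per third intelligent layer
        for _ in range(second_number):
            second_agg = high_level_round(groups, first_number)
        second_aggs.append(second_agg)
        edge_sizes.append(sum(c.num_samples for group in groups for c in group))
    third_agg = fedavg(second_aggs, edge_sizes)     # third-level aggregation at the fourth layer
    for groups in edge_groups:                      # level-by-level delivery down to the
        for group in groups:                        # first intelligent layers, which may
            for c in group:                         # forward the result to the user equipment
                c.set_parameters(third_agg)
    return third_agg
```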
  • The following embodiment of the present application takes as an example a case where the communication network architecture includes four intelligent layers and the base station equipment uses the hierarchical federated learning model to perform the second service processing:
  • Step 1: The first intelligent layer (that is, the base station equipment) uses local data for model training and learning to generate initial model parameters.
  • Before this, the high-level aggregator and the low-level aggregator can be defined first, and a digital twin network can be built.
  • Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all received model parameter updates based on the aggregation criterion to obtain the first aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 3: The second intelligent layer sends the aggregated first aggregated model parameters to the first intelligent layers it manages and connects to, completing one round of low-level federated learning.
  • Step 4: The above steps are repeated for the first number of times, until the first-level aggregation is determined to be completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters.
  • The aggregation criterion may include an aggregation algorithm or criterion such as a hierarchical federated averaging algorithm.
  • Step 5: The third intelligent layer sends the aggregated second aggregated model parameters to the second intelligent layers it manages and connects to, and the second intelligent layer sends them to the first intelligent layers it manages and connects to, completing one round of high-level federated learning.
  • Step 6: The above steps are repeated for the second number of times, after which the third intelligent layer uploads the aggregated second aggregated model parameters to the fourth intelligent layer, and the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain the third aggregated model parameters.
  • Step 7: The fourth intelligent layer delivers the aggregated third aggregated model parameters level by level down to the first intelligent layer.
  • Step 8: After receiving the aggregated model parameters, the first intelligent layer uses them to train the initial learning model until the trained service network model is determined to meet a preset condition, at which point the hierarchical federated learning model is determined to be generated. The preset condition includes one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, and the training duration reaching the maximum training time; a simple test for these conditions is sketched below.
  • The model training and subsequent inference process can be performed locally or within the digital twin network.
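  • The preset conditions in Step 8 (convergence, a maximum number of iterations, or a maximum training time) can be expressed as a simple stopping test, as in the following sketch; the tolerance and limits are arbitrary placeholder values.

```python
import time

def training_finished(loss_history, start_time,
                      tol=1e-4, max_iterations=1000, max_train_seconds=3600):
    """Return True once any preset condition from Step 8 is met: the model has
    converged, the iteration count reaches the maximum number of iterations,
    or the training duration reaches the maximum training time."""
    if len(loss_history) >= max_iterations:
        return True
    if time.time() - start_time >= max_train_seconds:
        return True
    if len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < tol:
        return True  # change in loss below tolerance, treated as convergence
    return False
```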
  • On the basis of this, the present application further provides an apparatus for performing service processing by using a learning model, including:
  • an acquisition module, configured to obtain a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
  • a generation module, configured to generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
  • a processing module, configured to perform service processing by using the hierarchical federated learning model.
  • In this application, a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer can be obtained, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; a hierarchical federated learning algorithm and the communication network architecture are used to generate a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and the hierarchical federated learning model is used to perform service processing.
  • By applying the technical solution of this application, the distributed unit (DU) and the centralized unit (CU) of the base station equipment can be used, together with the edge node, to form a communication network architecture.
  • The communication network can then be used to aggregate the model parameters of the device nodes in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side, thereby achieving the purpose of performing service processing by using the hierarchical federated learning model.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured to obtain a fourth intelligent layer deployed in a cloud server;
  • the acquisition module 201 is configured to perform functional configuration on the fourth intelligent layer according to a preset configuration policy;
  • the acquisition module 201 is configured to, when it is detected that the functional configuration of the fourth intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer and the fourth intelligent layer.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured to obtain initial model parameters through each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training on local data;
  • the acquisition module 201 is configured to use the second intelligent layer to receive the initial model parameters transmitted by each first intelligent layer and to perform first-level aggregation on the initial model parameters to obtain first aggregated model parameters;
  • the acquisition module 201 is configured so that the second intelligent layer sends the first aggregated model parameters to the first intelligent layer and determines that the first-level aggregation is completed after the first number of times is reached.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured so that the first intelligent layer sends the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or,
  • the acquisition module 201 is configured so that the first intelligent layer trains an initial learning model according to the first aggregated model parameters.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured so that each second intelligent layer sends the first aggregated model parameters to the third intelligent layer;
  • the acquisition module 201 is configured so that the third intelligent layer performs second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters;
  • the acquisition module 201 is configured so that the third intelligent layer sends the second aggregated model parameters to the second intelligent layer and determines that the second-level aggregation is completed after the second number of times is reached.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured so that the second intelligent layer sends the second aggregated model parameters to the first intelligent layer;
  • the acquisition module 201 is configured so that the first intelligent layer sends the second aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or,
  • the acquisition module 201 is configured so that the first intelligent layer trains an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured to, if it is determined that a fourth intelligent layer exists, have each third intelligent layer send the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters;
  • the acquisition module 201 is configured so that the fourth intelligent layer delivers the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains the initial learning model; or,
  • the acquisition module 201 is configured so that the first intelligent layer sends the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured so that the user equipment performs first service processing by using the hierarchical federated learning model; or,
  • the acquisition module 201 is configured so that the base station device performs second service processing by using the hierarchical federated learning model.
  • In one implementation, the acquisition module 201 further includes:
  • the acquisition module 201 is configured so that the first intelligent layer is deployed in the distributed unit (DU) of the base station device, and the second intelligent layer is deployed in the centralized unit (CU) of the base station device; or,
  • the acquisition module 201 is configured so that the first intelligent layer is deployed in small base station equipment, and the second intelligent layer is deployed in macro base station equipment.
  • Fig. 4 is a logical structural block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • A non-transitory computer-readable storage medium including instructions, such as a memory including instructions, is also provided, where the instructions can be executed by the processor of the electronic device to complete the above method for performing service processing by using a learning model.
  • The method includes: obtaining a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; generating, by using the hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station device side; and performing service processing by using the hierarchical federated learning model.
  • the above instructions may also be executed by a processor of the electronic device to complete other steps involved in the above exemplary embodiments.
  • the non-transitory computer readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • An application program/computer program product including one or more instructions is also provided, where the instructions can be executed by a processor of an electronic device to complete the above method for performing service processing by using a learning model.
  • The method includes: obtaining a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; generating, by using the hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and performing service processing by using the hierarchical federated learning model.
  • the above instructions may also be executed by a processor of the electronic device to complete other steps involved in the above exemplary embodiments.
  • FIG. 4 is an example diagram of a computer device 30 .
  • The schematic diagram in FIG. 4 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30; it may include more or fewer components than those shown in the figure, combine certain components, or have different components.
  • the computer device 30 may also include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 302 may be a central processing unit (Central Processing Unit, CPU), and may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor can be a microprocessor or the processor 302 can also be any conventional processor, etc.
  • the processor 302 is the control center of the computer device 30 and uses various interfaces and lines to connect various parts of the entire computer device 30 .
  • the memory 301 can be used to store computer-readable instructions 303 , and the processor 302 implements various functions of the computer device 30 by running or executing computer-readable instructions or modules stored in the memory 301 and calling data stored in the memory 301 .
  • The memory 301 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required for a function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 30, and the like.
  • the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), or other non-volatile/volatile storage devices.
  • If the integrated modules of the computer device 30 are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the present invention can implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through computer-readable instructions, which can be stored in a computer-readable storage medium; when the computer-readable instructions are executed by the processor, the steps of the above method embodiments can be realized.

Abstract

The present application discloses a method and apparatus for performing service processing by using a learning model. By applying the technical solution of the present application, a distributed unit (DU) and a centralized unit (CU) of a base station device can be utilized, and jointly form a communication network architecture with an edge node. In this way, the communication network can be subsequently used to aggregate the model parameters of device nodes, and the aggregated model parameter is used to construct a hierarchical federated learning model deployed at a user equipment end or a base station device end. Therefore, the purpose of performing service processing by using the hierarchical federated learning model is achieved.

Description

Method and apparatus for performing service processing by using a learning model
TECHNICAL FIELD
This application relates to data processing technology, and in particular to a method and apparatus for performing service processing by using a learning model.
BACKGROUND
The architecture of existing communication networks follows a "cloud-edge-terminal" three-layer intelligent architecture, in which edge intelligence usually refers to an edge server used to handle tasks such as computation on the user data plane. It does not, however, bring intelligence to the network control plane and management plane at the edge, and the existing network architecture does not fully exploit the intelligent capabilities of base station equipment.
Therefore, how to design a communication network architecture that can make full use of the various node devices to realize service processing has become a problem to be solved by those skilled in the art.
SUMMARY
Embodiments of the present application provide a method and apparatus for performing service processing by using a learning model. According to one aspect of the embodiments of the present application, a method for performing service processing by using a learning model is provided, applied to base station equipment, and including:
obtaining a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
generating, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
performing service processing by using the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, after obtaining the communication network architecture composed of the first intelligent layer, the second intelligent layer and the third intelligent layer, the method further includes:
obtaining a fourth intelligent layer deployed in a cloud server;
performing functional configuration on the fourth intelligent layer according to a preset configuration policy; and
after detecting that the functional configuration of the fourth intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer and the fourth intelligent layer.
Optionally, in another embodiment based on the above method of the present application, generating the hierarchical federated learning model deployed on the user equipment side or the base station equipment side by using the hierarchical federated learning algorithm and the communication network architecture includes:
obtaining initial model parameters through each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training on local data;
after the second intelligent layer receives the initial model parameters transmitted by each first intelligent layer, performing first-level aggregation on the initial model parameters to obtain first aggregated model parameters; and
sending, by the second intelligent layer, the first aggregated model parameters to the first intelligent layer, and determining that the first-level aggregation is completed after a first number of times is reached.
Optionally, in another embodiment based on the above method of the present application, after the first-level aggregation is determined to be completed, the method includes:
sending, by the first intelligent layer, the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or,
training, by the first intelligent layer, an initial learning model according to the first aggregated model parameters.
Optionally, in another embodiment based on the above method of the present application, after the first-level aggregation is determined to be completed, the method includes:
sending, by each second intelligent layer, the first aggregated model parameters to the third intelligent layer;
performing, by the third intelligent layer, second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters; and
sending, by the third intelligent layer, the second aggregated model parameters to the second intelligent layer, and determining that the second-level aggregation is completed after a second number of times is reached.
Optionally, in another embodiment based on the above method of the present application, after the second-level aggregation is determined to be completed upon reaching the second number of times, the method includes:
sending, by the second intelligent layer, the second aggregated model parameters to the first intelligent layer; and,
sending, by the first intelligent layer, the second aggregated model parameters to the user equipment, so that the user equipment trains the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or,
training, by the first intelligent layer, the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, after the third intelligent layer sends the second aggregated model parameters to the second intelligent layer, the method includes:
if it is determined that a fourth intelligent layer exists, sending, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters;
delivering, by the fourth intelligent layer, the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains the initial learning model according to the first aggregated model parameters; or,
sending, by the first intelligent layer, the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, performing service processing by using the hierarchical federated learning model includes:
performing, by the user equipment, first service processing by using the hierarchical federated learning model; or,
performing, by the base station device, second service processing by using the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, the first intelligent layer and the second intelligent layer being deployed in the base station equipment includes:
the first intelligent layer being deployed in a distributed unit (DU) of the base station device, and the second intelligent layer being deployed in a centralized unit (CU) of the base station device; or,
the first intelligent layer being deployed in small base station equipment, and the second intelligent layer being deployed in macro base station equipment.
According to another aspect of the embodiments of the present application, an apparatus for performing service processing by using a learning model is provided, including:
an acquisition module, configured to obtain a communication network architecture composed of a first intelligent layer, a second intelligent layer and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
a generation module, configured to generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
a processing module, configured to perform service processing by using the hierarchical federated learning model.
According to yet another aspect of the embodiments of the present application, an electronic device is provided, including:
a memory for storing executable instructions; and
a display, configured to cooperate with the memory to execute the executable instructions so as to complete the operations of any one of the above methods for performing service processing by using a learning model.
According to still another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions, where the instructions, when executed, perform the operations of any one of the above methods for performing service processing by using a learning model.
本申请中,可以获取由第一智能层、第二智能层以及第三智能层所组成的通信网络架构,其中第一智能层以及第二智能层部署在基站设备,第三智能层部署在边缘节点中;利用分层联邦学习算法以及通信网络架构,生成部署在用户设备端或基站设备端的分层联邦学习模型;并利用分层联邦学习模型进行业务处理。通过应用本申请的技术方案,可以利用基站设备的分布式单元DU与集中式单元CU,并与边缘节点共同组成通信网络架构。以使后续还可以利用该通信网络聚合网络中各个设备节点的模型参数,并利用该聚合模型参数构建部署在用户设备端或基站设备端的分层联邦学习模型。进而实现利用分层联邦学习模型进行业务处理的目的。In this application, the communication network architecture composed of the first intelligent layer, the second intelligent layer and the third intelligent layer can be obtained, wherein the first intelligent layer and the second intelligent layer are deployed in the base station equipment, and the third intelligent layer is deployed in the edge In the node; use the hierarchical federated learning algorithm and the communication network architecture to generate a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and use the hierarchical federated learning model for business processing. By applying the technical solution of the present application, the distributed unit DU and the centralized unit CU of the base station equipment can be used to form a communication network architecture together with the edge nodes. In the future, the communication network can be used to aggregate the model parameters of each device node in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side. Then realize the purpose of using the hierarchical federated learning model for business processing.
The technical solutions of the present application will be described in further detail below with reference to the accompanying drawings and embodiments.
Description of the Drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present application and, together with the description, serve to explain the principles of the present application.
The present application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a method for performing service processing by using a learning model proposed in the present application;
Fig. 2 is a schematic diagram of a system architecture to which the method for performing service processing by using a learning model proposed in the present application is applied;
Fig. 3 is a schematic structural diagram of an electronic apparatus for performing service processing by using a learning model proposed in the present application;
Fig. 4 is a schematic structural diagram of an electronic device for performing service processing by using a learning model proposed in the present application.
Detailed Description of the Embodiments
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application.
At the same time, it should be understood that, for convenience of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the present application or its application or uses.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further discussed in subsequent drawings.
In addition, the technical solutions of the various embodiments of the present application can be combined with each other, provided that the combination can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, such a combination should be regarded as non-existent and as falling outside the scope of protection claimed by the present application.
It should be noted that all directional indications (such as up, down, left, right, front, rear, and so on) in the embodiments of the present application are only used to explain the relative positional relationships, movements, and the like between the components in a certain specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
A method for performing service processing by using a learning model according to exemplary embodiments of the present application will be described below with reference to Figs. 1 and 2. It should be noted that the following application scenarios are shown only to facilitate understanding of the spirit and principles of the present application, and the embodiments of the present application are not limited in this regard; on the contrary, the embodiments of the present application can be applied to any applicable scenario.
The present application further proposes a method and an apparatus for performing service processing by using a learning model.
Fig. 1 schematically shows a flowchart of a method for performing service processing by using a learning model according to an embodiment of the present application. As shown in Fig. 1, the method includes:
S101: Acquire a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node.
Fig. 2 is a schematic diagram of a system of a communication network architecture proposed in the present application. The architecture includes a first intelligent layer deployed in a distributed unit DU of the base station equipment, a second intelligent layer deployed in a centralized unit CU of the base station equipment, and a third intelligent layer deployed in an edge node. In one manner, a fourth intelligent layer deployed in a cloud server is also included.
The fourth intelligent layer is a high-level management intelligent component responsible for management among the sub-networks. The third intelligent layer is a network intelligence orchestration component above the base stations, responsible for function orchestration and management among the base stations. The second intelligent layer is a centralized intelligent component inside the base station, responsible for the intelligent enhancement and implementation of traditional radio resource management (RRM). The first intelligent layer is a distributed intelligent component inside the base station, responsible for further optimizing parameters with short scheduling periods.
Further, the first intelligent layer and the second intelligent layer mentioned in the present application can be deployed in various ways. For example, the first intelligent layer can be deployed in the distributed unit DU of the base station equipment, and the second intelligent layer can be deployed in the centralized unit CU of the base station equipment.
In another manner, the first intelligent layer can be deployed on a small base station device, and the second intelligent layer can be deployed on a macro base station device.
In the related art, the process of a traditional distributed machine learning model usually includes the following steps (a minimal code sketch of this flow is given after the list):
1. The central (centralized) server collects and merges the scattered data from the distributed nodes;
2. After the data are merged, the central server assigns learning tasks (and training data) to the distributed nodes;
3. Each distributed node receives its assigned learning task (and training data) and starts learning;
4. After finishing learning, each distributed node returns its learning result to the central server;
5. The central server merges the learning results of the nodes;
6. Steps 3 to 5 are repeated until the merged learning result reaches a preset training condition, where the preset condition includes one of the following: training until the model converges, the number of training rounds reaching a maximum number of iterations, and the training duration reaching a maximum training time.
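Purely as an illustration of the flow above, a minimal Python sketch follows. The synthetic shard data, the stand-in local learning step, and the stopping thresholds are assumptions made for this sketch and are not part of the method described in the present application.

    import random
    import time

    def local_training(shard):
        # Stand-in for a distributed node's learning step: here it simply averages its shard (Step 3).
        return sum(shard) / len(shard)

    def centralized_training(shards, max_iterations=100, max_seconds=60.0, tolerance=1e-6):
        start = time.time()
        merged = None
        for _ in range(max_iterations):                        # repeat Steps 3 to 5
            results = [local_training(s) for s in shards]      # Steps 3-4: nodes learn and return results
            new_merged = sum(results) / len(results)           # Step 5: the central server merges the results
            if merged is not None and abs(new_merged - merged) < tolerance:
                return new_merged                              # preset condition: the model has converged
            if time.time() - start > max_seconds:
                return new_merged                              # preset condition: maximum training time reached
            merged = new_merged
        return merged                                          # preset condition: maximum number of iterations

    shards = [[random.random() for _ in range(10)] for _ in range(4)]  # Steps 1-2: collected and assigned data
    print(centralized_training(shards))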
However, the traditional distributed machine learning approach in the related art does not consider the huge transmission pressure that transferring large amounts of data places on the wireless link, nor does it consider the data privacy problem caused by the direct transmission of data from the distributed nodes. Therefore, the present application can use a communication network architecture constructed from multiple intelligent layers to aggregate the model parameters uploaded by the client device nodes to obtain aggregated model parameters, so that the aggregated model parameters can subsequently be used to perform hierarchical learning and training on an initial learning model, thereby obtaining a hierarchical federated learning model used for service processing on the user equipment side or the base station side.
It should be noted that the edge node in the embodiments of the present application may be an edge server, or may be an edge device such as an edge network element.
S102: Generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side.
The communication network architecture proposed in the present application can have a parameter aggregation function, so that the hierarchical federated learning algorithm is used to aggregate the model parameters uploaded by the nodes in the network architecture and the result is sent to the user equipment. The user equipment then uses the aggregated model parameters to train the initial learning model deployed on itself, thereby obtaining the hierarchical federated learning model, which is subsequently used for service processing.
S103: Perform service processing by using the hierarchical federated learning model.
It should be noted that, in the present application, the user equipment may perform first service processing by using the hierarchical federated learning model; and/or the base station equipment may perform second service processing by using the hierarchical federated learning model.
The first service processing may include driving route planning, face recognition, keyboard input prediction, and so on. It can be understood that, in this manner, the trained hierarchical federated learning model is handed over to the user for service processing.
Optionally, the second service processing may include AI-enhanced traditional RRM services performed by the base station, for example mobility management, load balancing, dynamic resource allocation, interference coordination, real-time MAC scheduling, beam management, and so on. The purpose of RRM is to improve the utilization of radio resources and meet the demand of mobile services for radio resources. It can be understood that, in this manner, the trained hierarchical federated learning model is handed over to the base station for service processing.
In the present application, a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer can be acquired, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; a hierarchical federated learning model deployed on the user equipment side or the base station equipment side is generated by using a hierarchical federated learning algorithm and the communication network architecture; and service processing is performed by using the hierarchical federated learning model. By applying the technical solution of the present application, the distributed unit DU and the centralized unit CU of the base station equipment can be used, together with the edge node, to form the communication network architecture, so that the communication network can subsequently be used to aggregate the model parameters of the device nodes in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side, thereby achieving the purpose of performing service processing by using the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, after the acquiring the communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer, the method further includes:
acquiring a fourth intelligent layer deployed in a cloud server;
performing function configuration on the fourth intelligent layer according to a preset configuration policy; and
after it is detected that the function configuration of the fourth intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
Optionally, in another embodiment based on the above method of the present application, the generating, by using the hierarchical federated learning algorithm and the communication network architecture, the hierarchical federated learning model deployed on the user equipment side or the base station equipment side includes:
acquiring initial model parameters by using each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training using local data;
after receiving, by using the second intelligent layer, the initial model parameters transmitted by each first intelligent layer, performing first-level aggregation on the initial model parameters to obtain first aggregated model parameters; and
sending, by the second intelligent layer, the first aggregated model parameters to the first intelligent layer, and determining that the first-level aggregation is completed after a first number of times is reached.
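As a rough illustration only, the first-level aggregation performed by the second intelligent layer can be sketched as a weighted (federated) average over the parameters reported by the first intelligent layers, repeated for the first number of rounds. The weighting by local sample counts, the round count, and all names below are assumptions of this sketch, not limitations of the present application.

    from typing import Callable, List

    def federated_average(param_sets: List[List[float]], sample_counts: List[int]) -> List[float]:
        # Weighted average of the parameter vectors reported by the first intelligent layers.
        total = sum(sample_counts)
        dim = len(param_sets[0])
        return [sum(n * p[i] for n, p in zip(sample_counts, param_sets)) / total for i in range(dim)]

    def first_level_aggregation(collect_params: Callable[[], List[List[float]]],
                                sample_counts: List[int], first_number: int = 3) -> List[float]:
        aggregated = None
        for _ in range(first_number):                  # repeat until the first number of rounds is reached
            aggregated = federated_average(collect_params(), sample_counts)
            # Between rounds, the second intelligent layer would send the result back to each
            # first intelligent layer, which retrains locally before reporting again (omitted here).
        return aggregated                              # the first aggregated model parameters

    du_reports = [[0.10, 0.20], [0.30, 0.40], [0.50, 0.60]]
    print(first_level_aggregation(lambda: du_reports, [100, 200, 300]))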
Optionally, in another embodiment based on the above method of the present application, after the determining that the first-level aggregation is completed after the first number of times is reached, the method includes:
sending, by the first intelligent layer, the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or
training, by the first intelligent layer, an initial learning model according to the first aggregated model parameters.
Optionally, in another embodiment based on the above method of the present application, after the determining that the first-level aggregation is completed, the method includes:
sending, by each second intelligent layer, the first aggregated model parameters to the third intelligent layer;
performing, by the third intelligent layer, second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters; and
sending, by the third intelligent layer, the second aggregated model parameters to the second intelligent layer, and determining that the second-level aggregation is completed after a second number of times is reached.
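In the same spirit, the second-level aggregation at the third intelligent layer can be sketched as another weighted average, this time over the first aggregated model parameters reported by the second intelligent layers. The per-layer sample totals used as weights and the round count are again assumptions of the sketch.

    def second_level_aggregation(cu_aggregates, cu_total_samples, second_number=2):
        # cu_aggregates: first aggregated model parameters reported by each second intelligent layer;
        # cu_total_samples: total number of samples covered by each of them (assumed weighting).
        aggregated = None
        for _ in range(second_number):                 # repeat until the second number of rounds is reached
            total = sum(cu_total_samples)
            dim = len(cu_aggregates[0])
            aggregated = [sum(n * p[i] for n, p in zip(cu_total_samples, cu_aggregates)) / total
                          for i in range(dim)]
            # The third intelligent layer then sends this result back to the second intelligent
            # layers, which redistribute it to the first intelligent layers (omitted here).
        return aggregated                              # the second aggregated model parameters

    print(second_level_aggregation([[0.2, 0.3], [0.4, 0.5]], [600, 400]))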
Optionally, in another embodiment based on the above method of the present application, after the determining that the second-level aggregation is completed after the second number of times is reached, the method includes:
sending, by the second intelligent layer, the second aggregated model parameters to the first intelligent layer; and
sending, by the first intelligent layer, the second aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or
training, by the first intelligent layer, an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, after the third intelligent layer sends the second aggregated model parameters to the second intelligent layer, the method includes:
if it is determined that a fourth intelligent layer exists, sending, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters; and
delivering, by the fourth intelligent layer, the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains an initial learning model according to the first aggregated model parameters; or
sending, by the first intelligent layer, the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
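A rough sketch of this optional embodiment is given below: the fourth intelligent layer averages the second aggregated model parameters reported by the third intelligent layers, and the result is delivered level by level down the hierarchy. The dictionary-based hierarchy, the plain mean, and all identifiers are assumptions made for illustration only.

    def third_level_aggregation(edge_aggregates):
        # Plain mean over the second aggregated model parameters reported by the third intelligent layers.
        dim = len(edge_aggregates[0])
        return [sum(p[i] for p in edge_aggregates) / len(edge_aggregates) for i in range(dim)]

    def deliver_level_by_level(third_params, hierarchy):
        # hierarchy maps each third intelligent layer to its second intelligent layers, and each of
        # those to the first intelligent layers below it, mirroring the fourth -> first layer path.
        for edge_node, cus in hierarchy.items():
            for cu, dus in cus.items():
                for du in dus:
                    du["params"] = list(third_params)   # each first intelligent layer receives the result
        return hierarchy

    hierarchy = {"edge-1": {"cu-1": [{"params": None}, {"params": None}]}}
    third = third_level_aggregation([[0.1, 0.2], [0.3, 0.4]])
    print(deliver_level_by_level(third, hierarchy))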
Optionally, in another embodiment based on the above method of the present application, the performing service processing by using the hierarchical federated learning model includes:
performing, by the user equipment, first service processing by using the hierarchical federated learning model; or
performing, by the base station equipment, second service processing by using the hierarchical federated learning model.
Optionally, in another embodiment based on the above method of the present application, the first intelligent layer and the second intelligent layer being deployed in base station equipment includes:
the first intelligent layer being deployed in a distributed unit DU of the base station equipment, and the second intelligent layer being deployed in a centralized unit CU of the base station equipment; or
the first intelligent layer being deployed in a small base station device, and the second intelligent layer being deployed in a macro base station device.
Further, in the present application, the first intelligent layer deployed in the distributed unit DU of the base station equipment and the second intelligent layer deployed in the centralized unit CU of the base station equipment can be acquired. In this manner, the base station equipment may have one or more CUs and one or more DUs, and one CU may be connected to one or more DUs. Alternatively,
the first intelligent layer deployed on a small base station device and the second intelligent layer deployed on a macro base station device can be acquired.
A small base station (SBS) is a base station with a small signal coverage radius that is suitable for precise coverage of a small area and can provide users with high-speed data services. A macro base station (MBS) is a base station with wide communication coverage, but the capacity that a single user can share is small, so it can only provide low-speed data services and communication services.
In this manner, both the MBS and the SBS include one or more CUs and DUs. In addition, one MBS can manage one or more SBSs.
In one manner, the embodiment of the present application is illustrated by taking a communication network architecture that includes three intelligent layers as an example (an end-to-end code sketch of this flow is given after Step 6):
Step 1: The first intelligent layer obtains the initial model parameters produced by the user equipment through model training and learning on local data. It should be noted that, in the embodiment of the present application, a high-level aggregator and a low-level aggregator may first be defined and a digital twin network may be constructed before this step.
Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all the received model parameter updates based on an aggregation criterion to obtain the first aggregated model parameters. The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 3: The second intelligent layer delivers the aggregated first aggregated model parameters to the first intelligent layers it manages and is connected to, completing one round of low-level federated learning.
Step 4: The above steps are repeated a first number of times until it is determined that the first-level aggregation is completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all the received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters. Likewise, the aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 5: The third intelligent layer delivers the aggregated second aggregated model parameters to the second intelligent layers it manages and is connected to, and the second intelligent layers deliver the aggregated second aggregated model parameters to the first intelligent layers they manage and are connected to, which selectively send the aggregated model parameters to the user equipment, completing one round of high-level federated learning.
Step 6: After receiving the aggregated model parameters, the user equipment uses the aggregated model parameters to train the initial learning model until it is determined that the trained service network model reaches a preset condition, at which point it is determined that the hierarchical federated learning model is generated, where the preset condition includes one of the following: training until the model converges, the number of training rounds reaching a maximum number of iterations, and the training duration reaching a maximum training time.
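The following end-to-end sketch ties Steps 1 to 6 together for the three-layer case: user equipments train locally, the second intelligent layers perform first-level aggregation over the updates forwarded by the first intelligent layers, and the third intelligent layer performs second-level aggregation. The scalar linear model, the synthetic data, the learning rate, and the stopping thresholds are illustrative assumptions rather than part of the claimed method.

    import random

    def local_train(params, data, lr=0.1):
        # One gradient step of a scalar linear model y = w * x on a user equipment's local data (Step 1).
        w = params[0]
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        return [w - lr * grad], len(data)

    def fed_avg(updates):
        # Weighted average of (parameters, sample count) pairs -- the assumed aggregation criterion.
        total = sum(n for _, n in updates)
        return [sum(n * p[0] for p, n in updates) / total]

    def hierarchical_fl(ue_data_per_cu, first_number=3, second_number=5, tol=1e-4):
        global_params = [0.0]                                    # initial learning model
        for _ in range(second_number):                           # high-level rounds (Steps 4-5)
            cu_aggregates = []
            for ue_datasets in ue_data_per_cu:                   # one second intelligent layer and its UEs
                cu_params = list(global_params)
                for _ in range(first_number):                    # low-level rounds (Steps 2-3)
                    updates = [local_train(cu_params, d) for d in ue_datasets]
                    cu_params = fed_avg(updates)                 # first-level aggregation
                cu_aggregates.append((cu_params, sum(len(d) for d in ue_datasets)))
            new_global = fed_avg(cu_aggregates)                  # second-level aggregation at the edge node
            if abs(new_global[0] - global_params[0]) < tol:
                return new_global                                # Step 6: preset condition (convergence)
            global_params = new_global
        return global_params                                     # Step 6: maximum number of rounds reached

    random.seed(0)
    data = [[[(x, 2.0 * x) for x in (random.random() for _ in range(8))] for _ in range(2)]
            for _ in range(2)]                                   # 2 CUs, 2 UEs each, true weight w = 2.0
    print(hierarchical_fl(data))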
In an optional manner, the embodiment of the present application is illustrated by taking the case in which the communication network architecture includes four intelligent layers and the user equipment performs the first service processing by using the hierarchical federated learning model as an example:
Step 1: The first intelligent layer obtains the initial model parameters produced by the user equipment through model training and learning on local data. It should be noted that, in the embodiment of the present application, a high-level aggregator and a low-level aggregator may first be defined and a digital twin network may be constructed before this step.
Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all the received model parameter updates based on an aggregation criterion to obtain the first aggregated model parameters. The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 3: The second intelligent layer delivers the aggregated first aggregated model parameters to the first intelligent layers it manages and is connected to, completing one round of low-level federated learning.
Step 4: The above steps are repeated a first number of times until it is determined that the first-level aggregation is completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all the received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters. Likewise, the aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 5: The third intelligent layer delivers the aggregated second aggregated model parameters to the second intelligent layers it manages and is connected to, and the second intelligent layers deliver them to the first intelligent layers they manage and are connected to, which subsequently send them to the user equipment, completing one round of high-level federated learning.
Step 6: The above steps are repeated a second number of times, and the third intelligent layer uploads the aggregated second aggregated model parameters to the fourth intelligent layer, which performs third-level aggregation on the second aggregated model parameters to obtain the third aggregated model parameters.
Step 7: The fourth intelligent layer delivers the aggregated third aggregated model parameters level by level down to the first intelligent layer; and the first intelligent layer selectively sends the third aggregated model parameters to the user equipment.
Step 8: After receiving the aggregated model parameters, the user equipment uses the aggregated model parameters to train the initial learning model until it is determined that the trained service network model reaches a preset condition, at which point it is determined that the hierarchical federated learning model is generated, where the preset condition includes one of the following: training until the model converges, the number of training rounds reaching a maximum number of iterations, and the training duration reaching a maximum training time (a small sketch of this check follows this step). It should be noted that the model training and the subsequent inference process can be performed either locally or within the digital twin network.
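The preset training condition mentioned in Step 8 can be checked with a small helper such as the one below; the tolerance, the maximum number of iterations, and the maximum training time are illustrative assumptions.

    import time

    def preset_condition_met(prev_loss, loss, iteration, start_time,
                             tol=1e-4, max_iterations=100, max_seconds=600.0):
        converged = prev_loss is not None and abs(prev_loss - loss) < tol
        too_many_iterations = iteration >= max_iterations
        out_of_time = (time.time() - start_time) > max_seconds
        return converged or too_many_iterations or out_of_time

    # Example: a training loop on the user equipment would call the helper after every round.
    start = time.time()
    prev = None
    for it, loss in enumerate([1.0, 0.5, 0.2, 0.19995]):
        if preset_condition_met(prev, loss, it, start):
            break
        prev = loss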
In another optional manner, the embodiment of the present application is illustrated by taking the case in which the communication network architecture includes four intelligent layers and the base station equipment performs the second service processing by using the hierarchical federated learning model as an example (a short sketch of the base-station-side use follows Step 8):
Step 1: The first intelligent layer (that is, the base station equipment) uses local data for model training and learning to produce the initial model parameters. It should be noted that, in the embodiment of the present application, a high-level aggregator and a low-level aggregator may first be defined and a digital twin network may be constructed before this step.
Step 2: The first intelligent layer uploads the initial model parameter updates to the second intelligent layer, and the second intelligent layer performs first-level aggregation on all the received model parameter updates based on an aggregation criterion to obtain the first aggregated model parameters. The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 3: The second intelligent layer delivers the aggregated first aggregated model parameters to the first intelligent layers it manages and is connected to, completing one round of low-level federated learning.
Step 4: The above steps are repeated a first number of times until it is determined that the first-level aggregation is completed. The second intelligent layer then uploads the aggregated first aggregated model parameters to the third intelligent layer, and the third intelligent layer performs second-level aggregation on all the received first aggregated model parameter updates based on the aggregation criterion to obtain the second aggregated model parameters. Likewise, the aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 5: The third intelligent layer delivers the aggregated second aggregated model parameters to the second intelligent layers it manages and is connected to, and the second intelligent layers deliver the aggregated second aggregated model parameters to the first intelligent layers they manage and are connected to, completing one round of high-level federated learning.
Step 6: The above steps are repeated a second number of times, and the third intelligent layer uploads the aggregated second aggregated model parameters to the fourth intelligent layer, which performs third-level aggregation on the second aggregated model parameters to obtain the third aggregated model parameters.
Step 7: The fourth intelligent layer delivers the aggregated third aggregated model parameters level by level down to the first intelligent layer.
Step 8: After receiving the aggregated model parameters, the first intelligent layer uses the aggregated model parameters to train the initial learning model until it is determined that the trained service network model reaches a preset condition, at which point it is determined that the hierarchical federated learning model is generated, where the preset condition includes one of the following: training until the model converges, the number of training rounds reaching a maximum number of iterations, and the training duration reaching a maximum training time. It should be noted that the model training and the subsequent inference process can be performed either locally or within the digital twin network.
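As a purely hypothetical example of the second service processing in this variant, the sketch below shows the first intelligent layer applying the trained model to an RRM-style task such as dynamic resource allocation. The linear scoring model and the proportional allocation rule are assumptions made for illustration and are not prescribed by the present application.

    def allocate_resource_blocks(trained_params, user_metrics, total_blocks=100):
        # trained_params: weights of a simple linear scoring model obtained from the federated training;
        # user_metrics: per-user features, e.g. (channel quality, buffered traffic).
        scores = [max(0.0, sum(w * m for w, m in zip(trained_params, metrics)))
                  for metrics in user_metrics]
        total = sum(scores) or 1.0
        return [round(total_blocks * s / total) for s in scores]

    print(allocate_resource_blocks([0.7, 0.3], [(0.9, 0.2), (0.4, 0.8), (0.6, 0.5)]))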
Optionally, in another implementation of the present application, as shown in Fig. 3, the present application further provides an apparatus for performing service processing by using a learning model, including:
an acquisition module configured to acquire a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
a generation module configured to generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
a processing module configured to perform service processing by using the hierarchical federated learning model.
In the present application, a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer can be acquired, where the first intelligent layer and the second intelligent layer are deployed in base station equipment and the third intelligent layer is deployed in an edge node; a hierarchical federated learning model deployed on the user equipment side or the base station equipment side is generated by using a hierarchical federated learning algorithm and the communication network architecture; and service processing is performed by using the hierarchical federated learning model. By applying the technical solution of the present application, the distributed unit DU and the centralized unit CU of the base station equipment can be used, together with the edge node, to form the communication network architecture, so that the communication network can subsequently be used to aggregate the model parameters of the device nodes in the network, and the aggregated model parameters can be used to construct a hierarchical federated learning model deployed on the user equipment side or the base station equipment side, thereby achieving the purpose of performing service processing by using the hierarchical federated learning model.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured to acquire a fourth intelligent layer deployed in a cloud server;
the acquisition module 201 being configured to perform function configuration on the fourth intelligent layer according to a preset configuration policy;
the acquisition module 201 being configured to, after it is detected that the function configuration of the fourth intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured to acquire initial model parameters by using each first intelligent layer, where the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training using local data;
the acquisition module 201 being configured to, after receiving, by using the second intelligent layer, the initial model parameters transmitted by each first intelligent layer, perform first-level aggregation on the initial model parameters to obtain first aggregated model parameters;
the acquisition module 201 being configured such that the second intelligent layer sends the first aggregated model parameters to the first intelligent layer and determines that the first-level aggregation is completed after a first number of times is reached.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured such that the first intelligent layer sends the first aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or
the acquisition module 201 being configured such that the first intelligent layer trains an initial learning model according to the first aggregated model parameters.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured such that each second intelligent layer sends the first aggregated model parameters to the third intelligent layer;
the acquisition module 201 being configured such that the third intelligent layer performs second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters;
the acquisition module 201 being configured such that the third intelligent layer sends the second aggregated model parameters to the second intelligent layer and determines that the second-level aggregation is completed after a second number of times is reached.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured such that the second intelligent layer sends the second aggregated model parameters to the first intelligent layer; and
the acquisition module 201 being configured such that the first intelligent layer sends the second aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or
the acquisition module 201 being configured such that the first intelligent layer trains an initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured to, if it is determined that a fourth intelligent layer exists, send, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters;
the acquisition module 201 being configured such that the fourth intelligent layer delivers the third aggregated model parameters level by level down to the first intelligent layer, so that the first intelligent layer trains an initial learning model according to the first aggregated model parameters; or
the acquisition module 201 being configured such that the first intelligent layer sends the third aggregated model parameters to the user equipment, so that the user equipment trains an initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured such that the user equipment performs first service processing by using the hierarchical federated learning model; or
the acquisition module 201 being configured such that the base station equipment performs second service processing by using the hierarchical federated learning model.
In another implementation of the present application, the acquisition module 201 further includes:
the acquisition module 201 being configured such that the first intelligent layer is deployed in a distributed unit DU of the base station equipment and the second intelligent layer is deployed in a centralized unit CU of the base station equipment; or
the acquisition module 201 being configured such that the first intelligent layer is deployed in a small base station device and the second intelligent layer is deployed in a macro base station device.
Fig. 4 is a block diagram of the logical structure of an electronic device according to an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, for example a memory including instructions, is further provided, where the instructions can be executed by a processor of the electronic device to complete the above method for performing service processing by using a learning model. The method includes: acquiring a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node; generating, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and performing service processing by using the hierarchical federated learning model. Optionally, the above instructions may also be executed by the processor of the electronic device to complete the other steps involved in the above exemplary embodiments. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, an application program/computer program product is further provided, including one or more instructions that can be executed by a processor of an electronic device to complete the above method for performing service processing by using a learning model. The method includes: acquiring a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, where the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node; generating, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and performing service processing by using the hierarchical federated learning model. Optionally, the above instructions may also be executed by the processor of the electronic device to complete the other steps involved in the above exemplary embodiments.
Fig. 4 is an example diagram of a computer device 30. Those skilled in the art can understand that the schematic diagram of Fig. 4 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30, which may include more or fewer components than shown, or combine certain components, or have different components; for example, the computer device 30 may further include input/output devices, network access devices, buses, and the like.
The processor 302 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 302 may be any conventional processor. The processor 302 is the control center of the computer device 30 and connects the various parts of the entire computer device 30 by using various interfaces and lines.
The memory 301 can be used to store computer-readable instructions 303, and the processor 302 implements the various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and by calling the data stored in the memory 301. The memory 301 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 30, and the like. In addition, the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, a read-only memory (ROM), a random access memory (RAM), or another non-volatile/volatile storage device.
If the modules integrated in the computer device 30 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention implements all or part of the processes in the methods of the above embodiments, and the processes may also be completed by instructing relevant hardware through computer-readable instructions; the computer-readable instructions may be stored in a computer-readable storage medium, and when executed by a processor, the computer-readable instructions can implement the steps of the above method embodiments.
Other embodiments of the present application will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the technical field that are not disclosed in the present application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structures that have been described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (12)

  1. A method for performing service processing by using a learning model, characterized by comprising:
    acquiring a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, wherein the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
    generating, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on a user equipment side or a base station equipment side; and
    performing service processing by using the hierarchical federated learning model.
  2. The method according to claim 1, characterized in that, after the acquiring the communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer, the method further comprises:
    acquiring a fourth intelligent layer deployed in a cloud server;
    performing function configuration on the fourth intelligent layer according to a preset configuration policy; and
    after it is detected that the function configuration of the fourth intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
  3. The method according to claim 1 or 2, characterized in that the generating, by using the hierarchical federated learning algorithm and the communication network architecture, the hierarchical federated learning model deployed on the user equipment side or the base station equipment side comprises:
    acquiring initial model parameters by using each first intelligent layer, wherein the initial model parameters are model parameters obtained by the user equipment or the first intelligent layer through model training using local data;
    after receiving, by using the second intelligent layer, the initial model parameters transmitted by each first intelligent layer, performing first-level aggregation on the initial model parameters to obtain first aggregated model parameters; and
    sending, by the second intelligent layer, the first aggregated model parameters to the first intelligent layer, and determining that the first-level aggregation is completed after a first number of times is reached.
  4. The method according to claim 3, characterized in that, after the determining that the first-level aggregation is completed after the first number of times is reached, the method comprises:
    sending, by the first intelligent layer, the first aggregated model parameters to user equipment, so that the user equipment trains an initial learning model according to the first aggregated model parameters; or
    training, by the first intelligent layer, an initial learning model according to the first aggregated model parameters.
  5. The method according to claim 3, characterized in that, after the determining that the first-level aggregation is completed, the method comprises:
    sending, by each second intelligent layer, the first aggregated model parameters to the third intelligent layer;
    performing, by the third intelligent layer, second-level aggregation on the first aggregated model parameters to obtain second aggregated model parameters; and
    sending, by the third intelligent layer, the second aggregated model parameters to the second intelligent layer, and determining that the second-level aggregation is completed after a second number of times is reached.
  6. The method according to claim 4, wherein after the determining that the second-level aggregation is completed after the second number of times is reached, the method comprises:
    sending, by the second intelligent layer, the second aggregated model parameters to the first intelligent layer; and
    sending, by the first intelligent layer, the second aggregated model parameters to the user equipment, so that the user equipment trains the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model; or
    training, by the first intelligent layer, the initial learning model according to the second aggregated model parameters to obtain the hierarchical federated learning model.
  7. The method according to claim 5, wherein after the sending, by the third intelligent layer, of the second aggregated model parameters to the second intelligent layer, the method comprises:
    if it is determined that a fourth intelligent layer exists, sending, by each third intelligent layer, the second aggregated model parameters to the fourth intelligent layer, so that the fourth intelligent layer performs third-level aggregation on the second aggregated model parameters to obtain third aggregated model parameters;
    delivering, by the fourth intelligent layer, the third aggregated model parameters level by level to the first intelligent layer, so that the first intelligent layer trains the initial learning model according to the first aggregated model parameters; or
    sending, by the first intelligent layer, the third aggregated model parameters to the user equipment, so that the user equipment trains the initial learning model according to the third aggregated model parameters to obtain the hierarchical federated learning model.
  8. The method according to claim 1, wherein the performing service processing by using the hierarchical federated learning model comprises:
    performing, by the user equipment, first service processing by using the hierarchical federated learning model; or
    performing, by the base station equipment, second service processing by using the hierarchical federated learning model.
  9. The method according to claim 1, wherein the deployment of the first intelligent layer and the second intelligent layer in base station equipment comprises:
    the first intelligent layer being deployed in a distributed unit (DU) of the base station equipment and the second intelligent layer being deployed in a centralized unit (CU) of the base station equipment; or
    the first intelligent layer being deployed in small base station equipment and the second intelligent layer being deployed in macro base station equipment.
  10. An apparatus for performing service processing by using a learning model, applied to base station equipment, the apparatus comprising:
    an acquisition module configured to acquire a communication network architecture composed of a first intelligent layer, a second intelligent layer, and a third intelligent layer, wherein the first intelligent layer and the second intelligent layer are deployed in base station equipment, and the third intelligent layer is deployed in an edge node;
    a generation module configured to generate, by using a hierarchical federated learning algorithm and the communication network architecture, a hierarchical federated learning model deployed on the user equipment side or the base station equipment side; and
    a processing module configured to perform service processing by using the hierarchical federated learning model.
  11. An electronic device, comprising:
    a memory for storing executable instructions; and
    a processor configured to execute the executable instructions with the memory to complete the operations of the method for performing service processing by using a learning model according to any one of claims 1-9.
  12. A computer-readable storage medium for storing computer-readable instructions, wherein when the instructions are executed, the operations of the method for performing service processing by using a learning model according to any one of claims 1-9 are performed.
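The claims above describe the layered architecture only in prose, and the application does not publish reference code. As an editorial aid, the following minimal Python sketch shows one way the three- or four-layer architecture of claims 1, 2, and 9 could be modelled; every class, field, and host name is illustrative and is not taken from the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntelligentLayer:
    name: str            # "first", "second", "third", or "fourth"
    host: str            # where the layer is deployed
    configured: bool = True

@dataclass
class NetworkArchitecture:
    layers: List[IntelligentLayer] = field(default_factory=list)

    def add_cloud_layer(self, strategy: dict) -> None:
        # Claim 2: acquire a fourth intelligent layer in a cloud server and
        # configure it according to a preset configuration strategy before
        # treating it as part of the architecture.
        fourth = IntelligentLayer("fourth", "cloud-server", configured=False)
        fourth.configured = bool(strategy)   # placeholder for the real strategy
        if fourth.configured:
            self.layers.append(fourth)

# Claim 9, first option: first layer in the DU, second layer in the CU,
# third layer in an edge node (claim 1).
arch = NetworkArchitecture(layers=[
    IntelligentLayer("first", "gNB-DU"),
    IntelligentLayer("second", "gNB-CU"),
    IntelligentLayer("third", "edge-node"),
])
arch.add_cloud_layer(strategy={"role": "global aggregation"})
```

The second option of claim 9 would simply use small and macro base station identifiers as the hosts of the first and second layers instead of the DU and CU of one base station.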
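Claims 3 and 4 specify that the second intelligent layer repeatedly aggregates the initial model parameters reported by the first intelligent layers and returns the result, but they do not fix the aggregation rule. The sketch below assumes a FedAvg-style weighted average and uses a stub stand-in for a first intelligent layer; all function and attribute names (weighted_average, local_update, num_samples, set_parameters) are assumptions made for illustration only.

```python
import numpy as np

def weighted_average(param_sets, weights):
    """FedAvg-style aggregation over lists of parameter tensors; the weights
    would typically be the local sample counts."""
    total = float(sum(weights))
    return [
        sum(w * ps[i] for w, ps in zip(weights, param_sets)) / total
        for i in range(len(param_sets[0]))
    ]

class FirstLayerStub:
    """Illustrative stand-in for a first intelligent layer (or the user
    equipment behind it); a real deployment would train an actual model."""
    def __init__(self, num_samples, shape=(4,)):
        self.num_samples = num_samples
        self.params = [np.zeros(shape)]

    def local_update(self, global_params):
        if global_params is not None:
            self.params = [p.copy() for p in global_params]
        # Pretend to take one local training step on local data.
        self.params = [p + 0.01 * np.random.randn(*p.shape) for p in self.params]
        return self.params

    def set_parameters(self, params):
        self.params = [p.copy() for p in params]

def first_level_aggregation(first_layers, k1, start=None):
    """Claims 3 and 4: the second intelligent layer aggregates the parameters
    reported by each first intelligent layer and sends the result back, for a
    first number (k1) of rounds."""
    aggregated = start
    for _ in range(k1):
        updates = [fl.local_update(aggregated) for fl in first_layers]
        weights = [fl.num_samples for fl in first_layers]
        aggregated = weighted_average(updates, weights)
        for fl in first_layers:
            fl.set_parameters(aggregated)    # claim 4: redistribute downwards
    return aggregated

first_params = first_level_aggregation(
    [FirstLayerStub(100), FirstLayerStub(200), FirstLayerStub(150)], k1=3)
```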
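Claims 5 to 7 repeat the same pattern one or two levels higher: the third intelligent layer (edge node) aggregates the first aggregated parameters for a second number of rounds, and, when a fourth intelligent layer exists, the cloud performs a third-level aggregation and delivers the result back level by level. A compressed sketch of those nested rounds, reusing weighted_average, FirstLayerStub, and first_level_aggregation from the previous sketch, with k1, k2, and k3 as illustrative round counts:

```python
def hierarchical_rounds(edge_groups, k1, k2, k3):
    """edge_groups: one entry per third intelligent layer (edge node); each
    entry is a list of first-layer groups, one group per second intelligent
    layer."""
    global_params = None
    for _ in range(k3):                                   # claim 7: cloud rounds
        edge_results, edge_weights = [], []
        for groups_under_edge in edge_groups:
            edge_params = global_params
            for _ in range(k2):                           # claim 5: edge rounds
                bs_results, bs_weights = [], []
                for first_layers in groups_under_edge:
                    # Claim 3: first-level aggregation at one second intelligent layer.
                    p = first_level_aggregation(first_layers, k1, start=edge_params)
                    bs_results.append(p)
                    bs_weights.append(sum(fl.num_samples for fl in first_layers))
                # Claim 5: second-level aggregation at the third intelligent layer.
                edge_params = weighted_average(bs_results, bs_weights)
            edge_results.append(edge_params)
            edge_weights.append(sum(fl.num_samples
                                    for g in groups_under_edge for fl in g))
        # Claim 7: third-level aggregation at the fourth intelligent layer; the
        # result is delivered back level by level at the start of the next round.
        global_params = weighted_average(edge_results, edge_weights)
    return global_params

final_params = hierarchical_rounds(
    edge_groups=[
        [[FirstLayerStub(100), FirstLayerStub(80)], [FirstLayerStub(120)]],
        [[FirstLayerStub(60), FirstLayerStub(90)]],
    ],
    k1=2, k2=2, k3=3)
```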
PCT/CN2022/119866 2021-11-29 2022-09-20 Method and apparatus for performing service processing by using learning model WO2023093238A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111437886.2 2021-11-29
CN202111437886.2A CN114302422A (en) 2021-11-29 2021-11-29 Method and device for processing business by using learning model

Publications (1)

Publication Number Publication Date
WO2023093238A1 (en)

Family

ID=80966195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/119866 WO2023093238A1 (en) 2021-11-29 2022-09-20 Method and apparatus for performing service processing by using learning model

Country Status (2)

Country Link
CN (1) CN114302422A (en)
WO (1) WO2023093238A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302422A (en) * 2021-11-29 2022-04-08 北京邮电大学 Method and device for processing business by using learning model
WO2024000438A1 (en) * 2022-06-30 2024-01-04 Shenzhen Tcl New Technology Co., Ltd. Communication device and method for determining post-processing based on artificial intelligence/machine learning
CN116996406B (en) * 2023-09-22 2024-02-02 山东未来互联科技有限公司 Provincial SDN backbone network networking-based data interaction management system and method

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11244242B2 (en) * 2018-09-07 2022-02-08 Intel Corporation Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (MEC) networks
GB2577055B (en) * 2018-09-11 2021-09-01 Samsung Electronics Co Ltd Improvements in and relating to telecommunication networks
US10797968B2 (en) * 2018-11-15 2020-10-06 Cisco Technology, Inc. Automated provisioning of radios in a virtual radio access network
US11562176B2 (en) * 2019-02-22 2023-01-24 Cisco Technology, Inc. IoT fog as distributed machine learning structure search platform
CN110891283A (en) * 2019-11-22 2020-03-17 超讯通信股份有限公司 Small base station monitoring device and method based on edge calculation model
CN111091200B (en) * 2019-12-20 2021-03-19 深圳前海微众银行股份有限公司 Updating method and system of training model, intelligent device, server and storage medium
CN111768008B (en) * 2020-06-30 2023-06-16 平安科技(深圳)有限公司 Federal learning method, apparatus, device, and storage medium
CN112181666B (en) * 2020-10-26 2023-09-01 华侨大学 Equipment assessment and federal learning importance aggregation method based on edge intelligence
CN113163409B (en) * 2021-03-16 2022-09-20 重庆邮电大学 Mobile edge computing service placement method based on artificial intelligence
CN113490184B (en) * 2021-05-10 2023-05-26 北京科技大学 Random access resource optimization method and device for intelligent factory
CN113238867B (en) * 2021-05-19 2024-01-19 浙江凡双科技股份有限公司 Federal learning method based on network unloading
CN113435604B (en) * 2021-06-16 2024-05-07 清华大学 Federal learning optimization method and device
CN113408746B (en) * 2021-06-22 2023-03-14 深圳大学 Distributed federal learning method and device based on block chain and terminal equipment
CN113537514B (en) * 2021-07-27 2023-07-25 北京邮电大学 Digital twinning-based federal learning framework with high energy efficiency

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210142223A1 (en) * 2019-11-07 2021-05-13 International Business Machines Corporation Hierarchical federated learning using access permissions
CN112804107A (en) * 2021-01-28 2021-05-14 南京邮电大学 Layered federal learning method for energy consumption adaptive control of equipment of Internet of things
CN113268920A (en) * 2021-05-11 2021-08-17 西安交通大学 Safe sharing method for sensing data of unmanned aerial vehicle cluster based on federal learning
CN113504999A (en) * 2021-08-05 2021-10-15 重庆大学 Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN114302421A (en) * 2021-11-29 2022-04-08 北京邮电大学 Method and device for generating communication network architecture, electronic equipment and medium
CN114302422A (en) * 2021-11-29 2022-04-08 北京邮电大学 Method and device for processing business by using learning model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU LUMIN; ZHANG JUN; SONG S.H.; LETAIEF KHALED B.: "Client-Edge-Cloud Hierarchical Federated Learning", ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), IEEE, 7 June 2020 (2020-06-07), pages 1 - 6, XP033797875, DOI: 10.1109/ICC40277.2020.9148862 *
XU SIYA; XING YIFEI; GUO SHAOYONG; YANG CHAO; QIU XUESONG; MENG LUOMING: "Deep Reinforcement Learning Based Task Allocation Mechanism for Intelligent Inspection in Energy Internet", JOURNAL ON COMMUNICATIONS, RENMIN YOUDIAN CHUBANSHE, BEIJING, CN, vol. 42, no. 5, 31 May 2021 (2021-05-31), CN , pages 191 - 204, XP009546447, ISSN: 1000-436X, DOI: 10.11959/j.issn.1000−436x.2021071 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117076132A (en) * 2023-10-12 2023-11-17 北京邮电大学 Resource allocation and aggregation optimization method and device for hierarchical federal learning system
CN117076132B (en) * 2023-10-12 2024-01-05 北京邮电大学 Resource allocation and aggregation optimization method and device for hierarchical federal learning system

Also Published As

Publication number Publication date
CN114302422A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
WO2023093238A1 (en) Method and apparatus for performing service processing by using learning model
Zhang et al. Task offloading in vehicular edge computing networks: A load-balancing solution
Zhang et al. A hierarchical game framework for resource management in fog computing
WO2023093235A1 (en) Communication network architecture generation method and apparatus, electronic device, and medium
US8260272B2 (en) Health-related opportunistic networking
CN113573331B (en) Communication method, device and system
CN105122772B (en) A kind of method and apparatus by head swap server state and client-side information
US9319852B2 (en) Interoperability and communications system dynamic media proxy based on capability negotiation
WO2019029268A1 (en) Method and device for deploying network slice
CN104486741B (en) The mixed serve that a kind of trust state perceives finds method
CN112650581A (en) Cloud-side cooperative task scheduling method for intelligent building
WO2022001941A1 (en) Network element management method, network management system, independent computing node, computer device, and storage medium
CN109391502A (en) A kind of information configuring methods and administrative unit
WO2018090191A1 (en) Management method, management unit and system for network function
Xiao et al. Optimizing resource-efficiency for federated edge intelligence in IoT networks
CN115552933A (en) Federal learning in a telecommunications system
Yu et al. Green fog computing resource allocation using joint benders decomposition, dinkelbach algorithm, and modified distributed inner convex approximation
Abkenar et al. Energy optimization in association-free fog-IoT networks
CN113315669B (en) Cloud edge cooperation-based throughput optimization machine learning inference task deployment method
CN114168293A (en) Hybrid architecture system and task scheduling method based on data transmission time consumption
Yuan et al. An A3C-based joint optimization offloading and migration algorithm for SD-WBANs
WO2023071616A1 (en) Service processing method and apparatus, electronic device, and medium
WO2021027842A1 (en) Method, device and system for implementing edge computing
CN112492591A (en) Method and device for accessing power Internet of things terminal to network
US20190149613A1 (en) Method, communication terminal, and communication node device for associating resources

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22897324

Country of ref document: EP

Kind code of ref document: A1