WO2023093235A1 - Method, apparatus, electronic device, and medium for generating a communication network architecture
- Publication number
- WO2023093235A1 (PCT/CN2022/119831)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- intelligent layer
- communication network
- intelligent
- network architecture
- layer
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
Definitions
- This application relates to data processing technology, and in particular to a method, an apparatus, an electronic device, and a medium for generating a communication network architecture.
- the existing communication network architecture follows a "cloud-edge-end" three-layer intelligent design, in which edge intelligence usually refers to edge servers that handle tasks such as computation on the user data plane. It does not, however, consider making the network control plane and management plane intelligent at the edge, nor does it fully reflect the intelligent capabilities of base station equipment.
- Embodiments of the present application provide a method, an apparatus, an electronic device, and a medium for generating a communication network architecture. According to one aspect of the embodiments, a method for generating a communication network architecture is provided, involving base station equipment and edge nodes, wherein:
- when it is detected that the functional configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
- acquiring the first intelligent layer and the second intelligent layer deployed on the base station equipment includes:
- a cloud server is also included, wherein:
- when it is detected that the functional configuration of the fourth intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
- detecting that the functional configuration of the fourth intelligent layer is completed includes:
- the fourth intelligent layer is configured to schedule service communications among the various sub-networks in the communication network architecture.
- performing functional configuration on each intelligent layer according to a preset configuration policy includes:
- Configuring the first intelligent layer to provide management services, business network model warehouse, business network model reasoning, database management, and security function services in the communication network architecture;
- an apparatus for generating a communication network architecture including a base station device and an edge node, wherein:
- An acquisition module configured to acquire the first intelligent layer and the second intelligent layer deployed in the base station device, and the third intelligent layer deployed in the edge node;
- the configuration module is configured to perform functional configuration on the first intelligent layer, the second intelligent layer, and the third intelligent layer step by step according to a preset configuration strategy;
- the generating module is configured to determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer after detecting that the function configuration of each intelligent layer is completed.
- an electronic device including:
- a memory configured to store executable instructions, and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of any one of the methods for generating the communication network architecture described above.
- a computer-readable storage medium which is used to store computer-readable instructions, and when the instructions are executed, the operations of any one of the methods for generating a communication network architecture described above are performed.
- the first intelligent layer and the second intelligent layer deployed in the base station equipment, and the third intelligent layer deployed in the edge node, can be obtained; functional configuration is performed on the first, second, and third intelligent layers according to the preset configuration strategy; and when it is detected that the functional configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of these intelligent layers.
- the distributed unit DU and the centralized unit CU of the base station equipment can be used to form a communication network architecture together with the edge nodes.
- the communication network architecture can aggregate the model parameters of each device node in the network to obtain aggregated model parameters, which can then be used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing on the user equipment side or the base station side.
- In this way, the purpose of optimizing service processing efficiency by utilizing the communication network architecture is achieved.
- FIG. 1 is a schematic diagram of a method for generating a communication network architecture proposed by the present application
- FIG. 2 is a schematic diagram of a system architecture applied to a communication network architecture proposed by the present application
- FIG. 3 to FIG. 6 are schematic diagrams of the configuration functions of each intelligent layer in a communication network architecture proposed by the present application.
- FIG. 7 is a schematic structural diagram of an apparatus for generating a communication network architecture proposed in the present application.
- FIG. 8 is a schematic structural diagram of an electronic device generated by the communication network architecture proposed in this application.
- A method for generating a communication network architecture according to an exemplary embodiment of the present application is described below with reference to FIGS. 1-6. It should be noted that the following application scenarios are shown only to ease understanding of the spirit and principle of the present application; the implementation manners of the present application are not limited in this respect. On the contrary, the embodiments of the present application can be applied to any applicable scene.
- the present application also proposes a method, device, base station equipment, and medium for generating a communication network architecture.
- Fig. 1 schematically shows a schematic flowchart of a method for generating a communication network architecture according to an embodiment of the present application.
- the method involves a base station device and an edge node, wherein:
- the existing communication network architecture follows a "cloud-edge-end" three-layer intelligent design, in which edge intelligence usually refers to edge servers that handle tasks such as computation on the user data plane. It does not, however, consider making the network control plane and management plane intelligent at the edge, nor does it fully reflect the intelligent capabilities of base station equipment.
- the first intelligent layer and the second intelligent layer mentioned in this application can be deployed in various ways; for example, the first intelligent layer can be deployed in the distributed unit DU of the base station equipment, and the second intelligent layer can be deployed in the centralized unit CU of the base station equipment.
- the first intelligent layer may be deployed on small base station equipment.
- the second intelligent layer can be deployed on the macro base station equipment.
- edge node in the embodiment of the present application may be an edge server, or may be an edge device such as an edge network element.
- As shown in FIG. 2, which is a system schematic diagram of the communication network architecture proposed in this application, the architecture includes the first intelligent layer deployed in the distributed unit DU of the base station equipment, the second intelligent layer deployed in the centralized unit CU of the base station equipment, and the third intelligent layer deployed in the edge node. In one implementation, a fourth intelligent layer deployed in a cloud server is also included.
- the fourth intelligent layer is a high-level management intelligent component, which is responsible for the management between various sub-networks.
- the third intelligent layer is the network-level intelligent orchestration component above the base stations, responsible for functional orchestration and management among base stations.
- the second intelligent layer is a centralized intelligent component inside the base station, responsible for intelligent enhancement and realization of traditional radio resource management (RRM).
- the first intelligent layer is a distributed intelligent component inside the base station, responsible for further optimizing parameters with short scheduling cycles.
- the first intelligent layer and the second intelligent layer deployed in the base station equipment, and the third intelligent layer deployed in the edge node, can be obtained; functional configuration is performed on the first, second, and third intelligent layers according to the preset configuration strategy; and when it is detected that the functional configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of these intelligent layers.
- the distributed unit DU and the centralized unit CU of the base station equipment can be used to form a communication network architecture together with the edge nodes.
- the communication network architecture can aggregate the model parameters of each device node in the network to obtain aggregated model parameters, which can then be used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing on the user equipment side or the base station side.
- In this way, the purpose of optimizing service processing efficiency by utilizing the communication network architecture is realized.
- obtaining the first intelligent layer and the second intelligent layer deployed on the base station equipment includes:
- the first intelligent layer deployed in small base station (SBS) equipment, and the second intelligent layer deployed in macro base station (MBS) equipment.
- Either an MBS or an SBS includes one or more CUs and DUs.
- one MBS can manage one or more SBSs.
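As a minimal illustration of this containment relationship, the topology can be sketched as a nested structure (all identifiers here are hypothetical, not taken from the application):

```python
# Hypothetical sketch of the deployment topology described above: one macro
# base station (MBS) manages one or more small base stations (SBS), and
# either kind of node contains one or more CUs and DUs.
topology = {
    "MBS-1": {
        "CUs": ["CU-1"],
        "DUs": ["DU-1", "DU-2"],
        "managed_SBSs": {
            "SBS-1": {"CUs": ["CU-2"], "DUs": ["DU-3"]},
            "SBS-2": {"CUs": ["CU-3"], "DUs": ["DU-4", "DU-5"]},
        },
    }
}

def count_dus(node):
    """Count the DUs in a base-station node and in all SBSs it manages."""
    total = len(node.get("DUs", []))
    for sbs in node.get("managed_SBSs", {}).values():
        total += count_dus(sbs)
    return total
```

Walking the structure recursively, `count_dus(topology["MBS-1"])` counts the two DUs of the MBS plus the three DUs of its managed SBSs.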
- a cloud server is also included, wherein:
- when it is detected that the functional configuration of the fourth intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
- the communication network architecture in this application may have a cloud server (i.e., the fourth intelligent layer), one or more edge nodes (i.e., the third intelligent layer), one or more CUs or macro base station intelligent components (i.e., the second intelligent layer), and one or more DUs or small base station intelligent components (i.e., the first intelligent layer).
- the communication architecture forms relatively closed domains at each horizontal communication level; since data within the same domain is not transmitted outside, privacy is preserved, and there is thus no direct communication between two nodes on the same horizontal level. The above-mentioned deployment method in the embodiment of the present application can therefore ensure privacy among multiple edge nodes, multiple CUs, and multiple DUs, while each node can still achieve cross-domain collaboration.
- any DU transmits non-original data (for example, learning model parameters) to the common upper-level node CU that the DUs share, and the upper-level CU forwards the received non-original data to the other DU, thereby indirectly realizing collaboration between the two.
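A minimal sketch of this indirect collaboration, assuming a hypothetical `CU` relay class (the class and method names are illustrative; the point is that only model parameters, never raw data, cross the domain boundary):

```python
class CU:
    """Common upper-level node that relays model parameters between its DUs."""

    def __init__(self):
        self.inbox = {}

    def receive(self, du_id, params):
        # DUs upload only non-original data (model parameters), not raw data.
        self.inbox[du_id] = params

    def relay(self, target_du_id):
        # Forward every other DU's parameters to the requesting DU.
        return {du: p for du, p in self.inbox.items() if du != target_du_id}


cu = CU()
cu.receive("DU-1", [0.1, 0.2])
cu.receive("DU-2", [0.3, 0.4])
shared = cu.relay("DU-2")  # DU-2 obtains DU-1's parameters via the CU
```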
- detecting that the functional configuration of the fourth intelligent layer is completed includes:
- the fourth intelligent layer is configured to schedule service communications among various sub-networks in the communication network architecture.
- performing functional configuration on each intelligent layer according to a preset configuration strategy includes:
- Configuring the first intelligent layer to provide management services, business network model warehouse, business network model reasoning, database management, and security function services in the communication network architecture;
- Configuring the second intelligent layer to provide third-party application function introduction and management, wireless connection management, mobility management, security functions, business network model warehouse, business network model reasoning, database management, and interface management services in the communication network architecture;
- Configuring the third intelligent layer to provide digital twin, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, business network model warehouse, business network model reasoning, database management, and interface management services in the communication network architecture; and,
- Configuring the fourth intelligent layer to provide business network model warehouse, business network model reasoning, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
- this application can build a network digital twin based on the four elements of data, model, mapping and interaction, so as to apply the digital twin function to the communication network architecture proposed by this application.
- the digital twin function creates a virtual image of physical network facilities, building a digital twin platform consistent with the real network on which network configurations can be experimented with and verified.
- the network twin thus built helps achieve low-cost trial and error.
- model training and model inference can be carried out in the twin network with low-cost trial and error, without affecting the real network.
- the determination to generate the communication network architecture includes:
- the central (centralized) server collects various distributed scattered data
- the central server will distribute learning tasks (and training data) to each distributed node;
- Each distributed node receives the assigned learning tasks (and training data) and starts learning;
- the central server merges the learning results of each node
- this application can adopt the communication network architecture constructed from multiple intelligent layers to aggregate the model parameters uploaded by each client device node and obtain aggregated model parameters, so that the aggregated model parameters can subsequently be used to perform hierarchical learning and training on the initial learning model, yielding a hierarchical federated learning model for service processing on the user equipment side or the base station side.
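As one illustration of such an aggregation step, a federated-averaging-style weighted mean over client parameter vectors might look like the following sketch (the function name and the choice of weighting by local dataset size are assumptions for illustration, not mandated by this application):

```python
def federated_average(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style criterion).

    client_params: list of parameter lists, one per client device node.
    client_sizes: local dataset size of each client, used as the weight.
    """
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]
```

For example, averaging `[1.0, 2.0]` (weight 1) with `[3.0, 4.0]` (weight 3) gives `[2.5, 3.5]`.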
- the fourth intelligent layer can provide the following public functions: business network model warehouse, business network model reasoning, database, computing power provision, service and policy management, security functions, interface management, and more.
- the third intelligent layer can provide the following public functions: digital twin, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, business network model warehouse, business network model reasoning, database, interface management, etc.
- the second intelligent layer can provide the following public functions: introduction and management of third-party application functions, wireless connection management, mobility management, security functions, business network model warehouse, business network model reasoning, database, interface management, etc.
- the first intelligent layer can provide the following public functions: management services, business network model warehouse, business network model reasoning, database, security functions, and so on.
- the deployment location of the fourth intelligent layer in the embodiment of the present application may be a cloud server, etc.; the deployment location of the third intelligent layer may include MBS, SBS, etc. in addition to edge nodes; The deployment location of the second intelligent layer includes SBS, CU, etc.; the deployment location of the first intelligent layer includes DU, etc.
- the communication network architecture proposed in this application can also be used to construct a hierarchical federated learning model deployed on the user equipment side.
- a communication network architecture including three intelligent layers is used as an example for illustration, with the following steps:
- Step 1101 Define the high-level aggregator, the low-level aggregator and build a digital twin network, and start the high-level aggregation iteration and the low-level aggregation iteration.
- in this specific embodiment, the third intelligent layer is used as the high-level aggregator and the second intelligent layer is used as the low-level aggregator.
- Step 1102 The distributed user equipment uses local data to perform model training and learning, so as to generate model parameter updates.
- Learning models include: input method prediction, handwritten digit recognition and other services that can be optimized by AI.
- Step 1103 The user equipment uploads the updated model parameters to the first intelligent layer, and the first intelligent layer continues to upload the updated model parameters to the second intelligent layer.
- the second intelligent layer aggregates all received model parameter updates based on the aggregation criterion to perform low-level aggregation of the business network model.
- Aggregation criteria include: hierarchical federated averaging algorithm and other algorithms or criteria that can be used for aggregation.
- Step 1104 The second intelligent layer sends the aggregated new model parameters to the first intelligent layer that manages the connection, and the first intelligent layer continues to deliver the new model parameters to the user equipment, completing a process of low-level federated learning.
- Step 1105 Repeat steps 1102-1104 until the low-level aggregation iterations are completed; the second intelligent layer then uploads the aggregated new model parameters to the third intelligent layer, and the third intelligent layer aggregates all received model parameters based on the aggregation criterion to perform high-level aggregation of the business network model.
- Aggregation criteria include: hierarchical federated averaging algorithm and other algorithms or criteria that can be used for aggregation.
- Step 1106 The third intelligent layer sends the aggregated new model parameters to the second intelligent layer that manages the connection, and the second intelligent layer sends the aggregated new model parameters to the first intelligent layer that manages the connection.
- the first intelligent layer continues to send the new model parameters to the user equipment to complete a high-level federated learning process.
- Step 1107 Repeat steps 1102 to 1106 until the performance of the global learning model meets preset conditions, where the preset conditions can include one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, and the training duration reaching the maximum training time.
- the model training and subsequent reasoning process can be performed locally or in the digital twin network.
- the data source of the intelligent layer can be the public data set used by the user-side AI business and the data that the user can transmit and provide; the data source of the user equipment is the user's own local data.
- when the hierarchical federated learning model is deployed on the user equipment, the low-level aggregator can be the second intelligent layer or the third intelligent layer, and the high-level aggregator may be the third intelligent layer or the fourth intelligent layer (if any).
- the high-level aggregation can be placed on the third intelligent layer or the fourth intelligent layer; when the low-level aggregation is placed on the third intelligent layer, the second intelligent layer transparently passes the local model parameter updates directly through to the third intelligent layer, and the high-level aggregation is placed on the fourth intelligent layer.
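The two-tier procedure of steps 1101-1107 can be sketched as follows. This is a toy illustration only: the local update rule is a random placeholder standing in for training on user data, the aggregation is a simple unweighted average, and all function names are hypothetical:

```python
import random

def local_update(params, lr=0.1):
    # Placeholder for local training on user data (step 1102).
    return [p - lr * random.uniform(-1.0, 1.0) for p in params]

def aggregate(param_sets):
    # Simple (unweighted) federated-averaging criterion.
    n = len(param_sets[0])
    return [sum(ps[i] for ps in param_sets) / len(param_sets) for i in range(n)]

def hierarchical_fl(global_params, n_groups=2, clients_per_group=3,
                    low_iters=2, high_iters=3):
    """Low-level aggregation per group (second intelligent layer), then
    high-level aggregation across groups (third intelligent layer)."""
    for _ in range(high_iters):                      # outer loop, steps 1102-1106
        group_results = []
        for _ in range(n_groups):
            params = list(global_params)
            for _ in range(low_iters):               # inner loop, steps 1102-1104
                updates = [local_update(params)
                           for _ in range(clients_per_group)]
                params = aggregate(updates)          # low-level aggregation
            group_results.append(params)
        global_params = aggregate(group_results)     # high-level aggregation (step 1105)
    return global_params
```

In a full implementation the fixed `high_iters` budget would be replaced by the preset stopping conditions named in step 1107 (convergence, maximum iterations, or maximum training time).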
- the communication network architecture proposed in this application can also be used to construct a hierarchical federated learning model deployed in the base station equipment (that is, the first intelligent layer).
- the communication network architecture including three intelligent layers is used as an example for illustration, with the following steps:
- Step 1201 Define the high-level aggregator, the low-level aggregator and build a digital twin network, and start the high-level aggregation iteration and the low-level aggregation iteration.
- in this specific embodiment, the third intelligent layer is used as the high-level aggregator and the second intelligent layer is used as the low-level aggregator.
- Step 1202 The first intelligent layer (ie, the base station device) uses local data to perform model training and learning, thereby generating model parameter updates.
- the learning model includes: MAC layer real-time scheduling, interference management, etc., which can be optimized by AI in the base station business.
- Step 1203 The first intelligent layer uploads model parameter updates to the second intelligent layer, wherein the second intelligent layer aggregates all received model parameter updates based on the aggregation criterion to perform low-level aggregation of the business network model.
- Aggregation criteria include: hierarchical federated averaging algorithm and other algorithms or criteria that can be used for aggregation.
- Step 1204 The second intelligent layer sends the aggregated new model parameters to the first intelligent layer that manages the connection, and completes a process of low-level federated learning.
- Step 1205 Repeat steps 1202 to 1204 until the low-level aggregation iterations are completed; the second intelligent layer then uploads the aggregated new model parameters to the third intelligent layer, and the third intelligent layer aggregates all received model parameters based on the aggregation criterion to perform high-level aggregation of the business network model.
- Aggregation criteria include: hierarchical federated averaging algorithm and other algorithms or criteria that can be used for aggregation.
- Step 1206 The third intelligent layer sends the aggregated new model parameters to the second intelligent layer that manages the connection, and the second intelligent layer sends them on to the first intelligent layer that manages the connection, so that the first intelligent layer completes a high-level federated learning process.
- Step 1207 Repeat steps 1202 to 1206 until the performance of the global learning model meets preset conditions, where the preset conditions can include one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, and the training duration reaching the maximum training time. It should be noted that the model training and subsequent reasoning process can be performed locally or in the digital twin network.
- the communication network architecture proposed in this application can also be used for internal planning and optimization of future networks, where the data source of the fourth intelligent layer (if any) can be the usable network data collected in the third intelligent layer; the data source of the third intelligent layer can be the usable base station data collected in the second intelligent layer; and the data source of the second intelligent layer can be independently collected internal base station data.
- the hierarchical federated learning model is used for internal planning and optimization of future networks
- local model training is deployed on the second intelligent layer
- low-level aggregation is deployed on the third intelligent layer
- high-level aggregation is deployed on the fourth intelligent layer.
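The deployment mapping just listed can be captured as a simple lookup table (a sketch; the key names are hypothetical):

```python
# Where each federated-learning role lives in the future-network
# planning/optimization scenario described above (hypothetical key names).
role_deployment = {
    "local_training": "second intelligent layer",        # base station data
    "low_level_aggregation": "third intelligent layer",  # edge node
    "high_level_aggregation": "fourth intelligent layer",  # cloud server
}
```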
- the present application further provides an apparatus for generating a communication network architecture.
- the apparatus for generating a communication network architecture involves base station equipment and edge nodes, wherein:
- the obtaining module 201 is configured to obtain the first intelligent layer and the second intelligent layer deployed in the base station device, and the third intelligent layer deployed in the edge node;
- the configuration module 202 is configured to perform functional configuration on the first intelligent layer, the second intelligent layer, and the third intelligent layer step by step according to a preset configuration strategy;
- the generating module 203 is configured to determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer and the third intelligent layer after detecting that the function configuration of each intelligent layer is completed.
- the first intelligent layer and the second intelligent layer deployed in the base station equipment, and the third intelligent layer deployed in the edge node, can be obtained; functional configuration is performed on the first, second, and third intelligent layers according to the preset configuration strategy; and when it is detected that the functional configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of these intelligent layers.
- the distributed unit DU and the centralized unit CU of the base station equipment can be used to form a communication network architecture together with the edge nodes.
- the communication network architecture can aggregate the model parameters of each device node in the network to obtain aggregated model parameters, which can then be used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing on the user equipment side or the base station side.
- In this way, the purpose of optimizing service processing efficiency by utilizing the communication network architecture is achieved.
- the acquiring module 201 further includes:
- the obtaining module 201 is configured to obtain the first intelligent layer deployed in the distributed unit DU of the base station device, and the second intelligent layer deployed in the centralized unit CU of the base station device; or,
- the acquiring module 201 is configured to acquire a first intelligent layer deployed in a small base station device and a second intelligent layer deployed in a macro base station device.
- the acquiring module 201 further includes:
- the obtaining module 201 is configured to obtain the fourth intelligent layer deployed in the cloud server;
- the acquisition module 201 is configured to perform functional configuration on the fourth intelligent layer according to a preset configuration policy
- the acquiring module 201 is configured to, when it is detected that the function configuration of the fourth intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
- the acquiring module 201 further includes:
- the obtaining module 201 is configured to configure the first intelligent layer for scheduling short-cycle service parameters in the communication network architecture;
- the obtaining module 201 is configured to configure the second intelligent layer for managing radio resource services in the communication network architecture;
- the obtaining module 201 is configured to configure the third intelligent layer for scheduling service communication between various base station devices in the communication network architecture;
- the acquiring module 201 is configured to configure the fourth intelligent layer to schedule service communication between various sub-networks in the communication network architecture.
- the acquiring module 201 further includes:
- the acquisition module 201 is configured to configure the first intelligent layer to provide management services, a service network model repository, service network model inference, database management, and security functions in the communication network architecture; and,
- the acquisition module 201 is configured to configure the second intelligent layer to provide a digital twin, third-party application function introduction and management, wireless connection management, mobility management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
- the acquisition module 201 is configured to configure the third intelligent layer to provide a digital twin, third-party application function introduction and management, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
- the acquisition module 201 is configured to configure the fourth intelligent layer to provide a digital twin, third-party application function introduction and management, a service network model repository, service network model inference, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
- the acquiring module 201 further includes:
- the obtaining module 201 is configured to use each intelligent layer of the communication network architecture to aggregate model parameters to obtain aggregated model parameters;
- the acquisition module 201 is configured to use the aggregated model parameters to perform hierarchical learning training on the initial learning model to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side.
- Fig. 8 is a block diagram showing a logic structure of an electronic device according to an exemplary embodiment.
- the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
- a non-transitory computer-readable storage medium including instructions, such as a memory including instructions, where the instructions can be executed by a processor of the electronic device to complete the above method for generating the communication network architecture, the method including: acquiring a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node; performing function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and, when it is detected that the function configuration of each intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
- the above instructions may also be executed by a processor of the electronic device to complete other steps involved in the above exemplary embodiments.
- the non-transitory computer readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
- an application program/computer program product including one or more instructions, which can be executed by a processor of an electronic device to complete the above method for generating a communication network architecture, the method including: acquiring a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node; performing function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and, when it is detected that the function configuration of each intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
- the above instructions may also be executed by a processor of the electronic device to complete other steps involved in the above exemplary embodiments.
- FIG. 8 is an example diagram of a computer device 30 .
- the schematic diagram in Fig. 8 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30, which may include more or fewer components than those shown in the figure, combine certain components, or use different components.
- the computer device 30 may also include an input and output device, a network access device, a bus, and the like.
- the so-called processor 302 may be a central processing unit (Central Processing Unit, CPU), and may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
- the general-purpose processor can be a microprocessor or the processor 302 can also be any conventional processor, etc.
- the processor 302 is the control center of the computer device 30 and uses various interfaces and lines to connect various parts of the entire computer device 30 .
- the memory 301 can be used to store computer-readable instructions 303 , and the processor 302 implements various functions of the computer device 30 by running or executing computer-readable instructions or modules stored in the memory 301 and calling data stored in the memory 301 .
- the memory 301 can mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.), and the data storage area can store data created according to the use of the computer device 30, and the like.
- the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), or other non-volatile/volatile storage devices.
- if the integrated modules of the computer device 30 are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the present invention can implement all or part of the processes in the methods of the above embodiments by using computer-readable instructions to instruct related hardware; the computer-readable instructions can be stored in a computer-readable storage medium, and when executed by a processor, the steps of the above method embodiments can be realized.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
This application discloses a method, apparatus, electronic device, and medium for generating a communication network architecture. By applying the technical solution of this application, the distributed unit (DU) and centralized unit (CU) of a base station device can be combined with edge nodes to form a communication network architecture. Subsequently, the model parameters of each device node in the communication network can be aggregated to obtain aggregated model parameters, which are then used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side, thereby achieving the purpose of using the communication network architecture to optimize service processing efficiency.
Description
This application relates to data processing technology, and in particular to a method, apparatus, electronic device, and medium for generating a communication network architecture.
The architecture of existing communication networks is a "cloud-edge-terminal" three-layer intelligent architecture, in which edge intelligence usually refers to edge servers used to handle tasks such as computation on the user data plane. However, it does not consider realizing intelligence of the network control plane and management plane at the edge. In addition, the existing network architecture does not fully reflect the intelligent characteristics of base station devices.
Therefore, how to design a communication network architecture that can make full use of each node device has become a problem to be solved by those skilled in the art.
Summary of the Invention
Embodiments of this application provide a method, apparatus, electronic device, and medium for generating a communication network architecture. According to one aspect of the embodiments of this application, a method for generating a communication network architecture is provided, involving a base station device and an edge node, wherein:
a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node, are acquired;
function configuration is performed level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy;
when it is detected that the function configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
Optionally, in another embodiment based on the above method of this application, acquiring the first intelligent layer and the second intelligent layer deployed in the base station device includes:
acquiring a first intelligent layer deployed in a distributed unit (DU) of the base station device and a second intelligent layer deployed in a centralized unit (CU) of the base station device; or,
acquiring a first intelligent layer deployed on a small base station device and a second intelligent layer deployed on a macro base station device.
Optionally, in another embodiment based on the above method of this application, a cloud server is further included, wherein:
a fourth intelligent layer deployed in the cloud server is acquired;
function configuration is performed on the fourth intelligent layer according to the preset configuration policy;
when it is detected that the function configuration of the fourth intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
Optionally, in another embodiment based on the above method of this application, the completion of the function configuration of the fourth intelligent layer includes:
configuring the first intelligent layer to schedule short-cycle service parameters in the communication network architecture; and,
configuring the second intelligent layer to manage radio resource services in the communication network architecture; and,
configuring the third intelligent layer to schedule service communication between base station devices in the communication network architecture; and,
configuring the fourth intelligent layer to schedule service communication between sub-networks in the communication network architecture.
Optionally, in another embodiment based on the above method of this application, performing function configuration on the fourth intelligent layer according to the preset configuration policy includes:
configuring the first intelligent layer to provide management services, a service network model repository, service network model inference, database management, and security functions in the communication network architecture; and,
configuring the second intelligent layer to provide a digital twin, third-party application function introduction and management, wireless connection management, mobility management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
configuring the third intelligent layer to provide a digital twin, third-party application function introduction and management, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
configuring the fourth intelligent layer to provide a digital twin, third-party application function introduction and management, a service network model repository, service network model inference, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
Optionally, in another embodiment based on the above method of this application, after determining to generate the communication network architecture, the method includes:
aggregating model parameters using each intelligent layer of the communication network architecture to obtain aggregated model parameters;
performing hierarchical learning and training on an initial learning model using the aggregated model parameters to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side.
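The aggregation step described above can be sketched as a weighted federated-averaging operation performed by an intelligent layer. This is a minimal illustration only; the function name and the dict-of-arrays parameter format are assumptions for the sketch, not the algorithm actually claimed by this application:

```python
import numpy as np

def federated_average(client_params, client_weights=None):
    """Aggregate per-client model parameters into one set of aggregated parameters.

    client_params: list of dicts mapping parameter name -> np.ndarray.
    client_weights: optional per-client weights (e.g. local sample counts);
    defaults to a plain unweighted average. (Illustrative sketch only.)
    """
    n = len(client_params)
    if client_weights is None:
        client_weights = [1.0] * n
    total = float(sum(client_weights))
    aggregated = {}
    for name in client_params[0]:
        # Weighted average of the same-named parameter across all clients.
        aggregated[name] = sum(
            (w / total) * params[name]
            for w, params in zip(client_weights, client_params)
        )
    return aggregated

# Two device nodes upload parameters; the intelligent layer averages them.
a = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
b = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
g = federated_average([a, b])
```

The aggregated parameters `g` would then be pushed back down the hierarchy to continue the hierarchical learning and training.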
According to yet another aspect of the embodiments of this application, an apparatus for generating a communication network architecture is provided, involving a base station device and an edge node, wherein:
an acquiring module, configured to acquire a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node;
a configuration module, configured to perform function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy;
a generation module, configured to, when it is detected that the function configuration of each intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
According to yet another aspect of the embodiments of this application, an electronic device is provided, including:
a memory for storing executable instructions; and
a display for cooperating with the memory to execute the executable instructions so as to complete the operations of any one of the above methods for generating a communication network architecture.
According to still another aspect of the embodiments of this application, a computer-readable storage medium is provided for storing computer-readable instructions, wherein, when executed, the instructions perform the operations of any one of the above methods for generating a communication network architecture.
In this application, a first intelligent layer and a second intelligent layer deployed in a base station device, and a third intelligent layer deployed in an edge node, can be acquired; function configuration is performed level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and when it is detected that the function configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer. By applying the technical solution of this application, the distributed unit (DU) and centralized unit (CU) of the base station device can be combined with the edge node to form a communication network architecture. Subsequently, the model parameters of each device node in the communication network can be aggregated to obtain aggregated model parameters, which are then used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side, thereby achieving the purpose of using the communication network architecture to optimize service processing efficiency.
The technical solution of this application is described in further detail below with reference to the accompanying drawings and embodiments.
The accompanying drawings, which constitute a part of the specification, describe embodiments of this application and, together with the description, serve to explain the principles of this application.
This application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a method for generating a communication network architecture proposed in this application;
Fig. 2 is a schematic diagram of a system architecture applied to the communication network architecture proposed in this application;
Fig. 3 to Fig. 6 are schematic diagrams of the functions configured for each intelligent layer in a communication network architecture proposed in this application;
Fig. 7 is a schematic structural diagram of an electronic apparatus for generating a communication network architecture proposed in this application;
Fig. 8 is a schematic structural diagram of an electronic device for generating a communication network architecture proposed in this application.
Various exemplary embodiments of this application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of this application.
Meanwhile, it should be understood that, for convenience of description, the dimensions of the parts shown in the accompanying drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit this application or its application or use.
Technologies, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such technologies, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following accompanying drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
In addition, the technical solutions of the various embodiments of this application can be combined with each other, provided that such combinations can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, it should be considered that such a combination does not exist and is not within the scope of protection claimed by this application.
It should be noted that all directional indications (such as up, down, left, right, front, back, etc.) in the embodiments of this application are only used to explain the relative positional relationship, movement, and the like of the components in a specific posture (as shown in the accompanying drawings); if the specific posture changes, the directional indication changes accordingly.
A method for generating a communication network architecture according to exemplary embodiments of this application is described below with reference to Fig. 1 to Fig. 6. It should be noted that the following application scenarios are shown only to facilitate understanding of the spirit and principles of this application, and the embodiments of this application are not limited in this respect. Rather, the embodiments of this application can be applied to any applicable scenario.
This application further proposes a method, apparatus, base station device, and medium for generating a communication network architecture.
Fig. 1 schematically shows a flowchart of a method for generating a communication network architecture according to an embodiment of this application. As shown in Fig. 1, the method involves a base station device and an edge node, wherein:
S101: acquiring a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node.
In one approach, the existing communication network architecture refers to a "cloud-edge-terminal" three-layer intelligent architecture, in which edge intelligence usually refers to edge servers used to handle tasks such as computation on the user data plane. However, it does not consider realizing intelligence of the network control plane and management plane at the edge. In addition, the existing network architecture does not fully reflect the intelligent characteristics of base station devices.
Further, the first intelligent layer and the second intelligent layer mentioned in this application can be deployed in multiple ways; for example, the first intelligent layer can be deployed in the distributed unit (DU) of the base station device, and the second intelligent layer can be deployed in the centralized unit (CU) of the base station device.
In another approach, the first intelligent layer can be deployed on a small base station device, and the second intelligent layer can be deployed on a macro base station device.
It should also be noted that the edge node in the embodiments of this application may be an edge server, or an edge device such as an edge network element.
S102: performing function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy.
S103: when it is detected that the function configuration of each intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
Fig. 2 is a schematic diagram of a communication network architecture system proposed in this application, which includes a first intelligent layer deployed in the distributed unit (DU) of the base station device, a second intelligent layer deployed in the centralized unit (CU) of the base station device, and a third intelligent layer deployed in the edge node. In one approach, a fourth intelligent layer deployed in a cloud server is further included.
The fourth intelligent layer is a high-level management intelligence component responsible for management among the sub-networks. The third intelligent layer is a network intelligence orchestration component above the base stations, responsible for function orchestration and management among the base stations. The second intelligent layer is a centralized intelligence component inside the base station, responsible for intelligent enhancement and implementation of traditional radio resource management (RRM). The first intelligent layer is a distributed intelligence component inside the base station, responsible for further optimization of parameters with short scheduling cycles.
In this application, a first intelligent layer and a second intelligent layer deployed in a base station device, and a third intelligent layer deployed in an edge node, can be acquired; function configuration is performed level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and when it is detected that the function configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer. By applying the technical solution of this application, the distributed unit (DU) and centralized unit (CU) of the base station device can be combined with the edge node to form a communication network architecture. Subsequently, the model parameters of each device node in the communication network can be aggregated to obtain aggregated model parameters, which are then used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side, thereby achieving the purpose of using the communication network architecture to optimize service processing efficiency.
Optionally, in a possible embodiment of this application, acquiring the first intelligent layer and the second intelligent layer deployed in the base station device includes:
acquiring a first intelligent layer deployed in the distributed unit (DU) of the base station device and a second intelligent layer deployed in the centralized unit (CU) of the base station device; in this approach, the base station device may have one or more CUs and one or more DUs, and one CU may be connected to one or more DUs;
or,
acquiring a first intelligent layer deployed on a small base station device and a second intelligent layer deployed on a macro base station device.
A small base station (SBS) device is a base station with a small signal coverage radius, suitable for precise coverage of a small area, and can provide users with high-speed data services. A macro base station (MBS) device is a base station with wide communication coverage, but the capacity that a single user can share is small, so it can only provide low-speed data services and communication services.
In this approach, both the MBS and the SBS include one or more CUs and DUs. In addition, one MBS can manage one or more SBSs.
Optionally, in a possible embodiment of this application, a cloud server is further included, wherein:
a fourth intelligent layer deployed in the cloud server is acquired;
function configuration is performed on the fourth intelligent layer according to the preset configuration policy;
when it is detected that the function configuration of the fourth intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
It should be noted that the communication network architecture in this application may have one cloud server (i.e., the fourth intelligent layer), one or more edge nodes (i.e., the third intelligent layer), one or more CU or macro base station intelligence components (i.e., the second intelligent layer), and one or more DU/small base station intelligence components (i.e., the first intelligent layer).
In one approach, due to differences in the deployment locations of the nodes in the communication architecture, special service requirements, and other factors, the communication architecture forms relatively closed domains on a horizontal communication level; and because data within the same domain is not transmitted externally for privacy reasons, two nodes on the same horizontal level cannot communicate directly. Therefore, the above deployment manner in the embodiments of this application is applicable to scenarios where privacy must be guaranteed among multiple edge nodes, multiple CUs, and multiple DUs, so that each node can achieve cross-domain collaboration through the scheduling of their common superior node.
Taking a communication architecture including two DU nodes located in different domains as an example: when data transmission is required between the two cross-domain DU nodes, although they cannot communicate directly, through the communication network architecture constructed in this application, either DU can transmit non-raw data (for example, learning model parameters) to their common superior node CU in a hierarchical federated learning manner, so that the superior node CU aggregates the received non-raw data and then transmits the data to the other DU, thereby indirectly realizing collaboration between the two.
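The cross-domain relay just described can be sketched as follows: two DUs never exchange raw data, and only aggregated model parameters pass through their common parent CU. The class and function names (`IntelligentNode`, `relay_through_parent`) and the plain-mean aggregation are illustrative assumptions, not the application's specified mechanism:

```python
import numpy as np

class IntelligentNode:
    """A node in the architecture holding local model parameters (illustrative sketch)."""
    def __init__(self, name, params):
        self.name = name
        self.params = np.asarray(params, dtype=float)

def relay_through_parent(du_nodes):
    """Cross-domain cooperation without direct DU-to-DU communication.

    Each DU uploads only model parameters (non-raw data) to the common
    parent CU, which aggregates them and pushes the result back down
    to every DU in every domain.
    """
    aggregated = np.mean([du.params for du in du_nodes], axis=0)  # CU-side aggregation
    for du in du_nodes:
        du.params = aggregated.copy()  # downlink delivery to each domain
    return aggregated

du_a = IntelligentNode("DU-domain-A", [0.2, 0.8])
du_b = IntelligentNode("DU-domain-B", [0.6, 0.4])
relay_through_parent([du_a, du_b])
```

After the relay, both DUs hold the same aggregated parameters even though neither ever saw the other's local data, which is the privacy property the paragraph above relies on.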
Optionally, in a possible embodiment of this application, the completion of the function configuration of the fourth intelligent layer includes:
configuring the first intelligent layer to schedule short-cycle service parameters in the communication network architecture; and,
configuring the second intelligent layer to manage radio resource services in the communication network architecture; and,
configuring the third intelligent layer to schedule service communication between base station devices in the communication network architecture; and,
configuring the fourth intelligent layer to schedule service communication between sub-networks in the communication network architecture.
Optionally, in a possible embodiment of this application, performing function configuration on the fourth intelligent layer according to the preset configuration policy includes:
configuring the first intelligent layer to provide management services, a service network model repository, service network model inference, database management, and security functions in the communication network architecture; and,
configuring the second intelligent layer to provide third-party application function introduction and management, wireless connection management, mobility management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
configuring the third intelligent layer to provide a digital twin, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
configuring the fourth intelligent layer to provide a service network model repository, service network model inference, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
It can be understood that, since the network needs to be highly reliable, the live network environment is difficult to use directly for research on network innovation technologies. However, research based only on offline simulation platforms greatly affects the validity of the results, which leads to long development cycles and difficult deployment of new network technologies. The cloudification of network resources, the on-demand design of services, the orchestration of resources, and the like place unprecedented pressure on network operation and maintenance. Due to the lack of an effective virtual verification platform, network optimization operations have to act directly on the live network infrastructure, resulting in long time consumption and high risks to services running on the live network, thereby increasing network operating costs and operational risks.
Therefore, this application can construct a network digital twin based on four elements: data, model, mapping, and interaction, so as to apply the digital twin function to the communication network architecture proposed in this application. Specifically, the digital twin function creates a virtual mirror of the physical network facilities and builds a digital twin platform consistent with the real network for experimenting with and verifying network configurations; the constructed network twin can help realize low-cost trial and error.
Specifically, for the purpose of promoting an intelligent decision-making model in a real network, the model training and model inference processes of machine learning are inevitably required. However, since model accuracy cannot be guaranteed, using a digital twin network that maps/mirrors the real network in the digital world for model training, model inference, and subsequent function deployment is an optional implementation. In addition, performing model training and model inference in the twin network allows low-cost trial and error without affecting the real network.
Optionally, in a possible embodiment of this application, after determining to generate the communication network architecture, the method includes:
aggregating model parameters using each intelligent layer of the communication network architecture to obtain aggregated model parameters;
performing hierarchical learning and training on an initial learning model using the aggregated model parameters to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side.
Further, the flow of a traditional distributed machine learning model usually includes the following steps:
1. The central (centralized) server collects and merges the scattered data from each distributed node;
2. After merging, the central server assigns learning tasks (and training data) to each distributed node;
3. Each distributed node receives its assigned learning task (and training data) and starts learning;
4. When learning ends, each distributed node returns its learning result to the central server;
5. The central server merges the learning results of all nodes;
6. Steps 3 to 5 are repeated until the merged learning result reaches a preset training condition, where the preset condition includes one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, or the training duration reaching the maximum training time.
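The six steps above can be sketched as a single loop. This is a runnable illustration, not the application's method: the least-squares "learning task" and the fixed learning rate are assumptions chosen only so the loop executes, while the three stopping rules mirror the preset conditions named in step 6:

```python
import time
import numpy as np

def central_server_training(nodes_data, max_rounds=100, max_seconds=60.0, tol=1e-6):
    """Sketch of the traditional centralized distributed-learning flow (steps 1-6).

    nodes_data: list of (X, y) local datasets already collected by the
    central server (step 1). Stops on convergence, on reaching the maximum
    number of iterations, or on exceeding the maximum training time.
    """
    start = time.time()
    model = np.zeros(nodes_data[0][0].shape[1])
    for round_idx in range(max_rounds):                      # step 6: repeat steps 3-5
        results = []
        for X, y in nodes_data:                              # steps 2-3: each node runs its task
            grad = X.T @ (X @ model - y) / len(y)            # one local learning step
            results.append(model - 0.1 * grad)               # step 4: node returns its result
        new_model = np.mean(results, axis=0)                 # step 5: server merges the results
        if np.linalg.norm(new_model - model) < tol:          # preset condition: convergence
            return new_model
        model = new_model
        if time.time() - start > max_seconds:                # preset condition: max training time
            break
    return model
```

Note that in this classic flow the raw data itself travels to the central server in step 1, which is exactly the transmission-pressure and privacy drawback the following paragraph raises.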
However, the traditional distributed machine learning approach in the related art does not consider the enormous transmission pressure that large amounts of data place on the wireless link, nor does it consider the data privacy problems caused by direct data transmission from distributed nodes. Therefore, this application can use the communication network architecture constructed from multiple intelligent layers to aggregate the model parameters uploaded by each client device node to obtain aggregated model parameters, so that the aggregated model parameters are subsequently used to perform hierarchical learning and training on the initial learning model, obtaining a hierarchical federated learning model for service processing at the user equipment side or the base station side.
In one approach, as shown in Fig. 3, the fourth intelligent layer can provide the following common functions: a service network model repository, service network model inference, a database, computing power provision, service and policy management, security functions, interface management, and so on.
In one approach, as shown in Fig. 4, the third intelligent layer can provide the following common functions: a digital twin, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, a database, interface management, and so on.
In one approach, as shown in Fig. 5, the second intelligent layer can provide the following common functions: third-party application function introduction and management, wireless connection management, mobility management, security functions, a service network model repository, service network model inference, a database, interface management, and so on.
In one approach, as shown in Fig. 6, the first intelligent layer can provide the following common functions: management services, a service network model repository, service network model inference, a database, security functions, and so on.
It should be noted that in the embodiments of this application, the fourth intelligent layer can be deployed on a cloud server and the like; the third intelligent layer can be deployed on an MBS, an SBS, and the like in addition to edge nodes; the second intelligent layer can be deployed on an SBS, a CU, and the like; and the first intelligent layer can be deployed on a DU and the like.
Further, in one embodiment, the communication network architecture proposed in this application can also be used to construct a hierarchical federated learning model deployed on the user equipment side. A communication network architecture including three intelligent layers is taken as an example, including:
Step 1101: defining a high-level aggregator and a low-level aggregator, constructing a digital twin network, and starting high-level aggregation iterations and low-level aggregation iterations. In this embodiment, the third intelligent layer serves as the high-level aggregator and the second intelligent layer serves as the low-level aggregator.
Step 1102: the distributed user equipment performs model training and learning using local data, thereby generating model parameter updates.
The learning models include services that can be optimized by AI, such as input-method prediction and handwritten digit recognition.
Step 1103: the user equipment uploads the model parameter updates to the first intelligent layer, and the first intelligent layer continues to upload the model parameter updates to the second intelligent layer. The second intelligent layer performs low-level aggregation of the service network model on all received model parameter updates based on an aggregation criterion.
The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 1104: the second intelligent layer delivers the aggregated new model parameters to the first intelligent layers under its management, and the first intelligent layers continue to deliver the new model parameters to the user equipment, completing one round of low-level federated learning.
Step 1105: steps 1102 to 1104 are repeated until the low-level aggregation iterations are completed; the second intelligent layer then uploads the aggregated new model parameters to the third intelligent layer, and the third intelligent layer performs high-level aggregation of the service network model on all received model parameter updates based on the aggregation criterion.
The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 1106: the third intelligent layer delivers the aggregated new model parameters to the second intelligent layers under its management, the second intelligent layers deliver them to the first intelligent layers under their management, and the first intelligent layers continue to deliver the new model parameters to the user equipment, completing one round of high-level federated learning.
Step 1107: steps 1102 to 1106 are repeated until the performance of the global learning model reaches a preset condition, where the preset condition can include one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, or the training duration reaching the maximum training time. It should be noted that model training and the subsequent inference process can be performed locally or within the digital twin network.
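Steps 1101 to 1107 can be sketched as a two-level federated-averaging loop. This is an illustrative sketch under stated assumptions: the gradient-descent local update, the fixed iteration counts, and the function names are not taken from the application, and plain averaging stands in for the hierarchical federated averaging criterion:

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One round of local training on a user device (step 1102)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def hierarchical_fedavg(groups, w0, low_iters=5, high_iters=10):
    """Two-level federated averaging, a sketch of steps 1101-1107.

    groups: list of low-aggregator-managed groups, each a list of per-UE
    (X, y) local datasets. Every low-level round the second intelligent
    layer averages its own UEs' updates (steps 1103-1104); after
    `low_iters` rounds the third intelligent layer averages the group
    models and broadcasts the result back down (steps 1105-1106).
    """
    global_w = np.asarray(w0, dtype=float)
    for _ in range(high_iters):                       # high-level aggregation iterations
        group_models = []
        for ue_datasets in groups:                    # one low-level aggregator per group
            ue_models = [global_w.copy() for _ in ue_datasets]
            for _ in range(low_iters):                # low-level federated rounds
                ue_models = [local_update(w, X, y)
                             for w, (X, y) in zip(ue_models, ue_datasets)]
                low_agg = np.mean(ue_models, axis=0)  # low-level aggregation (step 1103)
                ue_models = [low_agg.copy() for _ in ue_models]  # downlink (step 1104)
            group_models.append(low_agg)
        global_w = np.mean(group_models, axis=0)      # high-level aggregation (step 1105)
    return global_w                                   # broadcast down again (step 1106)
```

Only model parameters cross the layer boundaries in this loop; each UE's raw training data never leaves the device, which is the federated-learning property the steps above depend on.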
It should be noted that, for a hierarchical federated learning model deployed on the user equipment side and constructed using the communication network architecture proposed in this application, when it is used for service optimization on the user side of future networks, the data sources of the second, third (and fourth, if present) intelligent layers can be public datasets used by user-side AI services and the data that users can transmit and provide; the data source of the user equipment is the user's own local data.
In addition, when the hierarchical federated learning model constructed using the communication network architecture proposed in this application is deployed on user equipment for AI service optimization on the user side of future networks, the low-level aggregator can be the second intelligent layer or the third intelligent layer, and the high-level aggregator can be the third intelligent layer or the fourth intelligent layer (if present). Correspondingly, when low-level aggregation is placed at the second intelligent layer, high-level aggregation can be placed at either the third or the fourth intelligent layer; when low-level aggregation is placed at the third intelligent layer, the second intelligent layer transparently forwards local model parameter updates directly to the third intelligent layer, and high-level aggregation is placed at the fourth intelligent layer.
Further, in one embodiment, the communication network architecture proposed in this application can also be used to construct a hierarchical federated learning model deployed in base station devices (i.e., the first intelligent layer). A communication network architecture including three intelligent layers is taken as an example, including:
Step 1201: defining a high-level aggregator and a low-level aggregator, constructing a digital twin network, and starting high-level aggregation iterations and low-level aggregation iterations. In this embodiment, the third intelligent layer serves as the high-level aggregator and the second intelligent layer serves as the low-level aggregator.
Step 1202: the first intelligent layer (i.e., the base station device) performs model training and learning using local data, thereby generating model parameter updates.
The learning models include intra-base-station services that can be optimized by AI, such as MAC-layer real-time scheduling and interference management.
Step 1203: the first intelligent layer uploads the model parameter updates to the second intelligent layer, and the second intelligent layer performs low-level aggregation of the service network model on all received model parameter updates based on an aggregation criterion.
The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 1204: the second intelligent layer delivers the aggregated new model parameters to the first intelligent layers under its management, completing one round of low-level federated learning.
Step 1205: steps 1202 to 1204 are repeated until the low-level aggregation iterations are completed; the second intelligent layer then uploads the aggregated new model parameters to the third intelligent layer, and the third intelligent layer performs high-level aggregation of the service network model on all received model parameter updates based on the aggregation criterion.
The aggregation criterion includes algorithms or criteria that can be used for aggregation, such as the hierarchical federated averaging algorithm.
Step 1206: the third intelligent layer delivers the aggregated new model parameters to the second intelligent layers under its management, and the second intelligent layers deliver the aggregated new model parameters to the first intelligent layers under their management, so that the first intelligent layers complete one round of high-level federated learning.
Step 1207: steps 1202 to 1206 are repeated until the performance of the global learning model reaches a preset condition, where the preset condition can include one of: training until the model converges, the number of training rounds reaching the maximum number of iterations, or the training duration reaching the maximum training time. It should be noted that model training and the subsequent inference process can be performed locally or within the digital twin network.
It should be noted that, for a hierarchical federated learning model deployed at the base station device side and constructed using the communication network architecture proposed in this application, when it is used for internal planning and optimization of future networks, the data source of the fourth intelligent layer (if present) can be the usable network data collected and provided by the third intelligent layer; the data source of the third intelligent layer can be the usable base station data collected and provided by the second intelligent layer; and the data source of the second intelligent layer can be independently collected internal base station data.
In addition, when this hierarchical federated learning model is used for internal planning and optimization of future networks, local model training is deployed on the second intelligent layer, low-level aggregation is deployed on the third intelligent layer, and high-level aggregation is deployed on the fourth intelligent layer.
It can be understood that, with the arrival of the big data era, a large amount of data will inevitably be generated in the network; however, in most industries, due to problems such as industry competition and complex procedures, data often exists in the form of isolated islands. By using the training method proposed in this application, which constructs a hierarchical federated learning model deployed at the user equipment side or the base station device side using the communication network architecture, data from various fields scattered across the device nodes in the network can be collected and integrated, thereby avoiding the drawback in the related art of facing scattered data and training network models with data in isolated-island form.
Optionally, in another embodiment of this application, as shown in Fig. 7, this application further provides an apparatus for generating a communication network architecture, involving a base station device and an edge node, wherein:
an acquiring module 201, configured to acquire a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node;
a configuration module 202, configured to perform function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy;
a generation module 203, configured to, when it is detected that the function configuration of each intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
In this application, a first intelligent layer and a second intelligent layer deployed in a base station device, and a third intelligent layer deployed in an edge node, can be acquired; function configuration is performed level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and when it is detected that the function configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer. By applying the technical solution of this application, the distributed unit (DU) and centralized unit (CU) of the base station device can be combined with the edge node to form a communication network architecture. Subsequently, the model parameters of each device node in the communication network can be aggregated to obtain aggregated model parameters, which are then used for hierarchical learning and training to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side, thereby achieving the purpose of using the communication network architecture to optimize service processing efficiency.
In another embodiment of this application, the acquiring module 201 further includes:
the acquiring module 201, configured to acquire a first intelligent layer deployed in the distributed unit (DU) of the base station device and a second intelligent layer deployed in the centralized unit (CU) of the base station device; or,
the acquiring module 201, configured to acquire a first intelligent layer deployed on a small base station device and a second intelligent layer deployed on a macro base station device.
In another embodiment of this application, the acquiring module 201 further includes:
the acquiring module 201, configured to acquire a fourth intelligent layer deployed in the cloud server;
the acquiring module 201, configured to perform function configuration on the fourth intelligent layer according to the preset configuration policy;
the acquiring module 201, configured to, when it is detected that the function configuration of the fourth intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
In another embodiment of this application, the acquiring module 201 further includes:
the acquiring module 201, configured to configure the first intelligent layer to schedule short-cycle service parameters in the communication network architecture; and,
the acquiring module 201, configured to configure the second intelligent layer to manage radio resource services in the communication network architecture; and,
the acquiring module 201, configured to configure the third intelligent layer to schedule service communication between base station devices in the communication network architecture; and,
the acquiring module 201, configured to configure the fourth intelligent layer to schedule service communication between sub-networks in the communication network architecture.
In another embodiment of this application, the acquiring module 201 further includes:
the acquiring module 201, configured to configure the first intelligent layer to provide management services, a service network model repository, service network model inference, database management, and security functions in the communication network architecture; and,
the acquiring module 201, configured to configure the second intelligent layer to provide a digital twin, third-party application function introduction and management, wireless connection management, mobility management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
the acquiring module 201, configured to configure the third intelligent layer to provide a digital twin, third-party application function introduction and management, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and,
the acquiring module 201, configured to configure the fourth intelligent layer to provide a digital twin, third-party application function introduction and management, a service network model repository, service network model inference, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
In another embodiment of this application, the acquiring module 201 further includes:
the acquiring module 201, configured to aggregate model parameters using each intelligent layer of the communication network architecture to obtain aggregated model parameters;
the acquiring module 201, configured to perform hierarchical learning and training on an initial learning model using the aggregated model parameters to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side.
Fig. 8 is a block diagram of the logical structure of an electronic device according to an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including instructions, where the instructions can be executed by a processor of the electronic device to complete the above method for generating a communication network architecture, the method including: acquiring a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node; performing function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and, when it is detected that the function configuration of each intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer. Optionally, the above instructions can also be executed by the processor of the electronic device to complete the other steps involved in the above exemplary embodiments. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, an application program/computer program product is also provided, including one or more instructions that can be executed by a processor of the electronic device to complete the above method for generating a communication network architecture, the method including: acquiring a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node; performing function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; and, when it is detected that the function configuration of each intelligent layer is completed, determining to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer. Optionally, the above instructions can also be executed by the processor of the electronic device to complete the other steps involved in the above exemplary embodiments.
Fig. 8 is an example diagram of the computer device 30. Those skilled in the art can understand that the schematic diagram in Fig. 8 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30, which may include more or fewer components than shown, combine certain components, or use different components; for example, the computer device 30 may also include input and output devices, network access devices, a bus, and the like.
The so-called processor 302 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 302 may be any conventional processor; the processor 302 is the control center of the computer device 30 and connects the various parts of the entire computer device 30 using various interfaces and lines.
The memory 301 can be used to store the computer-readable instructions 303, and the processor 302 implements the various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and calling the data stored in the memory 301. The memory 301 can mainly include a program storage area and a data storage area, where the program storage area can store an operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.), and the data storage area can store data created according to the use of the computer device 30, and the like. In addition, the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, a read-only memory (ROM), a random access memory (RAM), or another non-volatile/volatile storage device.
If the integrated modules of the computer device 30 are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the present invention can implement all or part of the processes in the methods of the above embodiments by using computer-readable instructions to instruct related hardware; the computer-readable instructions can be stored in a computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the steps of the above method embodiments can be realized.
Other embodiments of this application will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of this application that follow the general principles of this application and include common general knowledge or customary technical means in the technical field not disclosed in this application. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of this application are indicated by the following claims.
It should be understood that this application is not limited to the precise structure described above and shown in the accompanying drawings, and various modifications and changes can be made without departing from its scope. The scope of this application is limited only by the appended claims.
Claims (9)
- A method for generating a communication network architecture, characterized by involving a base station device and an edge node, wherein: a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node, are acquired; function configuration is performed level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; when it is detected that the function configuration of each intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
- The method according to claim 1, characterized in that acquiring the first intelligent layer and the second intelligent layer deployed in the base station device includes: acquiring a first intelligent layer deployed in a distributed unit (DU) of the base station device and a second intelligent layer deployed in a centralized unit (CU) of the base station device; or, acquiring a first intelligent layer deployed on a small base station device and a second intelligent layer deployed on a macro base station device.
- The method according to claim 1, characterized by further including a cloud server, wherein: a fourth intelligent layer deployed in the cloud server is acquired; function configuration is performed on the fourth intelligent layer according to the preset configuration policy; when it is detected that the function configuration of the fourth intelligent layer is completed, it is determined to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, the third intelligent layer, and the fourth intelligent layer.
- The method according to claim 3, characterized in that the completion of the function configuration of the fourth intelligent layer includes: configuring the first intelligent layer to schedule short-cycle service parameters in the communication network architecture; and, configuring the second intelligent layer to manage radio resource services in the communication network architecture; and, configuring the third intelligent layer to schedule service communication between base station devices in the communication network architecture; and, configuring the fourth intelligent layer to schedule service communication between sub-networks in the communication network architecture.
- The method according to claim 3 or 4, characterized in that performing function configuration on the fourth intelligent layer according to the preset configuration policy includes: configuring the first intelligent layer to provide management services, a service network model repository, service network model inference, database management, and security functions in the communication network architecture; and, configuring the second intelligent layer to provide a digital twin, third-party application function introduction and management, wireless connection management, mobility management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and, configuring the third intelligent layer to provide a digital twin, third-party application function introduction and management, parameter configuration, service and policy management, conflict resolution, subscription management, security functions, a service network model repository, service network model inference, database management, and interface management services in the communication network architecture; and, configuring the fourth intelligent layer to provide a digital twin, third-party application function introduction and management, a service network model repository, service network model inference, database management, computing power provision, service and policy management, security functions, and interface management services in the communication network architecture.
- The method according to any one of claims 1 to 3, characterized in that, after determining to generate the communication network architecture, the method includes: aggregating model parameters using each intelligent layer of the communication network architecture to obtain aggregated model parameters; performing hierarchical learning and training on an initial learning model using the aggregated model parameters to obtain a hierarchical federated learning model for service processing at the user equipment side or the base station side.
- An apparatus for generating a communication network architecture, characterized by involving a base station device and an edge node, wherein: an acquiring module, configured to acquire a first intelligent layer and a second intelligent layer deployed in the base station device, and a third intelligent layer deployed in the edge node; a configuration module, configured to perform function configuration level by level on the first intelligent layer, the second intelligent layer, and the third intelligent layer according to a preset configuration policy; a generation module, configured to, when it is detected that the function configuration of each intelligent layer is completed, determine to generate a communication network architecture composed of the first intelligent layer, the second intelligent layer, and the third intelligent layer.
- An electronic device, characterized by including: a memory for storing executable instructions; and, a processor for cooperating with the memory to execute the executable instructions so as to complete the operations of the method for generating a communication network architecture according to any one of claims 1 to 6.
- A computer-readable storage medium for storing computer-readable instructions, characterized in that, when executed, the instructions perform the operations of the method for generating a communication network architecture according to any one of claims 1 to 6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111436227.7A CN114302421B (zh) | 2021-11-29 | 2021-11-29 | 通信网络架构的生成方法、装置、电子设备及介质 |
CN202111436227.7 | 2021-11-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023093235A1 true WO2023093235A1 (zh) | 2023-06-01 |
Family
ID=80964706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/119831 WO2023093235A1 (zh) | 2021-11-29 | 2022-09-20 | 通信网络架构的生成方法、装置、电子设备及介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114302421B (zh) |
WO (1) | WO2023093235A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118429139A (zh) * | 2024-07-03 | 2024-08-02 | 深圳市凯宏膜环保科技有限公司 | 一种基于智慧污水处理云平台的管控方法及系统 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114302422B (zh) * | 2021-11-29 | 2024-06-18 | 北京邮电大学 | 利用学习模型进行业务处理的方法以及装置 |
CN114302421B (zh) * | 2021-11-29 | 2024-06-18 | 北京邮电大学 | 通信网络架构的生成方法、装置、电子设备及介质 |
CN115460700A (zh) * | 2022-08-02 | 2022-12-09 | 北京邮电大学 | 基于联邦学习的网络资源配置方法、装置、电子设备及介质 |
CN115567298B (zh) * | 2022-09-27 | 2024-04-09 | 中国联合网络通信集团有限公司 | 面向6g的安全性能优化方法、装置及服务器 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200236162A1 (en) * | 2018-12-20 | 2020-07-23 | Atos Worldgrid | Network of intelligent nodes for mesh distributed network adaptable to industrial or service applications |
CN112532451A (zh) * | 2020-11-30 | 2021-03-19 | 安徽工业大学 | 基于异步通信的分层联邦学习方法、装置、终端设备及存储介质 |
CN113419857A (zh) * | 2021-06-24 | 2021-09-21 | 广东工业大学 | 一种基于边缘数字孪生关联的联邦学习方法及系统 |
CN113537514A (zh) * | 2021-07-27 | 2021-10-22 | 北京邮电大学 | 一种高能效的基于数字孪生的联邦学习框架 |
CN114302421A (zh) * | 2021-11-29 | 2022-04-08 | 北京邮电大学 | 通信网络架构的生成方法、装置、电子设备及介质 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015077939A1 (zh) * | 2013-11-27 | 2015-06-04 | 华为技术有限公司 | 一种通信调度的方法和设备 |
US10498537B2 (en) * | 2016-08-01 | 2019-12-03 | Institute For Development And Research In Banking Technology (Drbt) | System and method for providing secure collaborative software as a service (SaaS) attestation service for authentication in cloud computing |
CN108924198B (zh) * | 2018-06-21 | 2021-05-11 | 中国联合网络通信集团有限公司 | 一种基于边缘计算的数据调度方法、装置及系统 |
GB2577055B (en) * | 2018-09-11 | 2021-09-01 | Samsung Electronics Co Ltd | Improvements in and relating to telecommunication networks |
CN111369042B (zh) * | 2020-02-27 | 2021-09-24 | 山东大学 | 一种基于加权联邦学习的无线业务流量预测方法 |
CN113259147B (zh) * | 2020-06-28 | 2022-07-26 | 中兴通讯股份有限公司 | 网元管理方法、装置、计算机设备、介质 |
CN111970733B (zh) * | 2020-08-04 | 2024-05-14 | 河海大学常州校区 | 超密集网络中基于深度强化学习的协作式边缘缓存算法 |
CN112488398A (zh) * | 2020-12-03 | 2021-03-12 | 广东电力通信科技有限公司 | 一种基于mec边缘智能网关的用电管理方法及系统 |
CN113011602B (zh) * | 2021-03-03 | 2023-05-30 | 中国科学技术大学苏州高等研究院 | 一种联邦模型训练方法、装置、电子设备和存储介质 |
CN113504999B (zh) * | 2021-08-05 | 2023-07-04 | 重庆大学 | 一种面向高性能分层联邦边缘学习的调度与资源分配方法 |
-
2021
- 2021-11-29 CN CN202111436227.7A patent/CN114302421B/zh active Active
-
2022
- 2022-09-20 WO PCT/CN2022/119831 patent/WO2023093235A1/zh unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200236162A1 (en) * | 2018-12-20 | 2020-07-23 | Atos Worldgrid | Network of intelligent nodes for mesh distributed network adaptable to industrial or service applications |
CN112532451A (zh) * | 2020-11-30 | 2021-03-19 | 安徽工业大学 | 基于异步通信的分层联邦学习方法、装置、终端设备及存储介质 |
CN113419857A (zh) * | 2021-06-24 | 2021-09-21 | 广东工业大学 | 一种基于边缘数字孪生关联的联邦学习方法及系统 |
CN113537514A (zh) * | 2021-07-27 | 2021-10-22 | 北京邮电大学 | 一种高能效的基于数字孪生的联邦学习框架 |
CN114302421A (zh) * | 2021-11-29 | 2022-04-08 | 北京邮电大学 | 通信网络架构的生成方法、装置、电子设备及介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118429139A (zh) * | 2024-07-03 | 2024-08-02 | 深圳市凯宏膜环保科技有限公司 | 一种基于智慧污水处理云平台的管控方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN114302421A (zh) | 2022-04-08 |
CN114302421B (zh) | 2024-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023093235A1 (zh) | 通信网络架构的生成方法、装置、电子设备及介质 | |
Wu | Cloud-edge orchestration for the Internet of Things: Architecture and AI-powered data processing | |
Jha et al. | IoTSim‐Edge: a simulation framework for modeling the behavior of Internet of Things and edge computing environments | |
Khan et al. | Edge-computing-enabled smart cities: A comprehensive survey | |
WO2023093238A1 (zh) | 利用学习模型进行业务处理的方法以及装置 | |
Alsboui et al. | Enabling distributed intelligence for the Internet of Things with IOTA and mobile agents | |
CN102902536B (zh) | 一种物联网计算机系统 | |
CN109743893A (zh) | 用于网络切片的方法和设备 | |
CN110191148A (zh) | 一种面向边缘计算的统计函数分布式执行方法及系统 | |
Lee et al. | IoT service classification and clustering for integration of IoT service platforms | |
Ng et al. | Reputation-aware hedonic coalition formation for efficient serverless hierarchical federated learning | |
Alsboui et al. | Distributed intelligence in the internet of things: Challenges and opportunities | |
WO2022001941A1 (zh) | 网元管理方法、网管系统、独立计算节点、计算机设备、存储介质 | |
Guo et al. | When network operation meets blockchain: An artificial-intelligence-driven customization service for trusted virtual resources of IoT | |
Rahman et al. | Off-street vehicular fog for catering applications in 5G/B5G: A trust-based task mapping solution and open research issues | |
CN107924332A (zh) | Ict服务供应的方法和系统 | |
CN110247795A (zh) | 一种基于意图的云网资源服务链编排方法及系统 | |
Patel et al. | Smart dashboard: A novel approach for sustainable development of smart cities using fog computing | |
Li et al. | Human in the loop: distributed deep model for mobile crowdsensing | |
Rani et al. | QoS aware cross layer paradigm for urban development applications in IoT | |
Wang et al. | Performance modeling and suitability assessment of data center based on fog computing in smart systems | |
Wen et al. | An efficient content distribution network architecture using heterogeneous channels | |
Zhou et al. | Blockchain-based volunteer edge cloud for IoT applications | |
KR20210049812A (ko) | 포그 기반 데이터 처리를 가능하게 하기 위한 데이터 샘플 템플릿(dst) 관리 | |
Tao et al. | O-RAN-Based Digital Twin Function Virtualization for Sustainable IoV Service Response: An Asynchronous Hierarchical Reinforcement Learning Approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22897321 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |