WO2024016803A1 - Federated learning-based base station computing power calling method, device, equipment and medium - Google Patents
Federated learning-based base station computing power calling method, device, equipment and medium - Download PDF
- Publication number
- WO2024016803A1, application PCT/CN2023/094342 (CN2023094342W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- base station
- computing power
- federation
- resources
- power resources
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/04—Wireless resource allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/12—Wireless traffic scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/12—Wireless traffic scheduling
- H04W72/1263—Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W88/00—Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
- H04W88/08—Access point devices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- This application relates to the field of base station communication technology, and in particular to a base station computing power calling method, device, equipment and medium based on federated learning.
- Embodiments of this application provide a base station computing power calling method, device, electronic device, and storage medium based on federated learning.
- In a first aspect, embodiments of the present application provide a federated learning-based base station computing power calling method, applied to base stations. The method includes: uniting multiple connected base stations to form a base station federation; isolating the computing power resources of the base stations to generate the computing power resources of the base station federation; obtaining the computing power demand of a target base station; and calling the computing power resources of the base station federation according to that demand.
- In a second aspect, embodiments of the present application provide a federated learning-based base station computing power calling device, including: a networking module, configured to unite multiple connected base stations to form a base station federation; a generation module, configured to isolate the computing power resources of the base stations and generate the computing power resources of the base station federation; an acquisition module, configured to obtain the computing power demand of a target base station; and a calling module, configured to call the computing power resources of the base station federation according to that demand.
- In a third aspect, embodiments of the present application provide an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the federated learning-based base station computing power calling method provided by the embodiments of the present application is implemented.
- In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the federated learning-based base station computing power calling method provided by the embodiments of the present application is implemented.
- Figure 1 is a schematic flowchart of a federated learning-based base station computing power calling method provided by an embodiment of the present application;
- Figure 2 is a schematic diagram of a specific implementation of another embodiment of step S1000 in Figure 1;
- Figure 3 is a schematic diagram of a specific implementation of another embodiment of step S1000 in Figure 1;
- Figure 4 is a schematic diagram of a specific implementation of another embodiment of step S2000 in Figure 1;
- Figure 5 is a schematic diagram of a specific implementation of another embodiment of step S2200 in Figure 4;
- Figure 6 is a schematic diagram of a specific implementation of another embodiment of step S4000 in Figure 1;
- Figure 7 is a schematic diagram of a specific implementation of another embodiment of step S4000 in Figure 1;
- Figure 8 is a structural diagram of a federated learning-based base station computing power calling device provided by an embodiment of the present application;
- Figure 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- "At least one of the following" and similar expressions refer to any combination of the listed items, including any single item or any combination of multiple items.
- For example, at least one of a, b and c can mean: a; b; c; a and b; a and c; b and c; or a, b and c, where each of a, b and c may be singular or plural.
- The federated learning-based base station computing power calling method involved in the embodiments of this application calls the computing power resources of base stations based on federated learning (Federated Learning).
- Federated learning is a distributed machine learning technique whose core idea is to perform distributed model training across multiple data sources that hold local data: without exchanging local individual or sample data, only model parameters or intermediate results are exchanged to build a global model over virtually fused data, thereby balancing data privacy protection against shared data computation.
- To keep base station computing power resources fully utilized, the existing 5th Generation Mobile Communication Technology (5G) wireless networks are deployed in one of two ways. The first combines the Centralized Unit (CU) and Distributed Unit (DU) in a single deployment: the base stations do not share computing power, and each provides service with its maximum capacity.
- This networking method is simple, but a base station can do nothing about demand beyond its own computing power, and base stations must be added temporarily to handle events such as the Olympic Games or tidal-effect scenarios (traffic peaks in office areas during working hours and in residential areas after hours). Because time, region and user demand differ, base stations also differ in how busy they are, and the resources of all base stations in the region cannot be fully utilized.
- The second deploys the CU and DU independently and splits the computation-heavy services of the base station onto centralized servers to lighten the base station's load. This method scales easily: servers can be added whenever computing power falls short. However, the base station is the node closest to the user in wireless communication, and for edge-computing traffic with strict real-time requirements the latency of the second method cannot meet the requirements; nor does it fully utilize the computing power resources of all base stations in the region.
- On this basis, embodiments of this application provide a federated learning-based base station computing power calling method, device, system and computer-readable storage medium: multiple connected base stations are united into a base station federation; the base stations' computing power resources are isolated to generate the federation's computing power resources; the computing power demand of a target base station is obtained; and the federation's computing power resources are called according to that demand. This improves the utilization of base station computing power resources and safeguards the wireless network's communication quality and the user experience.
- Figure 1 shows the flow of a federated learning-based base station computing power calling method provided by an embodiment of the present application. As shown in Figure 1, the method includes the following steps:
- S1000 Unite multiple connected base stations to form a base station federation.
- Federated learning is a distributed machine learning technique that allows multiple parties to build models from local training sets according to a specified algorithm.
- In the federated learning process, each participant trains on its local data and uploads the resulting parameters to a server, which aggregates them into the overall parameters.
- Depending on how data features and data samples are distributed across participants, federated learning is usually divided into horizontal federated learning (Horizontal Federated Learning, HFL), vertical federated learning (Vertical Federated Learning, VFL) and federated transfer learning (Federated Transfer Learning, FTL).
- HFL: Horizontal Federated Learning
- VFL: Vertical Federated Learning
- FTL: Federated Transfer Learning
- Horizontal federated learning, also called sample-based federated learning, refers to participants sharing the feature space of their data sets; vertical federated learning, also called feature-based federated learning, is used where the sample spaces or feature spaces of the participants' data sets clearly overlap yet differ, that is, different participants hold mutually independent attributes for the same records; federated transfer learning means there is almost no overlap between participants in either sample space or feature space.
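The three-way split above can be illustrated with a toy classifier. This is only a sketch under assumed overlap thresholds (0.5 and 0.1 are arbitrary illustrative values, not from the application); `sample_overlap` and `feature_overlap` are hypothetical fractions of shared samples and shared features between two participants.

```python
def federated_learning_type(sample_overlap: float, feature_overlap: float) -> str:
    """Classify a two-party setting as HFL, VFL or FTL from how much the
    participants' sample and feature spaces overlap (illustrative thresholds)."""
    if feature_overlap >= 0.5 and sample_overlap < 0.5:
        return "HFL"   # shared features, different samples (sample-based FL)
    if sample_overlap >= 0.5 and feature_overlap < 0.5:
        return "VFL"   # shared samples, different features (feature-based FL)
    if sample_overlap < 0.1 and feature_overlap < 0.1:
        return "FTL"   # almost no overlap in either space
    return "mixed"
```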
- The networking methods of base stations divide into static networking and dynamic networking. Both can, according to actual network requirements, unite multiple connected base stations into a base station federation and share computing power resources over the transmission links between base stations.
- As shown in Figure 2, step S1000 includes at least the following steps:
- S1100 Obtain the identification information of multiple connected base stations to form federated information, where the identification information includes at least one of the following: the identification number of the base station, or the interface address of the base station.
- By obtaining the identification numbers and interface addresses of multiple connected base stations in advance and planning the structure of the federation, the identification information can be collected into federation information so that the base stations can communicate. In practice, the network management platform delivers the pre-planned federation information to all base stations in the federation so that the base station federation can be established quickly. The base stations then communicate based on the federation information: through the identification numbers and interface addresses it carries, a local base station can quickly establish transmission links within the federation, improving the efficiency of calling base station computing resources.
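As a sketch of this static networking (the field names are hypothetical; the application does not specify a data format), the federation information can be modeled as a map from base station identification numbers to interface addresses:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StationId:
    station_no: str       # identification number of the base station
    interface_addr: str   # interface address used for transmission links

def build_federation_info(stations):
    """S1100: collect the identification info of connected stations into federation info."""
    return {s.station_no: s.interface_addr for s in stations}

def lookup_link(federation_info, station_no):
    """S1300: a local station resolves a peer's interface address to set up a link."""
    return federation_info.get(station_no)
```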
- As shown in Figure 3, step S1000 includes at least the following steps:
- S1400 Send federation information to multiple connected base stations, where the federation information includes at least one of the following: the identification number of the base station federation, or the interface address of the collection node.
- Sending federation information to the multiple connected base stations makes the base stations automatically feed their identification information back to a collection node, which dynamically gathers it, so that the federation automatically discovers new base stations. This dynamic networking method is flexible and adaptive, and suits scenarios where the scale of the base station federation changes often.
- S1500 The base station receives the federation information and sends its identification information to the collection node, where the identification information includes at least one of the following: the identification number of the base station, or the interface address of the base station.
- After receiving the federation information, the base station sends its identification information to the collection node according to the collection node's interface address, so that base stations are automatically discovered and merged into the federation.
- S1600 The base stations communicate through the collection node.
- Because the federation information includes the federation's identification number and the collection node's interface address, a base station can send its identification information via that address. Since the collection node gathers the base stations' identification information and the federation information, when the target base station needs to communicate with an idle base station in the federation, it can obtain the idle station's identification information through the collection node and establish a communication channel; the collection node can also provide an external query function for the base station federation.
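The collection-node behaviour described above might be sketched as follows. The class and method names are hypothetical, and real base stations would register over a network interface rather than through in-process calls:

```python
class CollectionNode:
    """Hypothetical collection node for dynamic networking: stations register on
    receiving federation info (S1500) and peers are looked up through it (S1600)."""

    def __init__(self, federation_id):
        self.federation_id = federation_id
        self.registry = {}  # station identification number -> interface address

    def register(self, station_no, interface_addr):
        # A newly connected station feeds back its identification info,
        # so the federation discovers it automatically.
        self.registry[station_no] = interface_addr

    def query(self, station_no):
        # Used by a target station to find an idle peer and open a channel.
        return self.registry.get(station_no)
```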
- S2000 Isolate the computing power resources of the base stations to generate the computing power resources of the base station federation. Besides its own services, a base station in the federation may also host services deployed by other base stations. The base station is then equivalent to multiple logical network elements sharing the same physical base station, and to prevent the logical network elements from interfering with one another, their services on the base station must be isolated. Isolation includes, but is not limited to, network isolation, memory quotas and disk partitioning. Isolating the base stations' computing power resources gives them inherently high reliability.
- As shown in Figure 4, step S2000 includes at least the following steps:
- S2100 Obtain the target base stations of the base station's computing power resources. In real wireless network scenarios, multiple logical network elements may share one physical base station, i.e. one idle base station simultaneously serves several target base stations; identifying the target base station behind each share of computing power makes isolation possible and avoids communication conflicts.
- S2200 Isolate the computing resources according to the target base stations to form isolated computing power resources. Isolating computing resources per target base station effectively guarantees the security and interference resistance of the isolated resources, and prevents communication conflicts between the target base station and idle base stations that would waste computing power resources.
- As shown in Figure 5, step S2200 includes at least the following steps:
- S2210 According to the target base stations, divide the computing resources across different network addresses to form isolated computing power resources.
- VXLAN: Virtual Extensible Local Area Network
- VXLAN is one of the Network Virtualization over Layer 3 (NVO3) standard technologies for third-generation data center virtualization defined by the Internet Engineering Task Force (IETF).
- IETF: Internet Engineering Task Force
- VXLAN is a network virtualization technology that mitigates the scaling problems of large cloud computing deployments; it is an extension of VLAN.
- VXLAN is also a powerful tool that extends Layer 2 across a Layer 3 network: by encapsulating traffic and carrying it to a Layer 3 gateway, it works around the portability limits of the Virtual Memory System (VMS), allowing access to servers on external IP subnets.
- In other embodiments, the computing resources may be network-isolated with other NVO3 technologies; this is not limited here.
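One way to picture the per-target network isolation is a planner that gives each target base station its own VXLAN network identifier (VNI) and subnet. This is only an illustrative sketch: the VNI range and addressing scheme are invented, and real VXLAN isolation is configured in the network stack, not in application code.

```python
def assign_isolated_networks(target_stations, base_vni=1000):
    """Give each target base station its own VXLAN network identifier and subnet
    so that traffic of different logical network elements never mixes (illustrative)."""
    plan = {}
    for i, station in enumerate(sorted(target_stations)):
        plan[station] = {"vni": base_vni + i, "subnet": f"10.{i}.0.0/24"}
    return plan
```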
- S2220 According to the target base stations, divide the computing resources across different runtime spaces to form isolated computing power resources. Besides network isolation, the computing resources can be partitioned into separate runtime spaces. The embodiments of this application use the virtualization technology provided by Linux to isolate the resources of the multiple logical network elements in an idle base station, including but not limited to memory quotas, file isolation, inter-process communication (IPC) isolation, process space isolation and user space isolation. Each logical network element in the idle base station runs in an independent, resource-isolated space without interfering with the others. This guarantees that an abnormality in one service of the idle base station does not cause abnormalities in other services, greatly improving the security and reliability of the computing power resources in the idle base station.
- IPC: Inter-Process Communication
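The runtime-space isolation can be caricatured with a per-element memory quota, the point being that exceeding one element's quota fails locally instead of disturbing its neighbours. This toy model only stands in for the real Linux mechanisms (cgroups, namespaces), which the sketch does not use:

```python
class IsolatedSpace:
    """Toy per-network-element runtime space with a memory quota: a failed
    allocation raises inside this space only, leaving other spaces untouched."""

    def __init__(self, name, memory_quota_mb):
        self.name = name
        self.memory_quota_mb = memory_quota_mb
        self.used_mb = 0

    def allocate(self, mb):
        if self.used_mb + mb > self.memory_quota_mb:
            # The abnormality stays local: only this element's allocation fails.
            raise MemoryError(f"{self.name}: memory quota exceeded")
        self.used_mb += mb
```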
- S2300 Aggregate the isolated computing power resources to form the computing power resources of the base station federation. Aggregating them yields computing power resources that can be called at any time within the federation, making it easy for the federation's base stations to share and call them.
- S3000 Obtain the computing power demand of the target base station. To reflect it fully, the computing power demand information must comprehensively characterize the service-processing performance the target base station requires.
- Illustratively, the computing power demand information includes a required transmission coefficient and a required number of network segments.
- The required transmission coefficient is also characterized as a Quality of Service (QoS) coefficient, so that the idle transmission coefficient can be compared with the required one.
- QoS: Quality of Service
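A minimal sketch of comparing an idle station's capability against this demand information, assuming both sides are expressed as a QoS-style coefficient plus a segment count as the example above suggests (the function and parameter names are hypothetical):

```python
def meets_demand(idle_qos, idle_segments, required_qos, required_segments):
    """An idle station can serve the target when its idle transmission (QoS)
    coefficient and its free network segments both cover the requirement."""
    return idle_qos >= required_qos and idle_segments >= required_segments
```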
- S4000 Call the computing power resources of the base station federation according to the computing power demand.
- As shown in Figure 6, step S4000 includes at least the following steps:
- The target base station broadcasts its computing power demand to every base station in the federation to solicit responses from idle base stations.
- Idle base stations in the federation respond to the demand and are matched with the target base station. A responding idle base station obtains the target base station's identification number and interface address through the federation information or the collection node, and a communication channel between the idle and target base stations is then quickly established.
- The target base station occupies the idle base station's computing power resources: its services can be scheduled onto the idle base station through the communication channel, so that the idle base station's computing resources are fully utilized. Understandably, once an idle base station's computing power resources are occupied by the target base station, those resources must be isolated to avoid communication conflicts between the idle and target base stations.
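The broadcast-and-respond flow above can be sketched as a loop over the federation. In reality the demand would be broadcast over transmission links and idle stations would answer asynchronously, so this sequential version (with invented dictionary keys) is only illustrative:

```python
def call_federation_resources(demand, federation):
    """Broadcast the demand to the federation: the first idle station whose
    capability covers the demand responds, is matched, and has its computing
    power resources occupied by the target base station."""
    for station in federation:
        if (station["idle"] and station["qos"] >= demand["qos"]
                and station["segments"] >= demand["segments"]):
            station["idle"] = False        # resources now occupied; isolated next
            return station["station_no"]   # target can now open a channel to it
    return None                            # no idle station can serve the demand
```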
- As shown in Figure 7, step S4000 includes at least the following steps:
- S4400 According to the computing power demand of the target base station, obtain idle base stations in the base station federation that meet the demand.
- Based on the target base station's computing power demand, the base station federation can identify idle base stations that meet it. Because this calling method is coordinated and planned by the federation, idle base stations respond faster, improving the efficiency of calling their computing resources.
- S4500 Send the identification information of the idle base station to the target base station.
- The base station federation sends the idle base station's identification information to the target base station, so that the target base station obtains its identification number and interface address and can quickly establish a communication channel with it.
- The target base station is matched with the idle base station and occupies its computing power resources. Once matched, the services on the target base station can be scheduled onto the idle base station through the communication channel, so that the idle base station's computing power resources are fully utilized. As before, after the idle base station's computing resources are occupied by the target base station, they must be isolated to avoid communication conflicts between the two.
- Figure 8 is a schematic structural diagram of a federated learning-based base station computing power calling device 500 provided by an embodiment of this application. The entire flow of the method involves the following modules of the device 500: a networking module 510, a generation module 520, an acquisition module 530 and a calling module 540. The networking module 510 is configured to unite multiple connected base stations into a base station federation; the generation module 520 is configured to isolate the base stations' computing power resources and generate the computing power resources of the federation; the acquisition module 530 is configured to obtain the computing power demand of the target base station; and the calling module 540 is configured to call the federation's computing power resources according to that demand.
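A sketch of how the four modules might compose into the full flow. The interfaces are hypothetical: the application only names the modules and their responsibilities, so each module is modeled here as a pluggable callable.

```python
class ComputePowerCallingDevice:
    """Sketch of device 500: four pluggable modules chained into the method flow."""

    def __init__(self, networking, generation, acquisition, calling):
        self.networking = networking    # module 510: form the base station federation
        self.generation = generation    # module 520: isolate resources into a federation pool
        self.acquisition = acquisition  # module 530: obtain the target station's demand
        self.calling = calling          # module 540: call federation resources for the demand

    def run(self, stations, target):
        federation = self.networking(stations)
        resources = self.generation(federation)
        demand = self.acquisition(target)
        return self.calling(resources, demand)
```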
- Figure 9 shows an electronic device 600 provided by an embodiment of the present application.
- The electronic device 600 includes, but is not limited to: a memory 601, configured to store a program, and a processor 602, configured to execute the program stored in the memory 601. When the processor 602 executes that program, it carries out the federated learning-based base station computing power calling method described above.
- the processor 602 and the memory 601 may be connected through a bus or other means.
- the memory 601 can be configured to store non-transitory software programs and non-transitory computer executable programs, such as the federated learning-based base station computing power calling method described in any embodiment of this application.
- the processor 602 implements the above federated learning-based base station computing power calling method by running non-transient software programs and instructions stored in the memory 601.
- The memory 601 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data involved in executing the above federated learning-based base station computing power calling method.
- the memory 601 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
- memory 601 may include memory located remotely relative to processor 602, and these remote memories may be connected to the processor 602 through a network. Examples of the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
- The non-transitory software programs and instructions required to implement the above method are stored in the memory 601; when executed by one or more processors 602, they perform the federated learning-based base station computing power calling method provided by any embodiment of the present application.
- Embodiments of the present application also provide a storage medium that stores computer-executable instructions, and the computer-executable instructions are used to execute the above federated learning-based base station computing power calling method.
- The storage medium stores computer-executable instructions which, when executed by one or more control processors, for example by a processor in the above-mentioned message processing system, cause the one or more processors to perform the federated learning-based base station computing power calling method provided by any embodiment of this application.
- The embodiments of this application unite multiple connected base stations into a base station federation; isolate the base stations' computing power resources to generate the federation's computing power resources; obtain the computing power demand of the target base station; and call the federation's computing power resources according to that demand, improving the utilization of base station computing resources and safeguarding the wireless network's communication quality and the user experience.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, tapes, disk storage or other magnetic storage devices, or may Any other medium used to store the desired information and that can be accessed by a computer.
- Communication media typically include computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Abstract
The embodiments of this application disclose a federated learning-based base station computing power calling method, device, system and storage medium. The method includes: uniting multiple connected base stations to form a base station federation (S1000); isolating the computing power resources of the base stations to generate the computing power resources of the base station federation (S2000); obtaining the computing power demand of a target base station (S3000); and calling the computing power resources of the base station federation according to the computing power demand (S4000).
Description
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese patent application No. 202210867893.4, filed on July 21, 2022, the entire contents of which are incorporated herein by reference.
This application relates to the field of base station communication technology, and in particular to a federated learning-based base station computing power calling method, device, equipment and medium.
At present, in existing wireless networks, large numbers of base stations are distributed close to users to meet the communication needs of users in fixed areas. Because time, region and user counts differ, the corresponding base stations' computing power resources are occupied to inconsistent degrees. Busy base stations, with limited computing power resources, cannot fully meet user demand, so users' communications stall or suffer extended delays; idle base stations, meanwhile, waste computing power resources in the wireless network and hurt the occupancy rate and energy consumption of base station computing power. The existing networking approach, with its uneven allocation of base station computing power, therefore not only wastes computing power resources but also degrades the wireless network's communication quality and the user experience.
Summary of the Invention
Embodiments of this application provide a federated learning-based base station computing power calling method, device, electronic equipment and storage medium.
In a first aspect, embodiments of this application provide a federated learning-based base station computing power calling method, applied to a base station. The method includes: uniting multiple connected base stations to form a base station federation; isolating the computing power resources of the base stations to generate the computing power resources of the base station federation; obtaining the computing power demand of a target base station; and calling the computing power resources of the base station federation according to the computing power demand.
In a second aspect, embodiments of this application provide a federated learning-based base station computing power calling device, including: a networking module, configured to unite multiple connected base stations to form a base station federation; a generation module, configured to isolate the computing power resources of the base stations and generate the computing power resources of the base station federation; an acquisition module, configured to obtain the computing power demand of a target base station; and a calling module, configured to call the computing power resources of the base station federation according to the computing power demand.
In a third aspect, embodiments of this application provide an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the federated learning-based base station computing power calling method provided by the embodiments of this application is implemented.
In a fourth aspect, embodiments of this application provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the federated learning-based base station computing power calling method provided by the embodiments of this application is implemented.
Figure 1 is a schematic flowchart of a federated learning-based base station computing power calling method provided by an embodiment of this application;
Figure 2 is a schematic diagram of a specific implementation of another embodiment of step S1000 in Figure 1;
Figure 3 is a schematic diagram of a specific implementation of another embodiment of step S1000 in Figure 1;
Figure 4 is a schematic diagram of a specific implementation of another embodiment of step S2000 in Figure 1;
Figure 5 is a schematic diagram of a specific implementation of another embodiment of step S2200 in Figure 4;
Figure 6 is a schematic diagram of a specific implementation of another embodiment of step S4000 in Figure 1;
Figure 7 is a schematic diagram of a specific implementation of another embodiment of step S4000 in Figure 1;
Figure 8 is a structural diagram of a federated learning-based base station computing power calling device provided by an embodiment of this application;
Figure 9 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
To make the purpose, technical solutions and advantages of this application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application, not to limit it.
It should be understood that in the description of the embodiments of this application, terms such as "first" and "second" are used only to distinguish technical features and must not be read as indicating or implying relative importance, the number of the indicated technical features, or their order. "At least one" means one or more, and "multiple" means two or more. "And/or" describes an association between objects and covers three relations; for example, "A and/or B" can mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relation between the objects before and after it. "At least one of the following" and similar expressions refer to any combination of the listed items, including any single item or any combination of multiple items; for example, at least one of a, b and c can mean a; b; c; a and b; a and c; b and c; or a, b and c, where each of a, b and c may be singular or plural.
In addition, the technical features involved in the various embodiments of this application described below may be combined with one another as long as they do not conflict.
The federated learning-based base station computing power calling method involved in the embodiments of this application calls the computing power resources of base stations based on federated learning (Federated Learning). Federated learning is a distributed machine learning technique whose core idea is to perform distributed model training across multiple data sources holding local data: without exchanging local individual or sample data, only model parameters or intermediate results are exchanged to build a global model over virtually fused data, thereby balancing data privacy protection against shared data computation.
To keep base station computing power resources fully utilized, existing 5th Generation Mobile Communication Technology (5G) wireless networks are deployed in one of two ways. The first deploys the Centralized Unit (CU) and Distributed Unit (DU) together: base stations do not share computing power and each provides service with its maximum capacity. This networking method is simple, but a station can do nothing about demand beyond its own computing power, and base stations must be added temporarily to handle events such as the Olympic Games or tidal-effect scenarios (traffic peaks in office areas during working hours and in residential areas after hours). Because time, region and user demand differ, base stations also differ in how busy they are, and the resources of all base stations in the region cannot be fully utilized. The second deploys the CU and DU independently and splits the computation-heavy services of the base station onto centralized servers to lighten the base station's load. This method scales easily: servers can be added whenever computing power falls short. Problems remain, however: the base station is the node closest to the user in wireless communication, and for edge-computing traffic with strict real-time requirements the latency of the second method cannot meet the requirements; nor does it fully utilize the computing power resources of all base stations in the region.
On this basis, embodiments of this application provide a federated learning-based base station computing power calling method, device, system and computer-readable storage medium: multiple connected base stations are united into a base station federation; the base stations' computing power resources are isolated to generate the federation's computing power resources; the computing power demand of a target base station is obtained; and the federation's computing power resources are called according to that demand, improving the utilization of base station computing power resources and safeguarding the wireless network's communication quality and the user experience.
Referring to Figure 1, which shows the flow of a federated learning-based base station computing power calling method provided by an embodiment of this application. As shown in Figure 1, the method includes the following steps:
S1000: Unite multiple connected base stations to form a base station federation.
It is understandable that federated learning is a distributed machine learning technique that allows multiple parties to build models from local training sets according to a specified algorithm. Concretely, in the federated learning process each participant trains on its local data and uploads the resulting parameters to a server, which aggregates them into the overall parameters. Depending on how data features and data samples are distributed across participants, federated learning is usually divided into horizontal federated learning (Horizontal Federated Learning, HFL), vertical federated learning (Vertical Federated Learning, VFL) and federated transfer learning (Federated Transfer Learning, FTL). Horizontal federated learning, also called sample-based federated learning, refers to participants sharing the feature space of their data sets; vertical federated learning, also called feature-based federated learning, is used where the sample spaces or feature spaces of the participants' data sets clearly overlap yet differ, that is, different participants hold mutually independent attributes for the same records; federated transfer learning means there is almost no overlap between participants in either sample space or feature space.
It is understandable that uniting multiple connected base stations into a federation that provides computing power resources lets the federation coordinate and call the base stations' computing power resources in a unified way. Existing base stations struggle with long-distance cross-station transmission, leaving the network between base stations disconnected; forming a base station federation opens up the transmission links between its base stations and safeguards the utilization of base station computing power resources.
It is understandable that the networking methods of base stations divide into static networking and dynamic networking. Both can, according to actual network requirements, unite multiple connected base stations into a base station federation and share computing power resources over the transmission links between base stations.
Referring to Figure 2, which shows a specific implementation of another embodiment of step S1000. As shown in Figure 2, step S1000 includes at least the following steps:
S1100: Obtain the identification information of multiple connected base stations to form federation information, where the identification information includes at least one of the following: the identification number of a base station, or the interface address of a base station.
It is understandable that by obtaining the identification numbers and interface addresses of multiple connected base stations in advance and planning the structure and composition of the base station federation, the identification information of the connected base stations can be collected into federation information so that the base stations can communicate. This static networking method has the advantages of a stable structure and strong security, and suits scenarios where base station traffic fluctuates little.
S1200: Send the federation information to the base stations.
It is understandable that by obtaining the federation information, a base station obtains the identification information of the multiple base stations connected to it. Illustratively, in practice the network management platform delivers the pre-planned federation information to all base stations in the federation so that the base station federation can be established quickly.
S1300: The base stations communicate based on the federation information.
It is understandable that through the identification numbers and interface addresses carried in the federation information, a local base station can quickly establish transmission links within the federation, improving the efficiency of calling base station computing power resources.
Referring to FIG. 3, FIG. 3 is a schematic diagram of a specific implementation of the above step S1000 in another embodiment. As shown in FIG. 3, step S1000 includes at least the following steps:
S1400, sending federation information to multiple connected base stations, where the federation information includes at least one of: an identification number of the base station federation, or an interface address of a collection node.
It can be understood that sending the federation information to the multiple connected base stations makes each station automatically report its identification information to the collection node, so that identification information is collected dynamically and the federation gains the ability to discover new base stations automatically. It can be understood that this dynamic networking mode is flexible and adaptive, and suits scenarios where the scale of the federation changes frequently.
S1500, a base station receiving the federation information and sending its identification information to the collection node, where the identification information includes at least one of: an identification number of the base station, or an interface address of the base station.
It can be understood that after receiving the federation information, a base station sends, according to the interface address of the collection node, the identification information of the base stations connected to it, so that base stations are discovered automatically and merged into the federation.
S1600, the base stations communicating through the collection node.
It can be understood that since the federation information includes the identification number of the federation and the interface address of the collection node, a base station can send its identification information to that address. Because the collection node gathers the stations' identification information together with the federation information, when a target base station needs to communicate with an idle base station in the federation, it can obtain the idle station's identification information through the collection node and establish a communication channel; in addition, the collection node can expose a query function for the federation to the outside.
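The dynamic networking flow of steps S1400 to S1600 can be sketched with a toy collection node. This is an assumption-laden illustration (the class name, method names, and ID formats are invented here, not taken from the embodiment):

```python
class CollectionNode:
    """Hypothetical collection node: stations that receive the federation
    info (federation ID + this node's interface address) register here."""

    def __init__(self, federation_id):
        self.federation_id = federation_id
        self.members = {}          # station_id -> interface address

    def register(self, station_id, interface_addr):
        # S1500: a station reports its identification info and thereby
        # joins the federation automatically (new-station discovery)
        self.members[station_id] = interface_addr

    def lookup(self, station_id):
        # S1600: a target station resolves an idle station's address
        # through the collection node to open a communication channel
        return self.members.get(station_id)

    def query_federation(self):
        # the externally exposed federation query function
        return {"federation_id": self.federation_id,
                "members": sorted(self.members)}
```

A design note: because membership lives only in the collection node, stations can join and leave without any re-planning, which is exactly why this mode suits federations whose scale changes frequently.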
S2000, isolating the computing power resources of the base stations to generate the computing power resources of the base station federation.
It can be understood that a base station in the federation may host, besides its own services, services deployed onto it by other base stations. The station then effectively carries multiple logical network elements sharing one physical base station, and to prevent these logical network elements from interfering with one another, the services of the multiple logical network elements on the station need to be isolated. It can be understood that the isolation includes, but is not limited to, network isolation, memory quotas, and disk partitioning. Isolating a base station's computing power resources gives them inherently high reliability.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a specific implementation of the above step S2000 in another embodiment. As shown in FIG. 4, step S2000 includes at least the following steps:
S2100, obtaining the target base station of the computing power resources of a base station.
It can be understood that in an actual radio network, multiple logical network elements in the federation may share one physical base station, i.e., one idle base station serves multiple target base stations at the same time. Obtaining the target base station that each portion of the idle station's computing power resources serves makes it convenient to isolate those resources and avoid communication conflicts.
S2200, isolating the computing power resources according to the target base station to form isolated computing power resources.
It can be understood that isolating the computing power resources per target base station effectively safeguards the security and interference resistance of the isolated resources and prevents conflicts between the communication of the target station and that of the idle station, which would waste computing power resources.
Referring to FIG. 5, FIG. 5 is a schematic diagram of a specific implementation of the above step S2200 in another embodiment. As shown in FIG. 5, step S2200 includes at least the following steps:
S2210, dividing the computing power resources onto different network addresses according to the target base station, to form the isolated computing power resources.
It can be understood that a target base station's services can be scheduled onto an idle base station; even though those services cross several base stations before landing on the idle station, from the business point of view they belong to a single logical management unit. Meanwhile, since the idle station hosts the target station's services in addition to its own, the embodiments of the present application use the Virtual Extensible Local Area Network (VXLAN) protocol for network isolation and data transmission, so as to avoid communication conflicts.
It can be understood that VXLAN is one of the Network Virtualization over Layer 3 (NVO3) standard technologies defined by the Internet Engineering Task Force (IETF) for third-generation data center virtualization. VXLAN is a network virtualization technique that mitigates the scaling problems of large cloud deployments and is an extension of VLAN. As a powerful tool, VXLAN stretches Layer 2 across a Layer 3 network by encapsulating traffic and extending it to a Layer 3 gateway, thereby overcoming the portability limits of virtual machines (VMs) and letting them reach servers on external IP subnets. In other embodiments, the computing power resources can be isolated with other NVO3 technologies, which is not limited here.
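As a concrete sketch of per-target-station VXLAN segmentation, the helper below generates standard iproute2 commands, assigning each target station its own VXLAN Network Identifier (VNI) so its traffic rides a separate overlay segment. The interface name, VNI assignment scheme, and addresses are illustrative assumptions; the commands are only built as strings here, not executed.

```python
def vxlan_commands(vni, local_ip, remote_ip, dev="eth0", dstport=4789):
    """Build iproute2 commands that would place the resources serving one
    target base station on their own VXLAN segment (VNI = `vni`).
    dstport 4789 is the IANA-assigned VXLAN UDP port."""
    ifname = f"vx{vni}"   # illustrative interface naming convention
    return [
        f"ip link add {ifname} type vxlan id {vni} "
        f"local {local_ip} remote {remote_ip} dstport {dstport} dev {dev}",
        f"ip link set {ifname} up",
    ]

# one distinct VNI per target base station keeps their L2 traffic apart
cmds = vxlan_commands(vni=100, local_ip="10.0.0.5", remote_ip="10.0.0.9")
```

Because each logical network element gets its own VNI, frames from different target stations are encapsulated into disjoint overlay segments even though they traverse the same physical underlay.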
S2220, dividing the computing power resources into different runtime spaces according to the target base station, to form the isolated computing power resources.
It can be understood that besides network isolation, the computing power resources can also be divided into different runtime spaces. The embodiments of the present application use the virtualization facilities provided by Linux to isolate the resources of the multiple logical network elements on an idle base station, including but not limited to memory quotas, file isolation, inter-process communication (IPC) isolation, process space isolation, and user space isolation, so that each logical network element on the idle station runs in an independent, resource-isolated space without interfering with the others. This guarantees that an abnormal service on the idle station cannot drag down the other services, greatly improving the security and reliability of the station's computing power resources.
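One of the Linux facilities mentioned above, the memory quota, can be sketched with a cgroup-v2 write. This is a minimal illustration under stated assumptions: it presumes a unified cgroup-v2 hierarchy, and the `cgroup_root` parameter and network-element naming are invented here so the sketch can also be exercised against a scratch directory.

```python
import os

def apply_memory_quota(ne_name, max_bytes, cgroup_root="/sys/fs/cgroup"):
    """Give one logical network element hosted on an idle station its own
    cgroup and cap its memory via the cgroup-v2 `memory.max` knob."""
    cg_dir = os.path.join(cgroup_root, ne_name)
    os.makedirs(cg_dir, exist_ok=True)
    with open(os.path.join(cg_dir, "memory.max"), "w") as f:
        f.write(str(max_bytes))   # kernel enforces the cap for this group
    return cg_dir
```

In the same spirit, namespaces (`unshare`/`clone` with `CLONE_NEWNET`, `CLONE_NEWIPC`, `CLONE_NEWPID`, `CLONE_NEWUSER`, mount namespaces) would supply the network, IPC, process-space, and user-space isolation listed above, so that each logical network element sees only its own world.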
S2300, aggregating the isolated computing power resources to form the computing power resources of the base station federation.
It can be understood that aggregating the isolated computing power resources yields a pool of computing power that can be invoked within the federation at any time, making it convenient for the stations in the federation to share and invoke computing power resources.
S3000, obtaining the computing power demand of the target base station.
It can be understood that, to reflect the target base station's computing power demand fully, the demand information needs to characterize the processing performance the target station requires. Illustratively, the demand information includes a required transmission coefficient and a required number of network segments, where the required transmission coefficient is expressed as a Quality of Service (QoS) coefficient so that the idle transmission coefficient and the required transmission coefficient can be compared.
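The demand representation above can be sketched as a small record plus a comparison, mirroring the check between the idle and required transmission coefficients. The field names and the "greater-or-equal" comparison rule are assumptions made for illustration; the embodiment does not fix them.

```python
from dataclasses import dataclass

@dataclass
class ComputeDemand:
    qos_coeff: float      # required transmission coefficient, as a QoS value
    num_segments: int     # required number of network segments

def satisfies(idle_qos_coeff, idle_segments, demand):
    """An idle station can serve the target station when its spare
    transmission coefficient and segment count both cover the demand."""
    return (idle_qos_coeff >= demand.qos_coeff
            and idle_segments >= demand.num_segments)
```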
S4000, invoking the computing power resources of the base station federation according to the computing power demand.
It can be understood that after the target base station's demand is obtained, matching computing power resources can be found in the federation's pool, achieving the effect of sharing and invoking computing power among the stations in the federation. The resources can be invoked in two modes, federation-led and non-federation-led, suiting different radio network environments.
Referring to FIG. 6, FIG. 6 is a schematic diagram of a specific implementation of the above step S4000 in another embodiment. As shown in FIG. 6, step S4000 includes at least the following steps:
S4100, the target base station broadcasting the computing power demand.
It can be understood that when the federation does not take the lead, the target base station broadcasts its computing power demand to every station in the federation, soliciting responses from idle stations.
S4200, an idle base station in the federation responding to the computing power demand and matching with the target base station.
It can be understood that when an idle station's computing power resources satisfy the target station's demand, the idle station responds to the target station and obtains the target station's identification number and interface address from the federation information or the collection node, so that a communication channel between the idle station and the target station is established quickly.
S4300, the target base station occupying the computing power resources of the idle base station.
It can be understood that once the idle station and the target station are matched, the target station's services can be scheduled onto the idle station over the communication channel, so that the idle station's computing power resources are fully used. It can be understood that after the idle station's computing power resources are occupied by the target station, those resources need to be isolated to avoid communication conflicts between the idle station and the target station.
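The non-federation-led flow of steps S4100 to S4300 can be sketched as a broadcast-and-first-response match. The single "spare capacity" number is a simplifying assumption standing in for the richer demand information described at S3000:

```python
def broadcast_match(demand, stations):
    """S4100/S4200: the target station broadcasts its demand; the first
    idle station whose spare capacity covers it responds and is matched.
    `stations` maps station_id -> spare capacity (arbitrary units)."""
    for station_id, spare in stations.items():
        if spare >= demand:
            return station_id   # this idle station answers and is matched
    return None                 # no idle station can serve the demand
```

Note the trade-off this mode implies: no central coordinator is needed, but which idle station answers depends on who responds first rather than on any global plan.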
Referring to FIG. 7, FIG. 7 is a schematic diagram of a specific implementation of the above step S4000 in another embodiment. As shown in FIG. 7, step S4000 includes at least the following steps:
S4400, obtaining, according to the computing power demand of the target base station, an idle base station in the federation that satisfies the demand.
It can be understood that when the federation takes the lead, it can find, according to the computing power demand, an idle station that satisfies the demand. This invocation mode is coordinated and planned by the federation as a whole, so idle stations respond faster and the efficiency of invoking their computing power resources improves.
S4500, sending the identification information of the idle base station to the target base station.
It can be understood that the federation sends the idle station's identification information to the target station, so that the target station obtains the idle station's identification number and interface address and quickly establishes a communication channel between the two.
S4600, the target base station matching with the idle base station and occupying the computing power resources of the idle base station.
It can be understood that, consistent with step S4300 above, after the target station obtains the idle station's identification number and interface address, the two are matched and the target station's services can be scheduled onto the idle station over the communication channel, so that the idle station's computing power resources are fully used. Likewise, after the idle station's computing power resources are occupied by the target station, those resources need to be isolated to avoid communication conflicts between the idle station and the target station.
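The federation-led flow of steps S4400 to S4600 can be sketched as a central selection. The "pick the idle station with the most spare capacity" policy and the data shapes are assumptions for illustration; the embodiment only requires that the chosen station satisfy the demand.

```python
def federation_schedule(demand, members):
    """S4400/S4500: the federation selects, among idle stations whose spare
    capacity covers the demand, the one with the most headroom, and returns
    its identification info for the target station (S4600).
    `members` maps station_id -> (spare_capacity, interface_addr)."""
    candidates = [(spare, sid, addr)
                  for sid, (spare, addr) in members.items()
                  if spare >= demand]
    if not candidates:
        return None               # the federation has nothing to offer
    spare, sid, addr = max(candidates)
    return {"station_id": sid, "interface_addr": addr}
```

Compared with the broadcast mode, the central view lets the federation balance load deliberately instead of accepting whichever idle station answers first.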
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a federated learning-based base station computing power invocation apparatus 500 provided by an embodiment of the present application. The whole flow of the method provided by the embodiments of the present application involves the following modules of the apparatus 500: a networking module 510, a generation module 520, an obtaining module 530, and an invocation module 540.
The networking module 510 is configured to join multiple connected base stations to form a base station federation;
the generation module 520 is configured to isolate the computing power resources of the base stations to generate the computing power resources of the federation;
the obtaining module 530 is configured to obtain the computing power demand of a target base station; and
the invocation module 540 is configured to invoke the computing power resources of the federation according to the computing power demand.
It should be noted that, because the information exchange and execution among the modules of the above apparatus rest on the same conception as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiments and are not repeated here.
FIG. 9 shows an electronic device 600 provided by an embodiment of the present application. The electronic device 600 includes, but is not limited to:
a memory 601, configured to store a program; and
a processor 602, configured to execute the program stored in the memory 601; when the processor 602 executes that program, the processor 602 is configured to perform the federated learning-based base station computing power invocation method described above.
The processor 602 and the memory 601 may be connected by a bus or in another manner.
As a non-transitory computer-readable storage medium, the memory 601 may be configured to store non-transitory software programs and non-transitory computer-executable programs, such as the federated learning-based base station computing power invocation method described in any embodiment of the present application. The processor 602 implements that method by running the non-transitory software programs and instructions stored in the memory 601.
The memory 601 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application required by at least one function, and the data storage area may store data produced while performing the method described above. In addition, the memory 601 may include high-speed random access memory as well as non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some implementations, the memory 601 includes memory set up remotely from the processor 602 and connected to it over a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software programs and instructions required to implement the above method are stored in the memory 601 and, when executed by one or more processors 602, perform the federated learning-based base station computing power invocation method provided by any embodiment of the present application.
An embodiment of the present application also provides a storage medium storing computer-executable instructions for performing the method described above.
In one embodiment, the storage medium stores computer-executable instructions which, when executed by one or more control processors, for example by one processor in the message processing system described above, cause the one or more processors to perform the federated learning-based base station computing power invocation method provided by any embodiment of the present application.
In the embodiments of the present application, multiple connected base stations are joined to form a base station federation; the computing power resources of the base stations are isolated to generate the computing power resources of the federation; the computing power demand of a target base station is obtained; and the computing power resources of the federation are invoked according to that demand, raising the utilization of base station computing power resources and safeguarding the communication quality of the radio network and the user experience.
The embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, i.e., they may sit in one place or be distributed over multiple network units. Some or all of the modules can be selected, as actually needed, to achieve the purpose of the present embodiments.
A person of ordinary skill in the art will understand that all or some of the steps and systems of the methods disclosed above can be implemented as software, firmware, hardware, or a suitable combination thereof. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to a person of ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented by any method or technique for storing information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can store the desired information and be accessed by a computer. Furthermore, as is well known to a person of ordinary skill in the art, communication media usually embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Claims (11)
- A base station computing power invocation method based on federated learning, applied to a base station, the method comprising: joining a plurality of connected base stations to form a base station federation; isolating computing power resources of the base stations to generate computing power resources of the base station federation; obtaining a computing power demand of a target base station; and invoking the computing power resources of the base station federation according to the computing power demand.
- The method according to claim 1, wherein the joining a plurality of connected base stations to form a base station federation comprises: obtaining identification information of the plurality of connected base stations to form federation information, wherein the identification information comprises at least one of: an identification number of the base station, or an interface address of the base station; sending the federation information to the base stations; and the base stations communicating according to the federation information.
- The method according to claim 1, wherein the joining a plurality of connected base stations to form a base station federation comprises: sending federation information to the plurality of connected base stations, wherein the federation information comprises at least one of: an identification number of the base station federation, or an interface address of a collection node; a base station receiving the federation information and sending identification information of the base station to the collection node, wherein the identification information comprises at least one of: an identification number of the base station, or an interface address of the base station; and the base stations communicating through the collection node.
- The method according to claim 1, wherein the isolating computing power resources of the base stations to generate computing power resources of the base station federation comprises: obtaining the target base station of the computing power resources of a base station; isolating the computing power resources according to the target base station to form isolated computing power resources; and aggregating the isolated computing power resources to form the computing power resources of the base station federation.
- The method according to claim 4, wherein the isolating the computing power resources according to the target base station to form isolated computing power resources comprises: dividing the computing power resources onto different network addresses according to the target base station, to form the isolated computing power resources.
- The method according to claim 4, wherein the isolating the computing power resources according to the target base station to form isolated computing power resources comprises: dividing the computing power resources into different runtime spaces according to the target base station, to form the isolated computing power resources.
- The method according to claim 1, wherein the invoking the computing power resources of the base station federation according to the computing power demand comprises: the target base station broadcasting the computing power demand; an idle base station in the base station federation responding to the computing power demand and matching with the target base station; and the target base station occupying computing power resources of the idle base station.
- The method according to claim 3, wherein the invoking the computing power resources of the base station federation according to the computing power demand comprises: obtaining, according to the computing power demand of the target base station, an idle base station in the base station federation that satisfies the computing power demand; sending identification information of the idle base station to the target base station; and the target base station matching with the idle base station and occupying computing power resources of the idle base station.
- A base station computing power invocation apparatus based on federated learning, comprising: a networking module, configured to join a plurality of connected base stations to form a base station federation; a generation module, configured to isolate computing power resources of the base stations to generate computing power resources of the base station federation; an obtaining module, configured to obtain a computing power demand of a target base station; and an invocation module, configured to invoke the computing power resources of the base station federation according to the computing power demand.
- An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the federated learning-based base station computing power invocation method according to any one of claims 1 to 8.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the federated learning-based base station computing power invocation method according to any one of claims 1 to 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210867893.4 | 2022-07-21 | ||
CN202210867893.4A CN117500068A (zh) | 2022-07-21 | 2022-07-21 | Federated learning-based base station computing power invocation method, apparatus, device and medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024016803A1 true WO2024016803A1 (zh) | 2024-01-25 |
Family
ID=89616971
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/094342 WO2024016803A1 (zh) | 2023-05-15 | Federated learning-based base station computing power invocation method, apparatus, device and medium
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117500068A (zh) |
WO (1) | WO2024016803A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107087303A (zh) * | 2016-02-16 | 2017-08-22 | 中兴通讯股份有限公司 | Base station hardware virtualization method, apparatus and base station |
CN110366194A (zh) * | 2019-06-06 | 2019-10-22 | 深圳市太易云互联科技有限公司 | Resource invocation method, apparatus and system |
CN113271613A (zh) * | 2020-04-24 | 2021-08-17 | 中兴通讯股份有限公司 | Data processing method based on a base station group, base station and base station system |
US20210368514A1 (en) * | 2020-05-19 | 2021-11-25 | T-Mobile Usa, Inc. | Base station radio resource management for network slices |
EP4002231A1 (en) * | 2020-11-18 | 2022-05-25 | Telefonica Digital España, S.L.U. | Federated machine learning as a service |
CN114706596A (zh) * | 2022-04-11 | 2022-07-05 | 中国电信股份有限公司 | Container deployment method, resource scheduling method, apparatus, medium and electronic device |
-
2022
- 2022-07-21 CN CN202210867893.4A patent/CN117500068A/zh active Pending
-
2023
- 2023-05-15 WO PCT/CN2023/094342 patent/WO2024016803A1/zh unknown
Also Published As
Publication number | Publication date |
---|---|
CN117500068A (zh) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111224821B (zh) | Security service deployment system, method and apparatus | |
JP6903121B2 (ja) | Packet transmission | |
EP3133794B1 (en) | Network function virtualization network system | |
US20170257269A1 (en) | Network controller with integrated resource management capability | |
WO2019062836A1 (zh) | Network slice management method and apparatus | |
US20150334696A1 (en) | Resource provisioning method | |
US12063594B2 (en) | Method, device, and system for deploying network slice | |
CN103607430B (zh) | Network processing method and system, and network control center | |
CN108462592A (zh) | SLA-based resource allocation method and NFVO | |
US11140091B2 (en) | Openflow protocol-based resource control method and system, and apparatus | |
EP3211531B1 (en) | Virtual machine start method and apparatus | |
US20160183229A1 (en) | Ip phone network system, server apparatus, ip exchange and resource capacity expansion method | |
WO2020177255A1 (zh) | Resource allocation method and apparatus for a radio access network | |
US20220350637A1 (en) | Virtual machine deployment method and related apparatus | |
CN112953739B (zh) | Method, system and storage medium for managing SDN based on a K8s platform | |
EP3096492B1 (en) | Page push method and system | |
CN112929206B (zh) | Method and apparatus for configuring cloud physical machines in a cloud-network environment | |
WO2024016803A1 (zh) | Federated learning-based base station computing power invocation method, apparatus, device and medium | |
EP3503484A1 (en) | Message transmission method, device and network system | |
CN107426109B (zh) | Traffic scheduling method, VNF module and traffic scheduling server | |
WO2023046026A1 (zh) | Containerized VNF deployment method and apparatus | |
CN114221948B (zh) | Cloud-network system and task processing method | |
CN116170509A (zh) | Computing power scheduling method, apparatus and storage medium | |
CN112910939B (zh) | Data processing method and related apparatus | |
WO2016177134A1 (zh) | Resource processing method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23841872 Country of ref document: EP Kind code of ref document: A1 |