WO2021218274A1 - Communication method, apparatus, and system - Google Patents

Communication method, apparatus, and system

Info

Publication number
WO2021218274A1
Authority
WO
WIPO (PCT)
Prior art keywords
network element
data analysis
analysis network
information
data
Prior art date
Application number
PCT/CN2021/075317
Other languages
English (en)
French (fr)
Inventor
辛阳
崇卫微
吴晓波
阎亚丽
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21797488.0A priority Critical patent/EP4132066A4/en
Publication of WO2021218274A1 publication Critical patent/WO2021218274A1/zh
Priority to US17/976,261 priority patent/US20230083982A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00: Network data management
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/02: Arrangements for optimising operational condition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/098: Distributed learning, e.g. federated learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5058: Service discovery by the service manager
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 60/00: Affiliation to network, e.g. registration; Terminating affiliation with the network, e.g. de-registration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/08: Testing, supervising or monitoring using real traffic

Definitions

  • the embodiments of the present application relate to the field of data analysis, and in particular, to a communication method, device, and system.
  • the network data analysis function (NWDAF) network element has the following functions: data collection (for example, collection of core network data, network management data, service data, terminal data), data analysis, and data analysis result feedback.
  • the embodiments of the present application provide a communication method, device, and system, which can expand the application scenarios of data analysis.
  • an embodiment of the present application provides a communication method, including: a first data analysis network element sends a first request for requesting information of a second data analysis network element to a service discovery network element.
  • the first request includes one or more of distributed learning information and first indication information used to indicate the type of the second data analysis network element.
  • the distributed learning information includes the type of distributed learning requested by the first data analysis network element.
  • the first data analysis network element receives, from the service discovery network element, information of one or more second data analysis network elements, and the second data analysis network element supports the aforementioned type of distributed learning requested by the first data analysis network element.
  • the embodiment of the present application provides a communication method in which a first data analysis network element sends a first request to a service discovery network element, and the first request describes the characteristics of the data analysis network elements required by the first data analysis network element. In this way, it is convenient for the service discovery network element to provide the first data analysis network element, according to the first request, with information of one or more second data analysis network elements of a type that supports distributed learning.
  • the type of the second data analysis network element is the same as the type requested by the first data analysis network element.
  • this solution enables the first data analysis network element to find, through the service discovery network element, data analysis network elements capable of distributed learning and training.
  • after the first data analysis network element obtains the information of the one or more second data analysis network elements, it can subsequently cooperate with one or more second data analysis network elements to perform model training when model training is required, thereby expanding the application scenarios of data analysis.
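The discovery exchange summarized in the bullets above (a first request carrying distributed learning information and an element-type indication, answered with information about matching network elements) can be sketched in ordinary code. All field names below are illustrative placeholders, not identifiers taken from the application or from any 3GPP specification.

```python
# Hypothetical sketch of the "first request" described above: the first data
# analysis network element asks the service discovery network element for
# peers that support a given type of distributed learning.
def build_discovery_request(learning_type, element_type, algorithm_info=None):
    """Assemble the request with distributed learning information and the
    first indication information (the requested element type)."""
    request = {
        "distributed_learning": {"type": learning_type},
        "requested_element_type": element_type,  # first indication information
    }
    if algorithm_info is not None:
        # optional: algorithm information supported by the distributed learning
        request["distributed_learning"]["algorithms"] = algorithm_info
    return request

req = build_discovery_request("horizontal", "local_trainer",
                              algorithm_info=["gradient_descent"])
print(req["distributed_learning"]["type"])  # horizontal
```

A request built this way would then be matched by the service discovery network element against the profiles registered with it.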
  • the method provided in the embodiment of the present application may further include: the first data analysis network element determines, according to the information of the one or more second data analysis network elements, information of a third data analysis network element for distributed learning.
  • the number of third data analysis network elements is one or more; in other words, the first data analysis network element determines, according to the information of the one or more second data analysis network elements, information of one or more third data analysis network elements for distributed learning.
  • the data does not need to leave the local domain of the third data analysis network element, yet model training can still be performed by the first data analysis network element, so the problem of data leakage is avoided.
  • furthermore, since training is performed at each third data analysis network element, this distributed training process can also speed up the training of the entire model.
  • the load of the third data analysis network element is lower than the preset load threshold, or the priority of the third data analysis network element is higher than the preset priority threshold.
  • the range of the third data analysis network element is located within the range of the first data analysis network element.
  • the scope of the third data analysis network element includes: the identity of the public land mobile network (PLMN) to which the third data analysis network element belongs, the range of network slice instances served by the third data analysis network element, the data network name (DNN) served by the third data analysis network element, and the equipment vendor information of the third data analysis network element.
  • the first request further includes the range of the first data analysis network element, and correspondingly, the range of the second data analysis network element or the range of the third data analysis network element is located within the range of the first data analysis network element. If the first request also includes the range of the first data analysis network element, the first request is used to request one or more second data analysis network elements that are located within the range of the first data analysis network element and that support the type of distributed learning requested by the first data analysis network element.
  • the scope of the first data analysis network element includes one or more of the following information: the area served by the first data analysis network element, the public land mobile network (PLMN) to which the first data analysis network element belongs, the information of the network slice served by the first data analysis network element, the data network name (DNN) served by the first data analysis network element, and the equipment vendor information of the first data analysis network element.
  • the distributed learning information also includes the algorithm information supported by the distributed learning.
  • the second data analysis network element or the third data analysis network element supports the algorithm corresponding to the algorithm information supported by the distributed learning. In this way, it is convenient for the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element to also support the algorithm information.
  • the algorithm information supported by distributed learning includes one or more of algorithm type, algorithm identification, and algorithm performance. It can be understood that the algorithm information supported by different second data analysis network elements may be the same or different.
  • the method provided in this embodiment of the present application further includes: the first data analysis network element receives sub-models from one or more third data analysis network elements.
  • the sub-model is obtained by training the third data analysis network element according to the data obtained by the third data analysis network element.
  • the first data analysis network element determines an updated model according to one or more sub-models of the third data analysis network element.
  • the first data analysis network element sends the updated model to one or more third data analysis network elements. Since the first data analysis network element determines the updated model based on the sub-models provided by the different data analysis network elements among the one or more third data analysis network elements, each third data analysis network element does not need to provide training data to the first data analysis network element, avoiding data leakage.
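The aggregation step described above (combining sub-models from several third data analysis network elements into an updated model without collecting their raw data) can be illustrated with a minimal federated-averaging sketch. The flat parameter list and the equal default weights are simplifying assumptions made purely for illustration.

```python
# Minimal sketch: each third data analysis network element reports only its
# sub-model parameters; the first data analysis network element combines
# them into an updated model by (optionally weighted) averaging.
def aggregate_sub_models(sub_models, weights=None):
    """Average sub-model parameter lists; weights could reflect, e.g.,
    the amount of local training data at each element."""
    n = len(sub_models)
    if weights is None:
        weights = [1.0 / n] * n  # plain federated average
    dim = len(sub_models[0])
    updated = [0.0] * dim
    for params, w in zip(sub_models, weights):
        for i, p in enumerate(params):
            updated[i] += w * p
    return updated

# Two local sub-models; the raw training data never leaves its local domain.
updated_model = aggregate_sub_models([[1.0, 3.0], [3.0, 5.0]])
print(updated_model)  # [2.0, 4.0]
```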
  • the method provided in the embodiment of the present application further includes: the first data analysis network element determines the target model according to the updated model.
  • the first data analysis network element sends the target model and one or more of the following information corresponding to the target model to one or more second data analysis network elements: model identification, model version number, or data analysis identification. In this way, each second data analysis network element can obtain the target model determined by the first data analysis network element.
  • the target model may be a business experience model.
  • the method provided in this embodiment of the present application further includes: the first data analysis network element sends configuration parameters to one or more third data analysis network elements, and the configuration parameters are parameters used when the third data analysis network element trains the sub-model. This makes it convenient for the third data analysis network element to configure the relevant parameters involved in the distributed learning and training process according to the configuration parameters.
  • the configuration parameters include one or more of the following information: initial model, training set selection criteria, feature generation method, training termination condition, maximum training time, and maximum waiting time.
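As a hedged illustration only, the configuration parameters listed in the bullet above might be grouped as follows; every field name and default value here is hypothetical, not drawn from the application.

```python
# Hypothetical grouping of the configuration parameters the first data
# analysis network element could send to each third data analysis network
# element before a round of distributed training.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingConfig:
    initial_model: Optional[list] = None       # initial model parameters
    training_set_criteria: str = "all-local"   # training set selection criteria
    feature_generation: str = "raw"            # feature generation method
    termination_condition: str = "loss<0.01"   # training termination condition
    max_training_time_s: int = 600             # maximum training time
    max_waiting_time_s: int = 60               # maximum waiting time

cfg = TrainingConfig(initial_model=[0.0, 0.0])
print(cfg.max_training_time_s)  # 600
```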
  • the type of distributed learning includes one of horizontal learning, vertical learning, and transfer learning.
  • the type of the second data analysis network element is one of the following: client or local trainer.
  • the method provided in the embodiment of the present application further includes: the first data analysis network element sends a second request for requesting registration of the information of the first data analysis network element to the service discovery network element.
  • the information of the first data analysis network element includes one or more of the following information corresponding to the first data analysis network element: distributed learning information or second indication information, where the second indication information is used to indicate the type of the first data analysis network element.
  • the information of the first data analysis network element further includes one or more of the range of the first data analysis network element, the identification of the first data analysis network element, or the address information of the first data analysis network element.
  • the type of the first data analysis network element includes one of the following: server, coordinator, central trainer, or global trainer.
  • distributed learning is federated learning.
  • the second data analysis network element is a terminal.
  • an embodiment of the present application provides a communication method, the method including: a service discovery network element receives, from a first data analysis network element, a first request for requesting information of a second data analysis network element, and the first request includes one or more of the following information: distributed learning information and first indication information, where the distributed learning information includes the type of distributed learning requested by the first data analysis network element, and the first indication information is used to indicate the type of the second data analysis network element.
  • the service discovery network element determines the information of one or more second data analysis network elements of the type supporting distributed learning according to the first request.
  • the service discovery network element sends the information of one or more second data analysis network elements to the first data analysis network element.
  • the first request in the method provided in this embodiment of the application further includes the range of the first data analysis network element, and correspondingly, the range of the second data analysis network element is located within the range of the first data analysis network element.
  • the service discovery network element determining the information of one or more second data analysis network elements of the type supporting distributed learning according to the first request includes: the service discovery network element determines the data analysis network elements of the type supporting distributed learning that are located within the range of the first data analysis network element to be the one or more second data analysis network elements.
  • the distributed learning information further includes algorithm information supported by the distributed learning, and correspondingly, the second data analysis network element supports an algorithm corresponding to the algorithm information supported by the distributed learning.
  • the service discovery network element determining the information of one or more second data analysis network elements of the type supporting distributed learning according to the first request includes: the service discovery network element determines the data analysis network elements that support both the type of distributed learning and the algorithm information supported by the distributed learning to be the one or more second data analysis network elements.
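The selection behavior described in the bullets above can be sketched as a simple filter over registered profiles. The profile layout and the identifiers below are assumptions made for illustration, not part of the application.

```python
# Sketch: the service discovery network element keeps registered profiles
# and returns those supporting both the requested type of distributed
# learning and (when present in the request) the requested algorithm.
def discover(profiles, learning_type, algorithm=None):
    matches = []
    for p in profiles:
        if learning_type not in p.get("learning_types", []):
            continue  # does not support the requested type of distributed learning
        if algorithm is not None and algorithm not in p.get("algorithms", []):
            continue  # does not support the requested algorithm information
        matches.append(p["id"])
    return matches

registry = [
    {"id": "nwdaf-1", "learning_types": ["horizontal"], "algorithms": ["sgd"]},
    {"id": "nwdaf-2", "learning_types": ["vertical"], "algorithms": ["sgd"]},
    {"id": "nwdaf-3", "learning_types": ["horizontal"], "algorithms": ["adam"]},
]
print(discover(registry, "horizontal", algorithm="sgd"))  # ['nwdaf-1']
```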
  • the method provided in the embodiment of the present application further includes: the service discovery network element receives, from the first data analysis network element, a second request for requesting registration of the information of the first data analysis network element, and the information of the first data analysis network element includes one or more of the following information corresponding to the first data analysis network element: distributed learning information or second indication information, where the second indication information is used to indicate the type of the first data analysis network element.
  • the service discovery network element registers the information of the first data analysis network element according to the second request.
  • the information of the first data analysis network element further includes one or more of the range of the first data analysis network element, the identification of the first data analysis network element, or the address information of the first data analysis network element.
  • the service discovery network element registering the information of the first data analysis network element according to the second request includes: the service discovery network element stores the information of the first data analysis network element in the service discovery network element, or the service discovery network element stores the information of the first data analysis network element in the user data management network element.
  • the method provided in the embodiment of the present application further includes: the service discovery network element receives, from one or more second data analysis network elements, a third request for requesting registration of the information of the second data analysis network element, and the information of the second data analysis network element includes one or more of the following information corresponding to the second data analysis network element: distributed learning information and third indication information, where the third indication information is used to indicate the type of the second data analysis network element.
  • the service discovery network element registers the information of one or more second data analysis network elements according to the third request.
  • the information of the second data analysis network element further includes one or more of the range of the second data analysis network element, the identification of the second data analysis network element, or the address information of the second data analysis network element.
  • the service discovery network element registering the information of the one or more second data analysis network elements according to the third request includes: the service discovery network element stores the information of the one or more second data analysis network elements in the service discovery network element.
  • alternatively, the service discovery network element registering the information of the one or more second data analysis network elements according to the third request includes: the service discovery network element stores the information of the one or more second data analysis network elements in the user data management network element.
  • the type of the first data analysis network element includes one of the following: server, coordinator, central trainer, or global trainer.
  • the type of the second data analysis network element includes one of the following: a client or a local trainer.
  • distributed learning is federated learning.
  • the second data analysis network element is a terminal.
  • an embodiment of the present application provides a communication method, the method including: a third data analysis network element determines a sub-model, where the sub-model is obtained by the third data analysis network element through training based on data obtained by the third data analysis network element.
  • the third data analysis network element sends the sub-model to the first data analysis network element.
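As a toy illustration of the local training step described above, the following sketch fits a one-parameter model by gradient descent on data that stays inside the local domain; only the trained parameter would then be sent to the first data analysis network element. The model form and learning rate are placeholders, not the application's method.

```python
# Sketch: a third data analysis network element trains a sub-model on its
# own local data. Here the "sub-model" is a single weight w for y = w * x,
# fitted by gradient descent on a mean-squared-error loss.
def train_sub_model(local_data, lr=0.1, steps=100):
    """Fit y = w * x on (x, y) pairs that never leave the local domain."""
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
        w -= lr * grad
    return w

sub_model = train_sub_model([(1.0, 2.0), (2.0, 4.0)])  # data follows y = 2x
print(round(sub_model, 3))  # 2.0
```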
  • in the method provided in the embodiment of the present application, the sub-model may be obtained by the third data analysis network element through training based on data acquired by the third data analysis network element within the range of the third data analysis network element.
  • the method provided in the embodiment of the present application may further include: the third data analysis network element receives an updated model from the first data analysis network element, where the updated model is obtained based on the sub-models provided by a plurality of different third data analysis network elements.
  • the method provided in the embodiment of the present application may further include: the third data analysis network element receives the target model from the first data analysis network element.
  • the method provided in the embodiment of the present application may further include: the third data analysis network element receives configuration parameters from the first data analysis network element, where the configuration parameters are parameters used when the third data analysis network element trains the sub-model.
  • the configuration parameters include one or more of the following information: initial model, training set selection criteria, feature generation method, training termination condition, maximum training time, and maximum waiting time.
  • the type of distributed learning includes one of horizontal learning, vertical learning, and transfer learning.
  • the type of the third data analysis network element is one of the following: client or local trainer.
  • the range of the third data analysis network element is located within the range of the first data analysis network element.
  • the method provided in the embodiment of the present application may further include: the third data analysis network element sends, to the service discovery network element, a third request for requesting registration of the information of the third data analysis network element, where the information of the third data analysis network element includes one or more of the following information corresponding to the third data analysis network element: distributed learning information or third indication information, and the third indication information is used to indicate the type of the third data analysis network element.
  • the distributed learning information corresponding to the third data analysis network element includes the type of distributed learning supported by the third data analysis network element and/or the algorithm information supported by the distributed learning supported by the third data analysis network element.
  • the information of the third data analysis network element further includes one or more of the range of the third data analysis network element, the identification of the third data analysis network element, or the address information of the third data analysis network element.
  • the type of the first data analysis network element includes one of the following: server, coordinator, central trainer, or global trainer.
  • distributed learning is federated learning.
  • the embodiments of the present application provide a communication device, which can implement the first aspect or any one of the possible implementations of the first aspect, and can therefore also achieve the beneficial effects of the first aspect or any possible implementation of the first aspect.
  • the communication device may be a first data analysis network element, or a device that can support the first data analysis network element in implementing the first aspect or any one of the possible implementation manners of the first aspect, for example, a chip applied in the first data analysis network element.
  • the communication device can implement the above method by software, hardware, or by hardware executing corresponding software.
  • an embodiment of the present application provides a communication device, including: a communication unit and a processing unit, where the communication unit is used for receiving and sending information, and the processing unit is used for processing information.
  • the communication unit is configured to send a first request for requesting information of the second data analysis network element to the service discovery network element.
  • the first request includes one or more of distributed learning information and first indication information for indicating the type of the second data analysis network element, where the distributed learning information includes the first data analysis network element The type of distributed learning requested.
  • the communication unit is further configured to receive, from the service discovery network element, information of one or more second data analysis network elements, and the second data analysis network element supports the above-mentioned type of distributed learning requested by the first data analysis network element.
  • the processing unit is configured to determine information of a third data analysis network element for distributed learning according to information of one or more second data analysis network elements, where the third data analysis network element The number is one or more.
  • the load of the third data analysis network element is lower than the preset load threshold, or the priority of the third data analysis network element is higher than the preset priority threshold.
  • the range of the third data analysis network element is located within the range of the first data analysis network element.
  • the first request further includes the range of the first data analysis network element, and correspondingly, the range of the second data analysis network element or the range of the third data analysis network element is located within the range of the first data analysis network element. It is understandable that if the first request also includes the range of the first data analysis network element, the first request is used to request one or more second data analysis network elements that are located within the range of the first data analysis network element and that support the type of distributed learning requested by the first data analysis network element.
  • the scope of the first data analysis network element includes one or more of the following information: the area served by the first data analysis network element, the public land mobile network (PLMN) to which the first data analysis network element belongs, the information of the network slice served by the first data analysis network element, the data network name (DNN) served by the first data analysis network element, and the equipment vendor information of the first data analysis network element.
  • the distributed learning information also includes the algorithm information supported by the distributed learning.
  • the second data analysis network element or the third data analysis network element supports the algorithm corresponding to the algorithm information supported by the distributed learning. In this way, it is convenient for the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element to also support the algorithm information supported by distributed learning.
  • the algorithm information supported by distributed learning includes one or more of algorithm type, algorithm identification, and algorithm performance. It can be understood that the algorithm information supported by different second data analysis network elements or third data analysis network elements may be the same or different.
  • the communication unit is further configured to receive sub-models from one or more third data analysis network elements.
  • the sub-model is obtained by training the third data analysis network element according to the data obtained by the third data analysis network element.
  • the processing unit is configured to determine an updated model according to one or more sub-models of the third data analysis network element.
  • the communication unit is also used to send the updated model to one or more third data analysis network elements.
  • the processing unit is also used to determine the target model according to the updated model.
  • the communication unit is further configured to send the target model and one or more of the following information corresponding to the target model to one or more second data analysis network elements: model identification, model version number, or data analysis identification.
  • although not all of the one or more second data analysis network elements participate in the training process of the target model, sending the target model to the one or more second data analysis network elements enables each second data analysis network element to obtain the target model determined by the first data analysis network element.
  • the target model may be a business experience model.
  • the communication unit is further configured to send configuration parameters to one or more third data analysis network elements, where the configuration parameters are parameters used when the third data analysis network element trains the sub-model. This makes it convenient for the third data analysis network element to configure the relevant parameters involved in the distributed learning and training process according to the configuration parameters.
  • the configuration parameters include one or more of the following information: initial model, training set selection criteria, feature generation method, training termination condition, maximum training time, and maximum waiting time.
  • the type of distributed learning includes one of horizontal learning, vertical learning, and transfer learning.
  • the type of the second data analysis network element is one of the following: client or local trainer.
  • the communication unit is further configured to send a second request for requesting registration of the information of the first data analysis network element to the service discovery network element.
  • the information of the first data analysis network element includes one or more of the following information corresponding to the first data analysis network element: distributed learning information and second indication information, where the second indication information is used to indicate the type of the first data analysis network element. Registering the information of the first data analysis network element makes it convenient for other devices to determine the first data analysis network element through the service discovery network element.
  • the information of the first data analysis network element further includes one of the range of the first data analysis network element, or the identification of the first data analysis network element, and the address information of the first data analysis network element. Or more.
• the type of the first data analysis network element includes one of the following: server, coordinator, central trainer, or global trainer.
  • distributed learning is federated learning.
  • the second data analysis network element is a terminal.
  • an embodiment of the present application provides a communication device.
  • the communication device may be a first data analysis network element, and may also be applied to a device (for example, a chip) in the first data analysis network element.
  • the communication device may include: a processing unit and a communication unit.
  • the communication device may also include a storage unit.
  • the storage unit is used to store computer program code, and the computer program code includes instructions.
  • the processing unit executes the instructions stored in the storage unit, so that the communication device implements the first aspect or the method described in any one of the possible implementation manners of the first aspect.
  • the processing unit may be a processor.
  • the communication unit may be a communication interface.
  • the storage unit may be a memory.
• alternatively, when the communication device is a chip, the processing unit may be a processor, and the communication unit may be referred to as a communication interface.
  • the communication interface can be an input/output interface, a pin or a circuit, and so on.
  • the processing unit executes the computer program code stored in the storage unit, so that the first data analysis network element implements the method described in the first aspect or any one of the possible implementations of the first aspect.
• the storage unit may be a storage unit inside the chip (for example, a register or a cache), or may be a storage unit (for example, a read-only memory or a random access memory) located outside the chip in the first data analysis network element.
  • the processor, the communication interface, and the memory are coupled with each other.
• the embodiments of the present application provide a communication device, which can implement the second aspect or any one of the communication methods described in the possible implementations of the second aspect, and therefore can also achieve the beneficial effects of the second aspect or any possible implementation thereof.
  • the communication device may be a service discovery network element, or a device that can support the service discovery network element to implement the second aspect or any one of the possible implementation manners of the second aspect. For example, it is applied to the chip in the service discovery network element.
  • the communication device can implement the above method by software, hardware, or by hardware executing corresponding software.
• an embodiment of the present application provides a communication device, including: a communication unit, configured to receive, from a first data analysis network element, a first request for requesting information of a second data analysis network element, where the first request includes one or more of the following information: distributed learning information and first indication information.
  • the distributed learning information includes the type of distributed learning requested by the first data analysis network element, and the first indication information is used to indicate the type of the second data analysis network element.
  • the processing unit is configured to determine information of one or more second data analysis network elements that support distributed learning according to the first request.
  • the communication unit is further configured to send information of one or more second data analysis network elements to the first data analysis network element.
• the first request in the method provided in this embodiment of the application further includes the range of the first data analysis network element; correspondingly, the second data analysis network element is located within the range of the first data analysis network element.
• that the service discovery network element determines, according to the first request, the information of one or more second data analysis network elements of the type supporting distributed learning includes: the service discovery network element determines the data analysis network elements that are located within the range of the first data analysis network element and whose type supports distributed learning as the one or more second data analysis network elements.
  • the distributed learning information further includes algorithm information supported by the distributed learning, and correspondingly, the second data analysis network element supports an algorithm corresponding to the algorithm information supported by the distributed learning.
• that the processing unit is configured to determine, according to the first request, the information of one or more second data analysis network elements of the type supporting distributed learning includes: the processing unit is configured to determine the data analysis network elements that support both the type of distributed learning and the algorithm information supported by the distributed learning as the one or more second data analysis network elements.
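A minimal sketch of how such a service discovery network element could filter registered data analysis network elements by distributed-learning type, supported algorithm, and network element type; the registry layout and matching rules are assumptions for illustration, not defined by this application:

```python
def discover(registry, learning_type, algorithm=None, nwdaf_type=None):
    """Return registered entries matching the filters carried in a first request.

    registry: list of dicts with keys 'id', 'type', 'learning_types', 'algorithms'.
    Entries must support the requested learning type; algorithm and element
    type are matched only when the request supplies them.
    """
    results = []
    for entry in registry:
        if learning_type not in entry.get("learning_types", []):
            continue
        if algorithm is not None and algorithm not in entry.get("algorithms", []):
            continue
        if nwdaf_type is not None and entry.get("type") != nwdaf_type:
            continue
        results.append(entry)
    return results

# Two hypothetical registered data analysis network elements.
registry = [
    {"id": "nwdaf-1", "type": "client", "learning_types": ["horizontal"],
     "algorithms": ["fedavg"]},
    {"id": "nwdaf-2", "type": "client", "learning_types": ["vertical"],
     "algorithms": ["fedavg"]},
]
matches = discover(registry, "horizontal", algorithm="fedavg", nwdaf_type="client")
```

Only `nwdaf-1` satisfies all three filters, mirroring how the second data analysis network elements returned to the first data analysis network element are those supporting both the learning type and the algorithm information.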
• the communication unit is further configured to receive, from the first data analysis network element, a second request for requesting registration of the information of the first data analysis network element, where the information of the first data analysis network element includes one or more of the following information corresponding to the first data analysis network element: distributed learning information and second indication information.
  • the second indication information is used to indicate the type of the first data analysis network element.
  • the processing unit is configured to register the information of the first data analysis network element according to the second request.
  • the distributed learning information corresponding to the first data analysis network element includes the type of distributed learning supported by the first data analysis network element and/or the algorithm information supported by the distributed learning supported by the first data analysis network element.
• the information of the first data analysis network element further includes one or more of the following: the range of the first data analysis network element, the identification of the first data analysis network element, or the address information of the first data analysis network element.
• that the processing unit is configured to register the information of the first data analysis network element according to the second request includes: the processing unit is configured to store the information of the first data analysis network element in the service discovery network element, or the processing unit is configured to store the information of the first data analysis network element in the user data management network element.
• the communication unit is further configured to receive, from one or more second data analysis network elements, a third request for requesting registration of the information of the second data analysis network element, where the information of the second data analysis network element includes one or more of the following information corresponding to the second data analysis network element: distributed learning information or third indication information, where the third indication information is used to indicate the type of the second data analysis network element.
  • the processing unit is configured to register information of one or more second data analysis network elements according to the third request.
  • the distributed learning information corresponding to the second data analysis network element includes the type of distributed learning supported by the second data analysis network element and/or the algorithm information supported by the distributed learning supported by the second data analysis network element.
• the information of the second data analysis network element further includes one or more of the following: the range of the second data analysis network element, the identification of the second data analysis network element, or the address information of the second data analysis network element.
• that the processing unit is configured to register the information of one or more second data analysis network elements according to the third request includes: the processing unit is configured to store the information of the one or more second data analysis network elements in the service discovery network element, or the processing unit is configured to store the information of the one or more second data analysis network elements in the user data management network element.
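The two storage options above (keep the profile locally in the service discovery network element, or delegate it to a user data management network element) can be sketched as follows; the class and method names are hypothetical:

```python
class ServiceDiscovery:
    """Hypothetical registry for data analysis network element profiles.

    Profiles are stored locally unless an external user-data-management-style
    store is supplied, mirroring the two registration options above.
    """

    def __init__(self, udm_store=None):
        self.local = {}        # profiles stored in the service discovery NE
        self.udm = udm_store   # optional external (UDM-like) store

    def register(self, nf_id, profile):
        """Store the profile under nf_id in whichever store is in use."""
        target = self.udm if self.udm is not None else self.local
        target[nf_id] = profile
        return nf_id

# Registration kept inside the service discovery network element itself.
sd = ServiceDiscovery()
sd.register("nwdaf-2", {"type": "client", "learning_types": ["horizontal"]})

# The same call delegated to an external store.
external = {}
sd_udm = ServiceDiscovery(udm_store=external)
sd_udm.register("nwdaf-3", {"type": "client"})
```

Either way, later discovery requests are answered from the chosen store; this example does not model the actual NRF/UDM service interfaces.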
  • the type of the first data analysis network element includes one or more of the following information: server, coordinator, central trainer, and global trainer.
• the type of the second data analysis network element is one of the following: a client, a local trainer, or a distributed trainer.
  • distributed learning includes federated learning.
  • the second data analysis network element is a terminal.
  • an embodiment of the present application provides a communication device.
  • the communication device may be a service discovery network element or a chip in the service discovery network element.
  • the communication device may include: a processing unit and a communication unit.
  • the communication device may also include a storage unit.
  • the storage unit is used to store computer program code, and the computer program code includes instructions.
  • the processing unit executes the instructions stored in the storage unit, so that the communication device implements the second aspect or the method described in any one of the possible implementation manners of the second aspect.
  • the processing unit may be a processor.
  • the communication unit may be a communication interface.
  • the storage unit may be a memory.
• alternatively, when the communication device is a chip, the processing unit may be a processor, and the communication unit may be referred to as a communication interface.
  • the communication interface can be an input/output interface, a pin or a circuit, and so on.
  • the processing unit executes the computer program code stored in the storage unit to enable the service discovery network element to implement the method described in the second aspect or any one of the possible implementations of the second aspect.
• the storage unit may be a storage unit inside the chip (for example, a register or a cache), or may be a storage unit (for example, a read-only memory or a random access memory) located outside the chip in the service discovery network element.
• the processor, the communication interface, and the memory are coupled with each other.
• the embodiments of the present application provide a communication device, which can implement the third aspect or any one of the possible implementations of the third aspect, and therefore can also achieve the beneficial effects of the third aspect or any possible implementation thereof.
  • the communication device may be a third data analysis network element, or a device that can support the third data analysis network element to implement the third aspect or any one of the possible implementation manners of the third aspect. For example, it is applied to the chip in the third data analysis network element.
  • the communication device can implement the above method by software, hardware, or by hardware executing corresponding software.
  • an embodiment of the present application provides a communication device, which includes a processing unit configured to determine a sub-model, and the sub-model is obtained by training the processing unit according to data obtained by the communication unit.
  • the communication unit is configured to send the sub-model to the first data analysis network element.
  • the communication unit is further configured to receive an updated model from the first data analysis network element, where the updated model is obtained from sub-models provided by a plurality of different third data analysis network elements.
  • the communication unit is also used to receive the target model from the first data analysis network element.
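The "updated model obtained from sub-models provided by a plurality of different third data analysis network elements" could, in a federated-learning setting, be produced by sample-weighted parameter averaging (a FedAvg-style rule); the weighting scheme below is an assumption for illustration, not a rule stated by this application:

```python
def aggregate(sub_models, sample_counts):
    """Weighted average of sub-model parameter vectors (FedAvg-style sketch).

    sub_models: list of equal-length parameter lists, one per third data
                analysis network element.
    sample_counts: number of local training samples behind each sub-model,
                   used as the aggregation weight.
    """
    total = sum(sample_counts)
    n_params = len(sub_models[0])
    updated = [0.0] * n_params
    for params, count in zip(sub_models, sample_counts):
        w = count / total
        for i, p in enumerate(params):
            updated[i] += w * p
    return updated

# Two sub-models trained on equally sized local data sets.
updated = aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 1])  # → [2.0, 3.0]
```

The first data analysis network element would then return this updated model (or, after convergence, the target model) to the participating elements.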
  • the communication unit is further configured to receive configuration parameters from the first data analysis network element, where the configuration parameters are parameters used when the third data analysis network element determines the training sub-model.
  • the configuration parameters include one or more of the following information: initial model, training set selection criteria, feature generation method, training termination condition, maximum training time, and maximum waiting time.
  • the type of distributed learning includes one of horizontal learning, vertical learning, and transfer learning.
• the communication unit is further configured to send, to the service discovery network element, a third request for requesting registration of the information of the third data analysis network element, where the information of the third data analysis network element includes one or more of the following information corresponding to the third data analysis network element: distributed learning information and third indication information.
  • the distributed learning information corresponding to the third data analysis network element includes the type of distributed learning supported by the third data analysis network element and/or the algorithm information supported by the distributed learning supported by the third data analysis network element.
• the information of the third data analysis network element further includes one or more of the following: the range of the third data analysis network element, the identification of the third data analysis network element, or the address information of the third data analysis network element.
• the type of the first data analysis network element includes one of the following: server, coordinator, central trainer, or global trainer.
  • distributed learning is federated learning.
• the type of the third data analysis network element is one of the following: client, local trainer, or distributed trainer.
  • the range of the third data analysis network element is located within the range of the first data analysis network element.
  • an embodiment of the present application provides a communication device.
  • the communication device may be a third data analysis network element or a chip in the third data analysis network element.
  • the communication device may include: a processing unit and a communication unit.
  • the communication device may also include a storage unit.
  • the storage unit is used to store computer program code, and the computer program code includes instructions.
  • the processing unit executes the instructions stored in the storage unit, so that the communication device implements the third aspect or the method described in any one of the possible implementation manners of the third aspect.
  • the processing unit may be a processor.
  • the communication unit may be a communication interface.
  • the storage unit may be a memory.
• alternatively, when the communication device is a chip, the processing unit may be a processor, and the communication unit may be referred to as a communication interface.
  • the communication interface can be an input/output interface, a pin or a circuit, and so on.
  • the processing unit executes the computer program code stored in the storage unit to enable the third data analysis network element to implement the method described in the third aspect or any one of the possible implementations of the third aspect.
• the storage unit may be a storage unit inside the chip (for example, a register or a cache), or may be a storage unit (for example, a read-only memory or a random access memory) located outside the chip in the third data analysis network element.
• the processor, the communication interface, and the memory are coupled with each other.
• embodiments of the present application provide a computer program product including instructions. When the instructions run on a computer, the computer executes the communication method described in the first aspect or the various possible implementations of the first aspect.
• embodiments of the present application provide a computer program product that includes instructions which, when run on a computer, cause the computer to execute the communication method described in the second aspect or the various possible implementations of the second aspect.
• the embodiments of the present application provide a computer program product including instructions. When the instructions run on a computer, the computer executes the communication method described in the third aspect or the various possible implementations of the third aspect.
• an embodiment of the present application provides a computer-readable storage medium, and a computer program or instruction is stored in the computer-readable storage medium. When the computer program or instruction runs on a computer, the computer executes the communication method described in the first aspect or the various possible implementations of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program or instruction.
• when the computer program or instruction runs on a computer, the computer executes the communication method described in the second aspect or the various possible implementations of the second aspect.
  • the embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program or instruction.
• when the computer program or instruction runs on the computer, the computer executes the communication method described in the third aspect or the various possible implementations of the third aspect.
• an embodiment of the present application provides a communication device that includes at least one processor, and the at least one processor is configured to run a computer program or instruction stored in a memory, so as to implement the communication method described in the first aspect or the various possible implementations of the first aspect.
• an embodiment of the present application provides a communication device that includes at least one processor, and the at least one processor is configured to run a computer program or instruction stored in a memory, so as to implement the communication method described in the second aspect or the various possible implementations of the second aspect.
• an embodiment of the present application provides a communication device that includes at least one processor, and the at least one processor is configured to run a computer program or instruction stored in a memory, so as to implement the communication method described in the third aspect or the various possible implementations of the third aspect.
  • the communication device described in the fourteenth aspect to the sixteenth aspect may further include a memory.
• an embodiment of the present application provides a communication device that includes a processor and a storage medium. The storage medium stores instructions. When the instructions are executed by the processor, they can implement the communication method described in the first aspect or the various possible implementations of the first aspect.
• an embodiment of the present application provides a communication device. The communication device includes a processor and a storage medium. The storage medium stores instructions. When the instructions are executed by the processor, they can implement the communication method described in the second aspect or the various possible implementations of the second aspect.
• an embodiment of the present application provides a communication device. The communication device includes a processor and a storage medium. The storage medium stores instructions. When the instructions are executed by the processor, they can implement the communication method described in the third aspect or the various possible implementations of the third aspect.
• embodiments of the present application provide a communication device that includes one or more modules for implementing the methods of the first, second, and third aspects described above, where the one or more modules may correspond to the steps of the methods of the first, second, and third aspects described above.
• an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the communication method described in the first aspect or the various possible implementations of the first aspect.
  • the communication interface is used to communicate with other modules outside the chip.
• an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the communication method described in the second aspect or the various possible implementations of the second aspect.
  • the communication interface is used to communicate with other modules outside the chip.
• an embodiment of the present application provides a chip that includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a computer program or instruction to implement the communication method described in the third aspect or the various possible implementations of the third aspect.
  • the communication interface is used to communicate with other modules outside the chip.
  • the chip provided in the embodiment of the present application further includes a memory for storing computer programs or instructions.
  • an embodiment of the present application provides a device for executing the first aspect or a communication method described in various possible implementation manners of the first aspect.
  • the embodiments of the present application provide a device for executing the communication method described in the second aspect or various possible implementation manners of the second aspect.
  • the embodiments of the present application provide an apparatus for executing the third aspect or a communication method described in various possible implementation manners of the third aspect.
• any device, computer storage medium, computer program product, chip, or communication system provided above is used to execute the corresponding method provided above. Therefore, for the beneficial effects that can be achieved, refer to the beneficial effects of the corresponding solutions in the corresponding method provided above; details are not repeated here.
  • FIG. 1 is an architecture diagram of a communication system provided by an embodiment of this application.
  • FIG. 2 is a diagram of a 5G network architecture provided by an embodiment of this application.
  • FIG. 3 is an architecture diagram of a federated learning provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of a scenario provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of another scenario provided by an embodiment of the application.
  • FIG. 6 is a schematic flowchart of a communication method provided by an embodiment of this application.
  • FIG. 7 is a schematic flowchart of another communication method provided by an embodiment of this application.
  • FIG. 8 is a detailed embodiment of a communication method provided by an embodiment of this application.
  • FIG. 9 is a detailed embodiment of another communication method provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of a model training architecture provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of another model training architecture provided by an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of a communication device provided by an embodiment of this application.
  • FIG. 13 is a schematic structural diagram of another communication device provided by an embodiment of this application.
  • FIG. 14 is a schematic structural diagram of a communication device provided by an embodiment of this application.
  • FIG. 15 is a schematic structural diagram of a chip provided by an embodiment of the application.
  • words such as “first” and “second” are used to distinguish the same items or similar items that have substantially the same function and effect.
• for example, the first indication information and the second indication information are only for distinguishing different indication information, and their sequence is not limited.
• words such as "first" and "second" do not limit the quantity or the order of execution, and do not indicate that the objects are necessarily different.
  • the first data analysis network element may be one or more data analysis network elements
  • the second data analysis network element may also be one or more data analysis network elements.
• "at least one" refers to one or more, and "multiple" refers to two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
• "at least one of the following items (a)" or similar expressions refers to any combination of these items, including a single item (a) or any combination of multiple items (a).
• for example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c can be singular or plural.
• the technical solutions of the embodiments of this application can be applied to various communication systems, such as: code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency-division multiple access (OFDMA), single carrier frequency-division multiple access (SC-FDMA), and other systems.
• in 3GPP, long term evolution (LTE) is a new version of UMTS that uses E-UTRA, and the technical solutions are also applicable to various versions based on LTE evolution and to new radio (NR) systems.
  • the communication system may also be applicable to future-oriented communication technologies, all of which are applicable to the technical solutions provided in the embodiments of the present application.
• the communication system includes: a data analysis network element 100, one or more data analysis network elements communicating with the data analysis network element 100 (for example, data analysis network element 201 to data analysis network element 20n), and a service discovery network element 300.
  • n is an integer greater than or equal to 1.
  • the data analysis network element 100 and one or more data analysis network elements have distributed learning capabilities.
• the type of the data analysis network element 100, or the role played by the data analysis network element 100 in distributed learning, may be one or more of the following: server, coordinator, central trainer (centralized trainer), or global trainer.
• the type of any data analysis network element among data analysis network element 201 to data analysis network element 20n, or the role played by any such data analysis network element in distributed learning, may be one or more of the following: client, local trainer, or distributed trainer.
  • the deployment mode shown in Figure 1 can be called a server (server)-client (client) mode.
  • the type of the data analysis network element in the embodiment of the present application may refer to the role that the data analysis network element plays in distributed learning.
• if the type of the data analysis network element 100 is the server end, it means that the role of the data analysis network element 100 in distributed learning is the server type.
  • the data analysis network element 100 can be regarded as a (central) server node, and the data analysis network element 201 to the data analysis network element 20n can be regarded as (edge) client nodes.
• each of the data analysis network element 201 to data analysis network element 20n has its own range, and some or all of the data analysis network element 201 to data analysis network element 20n are located within the range of the data analysis network element 100.
• any data analysis network element in the embodiments of this application can be deployed separately, or can be deployed jointly with network function network elements in the 5G network (for example, session management function (SMF) network elements, access and mobility management function (AMF) network elements, policy control function (PCF) network elements, etc.).
• for example, a data analysis network element can be deployed on an existing 5GC NF according to the network element's data volume or function requirements; for instance, a data analysis network element with terminal mobility (UE mobility or UE moving trajectory) analysis capabilities can be integrated with the AMF network element, so that the terminal location information on the AMF network element does not leave the core network, avoiding user data privacy and data security issues.
• alternatively, each 5GC NF may have a built-in intelligent module (such as a built-in NWDAF functional module) that performs a self-closed loop based on its own data; a data analysis network element is then required only for closed-loop control of cross-network-element data flows.
  • the embodiment of the application does not limit this.
• each data analysis network element of data analysis network element 201 to data analysis network element 20n has its own distributed original data, while the data analysis network element 100 may have no original data, or the data analysis network element 100 cannot collect the original data distributed across data analysis network element 201 to data analysis network element 20n. Therefore, each data analysis network element of data analysis network element 201 to data analysis network element 20n may not need to send its own original data to the data analysis network element 100.
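This data-locality property (raw data never leaving the client-side elements, only model parameters crossing to the element playing the server role) can be sketched with a toy one-parameter model; all names and the learning rule are illustrative assumptions:

```python
def local_step(weight, data, lr=0.1):
    """One gradient step of a 1-D least-squares model (y ≈ w * x) on local
    raw data. Returns the updated weight; the (x, y) samples themselves are
    never transmitted anywhere."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

# Each client-side data analysis network element trains on its own raw data
# and reports only the resulting model parameter.
client_a = local_step(0.0, [(1.0, 2.0), (2.0, 4.0)])
client_b = local_step(0.0, [(1.0, 2.0)])

# The server-side element sees weights only, never the samples.
server_update = (client_a + client_b) / 2
```

The server-side aggregation here is a plain average for brevity; a real deployment would iterate these local steps and aggregations over many rounds.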
• the granularity of the deployment of the aforementioned data analysis network element 100 and data analysis network element 201 to data analysis network element 20n can be cross-operator (inter-public land mobile network, Inter-PLMN), cross-region within the same operator (Intra-PLMN or Inter-Region), across network slices, within a network slice, across vendors, within the same vendor, across data network names (DNN), or within the same DNN.
• for example, when the deployment granularity is the granularity of a manufacturer, at least one data analysis network element 100 and one or more data analysis network elements are deployed within the manufacturer.
• when the deployment granularity is a DNN granularity, at least one data analysis network element 100 and one or more data analysis network elements are deployed in the DNN.
• when the deployment granularity is a network slice granularity, a data analysis network element 100 is deployed in the network slice, and one or more data analysis network elements are deployed in each of the different network areas served by the network slice.
  • the communication system shown in FIG. 1 can be applied to the current 5G network architecture and other network architectures that will appear in the future, which is not specifically limited in the embodiment of the present application.
• The following takes the case in which the communication system shown in FIG. 1 is applied to the 5G network architecture shown in FIG. 2 as an example.
• The network element or entity corresponding to the above-mentioned data analysis network element 100, or to any one of data analysis network elements 201 to 20n, may be a network data analysis function (NWDAF) network element in the 5G network architecture shown in FIG. 2, a management data analysis function (MDAF) network element in the network management system, or even a data analysis network element or data analysis device on the RAN side.
• Alternatively, the network element or entity corresponding to any data analysis network element in the embodiments of this application may be a module in an NWDAF network element, an MDAF network element, or a data analysis network element or data analysis device on the RAN side. This is not limited in the embodiments of this application.
• Alternatively, the network element or entity corresponding to data analysis network element 100, or to any one of data analysis network elements 201 to 20n, may be a terminal as shown in FIG. 2.
  • the network element or entity corresponding to the data analysis network element 100 or any one of the data analysis network element 201 to the data analysis network element 20n is not limited to the terminal or the NWDAF network element, etc. Any network element that has a model training function or supports distributed learning can be used as a data analysis network element in the embodiment of the present application.
  • the service discovery network element 300 supports network functions or network service registration, discovery, update, and authentication functions.
• The network element or entity corresponding to the service discovery network element 300 may be the network repository function (NRF) network element, the unified data management (UDM) network element, or the unified data repository (UDR) network element in the 5G network architecture shown in FIG. 2.
• Alternatively, the service discovery network element 300 may be a domain name system (DNS) server.
• The following embodiments take the service discovery network element 300 being an NRF network element as an example. In actual deployment, the service discovery network element 300 may be an NRF network element or may have another name, which is not limited in this application.
• the 5G network architecture may also include: terminals, access devices (for example, an access network (AN) or a radio access network (RAN)), an application function (AF) network element, an operation, administration, and maintenance (OAM) network element (also called an operation management and maintenance network element), a PCF network element, an SMF network element, a user plane function (UPF) network element, a data network (DN), an AMF network element, an authentication server function (AUSF) network element, a network exposure function (NEF) network element, a UDR network element, or a UDM network element, etc., which are not specifically limited in the embodiment of the present application.
• the terminal communicates with the AMF network element through a next generation (N1) interface (N1 for short).
  • the access device communicates with the AMF network element through the N2 interface (N2 for short).
  • the access device communicates with the UPF network element through the N3 interface (N3 for short).
  • the UPF network element communicates with the DN through the N6 interface (N6 for short).
  • the UPF network element communicates with the SMF network element through the N4 interface (N4 for short).
• AMF network elements, AUSF network elements, SMF network elements, UDM network elements, UDR network elements, NRF network elements, NEF network elements, or PCF network elements interact with each other using service-oriented interfaces.
  • the service-oriented interface provided by the AMF network element to the outside may be Namf.
  • the service-oriented interface provided by the SMF network element to the outside may be Nsmf.
  • the service-oriented interface provided by the UDM network element to the outside may be Nudm.
  • the service-oriented interface provided by the UDR network element to the outside may be Nudr.
  • the service-oriented interface provided by the PCF network element to the outside may be Npcf.
  • the service-oriented interface provided by the NEF network element to the outside may be Nnef.
  • the service-oriented interface provided by the NRF network element to the outside may be Nnrf.
  • the service-oriented interface provided by the NWDAF network element to the outside may be Nnwdaf.
  • the AMF network element can also communicate with the SMF network element through the N11 interface (N11 for short).
  • the AMF network element can also communicate with the UDM network element through the N8 interface (N8 for short).
  • the SMF network element can also communicate with the PCF network element through the N7 interface (N7 for short).
  • the SMF network element can also communicate with the UDM network element through the N10 interface (N10 for short).
  • the AMF network element can also communicate with the AUSF network element through the N12 interface (N12 for short).
  • the UDM network element can also communicate with the UDR network element through an interface between each other.
  • the PCF network element may also communicate with the UDR network element through an interface between each other, which is not limited in the embodiment of the present application.
  • the AMF network element is mainly responsible for the mobility management in the mobile network, such as user location update, user registration network, user handover, etc.
  • the SMF network element is mainly responsible for session management in the mobile network, such as session establishment, modification, and release. Specific functions include assigning IP addresses to users and selecting UPF network elements that provide message forwarding functions.
  • PCF network elements are used to formulate background traffic transmission strategies.
  • the UDM network element or UDR network element is used to store user data, such as the information of any data analysis network element.
  • the UPF network element is mainly responsible for processing user messages, such as forwarding and charging.
• DN refers to an operator's network that provides data transmission services for terminals, such as an IP multimedia service (IMS) network, the Internet, and so on.
  • the data analysis network element is a network element device that can perform big data analysis, and can be, but is not limited to, a network data analysis function network element, etc., for example, the network data analysis function network element may be NWDAF.
  • the data analysis network element can perform distributed learning training or reasoning.
  • the NRF network element supports the registration, discovery, update, and authentication functions of network functions or network services.
• The specific application network element can be, but is not limited to, the operator's AF network element, a terminal, or third-party equipment, such as a non-operator AF network element (also called a third-party AF network element).
  • the AF network element of the operator may be, but is not limited to, the service management and control server of the operator; the AF network element of the third party may be, but is not limited to, the service server of the third party.
• Federated learning is an emerging basic artificial intelligence technology. Its design goal is to ensure information security during big data exchange, protect terminal data and personal data privacy, ensure legal compliance, and carry out high-efficiency machine learning among multiple computing nodes. Cross-domain joint training of a model can be achieved without the original data leaving the local domain, which improves training efficiency and, most importantly, uses federated learning technology to avoid the security problems caused by aggregating data to a data analysis center (for example, the original data being hijacked during transmission, or being misused by the data center).
• Based on the distribution characteristics of the data, federated learning can be divided into the following three categories: horizontal federated learning (HFL), vertical federated learning (VFL), and federated transfer learning (FTL).
  • FIG. 3 uses linear regression as an example to describe a training process of a horizontal federated learning provided by an embodiment of the present application.
  • the horizontal federation includes a central server node and multiple edge client nodes (for example, client node A, client node B, and client node C).
• The original data is distributed across the client nodes; the server node does not have the original data, and the client nodes are not allowed to send their original data to the server node.
  • each sample data includes a label, that is, the label and the data are stored together.
  • the data analysis module on each client node can train its own model according to the linear regression algorithm, which is called a sub-model, namely:
• h(x_i) = θ_A · x_i^A (sub-model of client node A);
• h(x_j) = θ_B · x_j^B (sub-model of client node B);
• h(x_k) = θ_C · x_k^C (sub-model of client node C).
• Each client node computes a local gradient on its own samples and reports it to the server node, where N_i represents the number of samples of client node i and ∇L_i represents its local gradient value.
• After the server node receives the above information, it aggregates the gradients, for example by computing a weighted average of the local gradients according to the sample counts N_i.
• The server node then sends the aggregated gradient to each client node participating in the training, and each client node updates its model parameters locally using the aggregated gradient.
• The server node can control the end of the training by the number of iterations (for example, terminating after 10,000 training iterations), or by setting a threshold on the loss function (for example, ending the training when L_i < 0.0001).
• After the training ends, each client node keeps the same model (which can come from the server node, or can be obtained locally by personalizing the model received from the server node) and uses it for local reasoning.
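• The training loop described above (local gradient computation on each client node, weighted aggregation at the server node, and iteration-count or loss-threshold stop conditions) can be sketched as follows. This is a minimal illustration: the node count, data, learning rate, and stop thresholds are demonstration assumptions, not values prescribed by this embodiment.

```python
import numpy as np

def local_gradient(theta, X, y):
    """Client node: gradient and loss of the sub-model h(x) = theta . x on local samples."""
    residual = X @ theta - y
    grad = X.T @ residual / len(y)            # local gradient value
    loss = float(np.mean(residual ** 2) / 2)  # local loss L_i
    return grad, loss, len(y)                 # N_i = number of local samples

def aggregate(reports):
    """Server node: weighted average of local gradients by sample count N_i."""
    total = sum(n for _, _, n in reports)
    return sum(n * g for g, _, n in reports) / total

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
clients = []                                  # original data stays on client nodes A, B, C
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_theta))

theta, lr = np.zeros(2), 0.1
for _ in range(10_000):                       # iteration-count stop condition
    reports = [local_gradient(theta, X, y) for X, y in clients]
    theta -= lr * aggregate(reports)          # clients apply the aggregated gradient
    if max(loss for _, loss, _ in reports) < 0.0001:
        break                                 # loss-threshold stop condition
```

Here a single `theta` stands in for the model copy that every client node keeps in sync with the server node; no client ever exposes its raw `(X, y)` data, only its gradient and sample count.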
• However, the current 5G network does not specify how to apply the training process of horizontal federated learning to model training, especially in the scenarios described in FIG. 4 or FIG. 5. For example:
• Scenario 1: within the same operator, across vendors.
  • a mobile operator A may purchase equipment from two vendors X and Y at the same time, but the devices of X and Y cannot directly exchange data due to privacy protection.
• That is, neither the data collected by the equipment of vendor X nor the data collected by the equipment of vendor Y is provided to the data analysis network element of mobile operator A.
• In this case, the data analysis network element of mobile operator A (such as the server-type data analysis network element mentioned above) can use federated learning technology to train a model of the entire network.
• However, the prerequisite for federated learning training is that the data analysis network element can accurately learn which network elements or devices among the different vendors' equipment support horizontal federated learning (for example, each vendor has a client-type data analysis network element that provides services for that vendor). Therefore, how the data analysis network element in mobile operator A discovers whether the equipment of different vendors supports horizontal federated learning is a problem that needs to be solved urgently.
• Scenario 2: cross-operator within the same network.
• For example, mobile operator A and mobile operator B share base-station-side resources (for example, spectrum), and the two operators want to train a model of the entire network and then share the data analysis results with each other. However, neither mobile operator A nor mobile operator B is willing to report its original data, so federated learning technology can be used to train the entire network model.
  • how the data analysis network element of mobile operator A or the data analysis network element of mobile operator B discovers whether the other party's network element or equipment supports horizontal federated learning is a problem that needs to be solved urgently.
• In such scenarios, federated learning technology can likewise be used to train and obtain the target model, but the prerequisite for realizing federated learning is that the (server-type) data analysis network element can obtain the information of the (client-type) data analysis network element serving each NSI, or of the (client-type) data analysis network element serving each city; otherwise, horizontal federated learning may not be able to proceed.
  • the embodiment of the present application describes a communication method in conjunction with FIG. 6 to FIG. 7, which can enable the first data analysis network element to accurately obtain information about one or more second data analysis network elements that support distributed learning.
• The execution subject of the communication method may be the first data analysis network element, or a device (for example, a chip) applied to the first data analysis network element.
  • the execution subject of a communication method may be the second data analysis network element, or may be a device (for example, a chip) applied to the second data analysis network element.
  • the execution subject of a communication method may be a service discovery network element, or a device (for example, a chip) applied to the service discovery network element.
  • the following embodiments will be described by taking the execution subject of the communication method as the first data analysis network element, the second data analysis network element, and the service discovery network element as an example.
• Any step performed by the first data analysis network element can also be performed by the device applied to the first data analysis network element; any step performed by the second data analysis network element can also be performed by the device applied to the second data analysis network element; and any step performed by the service discovery network element can also be performed by the device applied to the service discovery network element. This is uniformly explained here and will not be repeated below.
• FIG. 6 is a schematic interaction diagram of a communication method provided by an embodiment of this application. The method includes the following steps:
  • Step 601 The first data analysis network element sends a first request to the service discovery network element, and correspondingly, the service discovery network element receives the first request from the first data analysis network element.
  • the first request is used to request information of the second data analysis network element.
  • the first request includes: one or more of distributed learning information and first indication information.
  • the distributed learning information includes the type of distributed learning
  • the first indication information is used to indicate the type of the second data analysis network element required by the first data analysis network element.
  • the type of distributed learning carried in the first request is the type of distributed learning that the second data analysis network element requested by the first data analysis network element should have.
• In the embodiments of this application, federated learning is taken as an example of distributed learning.
  • the type of distributed learning includes one of horizontal learning, vertical learning, and transfer learning.
  • the first request may also carry fourth indication information, and the fourth indication information is used to instruct the first data analysis network element to request information of the second data analysis network element from the service discovery network element .
• The second data analysis network element that the first data analysis network element requests from the service discovery network element by using the first request in step 601 of the embodiment of the present application may be a generic reference rather than a specific network element.
• That is, the first data analysis network element may not know the identity of the second data analysis network element; instead, it carries its demand information (for example, the distributed learning information or the first indication information) in the first request, so that the service discovery network element provides, according to the demand information, one or more second data analysis network elements that meet the demand information.
• the type of the second data analysis network element is one of the following: client or local trainer.
  • the first data analysis network element may be the data analysis network element 100 shown in FIG. 1.
  • the service discovery network element may be the service discovery network element 300.
  • Step 602 The service discovery network element determines one or more second data analysis network elements according to the first request.
  • the one or more second data analysis network elements determined by the service discovery network element in step 602 support the type of distributed learning requested by the first data analysis network element, and/or, the one or more second data analysis The type of the network element is the same as the type of the second data analysis network element indicated by the first indication information.
  • the one or more second data analysis network elements may be all or part of the data analysis network element 201 to the data analysis network element 20n shown in FIG. 1.
• For example, if the first data analysis network element requests a second data analysis network element capable of horizontal learning, and the type of the second data analysis network element indicated by the first indication information is client or local trainer, then the type of distributed learning supported by the one or more second data analysis network elements determined by the service discovery network element should be horizontal learning.
• In this case, some of the one or more second data analysis network elements may be of type client while the others are of type local trainer; alternatively, the one or more second data analysis network elements may all be of type client or all be of type local trainer, which is not limited in the embodiment of the present application.
• It should be noted that, when the type of the second data analysis network element indicated by the first indication information is A or B, the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element need not include both a second data analysis network element of type A and a second data analysis network element of type B.
• Instead, the types of the one or more second data analysis network elements may be all A or all B, which is not limited in the embodiment of the present application.
• In other words, the first indication information indicating types A and B does not mean that the type of the second data analysis network element requested by the first data analysis network element is both A and B; the requested second data analysis network element may be of either type A or type B.
• When the first request carries multiple types of distributed learning, the one or more second data analysis network elements may include a second data analysis network element that supports horizontal learning, a second data analysis network element that supports vertical learning, and a second data analysis network element that supports transfer learning.
• For example, data analysis network element 201 may support horizontal learning, data analysis network element 202 may support vertical learning, and data analysis network element 203 may support transfer learning.
• Alternatively, each of the one or more second data analysis network elements may need to support horizontal learning, vertical learning, and transfer learning at the same time.
• The service discovery network element stores the information of at least one or more second data analysis network elements, or the service discovery network element may obtain the information of one or more second data analysis network elements from other devices according to the first request.
• The information of the one or more second data analysis network elements may be, for example, one or more of the following information corresponding to the second data analysis network element: distributed learning information, the range of the second data analysis network element, or second indication information, where the second indication information is used to indicate the type of the second data analysis network element.
  • the distributed learning information corresponding to the second data analysis network element includes the type of distributed learning supported by the second data analysis network element and/or the distributed learning algorithm information supported by the second data analysis network element.
• The algorithm information supported by distributed learning involved in the embodiments of this application includes one or more of the algorithm type, the algorithm ID, and the algorithm performance. This is uniformly explained here and will not be described again below.
  • the algorithm type may be: one or more of linear regression, logistic regression, neural network, K-Means, and reinforcement learning.
  • the algorithm performance can be one or more of training time, convergence speed, etc.
• The algorithm performance is mainly used to assist the data analysis network element in selecting, during model training, an algorithm whose performance exceeds a preset algorithm threshold (for example, whose training time is less than a preset time threshold, or whose convergence speed is higher than a preset speed threshold).
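• The threshold-based algorithm selection described above can be sketched as a simple filter over candidate algorithm records. The field names, candidate list, and threshold values below are illustrative assumptions rather than fields defined by this embodiment.

```python
def select_algorithms(candidates, max_training_time=None, min_convergence_speed=None):
    """Keep algorithms whose reported performance beats the preset thresholds:
    training time below the time threshold, convergence speed above the speed threshold."""
    chosen = []
    for algo in candidates:
        if max_training_time is not None and algo["training_time"] >= max_training_time:
            continue
        if min_convergence_speed is not None and algo["convergence_speed"] <= min_convergence_speed:
            continue
        chosen.append(algo["algorithm_id"])
    return chosen

candidates = [
    {"algorithm_id": "linear-regression-v1", "training_time": 30, "convergence_speed": 0.8},
    {"algorithm_id": "neural-network-v2", "training_time": 240, "convergence_speed": 0.5},
]
fast = select_algorithms(candidates, max_training_time=60)   # ['linear-regression-v1']
```

A data analysis network element could apply such a filter to the algorithm performance reported for each candidate before starting model training.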
  • Step 603 The service discovery network element sends information of one or more second data analysis network elements to the first data analysis network element.
  • the first data analysis network element receives information from one or more second data analysis network elements of the service discovery network element.
  • the types of different second data analysis network elements among the one or more second data analysis network elements may be the same or different.
  • the types of distributed learning supported by different second data analysis network elements may be the same or different.
  • the distributed learning algorithm information supported by different second data analysis network elements may be the same or different, which is not limited in the embodiment of the present application.
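• The matching performed by the service discovery network element in step 602, selecting second data analysis network elements that support the requested type of distributed learning and/or match the indicated network element type, can be sketched as a filter over a registry. The record fields, identifiers, and sample registry below are illustrative assumptions, not the actual service message format.

```python
from dataclasses import dataclass, field

@dataclass
class DataAnalysisNF:
    nf_id: str
    nf_type: str                                       # e.g. "client" or "local trainer"
    learning_types: set = field(default_factory=set)   # e.g. {"horizontal"}

def discover(registry, learning_type=None, indicated_types=None):
    """Step 602 sketch: return registered second data analysis network elements that
    support the requested distributed-learning type and/or the indicated NF type."""
    result = []
    for nf in registry:
        if learning_type and learning_type not in nf.learning_types:
            continue
        if indicated_types and nf.nf_type not in indicated_types:
            continue
        result.append(nf)
    return result

registry = [
    DataAnalysisNF("NWDAF-201", "client", {"horizontal"}),
    DataAnalysisNF("NWDAF-202", "local trainer", {"vertical"}),
    DataAnalysisNF("NWDAF-203", "client", {"horizontal", "transfer"}),
]

# First request: horizontal learning, indicated type client or local trainer.
found = discover(registry, "horizontal", {"client", "local trainer"})
```

With this registry, `found` contains NWDAF-201 and NWDAF-203: both support horizontal learning and are of an indicated type, while NWDAF-202 is excluded because it only supports vertical learning.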
• The embodiment of the present application provides a communication method in which a first data analysis network element sends a first request to a service discovery network element, where the first request is used to request from the service discovery network element the characteristics of the second data analysis network element required by the first data analysis network element.
• In this way, the service discovery network element can conveniently provide the first data analysis network element, according to the first request, with the information of one or more second data analysis network elements that support the requested type of distributed learning, and/or whose type is the same as the type of the second data analysis network element requested by the first data analysis network element.
  • this solution can achieve the purpose of the first data analysis network element to find a data analysis network element capable of distributed learning and training through the service discovery network element.
• In addition, after the first data analysis network element obtains the information of the one or more second data analysis network elements, it can subsequently perform model training in collaboration with the one or more second data analysis network elements when model training is required, thereby expanding the application scenarios of data analysis.
  • the method provided in the embodiment of the present application may further include: the first data analysis network element determines to trigger distributed learning training.
  • the first data analysis network element determines to trigger distributed learning training in the following manner: the first data analysis network element determines to trigger distributed learning training based on configuration information or manual instructions.
  • the first data analysis network element determining to trigger the distributed learning training can be implemented in the following manner: the first data analysis network element actively initiates the distributed learning training.
• Alternatively, the first data analysis network element determines to trigger distributed learning training in the following manner: the first data analysis network element determines to trigger distributed learning training based on a data analysis result request from a Consumer NF network element. Taking the Consumer NF network element being an SMF network element as an example, if the SMF network element requests the first data analysis network element to identify the data packets flowing through the UPF network element, but at this time the first data analysis network element finds that no recognition model serving this purpose has been trained yet, the first data analysis network element triggers distributed learning training.
• In a possible implementation, the first request further includes the range of the first data analysis network element, where the range of the first data analysis network element corresponds to the range of the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element.
• In this case, step 602 in the embodiment of this application can be implemented in the following manner: the service discovery network element takes, as the one or more second data analysis network elements, the second data analysis network elements that are located within the range of the first data analysis network element and that support the distributed learning information requested by the first data analysis network element.
• Alternatively, step 602 in the embodiment of the present application can be implemented in the following manner: the service discovery network element takes, as the one or more second data analysis network elements, the second data analysis network elements that are located within the range of the first data analysis network element and whose type is the same as the type of the second data analysis network element indicated by the first indication information.
• The range of the first data analysis network element includes one or more of the following information: the area served by the first data analysis network element, the identifier of the PLMN to which the first data analysis network element belongs, and the information of the network slice served by the first data analysis network element, where the network slice information is used to identify the network slice.
  • the network slice information may be single network slice selection assistance information (S-NSSAI).
  • the range of the network slice served by the first data analysis network element may be taken as the range of the first data analysis network element.
• The range of the second data analysis network element includes one or more of the following information: the area served by the second data analysis network element, the identifier of the PLMN to which the second data analysis network element belongs, the range of the network slice instance served by the second data analysis network element, the DNN served by the second data analysis network element, and the equipment vendor information of the second data analysis network element.
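• One way to realize the range matching described above is to treat each range as a set of attributes (PLMN, served area, slice, DNN, and so on) and require the candidate's range to overlap the requester's range on every attribute both sides specify. The attribute names and sample values below are illustrative assumptions.

```python
def ranges_match(first_range, second_range):
    """Return True if the second data analysis NF's range overlaps the first data
    analysis NF's range on every attribute that both ranges specify."""
    for key in first_range.keys() & second_range.keys():
        if not set(first_range[key]) & set(second_range[key]):
            return False
    return True

first = {"plmn": ["460-00"], "area": ["city-A", "city-B"]}
candidates = {
    "NWDAF-201": {"plmn": ["460-00"], "area": ["city-A"], "dnn": ["ims"]},
    "NWDAF-202": {"plmn": ["460-01"], "area": ["city-A"]},
}
matched = [nf for nf, rng in candidates.items() if ranges_match(first, rng)]
```

Here NWDAF-201 matches (same PLMN, overlapping area), while NWDAF-202 is excluded because it belongs to a different PLMN; attributes that only one side specifies (such as `dnn`) are ignored.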
  • the distributed learning information in the embodiment of the present application further includes algorithm information supported by distributed learning.
• Correspondingly, the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element also support the algorithm information supported by distributed learning.
• That is, if the distributed learning information in the first request includes both the type of distributed learning and the algorithm information supported by the distributed learning, the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element must support not only the type of distributed learning but also the algorithm information supported by the distributed learning.
• For example, if the type of distributed learning carried in the first request is horizontal learning and the supported algorithm type is "linear regression", the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element not only support horizontal learning, but the algorithm type they support is also "linear regression".
  • the distributed learning information carried in the first request includes the type of distributed learning and the algorithm information supported by the distributed learning.
  • the distributed learning information carried in the first request includes the type of distributed learning and the algorithm information supported by the distributed learning, and the first request also carries the range of the first data analysis network element.
  • FIG. 7 shows another possible embodiment provided by the embodiment of the present application.
  • the method includes: a registration phase, a network element discovery phase, and a model training phase.
  • the registration phase includes steps 701 to 704.
  • the network element discovery phase includes steps 705 to 707.
  • the model training phase includes steps 708 to 714.
  • Step 701 The first data analysis network element sends a second request to the service discovery network element, and correspondingly, the service discovery network element receives the second request from the first data analysis network element.
  • the second request is used to request to register the information of the first data analysis network element.
  • the information of the first data analysis network element includes one or more of the following information corresponding to the first data analysis network element: distributed learning information, a range of the first data analysis network element, or second indication information.
  • the second indication information is used to indicate the type of the first data analysis network element.
  • the distributed learning information corresponding to the first data analysis network element includes one or more of: the type of distributed learning supported by the first data analysis network element and the algorithm information of the distributed learning supported by the first data analysis network element.
  • the second request may be a registration request message.
  • the second request may further include fifth indication information, where the fifth indication information is used to request registration of the information of the first data analysis network element.
  • the type of the first data analysis network element includes one or more of the following information: server (server), coordinator (coordinator), central (central or centralized) trainer, and global (global) trainer.
  • the information of the first data analysis network element may further include the identification of the first data analysis network element and the address information of the first data analysis network element.
  • Step 702 The service discovery network element registers the information of the first data analysis network element.
  • step 702 in the embodiment of the present application may be implemented in the following manner: the service discovery network element registers the information of the first data analysis network element at the service discovery network element.
  • the service discovery network element stores the information of the first data analysis network element in the storage device of the service discovery network element.
  • step 702 in the embodiment of the present application can be implemented in the following manner: the service discovery network element sends the information of the first data analysis network element to an external storage device (for example, a UDM network element or a UDR network element).
  • the external storage device stores the information of the first data analysis network element.
  • the subsequent service discovery network element may obtain the information of the first data analysis network element from the external storage device.
  • the service discovery network element in the embodiment of this application may also be a UDM network element or a UDR network element, that is, the UDM network element or the UDR network element stores the first data analysis network element information.
  • the first data analysis network element registers its information at the service discovery network element, so that a subsequent Consumer NF network element can query, through the service discovery network element, the information of the first data analysis network element that supports distributed learning and whose type is server or coordinator. The Consumer NF network element can then request the first data analysis network element to perform service identification on the data packets flowing through the UPF network element.
  • Step 703 The second data analysis network element sends a third request to the service discovery network element, and correspondingly, the service discovery network element receives the third request from the second data analysis network element.
  • the third request is used to request to register the information of the second data analysis network element.
  • the information of the second data analysis network element includes one or more of the following information corresponding to the second data analysis network element: distributed learning information, the range of the second data analysis network element, or the first indication information.
  • the first indication information is used to indicate the type of the second data analysis network element.
  • the distributed learning information corresponding to the second data analysis network element may include one or more of: the type of distributed learning supported by the second data analysis network element and the algorithm information of distributed learning supported by the second data analysis network element.
  • the third request may further include sixth indication information, where the sixth indication information is used to request registration of the information of the second data analysis network element.
  • the information of the second data analysis network element may further include the identification of the second data analysis network element and the address information of the second data analysis network element.
  • Step 704 The service discovery network element registers the information of the second data analysis network element.
  • For the implementation of step 704, reference may be made to the description at step 702, which will not be repeated here; the difference is that the service discovery network element registers the information of the second data analysis network element.
  • each of the one or more second data analysis network elements may register respective information of each second data analysis network element at the service discovery network element.
  • Steps 701 to 702 in the embodiment of the present application are the process in which the first data analysis network element registers its information with the service discovery network element, and steps 703 to 704 are the process in which the second data analysis network element registers its information with the service discovery network element. Steps 701 to 702 and steps 703 to 704 are performed in no particular order.
  • Whether to register the information of a data analysis network element (for example, the first data analysis network element or the second data analysis network element) at the service discovery network element can be determined independently by the data analysis network element, determined by agreement, or triggered by another network element that causes the data analysis network element to perform the registration process; this is not limited in the embodiment of the present application.
  • Steps 705 to 707 are the same as steps 601 to 603, and will not be repeated here.
  • the method provided in the embodiment of the present application may further include after step 707:
  • Step 708 The first data analysis network element determines, according to the information of the one or more second data analysis network elements, the information of third data analysis network elements capable of distributed learning, where the number of third data analysis network elements is one or more.
  • the one or more third data analysis network elements in the embodiment of the present application may be all the second data analysis network elements among the one or more second data analysis network elements, or part of the second data analysis network elements.
  • for example, the one or more third data analysis network elements may be the data analysis network element 201, the data analysis network element 202, and the data analysis network element 203.
  • the third data analysis network element satisfies any one of the following conditions in Example 1 to Example 3:
  • Example 1 The load of the third data analysis network element is lower than the preset load threshold.
  • step 708 can be implemented in the following manner: the first data analysis network element obtains load information of one or more second data analysis network elements.
  • the first data analysis network element determines, according to the load information of the one or more second data analysis network elements, the second data analysis network element whose load is lower than the preset load threshold among the one or more second data analysis network elements as the third data analysis network element capable of performing distributed learning.
  • Example 2 The priority of the third data analysis network element is higher than the preset priority threshold.
  • step 708 can be implemented in the following manner: the first data analysis network element obtains the priority of one or more second data analysis network elements.
  • the first data analysis network element determines, according to the priority of the one or more second data analysis network elements, the second data analysis network element whose priority is higher than the preset priority threshold among the one or more second data analysis network elements as the third data analysis network element capable of distributed learning.
  • Example 3 The third data analysis network element is located within the range of the first data analysis network element.
  • step 708 can be implemented in the following manner: the first data analysis network element obtains the range of the one or more second data analysis network elements. According to the range of the one or more second data analysis network elements, the first data analysis network element determines the second data analysis network elements located within the range of the first data analysis network element as the third data analysis network elements capable of performing distributed learning.
  • if the first request does not carry the range of the first data analysis network element, some of the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element may be located outside the range of the first data analysis network element, while the others are located within it. Therefore, after the first data analysis network element obtains the information of the one or more second data analysis network elements, it can further filter them according to the location information of each second data analysis network element to obtain the one or more third data analysis network elements located within the range of the first data analysis network element.
  • if the first request carries the range of the first data analysis network element, the one or more second data analysis network elements provided by the service discovery network element to the first data analysis network element are located within the range of the first data analysis network element, and therefore the one or more third data analysis network elements are also located within that range.
  • Example 1, Example 2, and Example 3 can be used alone or in combination as the conditions by which the first data analysis network element determines the third data analysis network element from the one or more second data analysis network elements.
  • for example, the load of the third data analysis network element is lower than the preset load threshold, its priority is higher than the preset priority threshold, and the third data analysis network element is also located within the range of the first data analysis network element.
  • the foregoing Example 1 to Example 3 are only examples of how the first data analysis network element determines the third data analysis network element from the one or more second data analysis network elements. The first data analysis network element may also determine the third data analysis network element from the one or more second data analysis network elements in other ways, which is not limited in the embodiment of the present application.
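Under stated assumptions (illustrative thresholds and field names, none of them from the specification), the combined use of Example 1 to Example 3 in step 708 can be sketched as a simple filter over the candidate list:

```python
# Sketch of step 708: filter candidate second data analysis network elements
# into third data analysis network elements using Examples 1-3 together.
# Thresholds and field names are illustrative assumptions.

LOAD_THRESHOLD = 0.7      # Example 1: load must be below this
PRIORITY_THRESHOLD = 5    # Example 2: priority must be above this

def select_third_elements(candidates, coordinator_range):
    selected = []
    for nf in candidates:
        if nf["load"] >= LOAD_THRESHOLD:          # Example 1
            continue
        if nf["priority"] <= PRIORITY_THRESHOLD:  # Example 2
            continue
        if nf["area"] not in coordinator_range:   # Example 3
            continue
        selected.append(nf["id"])
    return selected

candidates = [
    {"id": "nwdaf-201", "load": 0.3, "priority": 8, "area": "TA1"},
    {"id": "nwdaf-202", "load": 0.9, "priority": 9, "area": "TA1"},  # too loaded
    {"id": "nwdaf-203", "load": 0.2, "priority": 7, "area": "TA9"},  # out of range
]
chosen = select_third_elements(candidates, coordinator_range={"TA1", "TA2"})
```

Each condition can also be applied on its own, matching the "alone or in combination" wording above.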
  • Step 709 Each of the one or more third data analysis network elements determines a sub-model (Sub-Model).
  • the sub-model determined by any third data analysis network element is obtained by training the third data analysis network element according to the data obtained by the third data analysis network element.
  • the data acquired by the third data analysis network element refers to the data acquired by the third data analysis network element from the scope of the third data analysis network element.
  • for example, within its scope, the third data analysis network element obtains terminal data (from the UE), service data (from the AF network element), network data (from one or more core network elements, such as AMF, SMF, PCF, or UPF network elements), base station data (from access network elements, such as the RAN or gNB), network management data (from OAM network elements), and so on.
  • the third data analysis network element may determine the sub-model under the trigger of the first data analysis network element.
  • step 709 in the embodiment of the present application can be implemented in the following manner: any third data analysis network element sets, according to the configuration parameters, the parameters used when training the acquired data, and trains the acquired data (shown in Table 1) on the local smart chip (for example, a graphics processing unit (GPU)) of the third data analysis network element to obtain the sub-model.
  • the training process can refer to the horizontal federated training process in FIG. 3 taking the linear regression algorithm as an example.
  • the training architecture of other algorithms is similar, and will not be repeated here.
  • the configuration parameters in the embodiments of the present application may be pre-configured at the third data analysis network element, or the configuration parameters may also be provided by the first data analysis network element.
  • the method provided in this embodiment of the present application may further include:
  • the first data analysis network element sends the configuration parameters, and correspondingly, the one or more third data analysis network elements receive the configuration parameters from the first data analysis network element.
  • This configuration parameter is used for the third data analysis network element to determine the parameters used when training the sub-model.
  • the first data analysis network element sends the aforementioned configuration parameters to each of the one or more third data analysis network elements.
  • the configuration parameters include one or more of the following information: initial model, training set selection criteria, feature generation method, training termination condition, maximum training time, and maximum waiting time.
  • the initial model includes algorithm type and model initial parameters.
  • Training set selection criteria: the restriction conditions for each feature. For example, for service experience model training, the measured RSRP of the terminal needs to be restricted; when the RSRP value is less than -130dB or greater than -100dB, the corresponding sample data should be discarded.
  • Feature generation method: the calculation method for each feature. For example, for service experience model training, RSRP needs to be normalized to the range 0 to 1, so the first data analysis network element needs to indicate to the third data analysis network element the normalization method for RSRP, for example, max-min normalization.
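A minimal sketch of the two instructions above, assuming the illustrative RSRP bounds and the max-min normalization named in the text:

```python
# Sketch of the training set selection criterion (discard RSRP outside
# [-130, -100] dB) followed by max-min normalization into [0, 1].
# Sample values are illustrative.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

samples = [-135.0, -128.0, -114.0, -100.0, -95.0]          # raw RSRP samples
kept = [v for v in samples if -130.0 <= v <= -100.0]        # selection criterion
normalized = min_max_normalize(kept)                        # feature generation
```
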
  • Training termination conditions: for example, the maximum number of iterations, where the training is terminated when the number of iterations reaches the maximum number of iterations. Another example is the maximum loss function value: the loss function decreases in each iteration of training, and when the loss function is reduced to the required maximum loss function value, the training can be terminated.
  • Maximum training time: used to indicate the maximum time of each round of iterative training. When the time of one round of iterative training exceeds the maximum training time, the entire federated training process may be affected; therefore, the first data analysis network element can limit the time for the third data analysis network element to perform each round of iterative training.
  • Maximum waiting time: used to indicate the maximum time for the first data analysis network element to wait for the third data analysis network element to feed back the sub-model during each round of iterative training. Because the transmission of the sub-model from the third data analysis network element to the first data analysis network element also takes time, the maximum waiting time includes both the maximum training time and the transmission time.
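The configuration parameters listed above could be bundled into a single structure sent by the first data analysis network element. The field names and default values below are illustrative assumptions, not normative:

```python
# Sketch of the configuration parameters described above, as one structure.
# All field names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrainingConfig:
    algorithm_type: str                  # initial model: algorithm type
    initial_parameters: list             # initial model: parameter values
    rsrp_min: float = -130.0             # training set selection criterion
    rsrp_max: float = -100.0
    feature_generation: str = "min_max"  # feature generation method
    max_iterations: int = 100            # training termination condition
    max_loss: float = 0.01               # alternative termination condition
    max_training_time_s: int = 600       # per-round training time limit
    max_waiting_time_s: int = 660        # training time plus transmission time

cfg = TrainingConfig(algorithm_type="linear_regression",
                     initial_parameters=[0.0, 0.0])
```

Note that `max_waiting_time_s` exceeds `max_training_time_s`, reflecting the transmission-time margin described above.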
  • Step 710 The one or more third data analysis network elements send their respective sub-models to the first data analysis network element, and correspondingly, the first data analysis network element receives the sub-models from the one or more third data analysis network elements .
  • Step 711 The first data analysis network element determines an updated model according to one or more sub-models of the third data analysis network element.
  • the first data analysis network element aggregates the sub-models provided by each third data analysis network element to obtain an updated model.
  • for example, the sub-model provided by the data analysis network element 201 is sub-model 1, the sub-model provided by the data analysis network element 202 is sub-model 2, and the sub-model provided by the data analysis network element 203 is sub-model 3. The first data analysis network element can aggregate sub-model 1, sub-model 2, and sub-model 3 to obtain the updated model.
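The aggregation in step 711 can be sketched as a sample-weighted average of parameter vectors (a FedAvg-style rule commonly used for horizontal federated learning; the text does not mandate a specific aggregation rule):

```python
# Sketch of step 711: the first data analysis network element aggregates the
# sub-models into an updated model. Each sub-model is a parameter vector with
# a sample count; aggregation is a sample-weighted average (an assumption).

def aggregate(sub_models):
    """sub_models: list of (parameter_vector, sample_count) pairs."""
    total = sum(n for _, n in sub_models)
    dim = len(sub_models[0][0])
    return [sum(params[i] * n for params, n in sub_models) / total
            for i in range(dim)]

sub_model_1 = ([1.0, 2.0], 100)  # from data analysis network element 201
sub_model_2 = ([3.0, 4.0], 100)  # from data analysis network element 202
sub_model_3 = ([5.0, 6.0], 200)  # from data analysis network element 203
updated_model = aggregate([sub_model_1, sub_model_2, sub_model_3])
```
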
  • Step 712 The first data analysis network element sends the updated model to the one or more third data analysis network elements. Correspondingly, each of the one or more third data analysis network elements obtains the updated model from the first data analysis network element.
  • step 709 is performed in a loop until the training termination condition indicated by the configuration parameters is reached. For example, the third data analysis network element may perform N rounds of iterative training, and in each round of iterative training the third data analysis network element reports the sub-model obtained in that round to the first data analysis network element, that is, the sub-model is reported N times.
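The iterative loop across steps 709 to 712 can be sketched as follows; the local training step is a stand-in stub, since the actual per-round training depends on the chosen algorithm:

```python
# Sketch of the steps 709-712 loop: each round, every third data analysis
# network element trains a sub-model locally, the first data analysis network
# element aggregates them, and the updated model is redistributed.

def local_train(model, local_bias):
    # Stand-in for step 709: nudge the model toward the local data.
    return [w + local_bias for w in model]

def federated_training(initial_model, local_biases, max_rounds):
    model = list(initial_model)
    for _ in range(max_rounds):                       # loop over rounds
        sub_models = [local_train(model, b)           # step 709/710
                      for b in local_biases]
        model = [sum(ws) / len(ws)                    # step 711: aggregate
                 for ws in zip(*sub_models)]
    return model                                      # basis for step 713

target = federated_training([0.0], local_biases=[1.0, 3.0], max_rounds=2)
```

Here `max_rounds` plays the role of the maximum number of federated training times described at step 713 below.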
  • the method provided in the embodiment of the present application may further include after step 712:
  • Step 713 The first data analysis network element determines the target model according to the updated model.
  • step 713 in the embodiment of the present application can be implemented in the following manner: the first data analysis network element determines that the set maximum number of federated training times (also called the maximum number of iterations) has been reached, and determines the updated model as the target model. That is, when the maximum number of training times is reached, the first data analysis network element determines the updated model as the target model.
  • the maximum number of federated training times is the number of times that the first data analysis network element performs sub-model aggregation.
  • the maximum number of iterations in the training termination condition refers to the number of iterations in the process of generating the sub-model each time the third data analysis network element reports the sub-model.
  • Step 714 The first data analysis network element sends, to the one or more second data analysis network elements, the target model and one or more of the following information corresponding to the target model: model identifier (model ID), model version number (version ID), or data analysis identifier (analytics ID).
  • the method provided in the embodiment of the present application may further include: the first data analysis network element assigns a model identifier, a model version number, or a data analysis identifier to the target model.
  • Figure 8 takes as an example that the type of the first data analysis network element is server (which can be called the server NWDAF), the type of the second data analysis network element is client (which can be called the client NWDAF), the service discovery network element is an NRF network element, and the type of distributed learning is horizontal federated learning.
  • the method includes :
  • Step 801 The server NWDAF triggers the network element management_network element registration request service operation (Nnrf_NFManagement_NFRegister Request) to the NRF network element; correspondingly, the NRF network element receives the network element management_network element registration request service operation from the server NWDAF.
  • the network element management_network element registration request service operation is used to request the registration of server NWDAF information at the NRF network element.
  • the information of the server NWDAF includes one or more of the following information: basic network element information, the scope of the server NWDAF, federated learning capability information, or second indication information.
  • the NRF network element stores the server NWDAF information to complete the registration of the server NWDAF information.
  • the network element management_network element registration request service operation may carry indication information for instructing the registration of the server NWDAF at the NRF network element.
  • Step 802 The NRF network element triggers the network element management_network element registration response service operation (Nnrf_NFManagement_NFRegister Response) of the server NWDAF, and correspondingly, the server NWDAF receives the network element management_network element registration response service operation from the NRF network element.
  • the network element management_network element registration response service operation is used to indicate that the NRF network element has successfully registered server NWDAF information at the NRF network element.
  • the network element management_network element registration response service operation carries a registration success indication, and the registration success indication is used to indicate that the NRF network element has successfully registered server NWDAF information at the NRF network element.
  • Step 803 The client NWDAF triggers the network element management_network element registration request service operation of the NRF network element, and correspondingly, the NRF network element receives the network element management_network element registration request service operation from the client NWDAF.
  • the network element management_network element registration request service operation is used to request the registration of client NWDAF information at the NRF network element.
  • the information of the client NWDAF includes one or more of the following information: basic information of the client NWDAF, scope of the client NWDAF, federated learning capability information of the client NWDAF, and third indication information.
  • the basic information of the client NWDAF may be the type of the client NWDAF or the identification of the client NWDAF (for example, the client NWDAF ID) or the location of the client NWDAF or the address information of the client NWDAF.
  • after receiving the client NWDAF information, the NRF network element stores the client NWDAF information to complete the registration of the client NWDAF information.
  • the network element management_network element registration request service operation may carry indication information for instructing the registration of the client NWDAF at the NRF network element.
  • Step 804 The NRF network element triggers the network element management_network element registration response service operation (Nnrf_NFManagement_NFRegister Response) of the client NWDAF, and correspondingly, the client NWDAF receives the network element management_network element registration response service operation from the NRF network element.
  • the network element management_network element registration response service operation is used to indicate that the NRF network element has successfully registered the client NWDAF information at the NRF network element.
  • the network element management_network element registration response service operation carries a registration success indication, and the registration success indication is used to indicate that the NRF network element has successfully registered the client NWDAF information at the NRF network element.
  • Step 805 The server NWDAF determines to trigger the horizontal federated learning training.
  • for step 805, reference may be made to the process described above in which the first data analysis network element determines to trigger distributed learning training; details are not repeated here.
  • Step 806 The server NWDAF requests the NRF network element for the first client NWDAF list capable of horizontal federated learning.
  • step 806 in the embodiment of the present application can be implemented in the following manner: the server NWDAF triggers a network element discovery request (Nnrf_NFDiscovery_Request) to the NRF network element, and correspondingly, the NRF network element receives the network element discovery request from the server NWDAF .
  • the network element discovery request is used to request the first client NWDAF list that can perform horizontal federated learning from the NRF network element.
  • the network element discovery request includes: the scope of the server NWDAF, and the first indication information.
  • the first indication information is used to indicate to the NRF network element the type of client NWDAF required by the server NWDAF, or the algorithm performance requirements.
  • the network element discovery request carries indication information y
  • the indication information y is used to indicate a request to the NRF network element for the first client NWDAF list that can perform horizontal federated learning.
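A sketch of the network element discovery request payload in step 806; the key names below are illustrative, not the normative 3GPP attribute names:

```python
# Sketch of an Nnrf_NFDiscovery_Request payload as described in step 806.
# Key names and values are illustrative assumptions.

discovery_request = {
    "requester_nf_type": "NWDAF",        # the server NWDAF itself
    "target_nf_type": "NWDAF",
    "serving_scope": {"plmn": "PLMN1", "tracking_areas": ["TA1", "TA2"]},
    # first indication information: required client type / algorithm needs
    "required_client_type": "client",
    "required_learning_type": "horizontal_federated",
    "required_algorithm": "linear_regression",
    # indication information y: request the FL-capable first client NWDAF list
    "request_fl_client_list": True,
}

required_keys = {"target_nf_type", "required_learning_type",
                 "request_fl_client_list"}
is_valid = required_keys.issubset(discovery_request)
```
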
  • Step 807 The NRF network element determines the first client NWDAF list that can perform horizontal federated learning.
  • the first client NWDAF list includes the information of each client NWDAF among clients NWDAF1 to client NWDAF n.
  • for example, the PLMN corresponding to client NWDAF1 is PLMN1, its TA is TA1, its slice instance is slice instance 1, its equipment vendor is equipment vendor 1, and its DNAI is DNAI1; the PLMN corresponding to client NWDAF2 is PLMN2, its TA is TA2, its slice instance is slice instance 2, its equipment vendor is equipment vendor 2, and its DNAI is DNAI2; the information of each remaining client NWDAF follows in turn.
  • Step 808 The NRF network element sends the first client NWDAF list to the server NWDAF, and correspondingly, the server NWDAF receives the first client NWDAF list from the NRF network element.
  • the first client NWDAF list includes one or more client NWDAFs that meet the requirements of the server NWDAF.
  • step 808 can be implemented in the following manner: the NRF network element sends a network element discovery response to the server NWDAF, where the network element discovery response includes the first client NWDAF list.
  • the method provided in this embodiment of the present application may further include: according to the request of the server NWDAF, the NRF network element queries, at the NRF network element, the clients NWDAF1 to NWDAF n that satisfy the request of the server NWDAF, and obtains the first client NWDAF list.
  • Step 809 The server NWDAF determines the load (Load) information of each client NWDAF in the first client NWDAF list.
  • step 809 in the embodiment of this application can be implemented in the following manner: the server NWDAF queries the OAM, the NRF network element, or an NWDAF capable of analyzing client NWDAF load information for the load information of each client NWDAF in the first client NWDAF list.
  • the load information of the client NWDAF corresponds to one or more of the following information:
  • - NF resource usage (for example, central processing unit (CPU) usage, memory usage, or disk usage);
  • the load information of the client NWDAF in step 809 can also be replaced with the priority of the client NWDAF.
  • Step 810 The server NWDAF determines a second client NWDAF list that can perform horizontal federated learning according to the load information of each client NWDAF.
  • the second client NWDAF list includes all or part of the client NWDAF information among clients NWDAF1 to client NWDAF n.
  • step 810 in the embodiment of the present application may be implemented in the following manner: the load of the client NWDAF included in the second client NWDAF list is lower than the preset load threshold.
  • for example, the server NWDAF sorts the first client NWDAF list by load from small to large, and then selects the clients NWDAF whose load is less than the preset load threshold to perform horizontal federated learning.
  • the purpose of this is to ensure that the selected clients NWDAF have sufficient resources for training the sub-models, thereby improving the training efficiency of the entire federation.
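The load-based selection in step 810 can be sketched as a sort-and-threshold over the first client NWDAF list (the threshold value is an illustrative assumption):

```python
# Sketch of step 810: sort the first client NWDAF list by load in ascending
# order and keep only clients whose load is below a preset threshold.
# The threshold value is an illustrative assumption.

PRESET_LOAD_THRESHOLD = 0.6

def build_second_list(first_list):
    ordered = sorted(first_list, key=lambda c: c["load"])
    return [c["id"] for c in ordered if c["load"] < PRESET_LOAD_THRESHOLD]

first_client_list = [
    {"id": "client-NWDAF-1", "load": 0.5},
    {"id": "client-NWDAF-2", "load": 0.8},   # filtered out: too loaded
    {"id": "client-NWDAF-3", "load": 0.2},
]
second_client_list = build_second_list(first_client_list)
```

The priority-based variant described below is the same shape with the comparison reversed (keep clients whose priority exceeds a preset priority threshold).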
  • step 810 in the embodiment of the present application can be replaced in the following manner: the server NWDAF determines the second client NWDAF list that can perform horizontal federated learning according to the priority of each client NWDAF. At this time, the priority of the client NWDAF included in the second client NWDAF list is higher than the preset priority threshold.
  • each client NWDAF has a corresponding priority.
  • the algorithm performance of the high-priority client NWDAF is higher than the algorithm performance of the low-priority client NWDAF, or the algorithm performance of the high-priority client NWDAF is higher than the expected algorithm performance.
  • the load of the client NWDAF with high priority is lower than the load of the client NWDAF with low priority, or the load of the client NWDAF with the high priority is lower than the preset load threshold.
  • the algorithm performance evaluation index of the high-priority client NWDAF is higher than the algorithm performance evaluation index of the low-priority client NWDAF, or the algorithm performance evaluation index of the high-priority client NWDAF meets the preset algorithm performance evaluation index threshold.
  • The algorithm performance evaluation indicators can include: squared error, precision (accuracy), recall, and F-score (the harmonic mean of precision and recall).
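The evaluation indicators above can be computed from prediction counts as follows (a generic sketch, not specific to the patent):

```python
# Sketch of precision, recall, and F-score from true positives (tp),
# false positives (fp), and false negatives (fn).

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
```
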
  • in the embodiment shown in FIG. 8, the server NWDAF and the client NWDAF register their respective federated learning capability information with the NRF network element, which assists the 5G network: when horizontal federated learning is required, a network element (such as the server NWDAF) can find appropriate clients NWDAF through the NRF network element to conduct federated training.
  • the network element management_network element registration request service operation in step 801 in the embodiment shown in FIG. 8 corresponds to the above-mentioned second request.
  • the network element management_network element registration request service operation in step 803 corresponds to the aforementioned third request.
  • the coverage range of the server NWDAF may correspond to the range of the first data analysis network element in the foregoing embodiment.
  • the range of the client NWDAF may correspond to the range of the second data analysis network element in the foregoing embodiment.
  • the network element discovery request in step 806 corresponds to the first request in the foregoing embodiment.
  • client NWDAF1 to client NWDAF n correspond to one or more second data analysis network elements in the foregoing embodiment. All the clients NWDAF included in the second client NWDAF list correspond to one or more third data analysis network elements in the foregoing embodiment.
  • FIG. 9 shows a model training method provided by an embodiment of this application.
  • the method is described using an example in which the server NWDAF determines that the client NWDAFs performing horizontal federated training are client NWDAF1 and client NWDAF3.
  • the method includes:
  • Step 901 The server NWDAF sends configuration parameters to the client NWDAF1, and correspondingly, the client NWDAF1 receives the configuration parameters from the server NWDAF. This configuration parameter is used by the client NWDAF1 to determine the parameters used when training the sub-model.
  • step 901 may be implemented in the following manner: the server NWDAF triggers the Nnwdaf_HorizontalFL_Create request service operation to the client NWDAF1, and correspondingly, the client NWDAF1 receives the Nnwdaf_HorizontalFL_Create request service operation from the server NWDAF.
  • the Nnwdaf_HorizontalFL_Create request service includes the above configuration parameters.
  • for the content of the configuration parameters, reference may be made to the description in the foregoing embodiment, which will not be repeated here.
  • Step 902 The server NWDAF sends configuration parameters to the client NWDAF3, and correspondingly, the client NWDAF3 receives the configuration parameters from the server NWDAF. This configuration parameter is used by the client NWDAF3 to determine the parameters used when training the sub-model.
  • the client NWDAF3 or client NWDAF1 can also send a response indication to the server NWDAF, and the response indication is used to indicate that the client NWDAF successfully configures the parameters used when training the sub-model.
  • step 903 the client NWDAF1 or client NWDAF3 executes a training process according to the data and configuration parameters respectively obtained to obtain a sub-model.
  • client NWDAF1 or client NWDAF3 can perform multiple rounds of iterative training internally, and each round of iterative training corresponds to a maximum number of sub-iterations.
  • when the maximum number of sub-iterations corresponding to a round of iterative training is reached, client NWDAF1 or client NWDAF3 can use the model obtained at that point as the sub-model.
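  • a client's local training for one round can be sketched as follows (an illustrative sketch only; the gradient-descent update, linear model, and all names are assumptions, since the patent does not fix a training algorithm here):

```python
# Hypothetical sketch of a client NWDAF's local training in step 903:
# run gradient-descent sub-iterations up to the configured maximum, and
# take the model reached at that point as the sub-model to report.

def train_sub_model(weights, data, lr, max_sub_iterations):
    """data: list of (x_vector, y) samples; linear model y ~ w . x."""
    w = list(weights)
    for _ in range(max_sub_iterations):      # stop at the maximum number of sub-iterations
        for x, y in data:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # gradient step on the squared error for this sample
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w  # the sub-model obtained for this round
```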
  • Step 904 The client NWDAF1 sends the sub-model trained by the client NWDAF1 to the server NWDAF.
  • the client NWDAF1 triggers the Nnwdaf_HorizontalFL_Update request service operation of the server NWDAF to send the sub-model trained by the client NWDAF1 to the server NWDAF.
  • Step 905 The client NWDAF3 sends the sub-model trained by the client NWDAF3 to the server NWDAF.
  • the client NWDAF3 triggers the Nnwdaf_HorizontalFL_Update request service operation of the server NWDAF to send the sub-model trained by the client NWDAF3 to the server NWDAF.
  • the sub-model itself can be a black box, which is sent to the server NWDAF as a model file (Model File).
  • Sub-models can also be defined specifically, including algorithm types, model parameters, and so on.
  • the client NWDAF may also request an updated model from the server NWDAF.
  • the client NWDAF3 sends the submodel 3 trained by the client NWDAF3 to the server NWDAF
  • the client NWDAF1 sends the submodel 1 trained by the client NWDAF1 to the server NWDAF.
  • Step 906 The server NWDAF aggregates the sub-model trained by the client NWDAF1 and the sub-model trained by the client NWDAF3 to obtain an updated model after the current iteration.
  • Step 907 The server NWDAF sends the updated model to the client NWDAF1 and the client NWDAF3.
  • each client NWDAF performs multiple rounds of iterative training, and in each round each client NWDAF trains to obtain the sub-model corresponding to that round. After obtaining the sub-model in a round of iterative training, each client NWDAF reports the sub-model corresponding to that round to the server NWDAF.
  • the foregoing steps 903 to 907 may be performed in a loop until the training termination condition set when the client NWDAF1 and client NWDAF3 perform sub-model training is reached.
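  • the loop formed by steps 903 to 907 can be sketched as follows (a minimal illustrative sketch; the function names, the fixed round count as the termination condition, and the callbacks are all hypothetical):

```python
# Hypothetical sketch of the horizontal federated learning loop:
# each client trains a sub-model locally (steps 903-905), the server
# aggregates the sub-models into an updated model (step 906), and the
# updated model is sent back to the clients (step 907) until the
# training termination condition is reached.

def federated_rounds(clients, init_model, train_fn, aggregate_fn, max_rounds):
    model = init_model
    for _ in range(max_rounds):  # termination condition (here: a round limit)
        sub_models = [train_fn(model, c) for c in clients]  # steps 903-905
        model = aggregate_fn(sub_models)                    # step 906
        # step 907: the updated model is distributed to each client
    return model  # used to determine the target model in step 908
```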
  • Step 908 After the server NWDAF determines that the federated training is terminated, it determines the target model according to the updated model.
  • Step 909 The server NWDAF may assign a corresponding version number (Version ID) and/or analysis result type identification (analytics ID) to the target model (referred to as Trained Model or Global Model or Optimal Model).
  • Step 910 The server NWDAF sends the target model, the version number corresponding to the target model, and the analysis result type identifier to all or part of the client NWDAF within the scope of the server NWDAF.
  • the server NWDAF triggers the Nnwdaf_HorizontalFL_Update Acknowledge service operation of all or part of the client NWDAFs within the scope of the server NWDAF, so as to send the target model, together with the version number (Version ID) and the analysis result type identifier (analytics ID) corresponding to the target model, to all or part of the client NWDAFs within the scope of the server NWDAF.
  • the server NWDAF sends the target model and at least one of the model identifier (Model ID), the version number (Version ID), and the analysis result type identifier (analytics ID) corresponding to the target model to client NWDAF1 to client NWDAFn.
  • although only client NWDAF1 and client NWDAF3 within the scope of the server NWDAF participated in the model training, the other client NWDAFs within the scope of the server NWDAF that did not participate in the training can still share the target model.
  • in step 911, the client NWDAF1 and the client NWDAF3 send the target model and at least one of the model identifier (Model ID), the version number (Version ID), and the analysis result type identifier (analytics ID) corresponding to the target model to the NRF network element.
  • the client NWDAF1 and the client NWDAF3 respectively trigger the Nnrf_NFManagement_NFRegister_request service operation to the NRF network element to register the analytics ID, the Version ID, and the corresponding valid range (area, time period, etc.) of the target model, so as to inform the NRF network element that client NWDAF1 and client NWDAF3 support the analysis corresponding to the analytics ID.
  • the valid range corresponding to the analytics ID in this step is determined by each client NWDAF according to the data participating in the training of the target model. For other client NWDAF and server NWDAF, the training data is unknown.
  • Step 912 The server NWDAF registers the supported analytics ID and the corresponding valid range to the NRF network element.
  • the valid range corresponding to the analytics ID in step 912 includes the valid range of the analytics ID on the client NWDAF.
  • the analytics ID supported by the server NWDAF is also registered to the NRF network element, which can cope with the scenario of NWDAF hierarchical deployment.
  • a third-party AF network element or OAM network element requests the data analysis result corresponding to the analytics ID in a large area from the network side NWDAF.
  • the AF network element or OAM network element first queries the NRF network element for the server NWDAF; the server NWDAF can then request the sub-area data analysis results from the other client NWDAFs respectively, integrate them, and send the result to the AF network element or OAM network element.
  • in the embodiment of this application, the federated learning training process is introduced into the 5G network, so that the data does not leave the local domain of each client NWDAF participating in federated learning training, and each participating client NWDAF performs sub-model training based on the data it acquires.
  • each client NWDAF participating in federated learning training then provides the server NWDAF with the sub-model obtained in each round of training, and finally the server NWDAF aggregates the sub-models into an updated model and further obtains the target model, completing the model training process.
  • this method can not only avoid data leakage, but also, because the data training is performed by the client NWDAFs, this distributed training process can speed up the entire model training.
  • for example, a server NWDAF can be deployed to serve network slice a, and then at least one client NWDAF is deployed in each of the different areas served by network slice a or for the different slice instances corresponding to network slice a.
  • network slice a serves area 1, area 2, and area 3.
  • slice instances are deployed: network slice instance (NSI) 1, NSI2, and NSI3.
  • Deploy client NWDAF1 in area 1 or client NWDAF1 serves NSI1.
  • Deploy client NWDAF2 in area 2 or client NWDAF2 serves NSI2.
  • Deploy client NWDAF3 in area 3, or client NWDAF3 serves NSI3.
  • client NWDAF1, client NWDAF2, and client NWDAF3 support Horizontal FL, and the type is client.
  • the OAM triggers a subscription request to the server NWDAF, and the subscription request is used to subscribe to the quality of experience (QoE) or service experience (service experience or service mean opinion score or service MOS) of the service of the network slice a.
  • the target client NWDAFs whose Load is lower than the load threshold (for example, client NWDAF1, client NWDAF2, and client NWDAF3) participate in horizontal federated training.
  • in the preparation phase of federated learning, the server NWDAF first determines that the relationship model between service experience and network data, that is, the Service MOS model, needs to be determined through linear regression; the model can be characterized as a linear function of the network data.
  • D is the dimension of the network data, and the weight vector is likewise of dimension D.
  • the weights obtained during training are what the embodiment of this application calls the sub-model, or the intermediate result of client NWDAF training.
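  • the exact formula is not reproduced in this extract; a standard linear-regression form consistent with the description (D-dimensional network data x, D-dimensional weight vector w, and a scalar bias b, all assumed here for illustration) would be:

```latex
\text{ServiceMOS} \;=\; f(\mathbf{x}) \;=\; \mathbf{w}^{\top}\mathbf{x} + b \;=\; \sum_{i=1}^{D} w_i x_i + b,
\qquad \mathbf{x},\, \mathbf{w} \in \mathbb{R}^{D}
```

  • under this assumed form, the weights w (and bias b) learned by each client NWDAF constitute the sub-model exchanged during federated training.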
  • the client NWDAF1 to client NWDAF3 report the sub-models obtained by their respective training and the number of samples participating in the training (that is, the number of service flows in Table 2 and Table 3) to the server NWDAF.
  • the server NWDAF can use its model aggregation module to aggregate, by weighted averaging, the sub-models reported by all target client NWDAFs participating in horizontal federated training to obtain an updated model.
  • the server NWDAF sends the updated model to each of client NWDAF1 to client NWDAF3 participating in horizontal federated training. After that, client NWDAF1 to client NWDAF3 update their local parameters according to the updated model. When any one of client NWDAF1 to client NWDAF3 determines that the number of iterations reaches the maximum number of sub-iterations, that client NWDAF terminates the training and sends the sub-model obtained when the maximum number of iterations is reached to the server NWDAF.
  • the model management module assigns one or more of the identifier of the target model, the version number of the target model, and the analytics ID corresponding to the business QoE in the network slice a for the target model.
  • the server NWDAF sends the target model and one or more of the target model identifier, the target model version number, and the analytics ID to each of client NWDAF1 to client NWDAF3.
  • OAM subscribes to the server NWDAF for service QoE information of network slice a.
  • the server NWDAF requests, from each of client NWDAF1 to client NWDAF3 under its jurisdiction, the service QoE information of the corresponding sub-area or slice instance.
  • the client NWDAF1 sends the service QoE information of subarea 1 or NSI1 to the server NWDAF
  • the client NWDAF2 sends the service QoE information of subarea 2 or NSI2 to the server NWDAF
  • the client NWDAF3 sends the service QoE information of subarea 3 or NSI3 to the server NWDAF.
  • the server NWDAF summarizes the service QoE information in all sub-areas or slice instances to obtain the service QoE information of network slice a, and sends it to OAM.
  • the client NWDAF1 obtains the service QoE information of subarea 1 or NSI1 according to the target model and the data corresponding to subarea 1 or NSI1.
  • the client NWDAF2 obtains the service QoE information of subarea 2 or NSI2 according to the target model and the data corresponding to subarea 2 or NSI2.
  • the client NWDAF3 obtains the service QoE information of subarea 3 or NSI3 according to the target model and the data corresponding to subarea 3 or NSI3.
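  • the per-area inference and the server-side summary can be sketched as follows (an illustrative sketch only; the mean-prediction and sample-weighted summary are assumptions, since the patent only states that the server NWDAF "summarizes" the sub-area results):

```python
# Hypothetical sketch: each client NWDAF applies the target model to its own
# sub-area (or NSI) data to obtain that area's service QoE, and the server
# NWDAF summarizes the per-area results into the QoE of network slice a.

def area_qoe(target_model, area_samples):
    """Mean predicted QoE over one sub-area's network-data samples."""
    preds = [sum(w * x for w, x in zip(target_model, s)) for s in area_samples]
    return sum(preds) / len(preds)

def slice_qoe(area_results):
    """area_results: list of (mean_qoe, sample_count) per sub-area/NSI."""
    total = sum(n for _, n in area_results)
    return sum(q * n for q, n in area_results) / total  # sample-weighted summary
```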
  • OAM determines whether the SLA of network slice a is satisfied according to the service QoE information of network slice a. If not, the SLA of network slice a can be satisfied by adjusting the air interface resources, core network resources, or transmission network configuration of network slice a.
  • in order to realize the above functions, each network element, such as the first data analysis network element, the service discovery network element, and the third data analysis network element, includes corresponding hardware structures and/or software modules for performing each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
  • in the embodiments of this application, the first data analysis network element, the service discovery network element, and the third data analysis network element may be divided into functional units according to the foregoing method examples.
  • for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • the method of the embodiment of the present application is described above in conjunction with FIG. 6 to FIG. 11, and the communication device provided in the embodiment of the present application for performing the foregoing method is described below. Those skilled in the art can understand that the method and the device can be combined and referenced.
  • the communication device provided in the embodiments of this application can execute the foregoing communication methods.
  • FIG. 12 shows the communication device involved in the foregoing embodiment.
  • the communication device may include: a communication unit 1202 and a processing unit 1201.
  • the processing unit 1201 is used to support the communication device to perform information processing actions.
  • the communication unit 1202 is used to support the communication device to perform an action of receiving or sending information.
  • the communication device is a first data analysis network element, or a chip applied to the first data analysis network element.
  • the communication unit 1202 is configured to support the communication device to perform the sending action performed by the first data analysis network element in step 601 of FIG. 6 in the foregoing embodiment.
  • the communication unit 1202 is configured to support the communication device to perform the receiving action performed by the first data analysis network element in step 603 of FIG. 6.
  • the processing unit is further configured to support the communication device to execute the processing actions performed by the first data analysis network element in the foregoing embodiment.
  • the communication unit 1202 is further configured to support the communication device to perform the sending actions performed by the first data analysis network element in step 701, step 712, and step 714 in the foregoing embodiment.
  • the processing unit 1201 is further configured to support the communication device to execute step 708, step 711, and step 713 in the foregoing embodiment.
  • the communication device is a third data analysis network element, or a chip applied to the third data analysis network element.
  • the processing unit 1201 is configured to support the communication device to execute the processing action performed by the third data analysis network element in step 709 in the foregoing embodiment.
  • the communication unit 1202 is configured to support the communication device to execute the sending action performed by the third data analysis network element in step 710 in the foregoing embodiment.
  • the communication unit 1202 is further configured to support the communication device to perform the receiving action performed by the third data analysis network element in step 712 in the foregoing embodiment, the receiving action performed by the second data analysis network element in step 714, and the sending action performed by the second data analysis network element in step 703.
  • the communication device is a service discovery network element or a chip applied to the service discovery network element.
  • the communication unit 1202 is configured to support the communication device to perform the receiving action performed by the service discovery network element in step 601 of FIG. 6 in the above embodiment.
  • the processing unit 1201 is further configured to support the communication device to execute the processing actions performed by the service discovery network element in step 602 in the foregoing embodiment.
  • the communication unit 1202 is configured to support the communication device to perform the sending action performed by the service discovery network element in step 603 of FIG. 6.
  • the communication unit 1202 is further configured to support the communication device to perform the receiving actions performed by the service discovery network element in step 701 and step 703 in the foregoing embodiment.
  • the processing unit 1201 is configured to support the communication device to perform the actions performed by the service discovery network element in step 702 and step 704 in the foregoing embodiment.
  • FIG. 13 shows a schematic diagram of a possible logical structure of the communication device involved in the foregoing embodiment.
  • the communication device includes: a processing module 1312 and a communication module 1313.
  • the processing module 1312 is used to control and manage the actions of the communication device.
  • the processing module 1312 is used to perform information/data processing steps on the communication device.
  • the communication module 1313 is used to support the communication device to send or receive information/data.
  • the communication device may further include a storage module 1311 for storing program codes and data that can be used by the communication device.
  • the communication device is a first data analysis network element, or a chip applied to the first data analysis network element.
  • the communication module 1313 is used to support the communication device to execute the sending action performed by the first data analysis network element in step 601 of FIG. 6 in the foregoing embodiment.
  • the communication module 1313 is configured to support the communication device to perform the receiving action performed by the first data analysis network element in step 603 of FIG. 6.
  • the processing module 1312 is further configured to support the communication device to execute the processing actions performed by the first data analysis network element in the foregoing embodiment.
  • the communication module 1313 is further configured to support the communication device to execute the sending actions performed by the first data analysis network element in step 701, step 712, and step 714 in the foregoing embodiment.
  • the processing module 1312 is also used to support the communication device to execute step 708, step 711, and step 713 in the foregoing embodiment.
  • the communication device is a third data analysis network element, or a chip applied to the third data analysis network element.
  • the processing module 1312 is used to support the communication device to execute the processing action performed by the third data analysis network element in step 709 in the foregoing embodiment.
  • the communication module 1313 is configured to support the communication device to execute the sending action performed by the third data analysis network element in step 710 in the foregoing embodiment.
  • the communication module 1313 is further configured to support the communication device to perform the receiving action performed by the third data analysis network element in step 712 in the foregoing embodiment, the receiving action performed by the second data analysis network element in step 714, and the sending action performed by the second data analysis network element in step 703.
  • the communication device is a service discovery network element or a chip applied to the service discovery network element.
  • the communication module 1313 is configured to support the communication device to perform the receiving action performed by the service discovery network element in step 601 of FIG. 6 in the foregoing embodiment.
  • the processing module 1312 is also used to support the communication device to execute the processing actions performed by the service discovery network element in step 602 in the foregoing embodiment.
  • the communication module 1313 is configured to support the communication device to perform the sending action performed by the service discovery network element in step 603 of FIG. 6.
  • the communication module 1313 is further configured to support the communication device to perform the receiving actions performed by the service discovery network element in step 701 and step 703 in the foregoing embodiment.
  • the processing module 1312 is configured to support the communication device to perform the actions performed by the service discovery network element in steps 702 and 704 in the foregoing embodiment.
  • the processing module 1312 may be a processor or a controller, such as a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the communication module 1313 may be a transceiver, a transceiver circuit, or a communication interface.
  • the storage module 1311 may be a memory.
  • when the processing module 1312 is the processor 1401 or the processor 1405, the communication module 1313 is the communication interface 1403, and the storage module 1311 is the memory 1402, the communication device involved in this application may be the communication device shown in FIG. 14.
  • FIG. 14 shows a schematic diagram of the hardware structure of a communication device provided by an embodiment of the application.
  • the communication device includes a processor 1401, a communication line 1404, and at least one communication interface (FIG. 14 uses the communication interface 1403 merely as an example for illustration).
  • the communication device may further include a memory 1402.
  • the processor 1401 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application.
  • the communication line 1404 may include a path to transmit information between the aforementioned components.
  • Communication interface 1403, which uses any transceiver-like device to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the memory 1402 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory can exist independently, and is connected to the processor through a communication line 1404. The memory can also be integrated with the processor.
  • the memory 1402 is used to store computer-executable instructions for executing the solution of the present application, and the processor 1401 controls the execution.
  • the processor 1401 is configured to execute the computer-executable instructions stored in the memory 1402, so as to implement the communication method provided in the foregoing embodiments of this application.
  • the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
  • the processor 1401 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 14.
  • the communication device may include multiple processors, such as the processor 1401 and the processor 1405 in FIG. 14.
  • processors can be a single-CPU (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • the communication device is a first data analysis network element, or a chip applied to the first data analysis network element.
  • the communication interface 1403 is used to support the communication device to execute the sending action performed by the first data analysis network element in step 601 of FIG. 6 in the foregoing embodiment.
  • the communication interface 1403 is used to support the communication device to perform the receiving action performed by the first data analysis network element in step 603 of FIG. 6.
  • the processor 1401 and the processor 1405 are further configured to support the communication device to execute the processing actions performed by the first data analysis network element in the foregoing embodiment.
  • the communication interface 1403 is also used to support the communication device to execute the sending actions performed by the first data analysis network element in step 701, step 712, and step 714 in the foregoing embodiment.
  • the processor 1401 and the processor 1405 are also configured to support the communication device to execute step 708, step 711, and step 713 in the foregoing embodiment.
  • the communication device is a third data analysis network element, or a chip applied to the third data analysis network element.
  • the processor 1401 and the processor 1405 are used to support the communication device to execute the processing actions performed by the third data analysis network element in step 709 in the foregoing embodiment.
  • the communication interface 1403 is used to support the communication device to execute the sending action performed by the third data analysis network element in step 710 in the foregoing embodiment.
  • the communication interface 1403 is also used to support the communication device to perform the receiving action performed by the third data analysis network element in step 712 in the foregoing embodiment, the receiving action performed by the second data analysis network element in step 714, and the sending action performed by the second data analysis network element in step 703.
  • the communication device is a service discovery network element or a chip applied to the service discovery network element.
  • the communication interface 1403 is used to support the communication device to perform the receiving action performed by the service discovery network element in step 601 of FIG. 6 in the foregoing embodiment.
  • the processor 1401 and the processor 1405 are further configured to support the communication device to execute the processing actions performed by the service discovery network element in step 602 in the foregoing embodiment.
  • the communication interface 1403 is used to support the communication device to perform the sending action performed by the service discovery network element in step 603 of FIG. 6.
  • the communication interface 1403 is also used to support the communication device to execute the receiving action performed by the service discovery network element in step 701 and step 703 in the foregoing embodiment.
  • the processor 1401 and the processor 1405 are configured to support the communication device to perform the actions performed by the service discovery network element in steps 702 and 704 in the foregoing embodiment.
  • FIG. 15 is a schematic diagram of the structure of a chip 150 provided by an embodiment of the present application.
  • the chip 150 includes one or more (including two) processors 1510 and a communication interface 1530.
  • the chip 150 further includes a memory 1540.
  • the memory 1540 may include a read-only memory and a random access memory, and provides operation instructions and data to the processor 1510.
  • a part of the memory 1540 may also include a non-volatile random access memory (NVRAM).
  • the memory 1540 stores the following elements, execution modules or data structures, or their subsets, or their extended sets.
  • the corresponding operation is executed by calling the operation instruction stored in the memory 1540 (the operation instruction may be stored in the operating system).
  • One possible implementation manner is that the chips used in the first data analysis network element, the third data analysis network element, and the service discovery network element have similar structures, and different devices can use different chips to realize their respective functions.
  • the processor 1510 controls processing operations of any one of the first data analysis network element, the third data analysis network element, and the service discovery network element.
  • the processor 1510 may also be referred to as a central processing unit (CPU).
  • the memory 1540 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1510. A part of the memory 1540 may also include NVRAM.
  • the processor 1510, the communication interface 1530, and the memory 1540 are coupled together through a bus system 1520, where the bus system 1520 may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • various buses are marked as the bus system 1520 in FIG. 15.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 1510 or implemented by the processor 1510.
  • the processor 1510 may be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the foregoing method can be completed by hardware integrated logic circuits in the processor 1510 or instructions in the form of software.
  • the above-mentioned processor 1510 may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register.
  • the storage medium is located in the memory 1540, and the processor 1510 reads the information in the memory 1540, and completes the steps of the foregoing method in combination with its hardware.
  • the communication interface 1530 is used to perform the receiving and sending steps performed by the first data analysis network element, the third data analysis network element, and the service discovery network element in the embodiments shown in FIGS. 6-7.
  • the processor 1510 is configured to execute the processing steps of the first data analysis network element, the third data analysis network element, and the service discovery network element in the embodiments shown in FIGS. 6-7.
  • the above communication unit may be a communication interface of the device for receiving signals from other devices.
  • the transceiver unit is a communication interface for the chip to receive signals or send signals from other chips or devices.
  • a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium.
  • When the instructions are executed, the functions of the first data analysis network element shown in FIGS. 6-7 are realized.
  • a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium.
  • When the instructions are executed, the functions of the third data analysis network element shown in FIGS. 6-7 are realized.
  • a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium.
  • When the instructions are executed, the functions of the service discovery network element in FIG. 6 to FIG. 7 are realized.
  • a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the functions of the first data analysis network element shown in Figs. 6-7 are realized.
  • a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the function of the third data analysis network element in FIG. 6 to FIG. 7 is realized.
  • a computer program product including instructions.
  • the computer program product includes instructions. When the instructions are executed, the functions of the service discovery network element in FIG. 6 to FIG. 7 are realized.
  • a chip is provided.
  • the chip is applied to a first data analysis network element.
  • the chip includes at least one processor and a communication interface.
  • the communication interface is coupled to the at least one processor.
  • a chip is provided.
  • the chip is applied to a third data analysis network element.
  • the chip includes at least one processor and a communication interface.
  • the communication interface is coupled to the at least one processor, and the processor is configured to implement the functions of the third data analysis network element in FIG. 6 to FIG. 7.
  • a chip is provided.
  • the chip is applied to a service discovery network element.
  • the chip includes at least one processor and a communication interface.
  • the communication interface is coupled to the at least one processor.
  • An embodiment of the present application provides a communication system, which includes: a first data analysis network element and a service discovery network element.
  • the first data analysis network element is used to perform the functions performed by the first data analysis network element in any one of FIG. 6 to FIG. 7, and the service discovery network element is used to perform the steps performed by the service discovery network element in any one of FIG. 6 to FIG. 7.
  • the communication system may further include a third data analysis network element.
  • the third data analysis network element is used to perform the functions performed by the third data analysis network element in FIGS. 6-7.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, network equipment, user equipment, or other programmable devices.
  • the computer program or instruction may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer program or instruction may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center that integrates one or more available media.
  • the usable medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; it may also be an optical medium, such as a digital video disc (DVD); and it may also be a semiconductor medium, such as a solid state drive (SSD).

Abstract

本申请实施例提供一种通信方法、装置及系统,涉及数据分析领域,该方法能够扩展数据分析的应用场景。该方法包括:第一数据分析网元向服务发现网元发送第一请求,第一请求用于请求第二数据分析网元的信息,第一请求包括:分布式学习的信息和第一指示信息中的一个或多个,其中,分布式学习的信息包括分布式学习的类型,第一指示信息用于指示第二数据分析网元的类型。第一数据分析网元接收来自服务发现网元的一个或多个第二数据分析网元的信息,第二数据分析网元支持分布式学习的类型。

Description

一种通信方法、装置及系统
本申请要求于2020年04月29日提交国家知识产权局、申请号为202010359339.6、申请名称为“一种通信方法、装置及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及数据分析领域,尤其涉及一种通信方法、装置及系统。
背景技术
网络数据分析功能(network data analytics function,NWDAF)网元具有如下功能:数据收集(例如,收集核心网数据、网管数据、业务数据、终端数据)、数据分析以及数据分析结果反馈。
目前,各个域(如终端、接入网、核心网、网管以及业务提供方)之间出于利益考虑,不愿意开放自己的数据给其他域,数据处于孤岛状态,导致数据分析中心(如NWDAF网元)无法集中各域数据,不支持各个域之间的协同数据分析,从而限制了数据分析的场景。
发明内容
本申请实施例提供一种通信方法、装置及系统,能够扩展数据分析的应用场景。
第一方面,本申请实施例提供一种通信方法,包括:第一数据分析网元向服务发现网元发送用于请求第二数据分析网元的信息的第一请求。该第一请求包括:分布式学习的信息、和用于指示第二数据分析网元的类型的第一指示信息中的一个或多个。其中,分布式学习的信息包括第一数据分析网元请求的分布式学习的类型。第一数据分析网元接收来自服务发现网元的一个或多个第二数据分析网元的信息,该第二数据分析网元支持第一数据分析网元请求的上述分布式学习的类型。
本申请实施例提供一种通信方法,该方法中由第一数据分析网元向服务发现网元发送第一请求,利用第一请求向服务发现网元请求第一数据分析网元所需要的第二数据分析网元的特征。这样便于服务发现网元根据第一请求为第一数据分析网元提供支持分布式学习的类型的一个或多个第二数据分析网元的信息。此外该第二数据分析网元的类型与第一数据分析网元请求的第二数据分析网元的类型相同。该方案一方面可以实现第一数据分析网元通过服务发现网元找到能够进行分布式学习训练的数据分析网元的目的,另一方面,由于第一数据分析网元得到该一个或多个第二数据分析网元的信息之后,后续第一数据分析网元在需要进行模型训练时可以与一个或多个第二数据分析网元进行协同实现模型的训练,从而能够扩展数据分析的应用场景。
在一种可能的实现方式中，本申请实施例提供的方法还可以包括：第一数据分析网元根据一个或多个第二数据分析网元的信息确定进行分布式学习的第三数据分析网元的信息。该第三数据分析网元的数量为一个或多个，换言之，第一数据分析网元根据一个或多个第二数据分析网元的信息确定进行分布式学习的一个或多个第三数据分析网元的信息。该方案中由于一个或多个第三数据分析网元能够进行分布式学习训练，这样便于在后续分布式学习训练过程中，第三数据分析网元可以无需向第一数据分析网元提供数据，使得数据可以不出第三数据分析网元的本域，依然可以由第一数据分析网元进行模型训练。一方面从而避免了数据泄露的问题，另一方面，在第一数据分析网元和第三数据分析网元之间不可以进行数据交互的情况下，依然可以进行模型训练。再者由于数据训练在每个第三数据分析网元处进行，这种分布式训练过程同样可以加快整个模型训练的速度。
在一种可能的实现方式中,第三数据分析网元的负载低于预设负载阈值,或者,第三数据分析网元的优先级高于预设优先级阈值。其中,第三数据分析网元的范围位于第一数据分析网元的范围内。第三数据分析网元的范围包括:第三数据分析网元归属的公用陆地移动网PLMN的标识、第三数据分析网元服务的网络切片实例的范围、第三数据分析网元服务的数据网络名称DNN、第三数据分析网元的设备商信息。
在一种可能的实现方式中,第一请求还包括第一数据分析网元的范围,相应的,第二数据分析网元的范围或者第三数据分析网元的范围位于第一数据分析网元的范围内。如果第一请求还包括第一数据分析网元的范围,则第一请求用于请求位于第一数据分析网元的范围内且支持第一数据分析网元请求的分布式学习的类型的一个或多个第二数据分析网元。
在一种可能的实现方式中,第一数据分析网元的范围包括以下信息中的一个或者多个:第一数据分析网元服务的区域、第一数据分析网元归属的公用陆地移动网PLMN的标识、第一数据分析网元服务的网络切片的信息、第一数据分析网元服务的数据网络名称DNN、第一数据分析网元的设备商信息。
在一种可能的实现方式中,分布式学习的信息还包括分布式学习支持的算法信息,相应的,第二数据分析网元或者第三数据分析网元支持分布式学习支持的算法信息对应的算法。这样便于服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元还支持该算法信息。
在一种可能的实现方式中，分布式学习支持的算法信息包括算法类型、算法标识以及算法性能中的一个或者多个。可以理解的是，不同的第二数据分析网元支持的算法信息可以相同，也可以不同。
在一种可能的实现方式中,本申请实施例提供的方法还包括:第一数据分析网元接收来自一个或多个第三数据分析网元的子模型。该子模型由第三数据分析网元根据第三数据分析网元获取到的数据进行训练得到。第一数据分析网元根据一个或多个第三数据分析网元的子模型确定更新的模型。第一数据分析网元向一个或多个第三数据分析网元发送更新的模型。由于第一数据分析网元是根据一个或多个第三数据分析网元中不同数据分析网元提供的子模型得到更新的模型,可以使得各个第三数据分析网元无需向第一数据分析网元提供用于进行训练的数据,避免了数据泄露。
在一种可能的实现方式中,本申请实施例提供的方法还包括:第一数据分析网元根据更新的模型确定目标模型。第一数据分析网元向一个或多个第二数据分析网元发送目标模型,以及目标模型对应的以下信息中的一个或者多个:模型标识、模型版本号或者数据分析标识。可以使得每个第二数据分析网元均可以获得由第一数据分析网 元确定的目标模型。例如,目标模型可以为业务体验模型。
在一种可能的实现方式中,第一数据分析网元接收来自一个或多个第三数据分析网元的子模型之前,本申请实施例提供的方法还包括:第一数据分析网元向一个或多个第三数据分析网元发送配置参数,该配置参数为第三数据分析网元确定训练子模型时使用的参数。便于第三数据分析网元根据配置参数配置分布式学习训练过程中涉及到的相关参数。
在一种可能的实现方式中,配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、最大等待时间。
在一种可能的实现方式中,分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个。
在一种可能的实现方式中,第二数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,本申请实施例提供的方法还包括:第一数据分析网元向服务发现网元发送用于请求注册第一数据分析网元的信息的第二请求。该第一数据分析网元的信息包括第一数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、或第二指示信息,该第二指示信息用于指示第一数据分析网元的类型。以便于将第一数据分析网元的信息进行注册,便于后续其他设备通过服务发现网元确定第一数据分析网元。
在一种可能的实现方式中,第一数据分析网元的信息还包括第一数据分析网元的范围、第一数据分析网元的标识、第一数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中,分布式学习为联邦学习。
在一种可能的实现方式中,第二数据分析网元为终端。
第二方面,本申请实施例提供一种通信方法,该方法包括:服务发现网元接收来自第一数据分析网元的用于请求第二数据分析网元的信息的第一请求,第一请求包括以下信息中的一个或者多个:分布式学习的信息和第一指示信息,其中,分布式学习的信息包括第一数据分析网元请求的分布式学习的类型,第一指示信息用于指示第二数据分析网元的类型。服务发现网元根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网元的信息。服务发现网元向第一数据分析网元发送一个或多个第二数据分析网元的信息。
在一种可能的实现方式中,本申请实施例提供的方法中的第一请求中还包括第一数据分析网元的范围,相应的,第二数据分析网元的范围位于第一数据分析网元的范围内,换言之,服务发现网元根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网元的信息,包括:服务发现网元将位于第一数据分析网元的范围内,且支持分布式学习的类型的数据分析网元,确定为一个或多个第二数据分析网元。
在一种可能的实现方式中,分布式学习的信息还包括分布式学习支持的算法信息,相应的,该第二数据分析网元支持分布式学习支持的算法信息对应的算法。换言之,服务发现网元根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网 元的信息,包括:服务发现网元将既支持分布式学习的类型又支持分布式学习支持的算法信息的数据分析网元确定为一个或多个第二数据分析网元。
在一种可能的实现方式中,本申请实施例提供的方法还包括:服务发现网元接收来自第一数据分析网元的用于请求注册第一数据分析网元的信息的第二请求,该第一数据分析网元的信息包括第一数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、或第二指示信息,第二指示信息用于指示第一数据分析网元的类型。服务发现网元根据第二请求,注册第一数据分析网元的信息。
在一种可能的实现方式中,第一数据分析网元的信息还包括第一数据分析网元的范围、第一数据分析网元的标识、第一数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,服务发现网元根据第二请求,注册第一数据分析网元的信息包括:服务发现网元将第一数据分析网元的信息存储在服务发现网元中,或者服务发现网元将第一数据分析网元的信息存储在用户数据管理网元中。
在一种可能的实现方式中,本申请实施例提供的方法还包括:服务发现网元接收来自一个或多个第二数据分析网元的用于请求注册第二数据分析网元的信息的第三请求,该第二数据分析网元的信息包括第二数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息和第三指示信息中的一个或多个,第三指示信息用于指示第二数据分析网元的类型。服务发现网元根据第三请求,注册一个或多个第二数据分析网元的信息。
在一种可能的实现方式中,第二数据分析网元的信息还包括第二数据分析网元的范围,或第二数据分析网元的标识或第二数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,服务发现网元根据第三请求,注册一个或多个第二数据分析网元的信息,包括:服务发现网元将一个或多个第二数据分析网元的信息存储在服务发现网元中。
在一种可能的实现方式中,服务发现网元根据第三请求,注册一个或多个第二数据分析网元的信息,包括:服务发现网元将一个或多个第二数据分析网元的信息存储在用户数据管理网元中。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中,第二数据分析网元的类型包括以下信息中的一个:客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,分布式学习为联邦学习。
在一种可能的实现方式中,第二数据分析网元为终端。
第三方面,本申请实施例提供一种通信方法,该方法包括:第三数据分析网元确定子模型,该子模型由第三数据分析网元根据第三数据分析网元获取到的数据进行训练得到。第三数据分析网元向第一数据分析网元发送子模型。
在一种可能的实现方式中,本申请实施例提供的方法还可以包括:子模型由第三数据分析网元根据第三数据分析网元在第三数据分析网元的范围内获取到的数据进行训练得到。
在一种可能的实现方式中，本申请实施例提供的方法还可以包括：第三数据分析网元接收来自第一数据分析网元的更新的模型，该更新的模型由多个不同第三数据分析网元提供的子模型得到。
在一种可能的实现方式中,本申请实施例提供的方法还可以包括:第三数据分析网元接收来自第一数据分析网元的目标模型。
在一种可能的实现方式中,本申请实施例提供的方法还可以包括:第三数据分析网元接收来自第一数据分析网元的配置参数,该配置参数为第三数据分析网元确定训练子模型时使用的参数。
在一种可能的实现方式中,配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、最大等待时间。
在一种可能的实现方式中,分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个。
在一种可能的实现方式中,第三数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,第三数据分析网元的范围位于第一数据分析网元的范围内。
在一种可能的实现方式中,本申请实施例提供的方法还可以包括:第三数据分析网元向服务发现网元发送用于请求注册第三数据分析网元的信息的第三请求,该第三数据分析网元的信息包括第三数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、或第三指示信息,第三指示信息用于指示第三数据分析网元的类型。其中,第三数据分析网元对应的分布式学习的信息包括第三数据分析网元支持的分布式学习的类型和/或第三数据分析网元支持的分布式学习支持的算法信息。
在一种可能的实现方式中,第三数据分析网元的信息还包括第三数据分析网元的范围,或第三数据分析网元的标识或第三数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中,分布式学习为联邦学习。
第四方面,本申请实施例提供一种通信装置,该通信装置可以实现第一方面或第一方面的任意一种可能的实现方式中描述的一种通信方法,因此也可以实现第一方面或第一方面任意一种可能的实现方式中的有益效果。该通信装置可以为第一数据分析网元,也可以为可以支持第一数据分析网元网元实现第一方面或第一方面的任意一种可能的实现方式中的装置。例如应用于第一数据分析网元中的芯片。该通信装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,本申请实施例提供一种通信装置,包括:通信单元以及处理单元,其中,通信单元用于接收和发送信息,处理单元用于处理信息。例如,通信单元,用于向服务发现网元发送用于请求第二数据分析网元的信息的第一请求。该第一请求包括:分布式学习的信息、和用于指示第二数据分析网元的类型的第一指示信息中的一个或多个,其中,分布式学习的信息包括第一数据分析网元请求的分布式学习的类型。通信单元,还用于接收来自服务发现网元的一个或多个第二数据分析网元的信息,该第二数据分析网元支持第一数据分析网元请求的上述分布式学习的类型。
在一种可能的实现方式中,处理单元,用于根据一个或多个第二数据分析网元的信息确定进行分布式学习的第三数据分析网元的信息,其中,第三数据分析网元的数量为一个或多个。
在一种可能的实现方式中,第三数据分析网元的负载低于预设负载阈值,或者,第三数据分析网元的优先级高于预设优先级阈值。其中,第三数据分析网元的范围位于第一数据分析网元的范围内。
在一种可能的实现方式中,第一请求还包括第一数据分析网元的范围,相应的,第二数据分析网元的范围或者第三数据分析网元的范围位于第一数据分析网元的范围内。可以理解的是,如果第一请求还包括第一数据分析网元的范围,则第一请求用于请求位于第一数据分析网元的范围内且支持第一数据分析网元请求的分布式学习的类型的一个或多个第二数据分析网元。
在一种可能的实现方式中,第一数据分析网元的范围包括以下信息中的一个或者多个:第一数据分析网元服务的区域、第一数据分析网元归属的公用陆地移动网PLMN的标识、第一数据分析网元服务的网络切片的信息、第一数据分析网元服务的数据网络名称DNN、第一数据分析网元的设备商信息。
在一种可能的实现方式中,分布式学习的信息还包括分布式学习支持的算法信息,相应的,第二数据分析网元或者第三数据分析网元支持分布式学习支持的算法信息对应的算法。这样便于服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元还支持分布式学习支持的算法信息。
在一种可能的实现方式中，分布式学习支持的算法信息包括算法类型、算法标识以及算法性能中的一个或者多个。可以理解的是，不同的第二数据分析网元或者第三数据分析网元支持的算法信息可以相同，也可以不同。
在一种可能的实现方式中,通信单元,还用于接收来自一个或多个第三数据分析网元的子模型。该子模型由第三数据分析网元根据第三数据分析网元获取到的数据进行训练得到。处理单元,用于根据一个或多个第三数据分析网元的子模型确定更新的模型。通信单元,还用于向一个或多个第三数据分析网元发送更新的模型。
在一种可能的实现方式中,处理单元,还用于根据更新的模型确定目标模型。通信单元,还用于向一个或多个第二数据分析网元发送目标模型,以及目标模型对应的以下信息中的一个或者多个:模型标识或者模型版本号或数据分析标识。虽然并非一个或多个第二数据分析网元中的全部第二数据分析网元参与目标模型的训练过程,但是向一个或多个第二数据分析网元发送目标模型可以使得每个第二数据分析网元均可以获得由第一数据分析网元确定的目标模型。例如,目标模型可以为业务体验模型。
在一种可能的实现方式中,通信单元,还用于向一个或多个第三数据分析网元发送配置参数,该配置参数为第三数据分析网元确定训练子模型时使用的参数。便于第三数据分析网元根据配置参数配置分布式学习训练过程中涉及到的相关参数。
在一种可能的实现方式中,配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、最大等待时间。
在一种可能的实现方式中,分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个。
在一种可能的实现方式中,第二数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,通信单元,还用于向服务发现网元发送用于请求注册第一数据分析网元的信息的第二请求。该第一数据分析网元的信息包括第一数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息和第二指示信息,该第二指示信息用于指示第一数据分析网元的类型。以便于将第一数据分析网元的信息进行注册,便于后续其他设备通过服务发现网元确定第一数据分析网元。
在一种可能的实现方式中,第一数据分析网元的信息还包括第一数据分析网元的范围、或第一数据分析网元的标识、第一数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中,分布式学习为联邦学习。
在一种可能的实现方式中,第二数据分析网元为终端。
另一种示例,本申请实施例提供一种通信装置,该通信装置可以是第一数据分析网元,也可以应用于第一数据分析网元中的装置(例如,芯片)。该通信装置可以包括:处理单元和通信单元。该通信装置还可以包括存储单元。该存储单元,用于存储计算机程序代码,计算机程序代码包括指令。该处理单元执行该存储单元所存储的指令,以使该通信装置实现第一方面或第一方面的任意一种可能的实现方式中描述的方法。当该通信装置是第一数据分析网元时,该处理单元可以是处理器。通信单元可以为通信接口。该存储单元可以是存储器。当该通信装置是第一数据分析网元内的芯片时,该处理单元可以是处理器,该通信单元可以统称为:通信接口。例如,通信接口可以为输入/输出接口、管脚或电路等。该处理单元执行存储单元所存储的计算机程序代码,以使该第一数据分析网元实现第一方面或第一方面的任意一种可能的实现方式中描述的方法,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该第一数据分析网元内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
一种可能的实现方式中,处理器、通信接口和存储器相互耦合。
第五方面,本申请实施例提供一种通信装置,该通信装置可以实现第二方面或第二方面的任意一种可能的实现方式中描述的一种通信方法,因此也可以实现第二方面或第二方面任意一种可能的实现方式中的有益效果。该通信装置可以为服务发现网元,也可以为可以支持服务发现网元实现第二方面或第二方面的任意一种可能的实现方式中的装置。例如应用于服务发现网元中的芯片。该通信装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,本申请实施例提供一种通信装置,包括:通信单元,用于接收来自第一数据分析网元的用于请求第二数据分析网元的信息的第一请求,第一请求包括以下信息中的一个或者多个:分布式学习的信息和第一指示信息中的一个或多个。其中,分布式学习的信息包括第一数据分析网元请求的分布式学习的类型,第一指示信息用于指示第二数据分析网元的类型。处理单元,用于根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网元的信息。通信单元,还用于向第一数据分析网 元发送一个或多个第二数据分析网元的信息。
在一种可能的实现方式中,本申请实施例提供的方法中的第一请求中还包括第一数据分析网元的范围,相应的,第二数据分析网元位于第一数据分析网元的范围内,换言之,服务发现网元根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网元的信息,包括:服务发现网元将位于第一数据分析网元的范围内,且支持分布式学习的类型的数据分析网元,确定为一个或多个第二数据分析网元。
在一种可能的实现方式中,分布式学习的信息还包括分布式学习支持的算法信息,相应的,该第二数据分析网元支持分布式学习支持的算法信息对应的算法。换言之,处理单元,用于根据第一请求确定支持分布式学习的类型的一个或多个第二数据分析网元的信息,包括:处理单元,用于将既支持分布式学习的类型又支持分布式学习支持的算法信息的数据分析网元确定为一个或多个第二数据分析网元。
在一种可能的实现方式中,通信单元,还用于接收来自第一数据分析网元的用于请求注册第一数据分析网元的信息的第二请求,该第一数据分析网元的信息包括第一数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息和第二指示信息。该第二指示信息用于指示第一数据分析网元的类型。处理单元,用于根据第二请求,注册第一数据分析网元的信息。其中,第一数据分析网元对应的分布式学习的信息包括第一数据分析网元支持的分布式学习的类型和/或第一数据分析网元支持的分布式学习支持的算法信息。
在一种可能的实现方式中,第一数据分析网元的信息还包括第一数据分析网元的范围、第一数据分析网元的标识、第一数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,处理单元,用于根据第二请求,注册第一数据分析网元的信息包括:处理单元,用于将第一数据分析网元的信息存储在服务发现网元中,或者处理单元,用于将第一数据分析网元的信息存储在用户数据管理网元中。
在一种可能的实现方式中,通信单元,还用于接收来自一个或多个第二数据分析网元的用于请求注册第二数据分析网元的信息的第三请求,该第二数据分析网元的信息包括第二数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、或第三指示信息,第三指示信息用于指示第二数据分析网元的类型。处理单元,用于根据第三请求,注册一个或多个第二数据分析网元的信息。其中,第二数据分析网元对应的分布式学习的信息包括第二数据分析网元支持的分布式学习的类型和/或第二数据分析网元支持的分布式学习支持的算法信息。
在一种可能的实现方式中,第二数据分析网元的信息还包括第二数据分析网元的范围,或第二数据分析网元的标识或第二数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,处理单元,用于根据第三请求,注册一个或多个第二数据分析网元的信息,包括:处理单元,用于将一个或多个第二数据分析网元的信息存储在服务发现网元中,或者处理单元,用于将一个或多个第二数据分析网元的信息存储在用户数据管理网元中。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个或者多个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中，第二数据分析网元的类型为以下中的一个：客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,分布式学习包括联邦学习。
在一种可能的实现方式中,第二数据分析网元为终端。
另一种示例,本申请实施例提供一种通信装置,该通信装置可以是服务发现网元,也可以是服务发现网元内的芯片。该通信装置可以包括:处理单元和通信单元。该通信装置还可以包括存储单元。该存储单元,用于存储计算机程序代码,计算机程序代码包括指令。该处理单元执行该存储单元所存储的指令,以使该通信装置实现第二方面或第二方面的任意一种可能的实现方式中描述的方法。当该通信装置是服务发现网元时,该处理单元可以是处理器。通信单元可以为通信接口。该存储单元可以是存储器。当该通信装置是服务发现网元内的芯片时,该处理单元可以是处理器,该通信单元可以统称为:通信接口。例如,通信接口可以为输入/输出接口、管脚或电路等。该处理单元执行存储单元所存储的计算机程序代码,以使该服务发现网元实现第二方面或第二方面的任意一种可能的实现方式中描述的方法,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该服务发现网元内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
一种可能的实现方式,处理器、通信接口和存储器相互耦合。
第六方面,本申请实施例提供一种通信装置,该通信装置可以实现第三方面或第三方面的任意一种可能的实现方式中描述的一种通信方法,因此也可以实现第三方面或第三方面任意一种可能的实现方式中的有益效果。该通信装置可以为第三数据分析网元,也可以为可以支持第三数据分析网元实现第三方面或第三方面的任意一种可能的实现方式中的装置。例如应用于第三数据分析网元中的芯片。该通信装置可以通过软件、硬件、或者通过硬件执行相应的软件实现上述方法。
一种示例,本申请实施例提供一种通信装置,该装置包括:处理单元,用于确定子模型,该子模型由处理单元根据通信单元获取到的数据进行训练得到。通信单元,用于向第一数据分析网元发送子模型。
在一种可能的实现方式中,通信单元,还用于接收来自第一数据分析网元的更新的模型,该更新的模型由多个不同第三数据分析网元提供的子模型得到。
在一种可能的实现方式中,通信单元,还用于接收来自第一数据分析网元的目标模型。
在一种可能的实现方式中,通信单元,还用于接收来自第一数据分析网元的配置参数,该配置参数为第三数据分析网元确定训练子模型时使用的参数。
在一种可能的实现方式中,配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、最大等待时间。
在一种可能的实现方式中,分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个。
在一种可能的实现方式中,通信单元,还用于向服务发现网元发送用于请求注册第三数据分析网元的信息的第三请求,该第三数据分析网元的信息包括第三数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、或第三指示信息,第三指示信息用于指示第三数据分析网元的类型。其中,第三数据分析网元对应的分布式 学习的信息包括第三数据分析网元支持的分布式学习的类型和/或第三数据分析网元支持的分布式学习支持的算法信息。
在一种可能的实现方式中,第三数据分析网元的信息还包括第三数据分析网元的范围,或第三数据分析网元的标识或第三数据分析网元的地址信息中的一个或多个。
在一种可能的实现方式中,第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
在一种可能的实现方式中,分布式学习为联邦学习。
在一种可能的实现方式中,第三数据分析网元的类型为以下信息中的一个:客户端、本地训练器、或者局部训练者。
在一种可能的实现方式中,第三数据分析网元的范围位于第一数据分析网元的范围内。
另一种示例,本申请实施例提供一种通信装置,该通信装置可以是第三数据分析网元,也可以是第三数据分析网元内的芯片。该通信装置可以包括:处理单元和通信单元。该通信装置还可以包括存储单元。该存储单元,用于存储计算机程序代码,计算机程序代码包括指令。该处理单元执行该存储单元所存储的指令,以使该通信装置实现第三方面或第三方面的任意一种可能的实现方式中描述的方法。当该通信装置是第三数据分析网元时,该处理单元可以是处理器。通信单元可以为通信接口。该存储单元可以是存储器。当该通信装置是第三数据分析网元内的芯片时,该处理单元可以是处理器,该通信单元可以统称为:通信接口。例如,通信接口可以为输入/输出接口、管脚或电路等。该处理单元执行存储单元所存储的计算机程序代码,以使该第三数据分析网元实现第三方面或第三方面的任意一种可能的实现方式中描述的方法,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该第三数据分析网元内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
一种可能的实现方式,处理器、通信接口和存储器相互耦合。
第七方面,本申请实施例提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第一方面或第一方面的各种可能的实现方式中描述的一种通信方法。
第八方面,本申请实施例提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第二方面或第二方面的各种可能的实现方式中描述的一种通信方法。
第九方面,本申请实施例提供一种包括指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第三方面或第三方面的各种可能的实现方式中描述的一种通信方法。
第十方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第一方面或第一方面的各种可能的实现方式中描述的一种通信方法。
第十一方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第二方面或第二方面的各种可能的实现方式中描述的一种通信方法。
第十二方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行如第三方面或第三方面的各种可能的实现方式中描述的一种通信方法。
第十三方面,本申请实施例提供一种通信装置,该通信装置包括至少一个处理器,该至少一个处理器用于运行存储器中存储的计算机程序或指令,以实现如第一方面或第一方面的各种可能的实现方式中描述的一种通信方法。
第十四方面,本申请实施例提供一种通信装置,该通信装置包括至少一个处理器,该至少一个处理器用于运行存储器中存储的计算机程序或指令,以实现如第二方面或第二方面的各种可能的实现方式中描述的一种通信方法。
第十五方面,本申请实施例提供一种通信装置,该通信装置包括至少一个处理器,该至少一个处理器用于运行存储器中存储的计算机程序或指令,以实现如第三方面或第三方面的各种可能的实现方式中描述的一种通信方法。
在一种可能的实现方式中，第十三方面～第十五方面描述的通信装置还可以包括存储器。
第十六方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第一方面或第一方面的各种可能的实现方式描述的通信方法。
第十七方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第二方面或第二方面的各种可能的实现方式描述的通信方法。
第十八方面,本申请实施例提供一种通信装置,该通信装置包括处理器和存储介质,存储介质存储有指令,指令被处理器运行时,实现如第三方面或第三方面的各种可能的实现方式描述的通信方法。
第十九方面,本申请实施例提供了一种通信装置,该通信装置包括一个或者多个模块,用于实现上述第一方面、第二方面、第三方面的方法,该一个或者多个模块可以与上述第一方面、第二方面、第三方面的方法中的各个步骤相对应。
第二十方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第一方面或第一方面的各种可能的实现方式中所描述的一种通信方法。通信接口用于与芯片之外的其它模块进行通信。
第二十一方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第二方面或第二方面的各种可能的实现方式中所描述的一种通信方法。通信接口用于与芯片之外的其它模块进行通信。
第二十二方面,本申请实施例提供一种芯片,该芯片包括处理器和通信接口,通信接口和处理器耦合,处理器用于运行计算机程序或指令,以实现第三方面或第三方面的各种可能的实现方式中所描述的一种通信方法。通信接口用于与芯片之外的其它模块进行通信。
具体的,本申请实施例中提供的芯片还包括存储器,用于存储计算机程序或指令。
第二十三方面,本申请实施例提供一种用来执行第一方面或第一方面的各种可能的实现方式中所描述的一种通信方法的装置。
第二十四方面,本申请实施例提供一种用来执行第二方面或第二方面的各种可能的实现方式中所描述的一种通信方法的装置。
第二十五方面,本申请实施例提供一种用来执行第三方面或第三方面的各种可能的实现方式中所描述的一种通信方法的装置。
上述提供的任一种装置或计算机存储介质或计算机程序产品或芯片或通信系统均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文提供的对应的方法中对应方案的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种通信系统的架构图;
图2为本申请实施例提供的一种5G网络架构图;
图3为本申请实施例提供的一种联邦学习的架构图;
图4为本申请实施例提供的一种场景示意图;
图5为本申请实施例提供的另一种场景示意图;
图6为本申请实施例提供的一种通信方法的流程示意图;
图7为本申请实施例提供的另一种通信方法的流程示意图;
图8为本申请实施例提供的一种通信方法的详细实施例;
图9为本申请实施例提供的另一种通信方法的详细实施例;
图10为本申请实施例提供的一种模型训练的架构示意图;
图11为本申请实施例提供的另一种模型训练的架构示意图;
图12为本申请实施例提供的一种通信装置的结构示意图;
图13为本申请实施例提供的另一种通信装置的结构示意图;
图14为本申请实施例提供的一种通信设备的结构示意图;
图15为本申请实施例提供的一种芯片的结构示意图。
具体实施方式
为了便于清楚描述本申请实施例的技术方案，在本申请的实施例中，采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。例如，第一指示信息和第二指示信息仅仅是为了区分不同的指示信息，并不对其先后顺序进行限定。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定，并且“第一”、“第二”等字样也并不限定一定不同。例如，第一数据分析网元可以是一个或者多个数据分析网元，第二数据分析网元也可以是一个或者多个数据分析网元。
需要说明的是,本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者 复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例的技术方案可以应用于各种通信系统,例如:码分多址(code division multiple access,CDMA)、时分多址(time division multiple access,TDMA)、频分多址(frequency division multiple access,FDMA)、正交频分多址(orthogonal frequency-division multiple access,OFDMA)、单载波频分多址(single carrier FDMA,SC-FDMA)和其它系统等。术语“系统”可以和“网络”相互替换。3GPP在长期演进(long term evolution,LTE)和基于LTE演进的各种版本是使用E-UTRA的UMTS的新版本。5G通信系统、新空口(new radio,NR)是正在研究当中的下一代通信系统。此外,通信系统还可以适用于面向未来的通信技术,都适用本申请实施例提供的技术方案。
参考图1,为本申请实施例提供的一种通信系统的架构,该通信系统包括:数据分析网元100、与数据分析网元100通信的一个或多个数据分析网元(例如,数据分析网元201~数据分析网元20n)、以及服务发现网元300。其中,n为大于或等于1的整数。
其中,数据分析网元100以及一个或多个数据分析网元(例如,数据分析网元201~数据分析网元20n)均具有分布式学习能力。
例如,数据分析网元100的类型或者数据分析网元100在分布式学习中充当的角色可以为以下信息中的一个或者多个:服务端(server)、协调器(coordinator)、中心训练者(centralized trainer)、全局训练者(global Trainer)。数据分析网元201~数据分析网元20n中任一个数据分析网元的类型或者任何一个数据分析网元在分布式学习中充当的角色可以为以下中的一个或多个:客户端(client)、本地训练器(local trainer)、分布式训练器(distributed trainer)、或者局部训练者。可以称图1所示的部署模式为server(服务端)-client(客户端)模式。
本申请实施例中数据分析网元的类型可以指该数据分析网元在分布式学习中担任的角色。例如,数据分析网元100的类型为服务(server)端,则表示数据分析网元100在分布式学习中担任的角色为服务器类型。
本申请实施例中可以将数据分析网元100看作(中心)服务端节点,将数据分析网元201~数据分析网元20n看作(边缘)客户端(client)节点。
其中,数据分析网元201~数据分析网元20n中每个数据分析网元具有各自的范围,而数据分析网元201~数据分析网元20n中存在部分或者全部数据分析网元位于数据分析网元100的范围内。
本申请实施例中任一个数据分析网元可以单独部署,也可以与5G网络中的网络功能网元(例如,会话管理功能(session management function,SMF)网元、接入和移动性管理功能(access and mobility management function,AMF)网元、策略控制功能(policy control function,PCF)网元等)合设部署,例如,可根据网元数据量或者功能需要,部署在现有5GC NF上,比如,将具有终端移动(UE Mobility或者UE Moving  Trajectory)分析能力的数据分析网元与AMF网元合一部署,这样AMF网元上的终端位置信息就不会从核心网出局,规避了用户数据隐私和数据安全问题。此外,如果是网元内部数据分析,也可以考虑每个5GC NF内置智能模块(如内置NWDAF的功能模块),基于自身数据自闭环即可,仅对于跨网元数据闭环,需要数据分析网元基于数据流闭环控制。本申请实施例对此不做限定。
为了避免数据泄露，数据分析网元201～数据分析网元20n中的每个数据分析网元上都分布有各自获取的原始数据。数据分析网元100可以不具有原始数据，或者说，数据分析网元100无法收集到数据分析网元201～数据分析网元20n中各个数据分析网元上分布的原始数据，并且数据分析网元201～数据分析网元20n中的每个数据分析网元可以不用将各自具有的原始数据发送给数据分析网元100。
上述数据分析网元100以及数据分析网元201~数据分析网元20n的部署的粒度可以是跨运营商的(Inter-Public Land Mobile Network,Inter-PLMN)、同一个运营商跨区域的(Intra-PLMN或者Inter-Region)、跨网络切片的、网络切片内部的、跨厂商的、同一个厂商内部的、跨数据网络名称(data network name,DNN)的或者同一个DNN内部的。每一种粒度内部,都存在以server(服务端)-client(客户端)模式部署的数据分析网元。
例如,如果部署粒度为厂商粒度,则该厂商内部署有至少一个数据分析网元100以及一个或多个数据分析网元。例如,如果部署粒度为DNN粒度,则该DNN内部署有至少一个数据分析网元100以及一个或多个数据分析网元。
当然,也存在交叉粒度的数据分析网元部署,比如,同一个网络切片内部,部署一个数据分析网元100,在该网络切片服务的不同网络区域中的每个网络区域部署一个或多个数据分析网元。
一种可能的实现方式,图1所示的通信系统可以应用于目前的5G网络架构以及未来出现的其它的网络架构,本申请实施例对此不作具体限定。
下述将以如图1所示的通信系统适用于5G网络架构为例,例如,以图1所示的通信系统适用于如图2所示的5G网络架构为例。
示例性的,以图1所示的通信系统应用于5G网络架构中的基于接口的架构为例,则如图2所示,上述的数据分析网元100,或者数据分析网元201~数据分析网元20n中的任一个数据分析网元所对应的网元或者实体可以为如图2所示的5G网络架构中的网络数据分析功能(network data analytics function,NWDAF)网元,也可以是网管的管理数据分析功能(management data analytics function,MDAF)网元,甚至可以是RAN侧的数据分析网元或者数据分析设备。
本申请实施例中的任一个数据分析网元所对应的网元或者实体也可以为NWDAF网元、MDAF网元、或者RAN侧的数据分析网元或者数据分析设备中的一个模块,本申请实施例对此不作限定。
当然,数据分析网元100,或者数据分析网元201~数据分析网元20n中的任一个数据分析网元所对应的网元或者实体可以为如图2所示的终端。
需要说明的是，数据分析网元100，或者数据分析网元201～数据分析网元20n中的任一个数据分析网元所对应的网元或者实体并不局限于终端或者NWDAF网元等，但凡具有模型训练功能，或者支持分布式学习的网元均可以作为本申请实施例中的数据分析网元。
服务发现网元300支持网络功能或网络服务的注册、发现、更新、认证功能。例如,服务发现网元300所对应的网元或者实体可以为如图2所示的5G网络架构中的网络存储功能(network repository function,NRF)网元或者统一数据管理(unified data management,UDM)网元或者统一数据存储库(unified data repository,UDR)网元。或者服务发现网元300可以为域名系统(domain name system,DNS)服务器。
需要说明的是,本申请实施例以服务发现网元300为NRF网元为例,在未来网络中,服务发现网元300可以是NRF网元或有其他名称,本申请不作限定。
此外,如图2所示,该5G网络架构中还可以包括:终端、接入设备(例如,接入网络(access network,AN)或者无线接入网络(radio access network,RAN))、应用功能(application function,AF)网元、运行、管理和维护(运维)(operation,administration,and maintenance,OAM)网元(也可以称为运行管理维护网元)、PCF网元,SMF网元、用户面功能(user plane function,UPF)网元、数据网络(data network,DN)、AMF网元、鉴权服务器功能(authentication server function,AUSF)网元、网络能力开放功能(network exposure function,NEF)网元、UDR网元、或者UDM网元等,本申请实施例对此不作具体限定。
其中,终端通过下一代网络(next generation,N1)接口(简称N1)与AMF网元通信。接入设备通过N2接口(简称N2)与AMF网元通信。接入设备通过N3接口(简称N3)与UPF网元通信。UPF网元通过N6接口(简称N6)与DN通信。UPF网元通过N4接口(简称N4)与SMF网元通信。AMF网元、AUSF网元、SMF网元、UDM网元、UDR网元、NRF网元、NEF网元、或者PCF网元采用服务化接口进行交互。比如,AMF网元对外提供的服务化接口可以为Namf。SMF网元对外提供的服务化接口可以为Nsmf。UDM网元对外提供的服务化接口可以为Nudm。UDR网元对外提供的服务化接口可以为Nudr。PCF网元对外提供的服务化接口可以为Npcf。NEF网元对外提供的服务化接口可以为Nnef。NRF网元对外提供的服务化接口可以为Nnrf。NWDAF网元对外提供的服务化接口可以为Nnwdaf。应理解,图2中各种服务化接口的名称的相关描述可以参考现有技术中的5G系统架构(5G system architecture)图,在此不予赘述。
应理解,在图2中以5GC中的部分网元(AMF网元、AUSF网元、SMF网元、UDM网元、UDR网元、NRF网元、NEF网元、或者PCF网元)采用服务化接口进行交互为例,当然AMF网元也可以通过N11接口(简称N11)与SMF网元通信。AMF网元也可以通过N8接口(简称N8)与UDM网元通信。SMF网元也可以通过N7接口(简称N7)与PCF网元通信。SMF网元也可以通过N10接口(简称N10)与UDM网元通信。AMF网元也可以通过N12接口(简称N12)与AUSF网元通信。UDM网元也可以与UDR网元通过彼此间的接口通信。PCF网元也可以与UDR网元通过彼此间的接口通信,本申请实施例对此不做限定。
AMF网元主要负责移动网络中的移动性管理,如用户位置更新、用户注册网络、用户切换等。
SMF网元主要负责移动网络中的会话管理,如会话建立、修改、释放。具体功能如为用户分配IP地址、选择提供报文转发功能的UPF网元等。
PCF网元用于制定背景流量传输策略。
UDM网元或者UDR网元用于存储用户数据,如任一个数据分析网元的信息。
UPF网元主要负责对用户报文进行处理,如转发、计费等。
DN指的是为终端提供数据传输服务的运营商网络,如IP多媒体业务(IP multi-media service,IMS)、Internet等。
数据分析网元,是能够进行大数据分析的网元设备,可以但不限于是网络数据分析功能网元等,例如,网络数据分析功能网元可以是NWDAF。在本申请实施例中,数据分析网元能够进行分布式学习训练或者推理。
NRF网元支持网络功能或网络服务的注册、发现、更新、认证功能。
应用网元,具体的该应用网元可以但不限于是运营商的AF网元、终端、第三方设备,例如非运营商的AF网元(也可称之为第三方的AF网元)等。其中所述运营商的AF网元可以但不限于是运营商的业务管控服务器;第三方的AF网元可以但不限于是第三方的业务服务器。
在介绍本申请实施例之前对本申请实施例涉及到的相关名词作如下释义:
联邦学习(federated learning):是一种新兴的人工智能基础技术,其设计目标是在保障大数据交换时的信息安全、保护终端数据和个人数据隐私、保证合法合规的前提下,在多参与方或多计算结点之间开展高效率的机器学习。可以在原始数据不出本域的情况下实现模型跨域联合训练,既可以提高训练的效率,最重要的,可以通过联邦学习技术,避免数据汇聚到数据分析中心时带来的安全问题(比如,原始数据在传输过程中被劫持,原始数据被数据中心错误使用等)。
详细地,联邦学习主要可以分为以下三种类别:
-横向联邦学习(horizontal federated learning,Horizontal FL或者HFL):特征重复度非常高,但是数据样本之间差异较大。
-纵向联邦学习(vertical federated learning,VFL):特征重复度非常低,但是数据样本重复度较高。例如,横向联邦学习中来源于A的数据特征和来源于B的数据特征的重复度高于纵向联邦学习中来源于A的数据特征和来源于B的数据特征的重复度。
-迁移学习(transfer learning,TL):特征以及数据样本差异都很大。
如图3所示,图3以线性回归为例描述了本申请实施例提供的一种横向联邦学习的训练过程。从图3中可以看出可以看到横向联邦包括一个中心服务器(server)节点以及多个边缘客户端(client)节点(例如,client节点A、client节点B以及client节点C),这其中,原始数据都分布在各个client节点,server节点不具有原始数据,并且client节点不允许将原始数据发送给server节点。
首先，各个client节点上的数据集（假设共K个client节点，也就是存在K个数据集）分别是：

D^A = {(x_i^A, y_i^A)}，D^B = {(x_j^B, y_j^B)}，…，D^K = {(x_k^K, y_k^K)}

其中，x为样本数据，y为样本数据对应的标签数据。横向联邦学习中每个样本数据都包括标签，即标签和数据存放在一起。

然后，每个client节点上的数据分析模块可以根据线性回归算法各自训练自己的模型，称之为子模型，即：

h(x_i) = Θ^A·x_i^A，h(x_j) = Θ^B·x_j^B，…，h(x_k) = Θ^K·x_k^K

假设线性回归所使用的损失函数是均方误差（Mean Squared Error，MSE），那么每个子模型训练的目标函数（整个训练的过程就是使得上述损失函数的值最小）为：

L_I = (1/2N_I)·Σ_{i=1}^{N_I} (h(x_i) − y_i)^2

下面才真正开始训练过程，针对每一次迭代过程：

(1) 每个client节点生成的子模型梯度如下：

g_I = ∂L_I/∂Θ_I = (1/N_I)·Σ_{i=1}^{N_I} (h(x_i) − y_i)·x_i

(2) 每个client节点上报样本个数以及本地梯度值，即上报N_I以及g_I，其中，N_I表示样本个数，g_I表示本地梯度值。

(3) server节点收到上述信息后，对梯度进行聚合，如下：

g = Σ_{I=1}^{||K||} P_I·g_I

其中，||K||为client节点的个数，P_I = N_I/Σ_I N_I。

(4) server节点将聚合后的梯度下发给每一个参与训练的client节点，然后client节点本地更新模型参数，如下：

Θ_I ← Θ_I − η·g（其中η为学习率）

(5) client节点进行模型参数更新后，计算损失函数值L_I，转至步骤(1)。

上述训练过程，server节点可以通过迭代次数控制训练结束，比如训练10000次终止训练，或者通过设置损失函数的阈值控制训练结束，比如L_I ≤ 0.0001时，训练结束。
训练结束后,每个client节点都会保留着同一份模型(可以来自server节点,也可以是本地进一步根据来自server节点本地个性化所得),用于本地推理。
上述描述了横向联邦学习的训练过程,但是目前5G网络中未涉及如何应用横向联邦学习的训练过程进行模型训练,特别是针对如图4或图5所描述的场景。比如:
如图4所示,场景一,同一家运营商内,跨厂商。例如,某移动运营商A可能同时采购了X厂商和Y厂商两个厂商的设备,但是X厂商和Y厂商的设备之间出于隐私保护无法直接交互数据,换言之,X厂商和Y厂商的设备均不向移动运营商A中的数据分析网元提供各自收集得到的数据。这时虽然移动运营商A中的数据分析网元(如上述Server类型的数据分析网元)可以采用联邦学习技术训练一个整网的模型,但是进行联邦学习技术训练的前提是:数据分析网元能够准确获知不同厂商的设备中支持横向联邦学习的网元或设备(例如,每个厂商均具有一个为该厂商提供服务的Client类型数据分析网元)。因此,移动运营商A中的数据分析网元如何发现不同厂商的设备中是否支持横向联邦学习的设备是亟需解决的问题。
如图5所示,场景二,同一个网络内,跨运营商场景,例如,移动运营商A与移动运营商B共享基站侧资源(如,频谱),并且两个运营商希望训练一个整网模型,然后移动运营商A与移动运营商B之间互相分享数据分析的成果,但是移动运营商A与移动运营商B之间不愿意上报原始数据,可以采用联邦学习技术训练得到整网模型。综上所述,移动运营商A中的数据分析网元或者移动运营商B中的数据分析网元如何发现对方的网元或者设备 中是否支持横向联邦学习的设备是亟需解决的问题。
其他不愿意交互原始数据的场景,如:同一个网络切片(通过单网络切片选择支撑信息(single network slice selection assistance information,S-NSSAI)标识)下,不同的网络切片实例(network slice instance,NSI)之间如果无法交互原始数据;同一个大区(以中国为例,大区包括东北、华北、华东、中南、西北、西南等)下面,不同城市之间无法交互原始数据。如果同一个网络切片中每个NSI对应一个数据分析网元,换言之该数据分析网元可以为该NSI服务,同一个大区中不同城市中每个城市也可以对应一个数据分析网元,换言之该数据分析网元可以为该城市服务,则在同一个网络切片中,不同NSI间如果无法交互原始数据,或者,在同一个大区内不同的城市之间如果无法交换数据,都可以采用联邦学习技术训练技术得到目标模型,但是实现联邦学习的前提是数据分析网元(server类型)能够获取服务各个NSI的数据分析网元(client类型)的信息,或者服务各个城市的数据分析网元(client类型)的信息,否则可能无法继续进行横向联邦学习。
基于此,本申请实施例结合图6~图7描述一种通信方法,该方法可以实现第一数据分析网元准确地获取支持分布式学习的一个或多个第二数据分析网元的信息。
下面将结合图1至图5对本申请实施例提供的通信方法进行具体阐述。
需要说明的是,本申请下述实施例中各个网元之间的消息名字或消息中各参数的名字等只是一个示例,具体实现中也可以是其他的名字,本申请实施例对此不作具体限定。
需要指出的是,本申请各实施例之间可以相互借鉴或参考,例如,相同或相似的步骤或者相同或相似的名词,方法实施例、通信系统实施例和装置实施例之间,均可以相互参考,不予限制。
下述将以图6和图7为例,描述本申请实施例提供的一种通信方法的交互实施例,该一种通信方法中的执行主体可以为第一数据分析网元,也可以为应用于第一数据分析网元中的装置(例如,芯片)。一种通信方法的执行主体可以为第二数据分析网元,也可以为应用于第二数据分析网元中的装置(例如,芯片)。一种通信方法的执行主体可以为服务发现网元,也可以为应用于服务发现网元中的装置(例如,芯片)。下述实施例将以通信方法的执行主体为第一数据分析网元、和第二数据分析网元以及服务发现网元为例进行说明。可以理解的是,但凡由第一数据分析网元执行的步骤也可以由应用于第一数据分析网元中的装置执行,由第二数据分析网元执行的步骤也可以由应用于第二数据分析网元中的装置执行,由服务发现网元执行的步骤也可以由应用于服务发现网元中的装置执行,此处统一说明,后续不再赘述。
以本申请实施例提供的通信方法应用于图1~图3所示的通信系统为例,如图6所示,为本申请实施例提供的一种通信方法交互的示意图,该方法包括如下步骤:
步骤601、第一数据分析网元向服务发现网元发送第一请求,相应的,服务发现网元接收来自第一数据分析网元的第一请求。该第一请求用于请求第二数据分析网元的信息。
示例性的,第一请求包括:分布式学习的信息和第一指示信息中的一个或多个。其中,分布式学习的信息包括分布式学习的类型,该第一指示信息用于指示第一数据分析网元需要的第二数据分析网元的类型。
应理解,第一请求中携带的分布式学习的类型为第一数据分析网元请求的第二数据分析网元应当具有的分布式学习的类型。
本申请实施例中以分布式学习为联邦学习为例。示例性的,分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个。
在一种可能的实现方式中,该第一请求中还可以携带第四指示信息,该第四指示信息用于指示第一数据分析网元向服务发现网元请求第二数据分析网元的信息。
可以理解的是,本申请实施例中步骤601中第一数据分析网元利用第一请求向服务发现网元请求的第二数据分析网元可以为泛指,这时第一数据分析网元可能并不知道该第二数据分析网元的标识,第一数据分析网元通过在第一请求中携带第一数据分析网元的需求信息(例如,分布式学习的信息或者第一指示信息),以便于服务发现网元根据需求信息为第一数据分析网元提供满足需求信息的一个或多个第二数据分析网元。
示例性的,第二数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
举例说明,第一数据分析网元可以为图1所示的数据分析网元100。服务发现网元可以为服务发现网元300。
步骤602、服务发现网元根据第一请求,确定一个或多个第二数据分析网元。
应理解,步骤602中服务发现网元确定的一个或多个第二数据分析网元支持第一数据分析网元请求的分布式学习的类型,和/或,该一个或多个第二数据分析网元的类型与第一指示信息指示的第二数据分析网元的类型相同。
举例说明,一个或多个第二数据分析网元可以为图1所示的数据分析网元201~数据分析网元20n中的全部或者部分数据分析网元。
举例说明,如果第一请求中携带的分布式学习的类型为横向学习,换言之第一数据分析网元请求可以进行横向学习的第二数据分析网元,第一指示信息指示的第二数据分析网元的类型为客户端或本地训练器,则服务发现网元确定的一个或多个第二数据分析网元支持的分布式学习的类型应为横向学习。此外,一方面,该一个或多个第二数据分析网元中至少存在部分的第二数据分析网元的类型为客户端,另外部分第二数据分析网元的类型为本地训练器。或者另一方面,该一个或多个第二数据分析网元的类型均为客户端或者本地训练器,本申请实施例对此不做限定。
可以理解的是,如果第一指示信息指示的第二数据分析网元的类型为A和B,或者,第一指示信息指示的第二数据分析网元的类型为A或B,该一个或多个第二数据分析网元中至少存在部分的第二数据分析网元的类型为A,而另外部分第二数据分析网元的类型为B,换言之,服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元不仅要包括类型为A的第二数据分析网元,还要包括类型为B的第二数据分析网元。
此外,当第一指示信息指示的第二数据分析网元的类型为A或B时,该一个或多个第二数据分析网元的类型可以全部为A,或者全部为B。换言之,服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元的类型可以全部为B,或者全部为A,本申请实施例对此不做限定。
应理解,第一指示信息指示的第二数据分析网元的类型为A和B并不代表第一数据分析网元所请求的第二数据分析网元的类型既为A也为B,也就是说,第一数据分析网元请求的第二数据分析网元可以是类型A或者类型B中的一个。
一种示例,如果分布式学习的类型包括横向学习、纵向学习以及迁移学习中的多个, 则一个或多个第二数据分析网元中可以包括支持横向学习的第二数据分析网元,支持纵向学习的第二数据分析网元以及支持迁移学习的第二数据分析网元。
举例说明,以一个或多个第二数据分析网元包括数据分析网元201、数据分析网元202以及数据分析网元203为例,则数据分析网元201可以支持横向学习,数据分析网元202可以支持纵向学习,数据分析网元203可以支持迁移学习。
另一种示例,如果分布式学习的类型包括横向学习、纵向学习以及迁移学习中的多个,则一个或多个第二数据分析网元中每个第二数据分析网元均需要支持横向学习、纵向学习以及迁移学习。
应理解,服务发现网元中至少具有一个或多个第二数据分析网元的信息,或者服务发现网元可以根据第一请求从其它设备处获取一个或多个第二数据分析网元的信息。该一个或多个第二数据分析网元的信息例如可以为:第二数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、第二数据分析网元的范围、或第二指示信息,该第二指示信息用于指示第二数据分析网元的类型。第二数据分析网元对应的分布式学习的信息包括第二数据分析网元支持的分布式学习的类型和/或第二数据分析网元支持的分布式学习的算法信息。
示例性的,本申请实施例中涉及到的分布式学习支持的算法信息包括算法类型(algorithm type)、算法标识(algorithm ID)以及算法性能中的一个或者多个,此处统一说明,后续不再赘述。
例如,算法类型可以为:线性回归、逻辑斯蒂回归、神经网络、K-Means、增强学习等中的一个或多个。算法性能可以为训练时间、收敛速度等中的一个或多个,算法性能主要用于辅助数据分析网元在进行模型训练时选择算法性能高于预设算法阈值(如,训练时间小于预设时间阈值或者收敛速度高于预设速度阈值)的算法。
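结合步骤601~步骤602,服务发现网元按照第一请求中的分布式学习类型、网元类型以及算法性能阈值筛选已注册数据分析网元的逻辑,可以用下面的Python示意代码说明(其中的注册信息字段名、网元标识与阈值均为假设,并非协议规定的实现):

```python
# 服务发现网元侧匹配逻辑的示意:按分布式学习类型、网元类型以及训练时间阈值,
# 从已注册的数据分析网元中筛选满足第一请求的候选网元。

registered_nwdafs = [  # 假设的注册信息,字段名仅为示意
    {"id": "nwdaf201", "fl_type": "horizontal", "nf_type": "client",
     "algorithm": {"type": "linear_regression", "train_time": 30}},
    {"id": "nwdaf202", "fl_type": "vertical", "nf_type": "client",
     "algorithm": {"type": "linear_regression", "train_time": 50}},
    {"id": "nwdaf203", "fl_type": "horizontal", "nf_type": "server",
     "algorithm": {"type": "neural_network", "train_time": 80}},
]

def discover(request):
    """根据第一请求中给出的条件逐项过滤,未给出的条件不参与过滤。"""
    result = []
    for nf in registered_nwdafs:
        if request.get("fl_type") and nf["fl_type"] != request["fl_type"]:
            continue  # 分布式学习的类型不匹配
        if request.get("nf_type") and nf["nf_type"] != request["nf_type"]:
            continue  # 网元类型(如client)不匹配
        max_time = request.get("max_train_time")
        if max_time is not None and nf["algorithm"]["train_time"] > max_time:
            continue  # 算法性能(训练时间)不满足阈值
        result.append(nf["id"])
    return result

print(discover({"fl_type": "horizontal", "nf_type": "client", "max_train_time": 40}))
```

例如,当第一请求指定横向学习、client类型且训练时间不超过40时,上述三个候选中仅nwdaf201满足全部条件。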
步骤603、服务发现网元向第一数据分析网元发送一个或多个第二数据分析网元的信息。相应的,第一数据分析网元接收来自服务发现网元的一个或多个第二数据分析网元的信息。
应理解,一个或多个第二数据分析网元中不同第二数据分析网元的类型可以相同,也可以不相同。不同第二数据分析网元支持的分布式学习的类型可以相同,也可以不同。不同第二数据分析网元支持的分布式学习的算法信息可以相同,也可以不同,本申请实施例对此不作限定。
本申请实施例提供一种通信方法,该方法中由第一数据分析网元向服务发现网元发送第一请求,利用第一请求向服务发现网元请求第一数据分析网元所需要的第二数据分析网元的特征。这样便于服务发现网元根据第一请求为第一数据分析网元提供支持分布式学习的类型的一个或多个第二数据分析网元的信息。此外该第二数据分析网元的类型与第一数据分析网元请求的第二数据分析网元的类型相同。该方案一方面可以实现第一数据分析网元通过服务发现网元找到能够进行分布式学习训练的数据分析网元的目的,另一方面,由于第一数据分析网元得到该一个或多个第二数据分析网元的信息之后,后续第一数据分析网元在需要进行模型训练时可以与一个或多个第二数据分析网元进行协同实现模型的训练,从而能够扩展数据分析的应用场景。
在一种可能的实施例中,本申请实施例提供的方法在步骤601之前还可以包括:第一数据分析网元确定触发分布式学习训练。
一种示例,第一数据分析网元确定触发分布式学习训练可以通过以下方式实现:第一数据分析网元基于配置信息或者人工指示,确定触发分布式学习训练。
另一种示例,第一数据分析网元确定触发分布式学习训练可以通过以下方式实现:第一数据分析网元主动发起分布式学习训练。
再一种示例,第一数据分析网元确定触发分布式学习训练可以通过以下方式实现:第一数据分析网元基于消费者功能(Consumer NF)网元的数据分析结果请求,确定触发分布式学习训练。以Consumer NF网元为SMF网元为例,如果SMF网元请求第一数据分析网元对流经UPF网元上的数据包进行业务识别,但是此时第一数据分析网元发现还没有训练业务识别模型,于是触发分布式学习训练。
为了使得服务发现网元为第一数据分析网元提供的一个或多个第二数据分析网元满足第一数据分析网元的需求,在一种可能的实施例中,第一请求还包括第一数据分析网元的范围,相应的,服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元的范围位于第一数据分析网元的范围内。换言之,本申请实施例中的步骤602可以通过以下方式实现:服务发现网元将位于第一数据分析网元的范围内且支持第一数据分析网元所请求的分布式学习的信息的第二数据分析网元作为一个或多个第二数据分析网元。或者,本申请实施例中的步骤602可以通过以下方式实现:服务发现网元将位于第一数据分析网元的范围内,且类型与第一指示信息指示的第二数据分析网元的类型相同的第二数据分析网元作为一个或多个第二数据分析网元。
举例说明,第一数据分析网元的范围包括以下信息中的一个或者多个:第一数据分析网元服务的区域、第一数据分析网元归属的PLMN的标识、第一数据分析网元服务的网络切片的信息、第一数据分析网元服务的数据网络名称(data network name,DNN)、第一数据分析网元的设备商信息。其中,网络切片的信息用于识别网络切片。例如,网络切片的信息可以为单网络切片选择辅助信息(single network slice selection assistance information,S-NSSAI)。
可以将第一数据分析网元服务的网络切片的范围作为该第一数据分析网元的范围。
举例说明,第二数据分析网元的范围包括以下信息中的一个或者多个:第二数据分析网元服务的区域、第二数据分析网元归属的PLMN的标识、第二数据分析网元服务的网络切片实例的范围、第二数据分析网元服务的DNN、第二数据分析网元的设备商信息。
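第一数据分析网元的范围与第二数据分析网元的范围之间的包含关系,可以用下面的示意代码说明(假设将第一数据分析网元的范围表示为若干字段到取值集合的映射,字段名plmn、tac、snssai仅为示意,并非协议字段):

```python
# 范围匹配的示意:当第二数据分析网元范围中给出的每个字段取值
# 都包含于第一数据分析网元范围对应字段的取值集合时,认为其位于第一数据分析网元的范围内。

def in_range(server_scope, client_scope):
    """server_scope: {字段: 取值集合}; client_scope: {字段: 单个取值}。"""
    for key, server_values in server_scope.items():
        client_value = client_scope.get(key)
        if client_value is not None and client_value not in server_values:
            return False  # 任一给出的字段不在范围内,则不匹配
    return True

server_scope = {"plmn": {"46000"}, "tac": {"TA1", "TA2"}, "snssai": {"A"}}
print(in_range(server_scope, {"plmn": "46000", "tac": "TA1", "snssai": "A"}))  # True
print(in_range(server_scope, {"plmn": "46001", "tac": "TA1"}))                 # False
```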
在一种可能的实施例中,本申请实施例中的分布式学习的信息还包括分布式学习支持的算法信息,相应的,服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元还支持分布式学习支持的算法信息。
应理解,如果第一请求中的分布式学习的信息包括分布式学习的类型以及分布式学习支持的算法信息,则服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元不仅要支持分布式学习的类型还要支持分布式学习支持的算法信息。
举例说明,如果第一数据分析网元通过第一请求用于请求服务发现网元查找支持横向学习以及支持的算法类型为“线性回归”的第二数据分析网元,则服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元不仅支持横向学习,且该一个或多个第二数据分析网元支持的算法类型为“线性回归”。
作为一种可能的示例,第一请求中携带的分布式学习的信息包括分布式学习的类型以及分布式学习支持的算法信息。作为另一种可能的示例,第一请求中携带的分布式学习的信息包括分布式学习的类型以及分布式学习支持的算法信息,且第一请求中还携带第一数据分析网元的范围。
参考图7,图7示出了本申请实施例提供的另一种可能的实施例,该方法包括:注册阶段、网元发现阶段、模型训练阶段。其中,注册阶段包括步骤701~步骤704。网元发现阶段包括步骤705~步骤707。模型训练阶段包括步骤708~步骤714。
步骤701、第一数据分析网元向服务发现网元发送第二请求,相应的,服务发现网元接收来自第一数据分析网元的第二请求。该第二请求用于请求注册第一数据分析网元的信息。
其中,第一数据分析网元的信息包括第一数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、第一数据分析网元的范围或第二指示信息。该第二指示信息用于指示第一数据分析网元的类型。
其中,第一数据分析网元对应的分布式学习的信息包括第一数据分析网元支持的分布式学习的类型和第一数据分析网元支持的分布式学习支持的算法信息中的一个或多个。例如,第二请求可以为注册请求消息。
作为一种可能的实现方式,该第二请求中还可以包括第五指示信息,该第五指示信息用于请求注册第一数据分析网元的信息。
示例性的,第一数据分析网元的类型包括以下信息中的一个或者多个:服务端(server)、协调器(coordinator)、中心(central或者centralized)训练者、全局(global)训练者。
在一种可能的实现方式中,第一数据分析网元的信息还可以包括第一数据分析网元的标识、第一数据分析网元的地址信息。
步骤702、服务发现网元注册第一数据分析网元的信息。
作为一种可能的实现方式,本申请实施例中的步骤702可以通过以下方式实现:服务发现网元在服务发现网元处注册第一数据分析网元的信息。例如,服务发现网元将第一数据分析网元的信息存储在服务发现网元的存储设备中。
作为一种可能的实现方式,本申请实施例中的步骤702可以通过以下方式实现:服务发现网元将第一数据分析网元的信息发送给外部存储设备(例如,UDM网元或者UDR网元),以由外部存储设备存储第一数据分析网元的信息。后续服务发现网元可以从外部存储设备中获取第一数据分析网元的信息。
作为一种可能的实现方式,本申请实施例中的服务发现网元还可以是UDM网元或者UDR网元,也就是说,UDM网元或者UDR网元中存储了第一数据分析网元的信息。
值得说明的是,第一数据分析网元将第一数据分析网元的信息注册在服务发现网元处,这样后续Consumer NF网元可以通过服务发现网元查询支持分布式学习,且类型为服务端或者类型为协调器的第一数据分析网元的信息。然后Consumer NF网元向第一数据分析网元请求对流经UPF网元上的数据包进行业务识别。
步骤703、第二数据分析网元向服务发现网元发送第三请求,相应的,服务发现网元接收来自第二数据分析网元的第三请求。该第三请求用于请求注册第二数据分析网元的信息。
其中,第二数据分析网元的信息包括第二数据分析网元对应的以下信息中的一个或者多个:分布式学习的信息、第二数据分析网元的范围、或第一指示信息,该第一指示信息用于指示第二数据分析网元的类型。其中,第二数据分析网元对应的分布式学习的信息可以包括第二数据分析网元支持的分布式学习的类型和第二数据分析网元支持的分布式学习的算法信息中的一个或多个。
作为一种可能的实现方式,该第三请求中还可以包括第六指示信息,该第六指示信息用于请求注册第二数据分析网元的信息。
在一种可能的实现方式中,第二数据分析网元的信息还可以包括第二数据分析网元的标识、第二数据分析网元的地址信息。
步骤704、服务发现网元注册第二数据分析网元的信息。
步骤704的实现可以参考步骤702处的描述,此处不再赘述。区别在于:服务发现网元注册的是第二数据分析网元的信息。
可以理解的是,一个或多个第二数据分析网元中的每个第二数据分析网元均可以在服务发现网元处注册每个第二数据分析网元各自的信息。
本申请实施例中的步骤701~步骤702为第一数据分析网元向服务发现网元注册第一数据分析网元的信息的过程,步骤703~步骤704为第二数据分析网元向服务发现网元注册第二数据分析网元的信息的过程,步骤701~步骤702与步骤703~步骤704不分执行的先后顺序。
是否在服务发现网元处注册数据分析网元(例如,第一数据分析网元或第二数据分析网元)的信息可以由数据分析网元自主确定,或者由协议确定,或者由其他网元触发数据分析网元执行注册过程,本申请实施例对此不做限定。
步骤705~步骤707同步骤601~步骤603,此处不再赘述。
在一种可能的实施例中,如图7所示,本申请实施例提供的方法在步骤707之后还可以包括:
步骤708、第一数据分析网元根据一个或多个第二数据分析网元的信息确定能够进行分布式学习的第三数据分析网元的信息,第三数据分析网元的数量为一个或多个。
本申请实施例中一个或多个第三数据分析网元可以为一个或多个第二数据分析网元中的全部第二数据分析网元,或者部分第二数据分析网元。
以一个或多个第二数据分析网元为数据分析网元201~数据分析网元20n为例,则一个或多个第三数据分析网元可以为数据分析网元201、数据分析网元202、以及数据分析网元203。
其中,第三数据分析网元满足以下示例1~示例3中的任一个条件:
示例1、第三数据分析网元的负载低于预设负载阈值。
可以理解的是,步骤708可以通过以下方式实现:第一数据分析网元获取一个或多个第二数据分析网元的负载信息。第一数据分析网元根据一个或多个第二数据分析网元的负载信息将一个或多个第二数据分析网元中负载低于预设负载阈值的第二数据分析网元确定为能够进行分布式学习的第三数据分析网元。
示例2、第三数据分析网元的优先级高于预设优先级阈值。
可以理解的是,步骤708可以通过以下方式实现:第一数据分析网元获取一个或多个第二数据分析网元的优先级。第一数据分析网元根据一个或多个第二数据分析网元的优先级将一个或多个第二数据分析网元中优先级高于预设优先级阈值的第二数据分析网元确定为能够进行分布式学习的第三数据分析网元。
示例3、第三数据分析网元位于第一数据分析网元的范围内。
可以理解的是,步骤708可以通过以下方式实现:第一数据分析网元获取一个或多个第二数据分析网元的范围。第一数据分析网元根据一个或多个第二数据分析网元的范围,将位于第一数据分析网元的范围内的第二数据分析网元确定为能够进行分布式学习的第三数据分析网元。
如果第一请求中未携带第一数据分析网元的范围,则服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元中可能存在部分数据分析网元位于第一数据分析网元的范围外,而另外部分第二数据分析网元位于第一数据分析网元的范围内。因此,第一数据分析网元在得到一个或多个第二数据分析网元的信息之后,还可以根据每个第二数据分析网元的位置信息进行筛选以得到位于第一数据分析网元的范围内的一个或多个第三数据分析网元。
如果第一请求中携带第一数据分析网元的范围,则服务发现网元向第一数据分析网元提供的一个或多个第二数据分析网元位于第一数据分析网元的范围内,则毋庸置疑地一个或多个第三数据分析网元也位于第一数据分析网元的范围内。
值得说明的是,上述示例1、示例2以及示例3可以单独使用,也可以组合使用,以作为第一数据分析网元从一个或多个第二数据分析网元中确定第三数据分析网元的条件。在示例1、示例2以及示例3组合使用时,第三数据分析网元的负载不仅低于预设负载阈值,优先级高于预设优先级阈值,且第三数据分析网元也位于第一数据分析网元的范围内。
上述示例1~示例3仅是描述第一数据分析网元从一个或多个第二数据分析网元中确定第三数据分析网元的示例,本申请实施例中第一数据分析网元还可以根据其他方式从一个或多个第二数据分析网元中确定第三数据分析网元,本申请实施例对此不做限定。
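上述示例1~示例3组合使用时,第一数据分析网元从候选的第二数据分析网元中筛选第三数据分析网元的过程可示意如下(各网元的负载、优先级与区域取值以及阈值均为假设):

```python
# 第一数据分析网元筛选第三数据分析网元的示意,组合使用示例1~示例3:
# 负载低于阈值、优先级高于阈值、且位于第一数据分析网元的范围内(此处以区域集合近似)。

candidates = [  # 假设的第二数据分析网元信息
    {"id": "nwdaf201", "load": 0.3, "priority": 8, "area": "TA1"},
    {"id": "nwdaf202", "load": 0.9, "priority": 9, "area": "TA1"},
    {"id": "nwdaf203", "load": 0.2, "priority": 3, "area": "TA1"},
    {"id": "nwdaf204", "load": 0.1, "priority": 9, "area": "TA9"},
]

def select_third(cands, load_th=0.8, prio_th=5, served_areas=("TA1", "TA2")):
    return [c["id"] for c in cands
            if c["load"] < load_th           # 示例1:负载低于预设负载阈值
            and c["priority"] > prio_th      # 示例2:优先级高于预设优先级阈值
            and c["area"] in served_areas]   # 示例3:位于第一数据分析网元的范围内

print(select_third(candidates))  # ['nwdaf201']
```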
步骤709、一个或多个第三数据分析网元中每个第三数据分析网元确定子模型(Sub-Model)。任一个第三数据分析网元确定的子模型由该第三数据分析网元根据该第三数据分析网元获取到的数据进行训练得到。
其中,第三数据分析网元获取到的数据是指第三数据分析网元从第三数据分析网元的范围内获取到的数据。例如,第三数据分析网元从第三数据分析网元的范围内的一个或多个(Consumer NF)网元处获取的终端数据(来自UE)、业务数据(来自AF网元)、网络数据(来自核心网网元,如AMF网元或SMF网元或PCF网元或UPF网元)、基站数据(来自接入网网元,如RAN或gNB)、网管数据(来自OAM网元)等,具体地,从各个网元获取的数据示例如表1所示。
表1第三数据分析网元从其他网元获取的数据
应理解,第三数据分析网元可以在第一数据分析网元的触发下确定子模型。
作为一种可能的实现方式,本申请实施例中的步骤709可以通过以下方式实现:任一个第三数据分析网元根据配置参数设置对获取到的数据进行训练时使用的参数,待设置之后基于第三数据分析网元本地智能芯片(如图形处理器(graphics processing unit,GPU))训练获取到的数据(如表1所示)进行训练,以得到子模型。示例性的,训练过程可以参考图3以线性回归算法为例的横向联邦训练过程,其他算法的训练架构类似,此处不再赘述。
本申请实施例中的配置参数可以预先配置在第三数据分析网元处,或者该配置参数也可以由第一数据分析网元提供。
如果配置参数由第一数据分析网元向一个或多个第三数据分析网元提供,则本申请实施例提供的方法在步骤709之前还可以包括:第一数据分析网元向一个或多个第三数据分析网元发送配置参数,相应的,一个或多个第三数据分析网元接收来自第一数据分析网元的配置参数。该配置参数用于第三数据分析网元确定训练子模型时使用的参数。
值得说明的是,第一数据分析网元向一个或多个第三数据分析网元中每个第三数据分析网元发送上述配置参数。
示例性的,配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、最大等待时间。
举例说明,初始模型包括算法类型、模型初始参数。训练集选择标准:针对每个特征的限制条件,比如针对业务体验(service experience)模型训练时,需要对终端测量的RSRP进行限制,当RSRP的值小于-130dB或者大于-100dB时,对应的样本数据应该被舍弃。特征生成方法:针对每个特征的计算方法,比如针对service experience模型训练时,需要对RSRP进行0~1的归一化,那么第一数据分析网元需要向第三数据分析网元指示RSRP的归一化方法,例如,最大最小值归一化。训练终止条件:比如最大迭代次数,当迭代次数达到最大迭代次数时终止训练;再比如最大损失函数值,损失函数在每一轮迭代训练时都会减小,当损失函数减小到要求的最大损失函数值时可以终止训练。最大训练时间:用于指示每一轮迭代训练的最大时间,当一轮迭代训练的时间超出该最大训练时间时,可能会影响整个联邦训练进程,因此第一数据分析网元可以限制第三数据分析网元进行每一轮迭代训练的时间。最大等待时间:用于指示每一轮迭代训练时,第一数据分析网元等待第三数据分析网元反馈子模型的最大时间。如果一轮迭代训练中第一数据分析网元等待第三数据分析网元反馈子模型的时间超出该最大等待时间,可能会影响整个联邦训练进程,因此第一数据分析网元可以限制等待第三数据分析网元反馈子模型的时间。
注意:子模型从第三数据分析网元传输到第一数据分析网元还需要传输时间,因此最大等待时间包括最大训练时间和该传输时间。
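第三数据分析网元依据配置参数中的训练终止条件(最大迭代次数、最大损失函数值)与最大训练时间终止一轮训练的判断逻辑,可示意如下(配置参数字段名与损失曲线均为假设):

```python
# 子模型训练终止的示意:达到最大迭代次数、损失降到要求值以下、
# 或超出最大训练时间,任一条件满足即终止本轮训练。
import time

def train_submodel(config, step):
    """step(i) 返回第 i 轮迭代后的损失;返回 (迭代次数, 终止原因)。"""
    start = time.monotonic()
    for i in range(1, config["max_iterations"] + 1):
        loss = step(i)
        if loss <= config["max_loss"]:
            return i, "loss_reached"          # 达到要求的最大损失函数值
        if time.monotonic() - start > config["max_train_time"]:
            return i, "time_exceeded"         # 超出最大训练时间
    return config["max_iterations"], "max_iterations"  # 达到最大迭代次数

config = {"max_iterations": 100, "max_loss": 0.05, "max_train_time": 10.0}
# 示意的损失曲线:每轮损失减半
iters, reason = train_submodel(config, step=lambda i: 1.0 / (2 ** i))
print(iters, reason)  # 5 loss_reached
```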
步骤710、一个或多个第三数据分析网元向第一数据分析网元发送各自的子模型,相应的,第一数据分析网元接收来自一个或多个第三数据分析网元的子模型。
步骤711、第一数据分析网元根据一个或多个第三数据分析网元的子模型确定更新的模型。
应理解,第一数据分析网元将每个第三数据分析网元提供的子模型进行聚合便可以得到更新的模型。
以一个或多个第三数据分析网元为数据分析网元201、数据分析网元202、以及数据分析网元203为例,数据分析网元201提供的子模型为子模型1、数据分析网元202提供的子模型为子模型2、以及数据分析网元203提供的子模型为子模型3,则第一数据分析网元可以将子模型1、子模型2以及子模型3进行聚合得到更新的模型。
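以按样本数加权平均聚合为例,步骤711中将子模型1~子模型3聚合为更新的模型的计算可示意如下(各子模型的参数与样本数均为假设;简单平均对应各网元样本数相等的情形):

```python
# 子模型聚合的示意:第一数据分析网元将各第三数据分析网元上报的子模型参数
# 按各自参与训练的样本数加权平均,得到更新的模型。

def aggregate(submodels):
    """submodels: [(参数列表, 样本数), ...] -> 加权平均后的参数列表。"""
    total = sum(n for _, n in submodels)
    dim = len(submodels[0][0])
    return [sum(w[d] * n for w, n in submodels) / total for d in range(dim)]

sub1 = ([1.0, 2.0], 100)   # 数据分析网元201上报的子模型1及其样本数
sub2 = ([3.0, 4.0], 100)   # 数据分析网元202上报的子模型2
sub3 = ([5.0, 6.0], 200)   # 数据分析网元203上报的子模型3
print(aggregate([sub1, sub2, sub3]))  # [3.5, 4.5]
```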
步骤712、第一数据分析网元向一个或多个第三数据分析网元发送更新的模型,相应地,一个或多个第三数据分析网元中的每个第三数据分析网元可以从第一数据分析网元获取更新的模型。
应理解,第三数据分析网元获取更新的模型后,可以进行下一轮迭代训练,得到下一轮迭代对应的子模型,也就是说执行步骤712后转到步骤709循环进行,直到达到配置参数指示的训练终止条件。
应理解,如果训练终止条件中的最大迭代次数是N次,则第三数据分析网元可能会进行N轮迭代训练,每轮迭代训练后第三数据分析网元均会向第一数据分析网元发送本轮迭代训练得到的子模型。
在一种可能的实施例中,本申请实施例提供的方法在步骤712之后还可以包括:
步骤713、第一数据分析网元根据更新的模型确定目标模型。
一种示例,本申请实施例中的步骤713可以通过以下方式实现:第一数据分析网元确定到达设定的最大联邦训练次数(也可以称为最大迭代次数),则将更新的模型确定为目标模型。也就是说,达到最大训练次数时,第一数据分析网元将更新的模型确定为目标模型。
值得说明的是,最大联邦训练次数为第一数据分析网元进行子模型聚合的次数。而训练终止条件中的最大迭代次数是指第三数据分析网元在每一次上报子模型之前,生成该子模型过程中的迭代次数。
步骤714、第一数据分析网元向一个或多个第二数据分析网元发送目标模型,以及目标模型对应的以下信息中的一个或者多个:模型标识(model ID)、模型版本号(Version ID)或者数据分析标识(analytics ID)。
在一种具体实现中,本申请实施例提供的方法在步骤714之前还可以包括:第一数据分析网元为目标模型分配模型标识、模型版本号或者数据分析标识。
如图8所示,图8以第一数据分析网元的类型为服务端(server),可以称第一数据分析网元为server NWDAF,第二数据分析网元的类型为客户端,可以称第二数据分析网元为client NWDAF,服务发现网元为NRF网元,分布式学习的类型为横向联邦学习为例示出了本申请实施例提供的一种通信方法的详细实施例,该方法包括:
步骤801、server NWDAF触发到NRF网元的网元管理_网元注册请求服务操作(Nnrf_NFManagement_NFRegister Request),相应的,NRF网元接收来自server NWDAF的网元管理_网元注册请求服务操作。
其中,网元管理_网元注册请求服务操作用于请求在NRF网元处注册server NWDAF的信息。其中,server NWDAF的信息包括以下信息中的一个或多个:网元基本信息、server NWDAF的范围、联邦学习能力信息、或第二指示信息。
可以理解的是,NRF网元接收到server NWDAF的信息之后,存储server NWDAF的信息,以完成server NWDAF的信息的注册。
一种可能的实现方式,该网元管理_网元注册请求服务操作中可以携带用于指示在NRF网元处注册server NWDAF的信息的指示信息。
步骤802、NRF网元触发到server NWDAF的网元管理_网元注册响应服务操作(Nnrf_NFManagement_NFRegister Response),相应的,server NWDAF接收来自NRF网元的网元管理_网元注册响应服务操作。
其中,网元管理_网元注册响应服务操作用于表示NRF网元已在NRF网元处成功注册server NWDAF的信息。一种可能的实现方式,该网元管理_网元注册响应服务操作中携带注册成功指示,该注册成功指示用于表示NRF网元已在NRF网元处成功注册server NWDAF的信息。
步骤803、client NWDAF触发到NRF网元的网元管理_网元注册请求服务操作,相应的,NRF网元接收来自client NWDAF的网元管理_网元注册请求服务操作。
其中,网元管理_网元注册请求服务操作用于请求在NRF网元处注册client NWDAF的信息。例如,client NWDAF的信息包括以下信息中的一个或多个:client NWDAF的基本信息、client NWDAF的范围、client NWDAF的联邦学习能力信息、第三指示信息。
其中,client NWDAF的基本信息可以为client NWDAF的类型或者client NWDAF的标识(例如,client NWDAF ID)或者client NWDAF的位置或者client NWDAF的地址信息。
可以理解的是,NRF网元接收到client NWDAF的信息之后,存储client NWDAF的信息,以完成client NWDAF的信息的注册。
一种可能的实现方式,该网元管理_网元注册请求服务操作中可以携带用于指示在NRF网元处注册client NWDAF的信息的指示信息。
步骤804、NRF网元触发到client NWDAF的网元管理_网元注册响应服务操作(Nnrf_NFManagement_NFRegister Response),相应的,client NWDAF接收来自NRF网元的网元管理_网元注册响应服务操作。
其中,网元管理_网元注册响应服务操作用于表示NRF网元已在NRF网元处成功注册client NWDAF的信息。一种可能的实现方式,该网元管理_网元注册响应服务操作中携带注册成功指示,该注册成功指示用于表示NRF网元已在NRF网元处成功注册client NWDAF的信息。
步骤805、server NWDAF确定触发横向联邦学习训练。
关于步骤805的实现可以参考上述第一数据分析网元确定触发分布式学习训练过程。此处不再赘述。
步骤806、server NWDAF向NRF网元请求能够进行横向联邦学习的第一client NWDAF列表。
作为一种示例,本申请实施例中的步骤806可以通过以下方式实现:server NWDAF触发到NRF网元的网元发现请求(Nnrf_NFDiscovery_Request),相应的,NRF网元接收来自server NWDAF的网元发现请求。其中,网元发现请求用于向NRF网元请求可以进行横向联邦学习的第一client NWDAF列表。
示例性的,网元发现请求中包括:server NWDAF的范围,以及第一指示信息。
可以理解的是,第一指示信息用于向NRF网元指示server NWDAF所需要的client NWDAF的类型,或者算法性能要求。
一种可能的实现方式,网元发现请求中携带指示信息y,指示信息y用于指示向NRF网元请求可以进行横向联邦学习的第一client NWDAF列表。
步骤807、NRF网元确定能够进行横向联邦学习的第一client NWDAF列表。其中,第一client NWDAF列表包括client NWDAF1~client NWDAF n中每个client NWDAF的信息。
举例说明,如图10所示,client NWDAF1对应的PLMN为PLMN1,TA为TA1,切片实例为切片实例1,client NWDAF1的设备商为设备商1,client NWDAF1的DNAI为DNAI1,client NWDAF2对应的PLMN为PLMN2,TA为TA2,切片实例为切片实例2,client NWDAF2的设备商为设备商2,client NWDAF2的DNAI为DNAI2,依次类推各个client NWDAF的信息。
步骤808、NRF网元向server NWDAF发送第一client NWDAF列表,相应的,server NWDAF接收来自NRF网元的第一client NWDAF列表。
可以理解的是,第一client NWDAF包括满足server NWDAF需求的一个或多个client NWDAF。
作为一种可能的实现,步骤808可以通过以下方式实现:NRF网元向server NWDAF发送网元发现响应,其中,网元发现响应中包括第一client NWDAF列表。
可以理解的是,在步骤808之前,本申请实施例提供的方法还可以包括:NRF网元根据server NWDAF的请求,在NRF网元处查询满足server NWDAF请求的client NWDAF1~client NWDAF n,得到第一client NWDAF列表。
步骤809、server NWDAF确定第一client NWDAF列表中每个client NWDAF的负载(Load)信息。
作为一种可能的实现,本申请实施例中的步骤809可以通过以下方式实现:server NWDAF向OAM或者NRF网元或者能够分析client NWDAF的负载信息的NWDAF查询第一client NWDAF列表中每个client NWDAF的负载信息。
示例性的,client NWDAF的负载信息对应如下信息中的一个或多个:
-状态(Status)(注册的(registered)、暂停的(suspended)、未被发现的(undiscoverable));
-NF资源利用(Resource Usage)(例如,中央处理器(CPU),存储(memory),硬盘(disk));
-NF负载(Load,真实值或者平均值或者方差值);
-NF高峰负载(peak load)。
可以理解的是,步骤809中client NWDAF的负载信息也可以使用client NWDAF的优先级替换。
步骤810、server NWDAF根据每个client NWDAF的负载信息,确定能够进行横向联邦学习的第二client NWDAF列表。
其中,第二client NWDAF列表中包括client NWDAF1~client NWDAF n中的全部或者部分client NWDAF的信息。
作为一种实现,本申请实施例中的步骤810可以通过以下方式实现:第二client NWDAF列表中包括的client NWDAF的负载低于预设负载阈值。
示例性的,server NWDAF将第一client NWDAF列表按照Load从小到大进行排序,然后取Load小于预设负载阈值的client NWDAF进行横向联邦学习。此举的目的是保障所选择的client NWDAF有足够的资源用于训练子模型,从而提高整个联邦学习的训练效率。
一种可替代的实现方式,本申请实施例中的步骤810可以通过以下方式替换:server NWDAF根据每个client NWDAF的优先级,确定能够进行横向联邦学习的第二client NWDAF列表。此时,第二client NWDAF列表中包括的client NWDAF的优先级高于预设优先级阈值。
作为一种实现,每个client NWDAF具有对应的优先级。优先级高的client NWDAF的算法性能高于优先级低的client NWDAF的算法性能,或者,优先级高的client NWDAF的算法性能高于预设算法性能阈值;或者优先级高的client NWDAF的负载低于优先级低的client NWDAF的负载,或者优先级高的client NWDAF的负载低于预设负载阈值;或者优先级高的client NWDAF的算法性能评价指标高于优先级低的client NWDAF的算法性能评价指标,或者优先级高的client NWDAF的算法性能评价指标满足预设算法性能评价指标阈值。算法性能评价指标可以包括:均方误差、准确率、召回率、F-Score(准确率与召回率的调和平均)。
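以F-Score(准确率与召回率的调和平均)作为算法性能评价指标为各client NWDAF排定优先级的过程,可示意如下(各client的指标数值均为假设):

```python
# 按算法性能评价指标确定优先级的示意:以F-Score从高到低为各client NWDAF排序。

def f_score(precision, recall):
    """F-Score:准确率(precision)与召回率(recall)的调和平均。"""
    return 2 * precision * recall / (precision + recall)

clients = {  # 假设的各client NWDAF算法性能评价指标
    "client1": {"precision": 0.9, "recall": 0.8},
    "client2": {"precision": 0.7, "recall": 0.95},
    "client3": {"precision": 0.6, "recall": 0.6},
}

ranked = sorted(clients, key=lambda c: f_score(**clients[c]), reverse=True)
print(ranked)  # ['client1', 'client2', 'client3']
```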
在图8所示的实施例中,server NWDAF或client NWDAF将各自具备的联邦能力信息注册到NRF网元,辅助5G网络(例如server NWDAF)如果需要进行横向联邦学习时,通过NRF网元找到合适的client NWDAF进行联邦训练。
图8所示的实施例中步骤801中网元管理_网元注册请求服务操作对应上述第二请求。步骤803中的网元管理_网元注册请求服务操作对应上述第三请求。server NWDAF的覆盖范围可以对应上述实施例中的第一数据分析网元的范围。client NWDAF的范围可以对应上述实施例中的第二数据分析网元的范围。步骤806中的网元发现请求对应上述实施例中的第一请求。client NWDAF1~client NWDAF n对应上述实施例中的一个或多个第二数据分析网元。第二client NWDAF列表中包括的全部client NWDAF对应上述实施例中的一个或多个第三数据分析网元。
图9为本申请实施例提供的一种模型训练方法的实施例,该方法以server NWDAF确定执行横向联邦训练的client NWDAF为client NWDAF1和client NWDAF3为例,该方法包括:
步骤901、server NWDAF向client NWDAF1发送配置参数,相应的,client NWDAF1接收来自server NWDAF的配置参数。该配置参数用于client NWDAF1确定训练子模型时使用的参数。
示例性的,步骤901可以通过以下方式实现:server NWDAF触发到client NWDAF1的Nnwdaf_HorizontalFL_Create request服务操作,相应的,client NWDAF1接收来自server NWDAF的Nnwdaf_HorizontalFL_Create request服务操作。
其中,Nnwdaf_HorizontalFL_Create request服务中包括上述配置参数。例如,配置参数的内容可以参考上述实施例中的描述,此处不再赘述。
步骤902、server NWDAF向client NWDAF3发送配置参数,相应的,client NWDAF3接收来自server NWDAF的配置参数。该配置参数用于client NWDAF3确定训练子模型时使用的参数。
可以理解的是,client NWDAF3或client NWDAF1接收到配置参数之后,还可以向server NWDAF发送响应指示,该响应指示用于指示client NWDAF成功配置训练子模型时使用的参数。
步骤903、client NWDAF1或client NWDAF3根据各自获取的数据以及配置参数执行训练过程,得到子模型。
可以理解的是,client NWDAF1或client NWDAF3在每一次上报子模型过程中,client NWDAF1或client NWDAF3内部可以执行多轮子迭代训练,每轮子迭代训练对应一个最大子迭代次数,client NWDAF1或client NWDAF3可以将达到每轮子迭代训练对应的最大子迭代次数时得到的模型作为子模型。
步骤904、client NWDAF1向server NWDAF发送client NWDAF1训练得到的子模型。
示例性的,client NWDAF1触发到server NWDAF的Nnwdaf_HorizontalFL_Update request服务操作向server NWDAF发送client NWDAF1训练得到的子模型。
步骤905、client NWDAF3向server NWDAF发送client NWDAF3训练得到的子模型。
示例性的,client NWDAF3触发到server NWDAF的Nnwdaf_HorizontalFL_Update request服务操作向server NWDAF发送client NWDAF3训练得到的子模型。
其中,子模型本身可以是个黑盒,作为一个模型文件(Model File)发送给server NWDAF。子模型还可以具体定义,包括算法类型、模型参数等。
在一种可能的实现方式中,client NWDAF1或client NWDAF3向server NWDAF提供各自训练得到的子模型之后,还可以向server NWDAF请求更新的模型。
如图10所示,client NWDAF3向server NWDAF发送client NWDAF3训练得到的子模型3,client NWDAF1向server NWDAF发送client NWDAF1训练得到的子模型1。
步骤906、server NWDAF将由client NWDAF1训练得到的子模型以及由client NWDAF3训练得到的子模型进行聚合,得到本轮迭代后的更新的模型。
步骤907、server NWDAF向client NWDAF1以及client NWDAF3发送更新的模型。
可以理解的是,各个client NWDAF均会执行多轮子迭代训练,每一轮子迭代训练中各个client NWDAF均会训练得到本轮子迭代训练对应的子模型。每轮子迭代训练得到子模型之后,各个client NWDAF均会向server NWDAF上报本轮子迭代训练对应的子模型。
上述步骤903~步骤907可以循环进行,直到达到client NWDAF1以及client NWDAF3进行子模型训练时设置的训练终止条件。
步骤908、server NWDAF确定联邦训练终止后,根据更新的模型确定目标模型。
步骤909、server NWDAF可以为目标模型(称之为Trained Model或者Global Model或者Optimal Model)分配对应的版本号(Version ID)和/或分析结果类型标识(analytics ID)。
步骤910、server NWDAF向server NWDAF的范围内的全部或者部分client NWDAF发送目标模型以及目标模型对应的版本号以及分析结果类型标识。
例如,server NWDAF触发到server NWDAF的范围内的全部或者部分client NWDAF的Nnwdaf_HorizontalFL_Update Acknowledge服务操作,以向server NWDAF的范围内的全部或者部分client NWDAF发送目标模型以及目标模型对应的版本号Version ID以及分析结果类型标识analytics ID。
如图10所示,server NWDAF向client NWDAF1~client NWDAFn发送目标模型以及目标模型对应的模型标识Model ID、版本号Version ID以及分析结果类型标识analytics ID中的至少一个。
需要说明的是,虽然在进行模型训练时,server NWDAF的范围内的client NWDAF1和client NWDAF3参与了训练,而server NWDAF的范围内除client NWDAF1和client NWDAF3外的其他client NWDAF未参与训练,但是其他client NWDAF依然也可以分享目标模型。
步骤911、client NWDAF1和client NWDAF3向NRF网元发送目标模型以及目标模型对应的模型标识Model ID、版本号Version ID以及分析结果类型标识analytics ID中的至少一个。
例如,client NWDAF1和client NWDAF3分别触发到NRF网元的Nnrf_NFManagement_NFRegister_request服务操作,将目标模型对应的analytics ID、Version ID以及对应的有效范围(区域、时间段等)注册到NRF网元,以告知NRF网元该client NWDAF1和client NWDAF3支持该analytics ID的分析。
注意:本步骤中analytics ID对应的有效范围由每个client NWDAF根据参与目标模型训练的数据进行确定。对于其他client NWDAF以及server NWDAF来说该参与训练的数据是未知的。
步骤912、server NWDAF将支持的analytics ID以及对应的有效范围注册到NRF网元。
本申请实施例中步骤912中analytics ID对应的有效范围包括client NWDAF上该analytics ID的有效范围。
将server NWDAF支持的analytics ID也注册到NRF网元,可以应对NWDAF分层部署的场景。假设第三方AF网元或者OAM网元向网络侧NWDAF请求大的区域内该analytics ID对应的数据分析结果,在这种情况下,AF网元或者OAM网元首先从NRF网元查询到server NWDAF,紧接着server NWDAF可以分别向其他client NWDAF请求子区域数据分析结果,然后整合以后发送给AF网元或者OAM网元。
图9所示的实施例中,在5G网络中引入联邦学习训练过程,使得数据可以不出参与联邦学习训练的各个client NWDAF的本域,由参与联邦学习训练的各个client NWDAF根据获取的数据进行子模型训练,然后参与联邦学习训练的各个client NWDAF向server NWDAF提供各自在每轮训练中得到的子模型,最终由server NWDAF根据子模型聚合得到更新的模型,进而得到目标模型,从而完成整个模型训练过程。该方法不仅可以避免数据泄露,并且由于数据训练由client NWDAF执行,这种分布式训练过程同样可以加快整个模型训练的速度。
如图11所示,针对S-NSSAI=A类型的网络切片a,可以部署一个server NWDAF为该网络切片a提供服务,然后在该网络切片a服务的不同区域或者该网络切片a对应的不同切片实例上部署至少一个client NWDAF。如图11所示,网络切片a服务于区域1、区域2以及区域3,网络切片a中部署了切片实例:切片实例(network slice instance,NSI)1、NSI2以及NSI3。在区域1部署client NWDAF1,或者,client NWDAF1为NSI1服务。在区域2部署client NWDAF2,或者,client NWDAF2为NSI2服务。在区域3部署client NWDAF3,或者,client NWDAF3为NSI3服务。
在NWDAF的信息注册过程中,server NWDAF将支持的NWDAF类型(例如,server)、支持的联邦学习能力信息(横向联邦学习类型、算法信息)、支持analytics ID=Service Experience数据分析等信息注册到NRF网元。client NWDAF1~client NWDAF3将各自支持的NWDAF类型(例如,client)、支持的联邦学习能力信息(横向联邦学习类型、算法信息)、支持analytics ID=Service Experience数据分析等信息注册到NRF网元,参见上述步骤801~步骤804的注册过程。例如,client NWDAF1、client NWDAF2以及client NWDAF3支持Horizontal FL,且类型为client。
之后,OAM触发到server NWDAF的订阅请求,该订阅请求用于订阅该网络切片a的业务的质量体验(quality of experience,QoE)或者业务体验(service experience或者service mean opinion score或者service MOS)。基于来自OAM的订阅请求的触发,server NWDAF根据需求的client NWDAF支持的类型、范围、联邦学习能力信息、analytics ID=Service Experience通过NRF网元查询能够进行横向联邦学习的client NWDAF列表,并从中筛选出Load低于负载阈值的目标client NWDAF(例如,client NWDAF1、client NWDAF2以及client NWDAF3)参与横向联邦训练。
在联邦学习准备阶段,server NWDAF首先确定需要通过线性回归确定业务体验与网络数据的之间的关系模型。即业务体验(Service MOS)模型,可以表征为:
h(x)=w_0x_0+w_1x_1+w_2x_2+w_3x_3+w_4x_4+w_5x_5+…+w_Dx_D
其中,
-h(x)表示业务体验,即Service MOS,如表2所示。
-x_i(i=0,1,2,...,D)表示网络数据,如表3所示,D为网络数据的维度。
-w_i(i=0,1,2,...,D)为每个网络数据影响业务体验的权重。
表2来自AF网元的业务数据
表3来自5G NF的网络数据
在训练阶段:
1)、server NWDAF首先基于历史数据确定一个初始Service MOS模型,然后将该初始Service MOS模型、每个x_i(i=0,1,2,...,D)对应的数据类型(也称为特征Feature)、算法类型线性回归、最大迭代次数等下发给参与训练的client NWDAF1~client NWDAF3。
2)、client NWDAF1~client NWDAF3中的每个client NWDAF分别计算各自的损失函数针对w_i(i=0,1,2,...,D)的梯度,本申请实施例可以称之为子模型或者client NWDAF训练中间结果。之后,client NWDAF1~client NWDAF3将各自训练得到的子模型以及各自参与训练的样本数目(即表2和表3中业务流个数)上报给server NWDAF。
3)、server NWDAF可以利用server NWDAF中的模型聚合模块将所有参与横向联邦训练的目标client NWDAF上报的子模型进行加权平均聚合,得到更新的模型。
4)、server NWDAF将更新的模型分别发送给各个参与横向联邦训练的client NWDAF1~client NWDAF3中的每个client NWDAF。之后,client NWDAF1~client NWDAF3根据更新的模型更新本地参数。client NWDAF1~client NWDAF3中的任一个client NWDAF判定迭代次数达到最大子迭代次数时,终止训练,并继续向server NWDAF发送达到最大迭代次数时得到的子模型。
5)、server NWDAF在确定达到联邦训练的终止条件时(如上述2)~4)的迭代次数达到了server NWDAF端配置的最大迭代次数),根据更新的模型得到目标模型,然后由server NWDAF中的模型管理模块为目标模型分配目标模型的标识、目标模型的版本号以及网络切片a中业务QoE对应的analytics ID中的一个或者多个。
6)、server NWDAF向client NWDAF1~client NWDAF3中的每个client NWDAF发送目标模型、以及目标模型的标识、目标模型的版本号、Analytics ID三者中的一个或者多个。
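上述训练阶段1)~4)的循环可以用下面的纯Python示意代码说明(数据、真实权重、学习率与迭代次数均为假设,仅演示各client计算本地损失函数梯度作为子模型、server按样本数加权平均聚合并下发更新的模型的流程,省略了最大训练时间等配置):

```python
# 横向联邦训练线性回归Service MOS模型的示意:各client持有本地数据,
# 每轮上报本地梯度(子模型)与样本数,server加权聚合后更新并下发模型。
import random

random.seed(0)
true_w = [2.0, -1.0, 0.5]          # 假设的真实权重(w_0为偏置项)

def make_local_data(n):
    """生成某个client的本地样本:x_0恒为1,标签y=h(x)=Σ w_i*x_i。"""
    data = []
    for _ in range(n):
        x = [1.0, random.gauss(0, 1), random.gauss(0, 1)]
        y = sum(wi * xi for wi, xi in zip(true_w, x))
        data.append((x, y))
    return data

clients = [make_local_data(n) for n in (100, 150, 200)]   # client NWDAF1~3

def local_gradient(data, w):
    """均方损失对各w_i的梯度:(2/n)·Σ (w·x - y)·x_i,即上报的子模型。"""
    grad = [0.0] * len(w)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i in range(len(w)):
            grad[i] += 2 * err * x[i] / len(data)
    return grad

w = [0.0, 0.0, 0.0]                # server下发的初始模型
for _ in range(200):               # 最大联邦训练次数(假设值)
    grads = [(local_gradient(data, w), len(data)) for data in clients]
    total = sum(n for _, n in grads)
    # server按样本数加权平均聚合各client上报的子模型
    agg = [sum(g[i] * n for g, n in grads) / total for i in range(3)]
    w = [wi - 0.1 * gi for wi, gi in zip(w, agg)]   # 更新模型并下发
print([round(wi, 2) for wi in w])  # 收敛到接近真实权重
```

由于示例数据无噪声,聚合训练后的权重会收敛到假设的真实权重附近。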
在推理阶段:
A、假设OAM为了优化网络切片a的资源配置,向server NWDAF订阅网络切片a的业务QoE信息。
B、server NWDAF向下辖的client NWDAF1~client NWDAF3中的每个client NWDAF请求各自对应的每个子区域或者切片实例内的业务QoE信息。
C、client NWDAF1向server NWDAF发送子区域1或者NSI1的业务QoE信息,client NWDAF2向server NWDAF发送子区域2或者NSI2的业务QoE信息,client NWDAF3向server NWDAF发送子区域3或者NSI3的业务QoE信息,之后由server NWDAF汇总所有的子区域或者切片实例内的业务QoE信息得到网络切片a的业务QoE信息,发送给OAM。
具体的,client NWDAF1根据目标模型以及子区域1或者NSI1对应的数据,得到子区域1或者NSI1的业务QoE信息。client NWDAF2根据目标模型以及子区域2或者NSI2对应的数据,得到子区域2或者NSI2的业务QoE信息。client NWDAF3根据目标模型以及子区域3或者NSI3对应的数据,得到子区域3或者NSI3的业务QoE信息。
D、OAM根据网络切片a的业务QoE信息判定网络切片a的SLA是否满足,如果不满足,可以通过调整网络切片a的空口资源或者核心网资源或者传输网配置使得该网络切片a的SLA得到满足。
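推理阶段B~C中,各client NWDAF利用目标模型计算本子区域或切片实例的业务QoE,server NWDAF按样本数加权汇总得到网络切片a的业务QoE,该过程可示意如下(模型参数与网络数据样本均为假设):

```python
# 推理阶段的示意:各client NWDAF对本域样本逐条计算h(x)=Σ w_i*x_i并取平均
# 作为本域业务QoE,server NWDAF按样本数加权汇总得到整个网络切片的业务QoE。

def local_qoe(model_w, samples):
    """返回(本域业务QoE均值, 样本数)。"""
    scores = [sum(w * x for w, x in zip(model_w, s)) for s in samples]
    return sum(scores) / len(scores), len(scores)

model_w = [3.0, 0.5]                        # 假设的目标模型参数(w_0, w_1)
nsi1 = [[1.0, 2.0], [1.0, 4.0]]             # NSI1内的网络数据样本(x_0=1)
nsi2 = [[1.0, 0.0], [1.0, 2.0], [1.0, 4.0]] # NSI2内的网络数据样本

reports = [local_qoe(model_w, nsi) for nsi in (nsi1, nsi2)]  # 各client上报
total = sum(n for _, n in reports)
slice_qoe = sum(q * n for q, n in reports) / total           # server汇总
print(round(slice_qoe, 2))  # 4.2
```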
上述主要从各个网元之间交互的角度对本申请实施例的方案进行了介绍。可以理解的是,各个网元,例如第一数据分析网元、服务发现网元、第三数据分析网元等为了实现上述功能,其包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例第一数据分析网元、服务发现网元、第三数据分析网元进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分 方式。
上面结合图6至图11,对本申请实施例的方法进行了说明,下面对本申请实施例提供的执行上述方法的通信装置进行描述。本领域技术人员可以理解,方法和装置可以相互结合和引用,本申请实施例提供的通信装置可以执行上述确定策略的方法中由第一数据分析网元、服务发现网元、第三数据分析网元执行的步骤。
图12示出了上述实施例中所涉及的通信装置,该通信装置可以包括:通信单元1202和处理单元1201。其中,处理单元1201用于支持该通信装置执行信息处理的动作。通信单元1202用于支持该通信装置执行信息接收或者发送的动作。
一种示例,该通信装置为第一数据分析网元,或者为应用于第一数据分析网元中的芯片。在这种情况下,通信单元1202用于支持该通信装置执行上述实施例的图6的步骤601中由第一数据分析网元执行的发送动作。通信单元1202,用于支持该通信装置执行图6的步骤603中由第一数据分析网元执行的接收动作。处理单元1201,还用于支持通信装置执行上述实施例中由第一数据分析网元执行的处理的动作。
在一种可能的实施例中,通信单元1202,还用于支持通信装置执行上述实施例中的步骤701、步骤712、步骤714中由第一数据分析网元执行的发送动作。处理单元1201,还用于支持通信装置执行上述实施例中步骤708、步骤711、步骤713。
另一种示例,该通信装置为第三数据分析网元,或者为应用于第三数据分析网元中的芯片。在这种情况下处理单元1201,用于支持该通信装置执行上述实施例中的步骤709中由第三数据分析网元执行的处理动作。通信单元1202,用于支持该通信装置执行上述实施例中的步骤710中由第三数据分析网元执行的发送动作。
在一种可能的实现方式中,通信单元1202还用于支持该通信装置执行上述实施例中的步骤712中由第三数据分析网元执行的接收动作,步骤714中由第二数据分析网元执行的接收动作,步骤703中由第二数据分析网元执行的发送动作。
再一种示例,该通信装置为服务发现网元,或者为应用于服务发现网元中的芯片。在这种情况下,通信单元1202用于支持该通信装置执行上述实施例的图6的步骤601中由服务发现网元执行的接收动作。处理单元1201,还用于支持通信装置执行上述实施例中步骤602中由服务发现网元执行的处理动作。通信单元1202,用于支持该通信装置执行图6的步骤603中由服务发现网元执行的发送动作。
在一种可能的实施例中,通信单元1202,还用于支持通信装置执行上述实施例中的步骤701、步骤703中由服务发现网元执行的接收动作。处理单元1201,用于支持该通信装置执行上述实施例中的步骤702、步骤704中由服务发现网元执行的动作。
图13示出了上述实施例中所涉及的通信装置的一种可能的逻辑结构示意图。该通信装置包括:处理模块1312和通信模块1313。处理模块1312用于对通信装置的动作进行控制管理,例如,处理模块1312用于执行在通信装置进行信息/数据处理的步骤。通信模块1313用于支持通信装置进行信息/数据发送或者接收的步骤。
在一种可能的实施例中,通信装置还可以包括存储模块1311,用于存储通信装置的程序代码和数据。
一种示例,该通信装置为第一数据分析网元,或者为应用于第一数据分析网元中的芯片。在这种情况下,通信模块1313用于支持该通信装置执行上述实施例的图6的步骤601中由第一数据分析网元执行的发送动作。通信模块1313,用于支持该通信装置执行图6的步骤603中由第一数据分析网元执行的接收动作。处理模块1312,还用于支持通信装置执行上述实施例中由第一数据分析网元执行的处理的动作。
在一种可能的实施例中,通信模块1313,还用于支持通信装置执行上述实施例中的步骤701、步骤712、步骤714中由第一数据分析网元执行的发送动作。处理模块1312,还用于支持通信装置执行上述实施例中步骤708、步骤711、步骤713。
另一种示例,该通信装置为第三数据分析网元,或者为应用于第三数据分析网元中的芯片。在这种情况下处理模块1312,用于支持该通信装置执行上述实施例中的步骤709中由第三数据分析网元执行的处理动作。通信模块1313,用于支持该通信装置执行上述实施例中的步骤710中由第三数据分析网元执行的发送动作。
在一种可能的实现方式中,通信模块1313还用于支持该通信装置执行上述实施例中的步骤712中由第三数据分析网元执行的接收动作,步骤714中由第二数据分析网元执行的接收动作,步骤703中由第二数据分析网元执行的发送动作。
再一种示例,该通信装置为服务发现网元,或者为应用于服务发现网元中的芯片。在这种情况下,通信模块1313用于支持该通信装置执行上述实施例的图6的步骤601中由服务发现网元执行的接收动作。处理模块1312,还用于支持通信装置执行上述实施例中步骤602中由服务发现网元执行的处理动作。通信模块1313,用于支持该通信装置执行图6的步骤603中由服务发现网元执行的发送动作。
在一种可能的实施例中,通信模块1313,还用于支持通信装置执行上述实施例中的步骤701、步骤703中由服务发现网元执行的接收动作。处理模块1312,用于支持该通信装置执行上述实施例中的步骤702、步骤704中由服务发现网元执行的动作。
其中,处理模块1312可以是处理器或控制器,例如可以是中央处理器单元,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。通信模块1313可以是收发器、收发电路或通信接口等。存储模块1311可以是存储器。
当处理模块1312为处理器1401或处理器1405,通信模块1313为通信接口1403时,存储模块1311为存储器1402时,本申请所涉及的通信装置可以为图14所示的通信设备。
图14所示为本申请实施例提供的通信设备的硬件结构示意图。该通信设备包括处理器1401,通信线路1404以及至少一个通信接口(图14中仅是示例性的以包括通信接口1403为例进行说明)。
一种可能的实现方式,该通信设备还可以包括存储器1402。
处理器1401可以是一个通用中央处理器(central processing unit,CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本申请方案程序执行的集成电路。
通信线路1404可包括一通路,在上述组件之间传送信息。
通信接口1403,使用任何收发器一类的装置,用于与其他设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area  networks,WLAN)等。
存储器1402可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过通信线路1404与处理器相连接。存储器也可以和处理器集成在一起。
其中,存储器1402用于存储执行本申请方案的计算机执行指令,并由处理器1401来控制执行。处理器1401用于执行存储器1402中存储的计算机执行指令,从而实现本申请上述实施例提供的通信方法。
一种可能的实现方式,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
在具体实现中,作为一种实施例,处理器1401可以包括一个或多个CPU,例如图14中的CPU0和CPU1。
在具体实现中,作为一种实施例,通信设备可以包括多个处理器,例如图14中的处理器1401和处理器1405。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
一种示例,该通信设备为第一数据分析网元,或者为应用于第一数据分析网元中的芯片。在这种情况下,通信接口1403用于支持该通信设备执行上述实施例的图6的步骤601中由第一数据分析网元执行的发送动作。通信接口1403,用于支持该通信设备执行图6的步骤603中由第一数据分析网元执行的接收动作。处理器1401以及处理器1405,还用于支持通信设备执行上述实施例中由第一数据分析网元执行的处理的动作。
在一种可能的实施例中,通信接口1403,还用于支持通信设备执行上述实施例中的步骤701、步骤712、步骤714中由第一数据分析网元执行的发送动作。处理器1401以及处理器1405,还用于支持通信设备执行上述实施例中步骤708、步骤711、步骤713。
另一种示例,该通信设备为第三数据分析网元,或者为应用于第三数据分析网元中的芯片。在这种情况下处理器1401以及处理器1405,用于支持该通信设备执行上述实施例中的步骤709中由第三数据分析网元执行的处理动作。通信接口1403,用于支持该通信设备执行上述实施例中的步骤710中由第三数据分析网元执行的发送动作。
在一种可能的实现方式中,通信接口1403还用于支持该通信设备执行上述实施例中的步骤712中由第三数据分析网元执行的接收动作,步骤714中由第二数据分析网元执行的接收动作,步骤703中由第二数据分析网元执行的发送动作。
再一种示例,该通信设备为服务发现网元,或者为应用于服务发现网元中的芯片。在这种情况下,通信接口1403用于支持该通信设备执行上述实施例的图6的步骤601中由服务发现网元执行的接收动作。处理器1401以及处理器1405,还用于支持通信设备执行上述实施例中步骤602中由服务发现网元执行的处理动作。通信接口1403,用于支持该通信设备执行图6的步骤603中由服务发现网元执行的发送动作。
在一种可能的实施例中,通信接口1403,还用于支持通信设备执行上述实施例中的步骤701、步骤703中由服务发现网元执行的接收动作。处理器1401以及处理器1405,用于支持该通信设备执行上述实施例中的步骤702、步骤704中由服务发现网元执行的动作。
图15是本申请实施例提供的芯片150的结构示意图。芯片150包括一个或两个以上(包括两个)处理器1510和通信接口1530。
一种可能的实现方式,该芯片150还包括存储器1540,存储器1540可以包括只读存储器和随机存取存储器,并向处理器1510提供操作指令和数据。存储器1540的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。
在一些实施方式中,存储器1540存储了如下的元素,执行模块或者数据结构,或者他们的子集,或者他们的扩展集。
在本申请实施例中,通过调用存储器1540存储的操作指令(该操作指令可存储在操作系统中),执行相应的操作。
一种可能的实现方式中为:第一数据分析网元、第三数据分析网元、服务发现网元所用的芯片的结构类似,不同的装置可以使用不同的芯片以实现各自的功能。
处理器1510控制第一数据分析网元、第三数据分析网元、服务发现网元中任一个的处理操作,处理器1510还可以称为中央处理单元(central processing unit,CPU)。
存储器1540可以包括只读存储器和随机存取存储器,并向处理器1510提供指令和数据。存储器1540的一部分还可以包括NVRAM。例如应用中存储器1540、通信接口1530以及存储器1540通过总线系统1520耦合在一起,其中总线系统1520除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图15中将各种总线都标为总线系统1520。
上述本申请实施例揭示的方法可以应用于处理器1510中,或者由处理器1510实现。处理器1510可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1510中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1510可以是通用处理器、数字信号处理器(digital signal processing,DSP)、ASIC、现成可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1540,处理器1510读取存储器1540中的信息,结合其硬件完成上述方法的步骤。
一种可能的实现方式中,通信接口1530用于执行图6-图7所示的实施例中的第一数据分析网元、第三数据分析网元、服务发现网元的接收和发送的步骤。处理器1510用于执行图6-图7所示的实施例中的第一数据分析网元、第三数据分析网元、服务发现网 元的处理的步骤。
以上通信单元可以是该装置的一种通信接口,用于从其它装置接收信号。例如,当该装置以芯片的方式实现时,该收发单元是该芯片用于从其它芯片或装置接收信号或发送信号的通信接口。
一方面,提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图6~图7中第一数据分析网元的功能。
一方面,提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图6~图7中第三数据分析网元的功能。
一方面,提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当指令被运行时,实现如图6~图7中服务发现网元的功能。
一方面,提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图6~图7中第一数据分析网元的功能。
又一方面,提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图6~图7中第三数据分析网元的功能。
又一方面,提供一种包括指令的计算机程序产品,计算机程序产品中包括指令,当指令被运行时,实现如图6~图7中服务发现网元的功能。
一方面,提供一种芯片,该芯片应用于第一数据分析网元中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图6~图7中第一数据分析网元的功能。
又一方面,提供一种芯片,该芯片应用于第三数据分析网元中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图6~图7中第三数据分析网元的功能。
又一方面,提供一种芯片,该芯片应用于服务发现网元中,芯片包括至少一个处理器和通信接口,通信接口和至少一个处理器耦合,处理器用于运行指令,以实现如图6~图7中服务发现网元的功能。
本申请实施例提供一种通信系统,该通信系统包括:第一数据分析网元和服务发现网元。其中,第一数据分析网元用于执行如图6~图7中任一个附图中由第一数据分析网元执行的功能,服务发现网元用于执行图6~图7中的任一个由服务发现网元执行的步骤。
一种可能的实现方式,该通信系统还可以包括第三数据分析网元。其中,第三数据分析网元用于执行图6~图7中由第一数据分析网元以及第三数据分析网元执行的功能。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其它可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看附图、公开内容、以及所附权利要求书,可理解并实现公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包括这些改动和变型在内。

Claims (35)

  1. 一种通信方法,其特征在于,包括:
    第一数据分析网元向服务发现网元发送第一请求,所述第一请求用于请求第二数据分析网元的信息,所述第一请求包括:分布式学习的信息和第一指示信息中的一个或多个,其中,所述分布式学习的信息包括分布式学习的类型,所述第一指示信息用于指示所述第二数据分析网元的类型;
    所述第一数据分析网元接收来自所述服务发现网元的一个或多个所述第二数据分析网元的信息,所述第二数据分析网元支持所述分布式学习的类型。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述第一数据分析网元根据所述一个或多个所述第二数据分析网元的信息确定进行分布式学习的第三数据分析网元的信息。
  3. 根据权利要求2所述的方法,其特征在于,所述第三数据分析网元的负载低于预设负载阈值,或者,所述第三数据分析网元的优先级高于预设优先级阈值。
  4. 根据权利要求2~3任一项所述的方法,其特征在于,所述分布式学习的信息还包括分布式学习支持的算法信息,所述第二数据分析网元或所述第三数据分析网元支持所述分布式学习支持的算法信息对应的算法。
  5. 根据权利要求2~4任一项所述的方法,其特征在于,所述方法还包括:
    所述第一数据分析网元接收来自所述第三数据分析网元的子模型,所述子模型由所述第三数据分析网元根据所述第三数据分析网元获取到的数据进行训练得到;
    所述第一数据分析网元根据所述第三数据分析网元的子模型确定更新的模型;
    所述第一数据分析网元向所述第三数据分析网元发送所述更新的模型。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    所述第一数据分析网元根据所述更新的模型确定目标模型;
    所述第一数据分析网元向所述第二数据分析网元发送所述目标模型,以及所述目标模型对应的以下信息中的一个或者多个:模型标识、模型版本号或者数据分析标识。
  7. 根据权利要求5~6任一项所述的方法,其特征在于,所述第一数据分析网元接收来自所述第三数据分析网元的子模型之前,所述方法还包括:
    所述第一数据分析网元向所述第三数据分析网元发送配置参数,所述配置参数为所述第三数据分析网元确定所述子模型时使用的参数。
  8. 根据权利要求7所述的方法,其特征在于,所述配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、或最大等待时间。
  9. 根据权利要求1~8任一项所述的方法,其特征在于,所述分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个;
    所述第二数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
  10. 根据权利要求1~9任一项所述的方法,其特征在于,所述方法还包括:
    所述第一数据分析网元向所述服务发现网元发送第二请求,所述第二请求用于请求注册所述第一数据分析网元的信息,所述第一数据分析网元的信息包括所述第一数据分析网元对应的以下信息中的一个或者多个:所述分布式学习的信息、所述第一数据分析网元的范围、或第二指示信息,所述第二指示信息用于指示所述第一数据分析网元的类型。
  11. 根据权利要求2~10任一项所述的方法,其特征在于,所述第一请求还包括所述第一数据分析网元的范围,所述第二数据分析网元的范围或所述第三数据分析网元的范围位于所述第一数据分析网元的范围内。
  12. 根据权利要求11所述的方法,其特征在于,所述第一数据分析网元的范围包括以下信息中的一个或者多个:所述第一数据分析网元服务的区域、所述第一数据分析网元归属的公用陆地移动网PLMN的标识、所述第一数据分析网元服务的网络切片的信息、所述第一数据分析网元服务的数据网络名称DNN、或所述第一数据分析网元的设备商信息。
  13. 根据权利要求10~12任一项所述的方法,其特征在于,所述第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
  14. 根据权利要求1~13任一项所述的方法,其特征在于,所述分布式学习为联邦学习。
  15. 根据权利要求1~14任一项所述的方法,其特征在于,所述第二数据分析网元为终端。
  16. 一种通信装置,其特征在于,包括:
    通信单元,用于向服务发现网元发送第一请求,所述第一请求用于请求第二数据分析网元的信息,所述第一请求包括:分布式学习的信息和第一指示信息中的一个或多个,其中,所述分布式学习的信息包括分布式学习的类型,所述第一指示信息用于指示所述第二数据分析网元的类型;
    所述通信单元,还用于接收来自所述服务发现网元的一个或多个所述第二数据分析网元的信息,所述第二数据分析网元支持所述分布式学习的类型。
  17. 根据权利要求16所述的装置,其特征在于,所述装置还包括:处理单元,用于根据所述一个或多个所述第二数据分析网元的信息确定进行分布式学习的第三数据分析网元的信息。
  18. 根据权利要求17所述的装置,其特征在于,所述第三数据分析网元的负载低于预设负载阈值,或者,所述第三数据分析网元的优先级高于预设优先级阈值。
  19. 根据权利要求16~18任一项所述的装置,其特征在于,所述分布式学习的信息还包括分布式学习支持的算法信息,相应的,所述第二数据分析网元或所述第三数据分析网元支持所述分布式学习支持的算法信息对应的算法。
  20. 根据权利要求17~19任一项所述的装置,其特征在于,所述通信单元,还用于接收来自所述第三数据分析网元的子模型,所述子模型由所述第三数据分析网元根据所述第三数据分析网元获取到的数据进行训练得到;
    处理单元,还用于根据所述第三数据分析网元的子模型确定更新的模型;
    所述通信单元,还用于向所述第三数据分析网元发送所述更新的模型。
  21. 根据权利要求20所述的装置,其特征在于,所述处理单元,还用于根据所述更新的模型确定目标模型;
    所述通信单元,还用于向所述第二数据分析网元发送所述目标模型,以及所述目标模型对应的以下信息中的一个或者多个:模型标识、模型版本号或者数据分析标识。
  22. 根据权利要求20~21任一项所述的装置,其特征在于,所述通信单元,还用于向一个或多个所述第三数据分析网元发送配置参数,所述配置参数为所述第三数据分析网元确定所述子模型时使用的参数。
  23. 根据权利要求22所述的装置,其特征在于,所述配置参数包括以下信息中的一个或多个:初始模型、训练集选择标准、特征生成方法、训练终止条件、最大训练时间、或最大等待时间。
  24. 根据权利要求16~23任一项所述的装置,其特征在于,所述分布式学习的类型包括横向学习、纵向学习以及迁移学习中的一个;
    所述第二数据分析网元的类型为以下中的一个:客户端、本地训练器、或者局部训练者。
  25. 根据权利要求16~24任一项所述的装置,其特征在于,所述通信单元,还用于向所述服务发现网元发送第二请求,所述第二请求用于请求注册第一数据分析网元的信息,所述第一数据分析网元的信息包括所述第一数据分析网元对应的以下信息中的一个或者多个:所述分布式学习的信息、所述第一数据分析网元的范围、或第二指示信息,所述第二指示信息用于指示所述第一数据分析网元的类型。
  26. 根据权利要求17~25任一项所述的装置,其特征在于,所述第一请求还包括所述第一数据分析网元的范围,所述第二数据分析网元的范围或所述第三数据分析网元的范围位于所述第一数据分析网元的范围内。
  27. 根据权利要求26所述的装置,其特征在于,所述第一数据分析网元的范围包括以下信息中的一个或者多个:所述第一数据分析网元服务的区域、所述第一数据分析网元归属的公用陆地移动网PLMN的标识、所述第一数据分析网元服务的网络切片的信息、所述第一数据分析网元服务的数据网络名称DNN、或所述第一数据分析网元的设备商信息。
  28. 根据权利要求25~27任一项所述的装置,其特征在于,所述第一数据分析网元的类型包括以下信息中的一个:服务端、协调器、中心训练者、全局训练者。
  29. 根据权利要求16~28任一项所述的装置,其特征在于,所述分布式学习包括联邦学习。
  30. 根据权利要求16~29任一项所述的装置,其特征在于,所述第二数据分析网元为终端。
  31. 一种芯片,其特征在于,所述芯片包括至少一个处理器和通信接口,所述通信接口和所述至少一个处理器耦合,所述至少一个处理器用于运行计算机程序或指令,以实现如权利要求1-15中任一项所述的通信方法,所述通信接口用于与所述芯片之外的其它模块进行通信。
  32. 一种通信装置,其特征在于,包括:处理器和通信接口;
    其中,所述通信接口用于执行如权利要求1~15中任一项所述的通信方法中在第一数据分析网元中进行消息收发的操作;
    所述处理器运行指令以执行如权利要求1~15中任一项所述的通信方法中在所述 第一数据分析网元中进行处理或控制的操作。
  33. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令被运行时,以实现上述权利要求1~15任一项所述的方法。
  34. 一种通信装置,其特征在于,包括:至少一个处理器,所述至少一个处理器用于运行存储器中存储的计算机程序或指令,以实现权利要求1~15任一项所述的方法。
  35. 一种通信系统,其特征在于,包括:第一数据分析网元以及服务发现网元;
    其中,所述第一数据分析网元用于执行上述权利要求1~15任一项所述的方法,所述服务发现网元用于根据所述第一数据分析网元的第一请求,向所述第一数据分析网元提供一个或多个第二数据分析网元的信息,一个或多个所述第二数据分析网元支持所述第一数据分析网元请求的分布式学习的类型。
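Claims 10 and 16 above describe a registration/discovery exchange with the service discovery network element. As a reading aid only (not part of the published claims), the sketch below models that exchange in plain Python; every name and field (`ServiceDiscoveryNE`, `learning_type`, `ne_type`, `scope`, the example identifiers) is an illustrative assumption, not an actual NWDAF or NRF API.

```python
# Illustrative sketch of the registration/discovery flow of claims 10 and 16.
# The claims define no concrete message format; all fields here are assumed.

class ServiceDiscoveryNE:
    """Plays the role of the service discovery network element."""

    def __init__(self):
        self.registry = []  # registered data analysis network elements

    def register(self, ne_info):
        # Second request (claim 10): register a network element's info,
        # e.g. its distributed-learning info, scope, and type indication.
        self.registry.append(ne_info)

    def discover(self, learning_type, ne_type):
        # First request (claim 16): return the elements matching the
        # requested distributed-learning type and network-element type.
        return [ne for ne in self.registry
                if ne["learning_type"] == learning_type
                and ne["ne_type"] == ne_type]

sd = ServiceDiscoveryNE()
sd.register({"id": "nwdaf-1", "learning_type": "federated",
             "ne_type": "client", "scope": {"plmn": "46000"}})
sd.register({"id": "nwdaf-2", "learning_type": "federated",
             "ne_type": "server", "scope": {"plmn": "46000"}})

clients = sd.discover("federated", "client")
print([ne["id"] for ne in clients])  # ['nwdaf-1']
```

A scope check as in claims 11 and 12 (area, PLMN, network slice, DNN) would be an additional filter inside `discover`; it is omitted here to keep the sketch minimal.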
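Claims 20 to 23 describe a round-based loop: send configuration parameters to the local trainers, collect sub-models trained on each trainer's own data, and merge them into an updated model. As a reading aid (not part of the claims), here is a minimal sketch of that loop using element-wise averaging in the style of federated averaging; the claims prescribe no particular aggregation rule, and all values and names below are made up for illustration.

```python
# Minimal federated-averaging sketch of the loop in claims 20-23.
# Sub-model "weights" are plain lists of floats; a real system would use
# tensors and typically data-size-weighted averages.

def aggregate(sub_models):
    """Merge the sub-models received from the third data analysis network
    elements into an updated model by element-wise averaging (claim 20)."""
    n = len(sub_models)
    return [sum(w) / n for w in zip(*sub_models)]

# Configuration parameters (claim 23) the server would send to the local
# trainers before training; the values are invented for this example.
config = {"initial_model": [0.0, 0.0],
          "training_termination": {"max_rounds": 2},
          "max_training_time_s": 60}

model = config["initial_model"]
for _ in range(config["training_termination"]["max_rounds"]):
    # Stand-in for local training on each trainer's own data (claim 20):
    # each trainer shifts the current model by a fixed local "update".
    sub_models = [[w + d for w, d in zip(model, delta)]
                  for delta in ([0.25, 0.5], [0.75, 0.0])]
    model = aggregate(sub_models)   # updated model, sent back to trainers

print(model)  # [1.0, 0.5] after two rounds
```

Claim 21's target model would be derived from the final `model` and distributed together with a model identifier and version number; that bookkeeping is omitted here.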
PCT/CN2021/075317 2020-04-29 2021-02-04 Communication method, apparatus, and system WO2021218274A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21797488.0A EP4132066A4 (en) 2020-04-29 2021-02-04 COMMUNICATION METHOD, DEVICE AND SYSTEM
US17/976,261 US20230083982A1 (en) 2020-04-29 2022-10-28 Communication method, apparatus, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010359339.6 2020-04-29
CN202010359339.6A CN113573331B (zh) 2020-04-29 2020-04-29 Communication method, apparatus, and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/976,261 Continuation US20230083982A1 (en) 2020-04-29 2022-10-28 Communication method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2021218274A1 true WO2021218274A1 (zh) 2021-11-04

Family

ID=78158683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075317 WO2021218274A1 (zh) 2020-04-29 2021-02-04 一种通信方法、装置及系统

Country Status (4)

Country Link
US (1) US20230083982A1 (zh)
EP (1) EP4132066A4 (zh)
CN (2) CN117320034A (zh)
WO (1) WO2021218274A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710330A (zh) * 2022-03-22 2022-07-05 East China Normal University Anomaly detection method based on heterogeneous hierarchical federated learning
WO2023125660A1 (zh) * 2021-12-29 2023-07-06 Huawei Technologies Co., Ltd. Communication method and apparatus
WO2023141985A1 (zh) * 2022-01-28 2023-08-03 Huawei Technologies Co., Ltd. Communication method and apparatus
WO2023185826A1 (zh) * 2022-03-28 2023-10-05 Vivo Mobile Communication Co., Ltd. Network element registration method, model request method, apparatus, network element, communication system, and storage medium
WO2023187679A1 (en) * 2022-03-30 2023-10-05 Telefonaktiebolaget Lm Ericsson (Publ) Distributed machine learning or federated learning in 5g core network
WO2024088572A1 (en) * 2023-01-05 2024-05-02 Lenovo (Singapore) Pte. Ltd. Registering and discovering external federated learning clients in a wireless communication system

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN116227943A (zh) * 2021-11-29 2023-06-06 China Telecom Corporation Limited Customer self-service analysis method, apparatus, and medium
CN114339834B (zh) * 2021-12-29 2024-03-29 China United Network Communications Group Co., Ltd. QoE configuration method, communication apparatus, and storage medium
WO2023137711A1 (en) * 2022-01-21 2023-07-27 Lenovo (Beijing) Limited Methods and apparatuses for artificial intelligence applications
CN114548426B (zh) * 2022-02-17 2023-11-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Asynchronous federated learning method, service prediction method, apparatus, and system
WO2023185822A1 (zh) * 2022-03-28 2023-10-05 Vivo Mobile Communication Co., Ltd. Network element registration method, model determination method, apparatus, network element, communication system, and storage medium
WO2024036453A1 (zh) * 2022-08-15 2024-02-22 Huawei Technologies Co., Ltd. Federated learning method and related apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109600243A (zh) * 2017-09-30 2019-04-09 Huawei Technologies Co., Ltd. Data analysis method and apparatus
WO2019158777A1 (en) * 2018-02-19 2019-08-22 NEC Laboratories Europe GmbH Network data analytics functionality enhancement and new service consumers
CN110677299A (zh) * 2019-09-30 2020-01-10 ZTE Corporation Network data collection method, apparatus, and system
CN110831029A (zh) * 2018-08-13 2020-02-21 Huawei Technologies Co., Ltd. Model optimization method and analytics network element

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
KR20110047778A (ko) * 2009-10-30 2011-05-09 Samsung Electronics Co., Ltd. Personal learning apparatus and method based on a wireless communication network
CN110119808A (zh) * 2018-02-06 2019-08-13 Huawei Technologies Co., Ltd. Machine-learning-based data processing method and related device
CN110972193B (zh) * 2018-09-28 2021-12-03 Huawei Technologies Co., Ltd. Slice information processing method and apparatus
WO2020076144A1 (ko) * 2018-10-12 2020-04-16 LG Electronics Inc. Method for configuring, in a network, the capability of a terminal supporting multiple radio access schemes in a wireless communication system, and apparatus therefor
CN114287007A (zh) * 2019-06-18 2022-04-05 Moloco, Inc. Method and system for providing machine learning services
US11616804B2 (en) * 2019-08-15 2023-03-28 Nec Corporation Thwarting model poisoning in federated learning
US11636438B1 (en) * 2019-10-18 2023-04-25 Meta Platforms Technologies, Llc Generating smart reminders by assistant systems
US11487969B2 (en) * 2020-02-18 2022-11-01 Xayn Ag Apparatuses, computer program products, and computer-implemented methods for privacy-preserving federated learning
KR20210108785A (ko) * 2020-02-26 2021-09-03 Samsung Electronics Co., Ltd. Method and apparatus for selecting a service in a wireless communication system

Non-Patent Citations (2)

Title
CHINA MOBILE; OPPO; ETRI: "KI#2, Sol#24: Update to Federated Learning among Multiple NWDAF Instances", 3GPP DRAFT; S2-2008025, vol. SA WG2, 25 October 2020 (2020-10-25), pages 1 - 5, XP051948223 *
See also references of EP4132066A4

Also Published As

Publication number Publication date
CN113573331B (zh) 2023-09-01
EP4132066A4 (en) 2023-08-30
CN113573331A (zh) 2021-10-29
EP4132066A1 (en) 2023-02-08
CN117320034A (zh) 2023-12-29
US20230083982A1 (en) 2023-03-16

Similar Documents

Publication Publication Date Title
WO2021218274A1 (zh) Communication method, apparatus, and system
US10856183B2 (en) Systems and methods for network slice service provisioning
CA3112926C (en) Slice information processing method and apparatus
WO2021258986A1 (zh) Data processing method and apparatus
WO2020103693A1 (zh) Resource information sending method, apparatus, and system
WO2020108003A1 (zh) User access control method, information sending method, and apparatus
WO2020108002A1 (zh) Transmission policy determination method, policy control method, and apparatus
WO2021031562A1 (zh) Information obtaining method and apparatus
EP3632083A1 (en) Edge cloud broker and method therein for allocating edge cloud resources
WO2022007899A1 (zh) UPF selection method and apparatus
WO2022033115A1 (zh) Communication method and communication apparatus
WO2022062362A1 (zh) Communication method, apparatus, and system
WO2020103517A1 (zh) Method, apparatus, and system for obtaining capability information of a terminal
WO2020147756A1 (zh) Session management method and apparatus
WO2021204299A1 (zh) Policy determination method, apparatus, and system
WO2019242698A1 (zh) Method, device, and system for managing a network element
US20230098362A1 (en) Background Data Transfer Policy Formulation Method, Apparatus, and System
WO2022033116A1 (zh) Communication method, communication apparatus, and system
WO2020015503A1 (zh) Method, apparatus, and system for selecting a session management function network element
CN116325686A (zh) Communication method and apparatus
WO2022188670A1 (zh) Network analytics transfer method, apparatus, and network function entity
WO2021218270A1 (zh) Communication method, apparatus, and system
WO2022032546A1 (zh) Communication method and apparatus
WO2024032031A1 (zh) Data analysis method and apparatus
WO2023050787A1 (zh) Charging method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21797488

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021797488

Country of ref document: EP

Effective date: 20221104

NENP Non-entry into the national phase

Ref country code: DE