WO2023146461A1 - Concealed learning - Google Patents

Concealed learning

Info

Publication number: WO2023146461A1
Authority: WO (WIPO PCT)
Prior art keywords: local, radio node, model, master, training
Application number: PCT/SE2023/050074
Other languages: English (en)
Inventors: Philipp BRUHN, Dinand Roeland, Göran HALL
Original assignee: Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2023146461A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G06N 3/0495 Quantised networks; Sparse networks; Compressed networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N 3/09 Supervised learning
    • G06N 3/098 Distributed learning, e.g. federated learning
    • G06N 3/0985 Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • the present disclosure relates, in general, to wireless communications and, more particularly, systems and methods for concealed learning.
  • machine learning may be used in a large scale, distributed, decentralized, cloud/virtualized environment where multiple stakeholders are operating and where performance is important from day one.
  • ML in mobile wireless networks is a highly complex issue where properties such as robustness, scalability, latency, and many other metrics need to be considered.
  • FL Federated Learning
  • UE user equipment
  • gNB gNodeB
  • AI Artificial Intelligence
  • ML Machine Learning
  • each local learner has its own (local) training dataset that does not need to be uploaded to the central server. Instead, each local learner computes an update to the latest global model, and only this update is communicated to the server. See, H. B. McMahan et al., “Communication-Efficient Learning of Deep Networks from Decentralized Data,” AISTATS, 2017.
  • a central entity referred to as master trainer, generates a single, average, model from all the local models. The new global model is then sent back to the local learners.
  • FIGURE 1 illustrates an overview of FL.
  • the global model resides in the top node, and training is done in the access sites or clients, depending on the use-case.
  • FL enables users to leverage the benefits of shared AI/ML models trained from a large, distributed dataset without the need of sharing the data to a central entity.
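  • As an illustrative, non-normative sketch of the averaging step described above, the following Python snippet shows how a master trainer could combine local model parameters into a single global model by (optionally weighted) averaging; the function and variable names are hypothetical and not part of the disclosure.

```python
import numpy as np

def federated_average(local_models, local_weights=None):
    """Combine local model parameters into one global model.

    local_models: list of dicts mapping parameter name -> numpy array
                  (e.g., layer weights and biases reported by local learners).
    local_weights: optional list of non-negative floats, e.g., the number of
                   training samples at each local site (FedAvg-style weighting).
    """
    if local_weights is None:
        local_weights = [1.0] * len(local_models)
    total = float(sum(local_weights))
    global_model = {}
    for name in local_models[0]:
        global_model[name] = sum(
            w / total * model[name] for w, model in zip(local_weights, local_models)
        )
    return global_model

# Example: two local sites report parameters for the same tiny model.
site_a = {"w": np.array([0.2, 0.4]), "b": np.array([0.1])}
site_b = {"w": np.array([0.6, 0.0]), "b": np.array([0.3])}
print(federated_average([site_a, site_b], local_weights=[100, 300]))
```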
  • the principal goal when training an AI/ML model, such as a linear regression or logistic regression model, is to minimize a loss function.
  • the AI/ML models used in FL may, for example, be neural networks.
  • a neural network consists of a set of neurons or activations connected by certain rules.
  • the input layer is responsible for receiving the input data.
  • the rightmost layer is called the output layer and gets the output data.
  • the hidden layers in the middle of the multi-hop neural network compute a function of the inputs.
  • each neuron on the nth layer is connected to all neurons on the (n-1)th layer.
  • the output of the (n-1)th layer is the input of the nth layer.
  • Each inter-neuron connection has an associated multiplicative weight, and each neuron in a layer has an associated additive bias term.
  • FIGURE 2 illustrates an example of a neural network.
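  • To make the layer structure above concrete, the short Python sketch below computes the output of a small fully connected network in which each neuron of layer n is connected to all neurons of layer n-1 through a multiplicative weight matrix and an additive bias term; the layer sizes and the sigmoid activation are illustrative assumptions, not requirements of the disclosure.

```python
import numpy as np

def sigmoid(z):
    # example activation function; others (ReLU, Tanh, ...) could equally be used
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate input x through a list of (weight matrix, bias vector) layers."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)   # each neuron: weighted sum of previous layer plus bias
    return a

rng = np.random.default_rng(0)
# 3-input network with one hidden layer of 4 neurons and 2 outputs
layers = [
    (rng.normal(size=(4, 3)), rng.normal(size=4)),   # input -> hidden
    (rng.normal(size=(2, 4)), rng.normal(size=2)),   # hidden -> output
]
print(forward(np.array([0.5, -1.0, 2.0]), layers))
```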
  • SGD Stochastic Gradient Descent
  • FL is not restricted to neural network models and SGD learning. Models may very well be of a different type, like a decision tree or a Markov chain.
  • FIGURE 3 illustrates a schematic overview of FL.
  • local models are trained at multiple locations, which are represented as Location A and Location B in FIGURE 3, and sent to some other location, which is represented as Location C in FIGURE 3.
  • Location C all local models are combined into a master model.
  • the illustrated example of FIGURE 3 includes two local trainers. In general, however, there are one or more local trainers.
  • the master model is sent back to local trainers and can be used for local inference as well as further local training.
  • Each training and data pre-processing function may be under the responsibility of a different organization or vendor.
  • vendor X may be responsible for data preprocessing and model training at location A
  • vendor Y may be responsible for the same at location B
  • vendor Z may be responsible for master training at location C.
  • the master trainer controls the FL by policies for the local sites. At least these training policies must describe what architecture and model packaging the master trainer expects from the local trainers. The master trainer needs to have enough information to interpret the local models when building the master model. This is indicated in FIGURE 3 by the dashed lines from the master trainer to the local trainers. Optionally, the master trainer may also prescribe how the feature engineering such as, for example, data pre-processing, needs to be performed. This is indicated in FIGURE 3 by the dashed lines from the master trainer to the local data processing blocks performed by the local trainers. In an extreme case, the training and data preprocessing policies sent by the master trainer provide details for how the local training shall be done (this may include all the hyperparameters) and how the feature engineering shall be done. In practice, the provision of these details by the master trainer basically demotes the local trainers to dumb entities, leaving all the intelligence at the master trainer.
  • FL is a very useful technology for certain use cases. As such, it is now being discussed in many standardization fora including 3GPP SA2, 3GPP SA5 and ORAN. It is expected to come up in other fora as well, like the 3GPP RAN groups. However, when performing FL in a multi-organization setting, data, policies and the details of AI/ML models of an organization are necessarily exposed to the other organizations.
  • a method by a first radio node operating as a master trainer for concealed learning includes transmitting, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages.
  • the first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.
  • a first radio node operating as a master trainer for concealed learning is adapted to transmit, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages.
  • the first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.
  • a method by a second radio node operating as a local trainer for concealed learning includes receiving, from a first radio node operating as a master trainer, one or more software packages for performing local training of and/or generating a local model.
  • the one or more software packages are received in a concealed format that prevents the second radio node from decrypting the one or more software packages.
  • the second radio node uses the one or more software packages to perform the local training of and/or the generating of the local model.
  • the second radio node transmits the local model to the first radio node in a concealed format that the master trainer is able to decrypt.
  • the second radio node receives, from the first radio node, a master model that is generated based on the local model.
  • a second radio node operating as a local trainer for concealed learning is adapted to receive, from a first radio node operating as a master trainer, one or more software packages for performing local training of and/or generating a local model.
  • the one or more software packages are received in a concealed format that prevents the second radio node from decrypting the one or more software packages.
  • the second radio node is adapted to use the one or more software packages to perform the local training of and/or the generating of the local model.
  • the second radio node is adapted to transmit the local model to the first radio node in a concealed format that the master trainer is able to decrypt.
  • the second radio node is adapted to receive, from the first radio node, a master model that is generated based on the local model.
  • a technical advantage may be that certain embodiments allow a first radio node (e.g., a master trainer located at a master site) to provide a model to a second radio node (e.g., a local trainer located at a local site) in a concealed form that allows the first radio node to maintain full control over the training. Certain embodiments eliminate the necessity for the first radio node to send detailed and complex training policies to the second radio node and/or eliminate the necessity for the first radio node to trust the second radio node that the training policies are applied fully.
  • a technical advantage may be that certain embodiments allow the first radio node to maintain full control over the data pre-processing. Accordingly, certain embodiments may eliminate the necessity for the first radio node to send detailed and complex data pre-processing policies to the second radio node and/or eliminate the necessity for the first radio node to trust the second radio node that the data pre-processing policies are applied fully.
  • a technical advantage may be that certain embodiments protect IP comprised within the model.
  • a technical advantage may be that certain embodiments protect IP comprised within the data pre-processing.
  • a technical advantage may be that certain embodiments maintain the advantages of FL such as, for example, the protection of local raw data and possibly saving data transport resources, while at the same time enabling the second radio node to further train the model.
  • a first radio node can maintain full control over which data is used to train the local model at a second radio node, without having to send detailed and complex data detection policies to the second radio node and having to trust the second radio node that such data detection policies are applied fully.
  • This enables the first radio node to ensure that certain (e.g., corrupted) data is not used to train the local model and, thus, does not adversely affect the performance of the model.
  • In the case of FL, this also ensures that such data does not adversely affect the performance of the master model either.
  • FIGURE 1 illustrates an overview of FL
  • FIGURE 2 illustrates an example of a neural network
  • FIGURE 3 illustrates a schematic overview of FL
  • FIGURE 4 illustrates a schematic overview of concealed FL, according to certain embodiments
  • FIGURE 5 illustrates a schematic overview of concealed FL as applied in a multivendor context, according to certain embodiments
  • FIGURE 6 illustrates example Application Program Interfaces (APIs) between training and execution and a platform, according to certain embodiments
  • FIGURE 7 illustrates example APIs between data pre-processing and a platform, according to certain embodiments
  • FIGURE 8 illustrates an example communication system, according to certain embodiments.
  • FIGURE 9 illustrates an example UE, according to certain embodiments.
  • FIGURE 10 illustrates an example network node, according to certain embodiments.
  • FIGURE 11 illustrates a block diagram of a host, according to certain embodiments.
  • FIGURE 12 illustrates a virtualization environment in which functions implemented by some embodiments may be virtualized, according to certain embodiments
  • FIGURE 13 illustrates a host communicating via a network node with a UE over a partially wireless connection, according to certain embodiments
  • FIGURE 14 illustrates an example method by a first radio node operating as a master trainer, according to certain embodiments
  • FIGURE 15 illustrates an example method by a second radio node operating as a local trainer, according to certain embodiments
  • FIGURE 16 illustrates another example method by a first radio node operating as a master trainer, according to certain embodiments.
  • FIGURE 17 illustrates another example method by a second radio node operating as a local trainer, according to certain embodiments.
  • a ‘node’ or ‘radio node’ can be a network node or a UE.
  • Examples of a network node include a NodeB, base station (BS), multi-standard radio (MSR) radio node such as an MSR BS, eNodeB (eNB), gNodeB (gNB), Master eNB (MeNB), Secondary eNB (SeNB), integrated access backhaul (IAB) node, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling a relay, base transceiver station (BTS), Central Unit (e.g. in a gNB), Distributed Unit (e.g. in a gNB), Baseband Unit, C-RAN, access point (AP), Remote Radio Unit (RRU), Remote Radio Head (RRH), node in a distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc.), Operations & Maintenance (O&M) node, Operations Support System (OSS) node, Self Organizing Network (SON) node, or positioning node (e.g. E-SMLC).
  • Examples of a UE include a device-to-device (D2D) UE, vehicular-to-vehicular (V2V) UE, machine type UE (MTC UE), machine-to-machine (M2M) capable UE, Personal Digital Assistant (PDA), tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), Universal Serial Bus (USB) dongle, etc.
  • radio access technology may refer to any RAT such as, for example, Universal Terrestrial Radio Access Network (UTRA), Evolved Universal Terrestrial Radio Access Network (E-UTRA), narrow band internet of things (NB-IoT), WiFi, Bluetooth, next generation RAT, NR, 4G, 5G, etc.
  • UTRA Universal Terrestrial Radio Access Network
  • E-UTRA Evolved Universal Terrestrial Radio Access Network
  • NB-IoT narrow band internet of things
  • Any of the equipment denoted by the terms node, network node or radio network node may be capable of supporting a single or multiple RATs.
  • a network node can also be a RAN node, a Core Network node, an OAM, a Service Management & Orchestration (SMO) node or system, a Network Management System (NMS), a Non-Real Time RAN Intelligent Controller (Non-RT RIC), a Real-Time RAN Intelligent Controller (RT-RIC), a gNB, eNB, Enhanced-gNB (en-gNB), Next Generation-eNB (ng-eNB), gNB-CU, gNB-Centralized Unit-Control Plane (gNB-CU-CP), gNB-Centralized Unit-User Plane (gNB-CU-UP), eNB-Centralized Unit (eNB-CU), eNB-Centralized Unit-Control Plane (eNB-CU-CP), eNB-Centralized Unit-User Plane (eNB-CU-UP), Integrated Access and Backhaul (IAB) node, IAB-donor Distributed Unit (DU), or IAB-donor Central Unit (CU).
  • FL is a technique that allows local entities such as, for example, a UE, a gNB, and other network nodes, to collectively use the advantages of shared AI/ML models trained by multiple local entities.
  • the problem that arises, however, is how to support the FL approach in standardized, multi-organization (e.g., multi-vendor) systems, while at the same time allowing for the protection of Intellectual Property (IP) that is included in the details of AI/ML models.
  • IP Intellectual Property
  • This IP includes, amongst others, training techniques and data preprocessing steps.
  • the master trainer may give the local trainers detailed instructions (i.e., policies) on how the local data pre-processing and local training shall be done.
  • the master trainer must provide at least enough instructions so that the different local models can be combined into a master model. By giving those instructions, the master trainer reveals sensitive details about the AI/ML model, such as hyperparameters, and the data pre-processing steps, like feature engineering.
  • Certain embodiments enable FL to be fully controlled by, and only visible to, one organization but to be done in multiple locations that can, in some embodiments, belong to different organizations.
  • certain embodiments described herein enable concealed FL for standardized, multi-organization systems.
  • in ordinary (i.e., non-concealed) FL, local data from the different sites remains local to those sites. More specifically, in a particular embodiment, the local data remains on site and is not visible to the organization responsible for training.
  • a master trainer sends a software package to one or more locations. At each location, the software package performs local training, which results in a local model.
  • the trained local models are sent back to the master trainer, where the master trainer generates a single master model.
  • the master trainer may send policies on how to perform training together with the training software package or separately.
  • the master model is sent together with those policies (this master model becomes the new local model).
  • the master trainer sends a second software package to the locations to perform local data pre-processing, where the pre-processed data is input for the local training.
  • the master trainer may send policies on how to perform the pre-processing together with the pre-processing software package or separately.
  • software packages, training and data pre-processing policies, and models and/or any pieces or portions thereof may be sent in a concealed way, in some embodiments.
  • one example is transfer learning, where a model is initially trained at a first location and then sent to a second location for further training (including the data pre-processing). The policies would be sent from the first location to the second location using the methods and techniques described herein.
  • data pre-processing and training and execution functions are done using software packages (of any form), which are executed on a provided platform at a local location associated with a local trainer.
  • the data pre-processing and training policies are sent as concealed messages from a master training environment associated with the master trainer to the local site of the local trainer location, where they are fed into the software packages to configure the data preprocessing and training and execution functions.
  • the concealed messages are encrypted; for example, these messages may only be decryptable and comprehensible within the given software and, thus, are not decryptable or otherwise decodable by the platform or, in general, by the organization owning or running the platform or by the local trainer owning the local data.
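  • As a minimal sketch of this concealment idea (not the claimed implementation), the Python snippet below assumes the master trainer embeds a symmetric key inside the training software package so that policies can only be read inside that package, and that the local model is encrypted with the master trainer's public key so that only the master trainer can decrypt it; key handling, packaging and model serialization are simplified assumptions made for illustration.

```python
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# --- Master trainer side -----------------------------------------------------
package_key = Fernet.generate_key()          # assumed to ship only inside the concealed package
master_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
master_public = master_private.public_key()

policies = {"optimizer": "sgd", "learning_rate": 0.01, "epochs": 5}
concealed_policies = Fernet(package_key).encrypt(json.dumps(policies).encode())

# --- Local trainer side (inside the concealed software package) --------------
# The platform only forwards opaque bytes; decryption happens inside the package.
visible_policies = json.loads(Fernet(package_key).decrypt(concealed_policies))

local_model = {"weights": [0.1, 0.2, 0.3]}   # produced by local training (illustrative)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
concealed_model = master_public.encrypt(json.dumps(local_model).encode(), oaep)

# --- Back at the master trainer ----------------------------------------------
recovered_model = json.loads(master_private.decrypt(concealed_model, oaep))
print(visible_policies, recovered_model)
```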
  • the models discussed herein may be AI and/or ML models.
  • the embodiments described herein may, for example, be applied to Radio Access Networks (RANs), where gNBs are the local trainers, or to Core Networks (CNs).
  • FIGURE 4 illustrates a high-level schematic 100 of concealed FL, according to certain embodiments. Similar to FIGURE 3 described above, local models 102a and 102b are trained by local trainers 105a and 105b at respective local locations, which are represented as Location A and Location B. The local models 102a and 102b are sent to a master location, which is represented as Location C. There, in a master training environment 115, the master trainer 120 combines the local models 102a and 102b to generate a master model 125.
  • while the example scenario depicted in FIGURE 4 includes two local trainers at two local locations for generating two local models, there may be any number of local trainers and/or locations for generating any number of local models.
  • the models mentioned herein may be AI and/or ML models, in particular embodiments.
  • each of local trainers 105a and 105b includes a training & execution module 130a and 130b, respectively.
  • the training and execution modules 130a and 130b generate the local models 102a and 102b, respectively.
  • the master trainer 120 controls the FL by providing policies for the training and execution modules 130a and 130b to use when training the local models 102a and 102b. This is indicated in FIGURE 4 by the dashed lines from the master trainer 120 to the training and execution modules 130a and 130b.
  • each of local trainers 105a and 105b also include data pre-processing modules 135a and 135b, respectively, which generate features that are then input into the respective training and execution modules 130a and 130b.
  • the master trainer 120 may prescribe how the data pre-processing needs to be performed. This is indicated in FIGURE 4 by the dashed lines from the master trainer 120 to the data pre-processing modules 135a and 135b.
  • the data pre-processing and training policies are sent as encrypted messages from a server or other computing device associated with the master trainer 120 to the local trainers 105a and 105b.
  • the data pre-processing modules 135a and 135b and the training and execution modules 130a and 130b are software functions and/or packages running on top of one or more execution platforms 140a and 140b at the local locations.
  • the platform operates to feed the data pre-processing and training policies into the software packages to configure the data pre-processing and training. Messages containing the software package and/or data and training policies are only decryptable and comprehensible in the given software module.
  • neither the platform operated by the local trainers 105a and 105b nor the organization owning or running the platform is able to decode or otherwise determine the contents of the software packages and/or data and training policies provided from the master trainer 120.
  • in other words, it is the modules 130a and 130b and 135a and 135b that receive the data and policies and perform the training of the local models 102a and 102b based on that input.
  • the policies provided by the master trainer 120 and the details relating to how the training is performed by these modules are not revealed to anyone other than the master trainer 120. At each location associated with the local trainers 105a and 105b, the software package performs local training, which results in a local model 102a, 102b. As described above, the trained local models 102a and 102b are sent back to the master trainer 120, and the master trainer 120 generates a single master model 125 from the locally trained local models 102a and 102b. The master trainer 120 then sends the master model back to the local trainers 105a and 105b. In a particular embodiment, the master model 125 then becomes the new local model used by the local trainers 105a and 105b.
  • the master trainer 120 sends software package(s), training and data pre-processing policies, and local trained models 102a and 102b to the local trainers 105a and 105b.
  • local trainers 105a and 105b send local trained models 102a and 102b to the master trainer 120.
  • the master trainer 120 may send training and data pre-processing policies along with or within the training and/or data pre-processing software package.
  • the master trainer 120 may send the policies for how to perform training and/or data pre-processing separately from the software package.
  • each of the software packages, policies, and models are sent in a concealed way.
  • the embodiments described herein enable FL to be performed using concealed software packages, policies, and AI/ML models so as to protect the IP of both the master trainer and the local trainers.
  • the methods, techniques, and systems described herein also apply to non-FL learning approaches.
  • the methods and techniques may be applied to transfer learning, which includes a model that is initially trained at a first location and then sent to a second location for further training.
  • the training performed at the second location may also include data pre-processing. Similar to the embodiments described above with regard to FL, the policies for the data pre-processing used to perform the training would be sent from the first location to the second location and, thus, are sent in a concealed manner that prohibits the second location from decoding or otherwise detecting the contents of the policies.
  • the embodiments described herein can be applied to several domains.
  • the embodiments may apply to Radio Access Networks (RANs), where gNBs are the local trainers.
  • as another example, the embodiments may apply to Core Networks (CNs), where NWDAFs are the local trainers.
  • FIGURE 5 illustrates an example high-level schematic 200 of concealed FL as applied in a multi-vendor context, according to certain embodiments. Many of the components and features illustrated in FIGURE 5 are similar to those described above with regard to FIGURE 4 and have not been described in detail.
  • FIGURE 5 illustrates a multi-vendor setting.
  • vendor X may be responsible for data pre-processing and model training at Location A
  • vendor Y may be responsible for data pre-processing and model training at Location B
  • vendor Z may be responsible for master training at Location C.
  • all boxes shown as white boxes could be one company’s components, while the boxes shown with the pattern fill could be other vendors or the Communications Service Provider (CSP) itself.
  • CSP Communications Service Provider
  • all of the white boxes may be the responsibility of a vendor or operator that is associated with the master trainer, and the patterned boxes may be the responsibility of one or more other vendors associated with the local trainers.
  • the interfaces between the different boxes may be standardized, but the information carried over these interfaces is not revealed. For example, local models may still be transported to the master trainer via a standardized interface, but the internals of the local models are kept hidden.
  • in some embodiments, data pre-processing is not concealed (in that case, data pre-processing would be depicted as a patterned box in FIGURE 5), but only the training is concealed.
  • the master trainer provides a software package for training & execution to the local trainer, and another software package for data pre-processing.
  • one package contains both training & execution and data pre-processing.
  • the package may be, for example, an executable or an Open Container Initiative (OCI) package.
  • OCI Open Container Initiative
  • the internals of the package(s) are concealed by appropriate measures for IP protection. Only information on how to execute and interact with the software package(s) (e.g., how to use the APIs between local platform and local training & execution as well as the APIs between local platform and local data pre-processing) may be provided by the master trainer to the platform, if any.
  • policies may be provided as part of the software package(s) or may be provided separately such as, for example, at a later time. These policies are encrypted from the platform’s point of view and can only be decrypted and decoded by the training & execution or data pre-processing function inside of the respective software package.
  • the master model may be inside that software package or combined with the policies or the master model may be provided separately.
  • the latter two approaches would be advantageous if the FL consists of multiple rounds; in this case, a master model is received by a local trainer, made to be the new local model, further trained locally, and after that sent back to the master trainer.
  • the local model can be received from the training & execution module, or, in other words, extracted from the software package in a concealed (i.e., encrypted) form, so that the internals of the local model are kept hidden. Only the master trainer can decrypt and comprehend the local models received from local trainers.
  • the master trainer can ensure that certain (e.g., corrupted) data is not used to train a local model and, thus, does not adversely affect the performance of the local model.
  • certain data is identified by means of plausibility checks with domain-knowledge-based rules.
  • data is identified using anomaly detection techniques. The identified data can be discarded, or alternatively flagged, as part of the data pre-processing or prior to training. In case of FL, this further ensures that such data does not adversely affect the performance of the master model.
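  • As an illustrative sketch of such filtering (the concrete rules, features, and thresholds below are assumptions, not part of the disclosure), the following Python snippet discards rows that fail a domain-knowledge plausibility check and flags rows whose values look anomalous by a simple z-score test before they reach training.

```python
import numpy as np

def plausibility_ok(row):
    # hypothetical domain-knowledge rule: throughput must be non-negative
    # and the reported delay must stay below 10 seconds
    throughput, delay_ms = row
    return throughput >= 0.0 and 0.0 <= delay_ms < 10_000.0

def filter_training_data(rows, z_threshold=3.0):
    """Drop implausible rows and flag statistical outliers before training."""
    plausible = np.array([r for r in rows if plausibility_ok(r)], dtype=float)
    mean = plausible.mean(axis=0)
    std = plausible.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((plausible - mean) / std)
    anomalous = (z > z_threshold).any(axis=1)   # flag rows with extreme values
    return plausible[~anomalous], plausible[anomalous]

rows = [(12.0, 35.0), (11.5, 40.0), (-3.0, 20.0), (12.2, 9_500.0), (11.8, 38.0)]
# a lower threshold is used here because the toy sample is very small
clean, flagged = filter_training_data(rows, z_threshold=1.5)
print(len(clean), "rows kept,", len(flagged), "rows flagged")
```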
  • FIGURE 6 illustrates example APIs 300 between a training & execution module 305 and the platform 310 on which the module is run, according to certain embodiments.
  • APIs 300 may include a training policies API for sending policies used for training, for example, from the platform 310 to the training & execution module 305.
  • the policies communicated via the API 300 may include one or more of:
  • optimization algorithm for minimizing the cost/loss function e.g., (stochastic) gradient descent, Adam, or LBFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno)
  • regularization type/method and parameters e.g., amount/strength
  • maximum depth of a tree in case of tree-based AI/ML models (such as Random Forest or gradient-boosted decision trees, as used for example by XGBoost).
  • policies communicated via the API 300 may include one or more of:
  • activation function in a neural network e.g., Sigmoid, ReLU, or Tanh
  • APIs 300 may include a training data API for inputting one or more new input-output vector pairs to the training & execution software package for further training of the (local) AI/ML model.
  • APIs 300 may include a master model API for sending a new master model to the training & execution module so that the new master model can replace the current local trained model.
  • APIs 300 may include a local model API for reading the current local trained model from the training & execution module so that the local trained model can be sent to the master trainer.
  • APIs 300 may include an inference API for querying the AI/ML model, e.g., to provide a new input in order to receive the corresponding output (e.g., prediction).
  • APIs 300 may include a test API for testing the current local trained model, i.e., assessing the performance of the AI/ML model with a test dataset comprised in the training & execution software package.
  • APIs 300 may include a setup API for enabling installation of a new training & execution software package, or for resetting the current local trained model to the latest received master model.
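  • Purely as a sketch of how such an API surface between the platform and a concealed training & execution package could look (the method names, the policy fields, and the use of opaque byte strings are assumptions made for illustration, not the claimed interfaces), consider the following Python outline.

```python
from typing import Protocol, Sequence, Tuple

# Example of the kind of policy payload the training policies API might carry;
# the concrete fields are illustrative (see the policy examples listed above).
EXAMPLE_TRAINING_POLICIES = {
    "optimizer": "adam",            # e.g., SGD, Adam, or LBFGS
    "regularization": {"type": "l2", "strength": 1e-4},
    "max_tree_depth": 6,            # only relevant for tree-based models
    "activation": "relu",           # e.g., Sigmoid, ReLU, or Tanh
}

class TrainingAndExecutionAPI(Protocol):
    """Hypothetical platform-facing surface of a concealed training & execution package."""

    def apply_training_policies(self, concealed_policies: bytes) -> None:
        """Training policies API: forward (encrypted) policies into the package."""

    def add_training_data(self, samples: Sequence[Tuple[list, list]]) -> None:
        """Training data API: feed new input/output vector pairs for further training."""

    def set_master_model(self, concealed_master_model: bytes) -> None:
        """Master model API: replace the current local trained model with a new master model."""

    def get_local_model(self) -> bytes:
        """Local model API: export the local trained model in concealed form for the master trainer."""

    def infer(self, features: list) -> list:
        """Inference API: query the model for a prediction."""

    def test(self) -> float:
        """Test API: assess the current local trained model against the packaged test dataset."""

    def setup(self, package: bytes) -> None:
        """Setup API: install a new package or reset to the latest received master model."""
```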
  • FIGURE 7 illustrates example APIs 400 between a data pre-processing module 405 and the platform 410 on which the module is run, according to certain embodiments.
  • APIs 400 may include a pre-processing policies API for sending policies used for pre-processing from the platform 410 to the data pre-processing module 405.
  • the policies communicated via the API 400 may include one or more of:
  • feature encoding, e.g., label/ordinal encoding and/or one-hot encoding
  • feature transformation such as SVD-based transformation (e.g., PCA), and
  • APIs 400 may include a raw data API for inputting data to the data pre-processing module 405.
  • APIs 400 may include a training data API for outputting data from the data pre-processing module 405 to the platform 410.
  • APIs 400 may include a setup API to enable installation of a new data pre-processing software package in the data pre-processing module 405.
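  • Analogously, a hedged Python outline of the pre-processing-side APIs might look as follows; again, the method names and the pre-processing steps shown (encoding, PCA-style transformation, scaling) are illustrative assumptions rather than the claimed interfaces.

```python
from typing import Protocol, Sequence

# Illustrative pre-processing policy payload for the pre-processing policies API.
EXAMPLE_PREPROCESSING_POLICIES = {
    "feature_encoding": "one-hot",          # e.g., label/ordinal or one-hot encoding
    "feature_transformation": {"method": "pca", "components": 8},
    "feature_scaling": "standardize",       # assumed additional step, for illustration only
}

class DataPreProcessingAPI(Protocol):
    """Hypothetical platform-facing surface of a concealed data pre-processing package."""

    def apply_preprocessing_policies(self, concealed_policies: bytes) -> None:
        """Pre-processing policies API: forward (encrypted) policies into the package."""

    def put_raw_data(self, raw_rows: Sequence[dict]) -> None:
        """Raw data API: input raw local data to the pre-processing module."""

    def get_training_data(self) -> Sequence[tuple]:
        """Training data API: output engineered features back to the platform."""

    def setup(self, package: bytes) -> None:
        """Setup API: install a new data pre-processing software package."""
```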
  • FIGURE 8 shows an example of a communication system 500 in accordance with some embodiments.
  • the communication system 500 includes a telecommunication network 502 that includes an access network 504, such as a radio access network (RAN), and a core network 506, which includes one or more core network nodes 508.
  • the access network 504 includes one or more access network nodes, such as network nodes 510a and 510b (one or more of which may be generally referred to as network nodes 510), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point.
  • 3GPP 3rd Generation Partnership Project
  • the network nodes 510 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 512a, 512b, 512c, and 512d (one or more of which may be generally referred to as UEs 512) to the core network 506 over one or more wireless connections.
  • UE user equipment
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 500 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 500 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 512 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 510 and other communication devices.
  • the network nodes 510 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 512 and/or with other network nodes or equipment in the telecommunication network 502 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 502.
  • the core network 506 connects the network nodes 510 to one or more hosts, such as host 516. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 506 includes one or more core network nodes (e.g., core network node 508) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 508.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 516 may be under the ownership or control of a service provider other than an operator or provider of the access network 504 and/or the telecommunication network 502, and may be operated by the service provider or on behalf of the service provider.
  • the host 516 may host a variety of applications to provide one or more services.
  • Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 500 of FIGURE 8 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
  • GSM Global System for Mobile Communications
  • UMTS Universal Mobile Telecommunications System
  • LTE Long Term Evolution
  • the telecommunication network 502 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 502 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 502. For example, the telecommunications network 502 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • URLLC Ultra Reliable Low Latency Communication
  • eMBB Enhanced Mobile Broadband
  • mMTC Massive Machine Type Communication
  • the UEs 512 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 504 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 504.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • MR-DC multi-radio dual connectivity
  • the hub 514 communicates with the access network 504 to facilitate indirect communication between one or more UEs (e.g., UE 512c and/or 512d) and network nodes (e.g., network node 510b).
  • the hub 514 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 514 may be a broadband router enabling access to the core network 506 for the UEs.
  • the hub 514 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • the hub 514 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 514 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 514 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 514 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 514 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low energy IoT devices.
  • the hub 514 may have a constant/persistent or intermittent connection to the network node 510b.
  • the hub 514 may also allow for a different communication scheme and/or schedule between the hub 514 and UEs (e.g., UE 512c and/or 512d), and between the hub 514 and the core network 506.
  • the hub 514 is connected to the core network 506 and/or one or more UEs via a wired connection.
  • the hub 514 may be configured to connect to an M2M service provider over the access network 504 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 510 while still connected via the hub 514 via a wired or wireless connection.
  • the hub 514 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 510b.
  • the hub 514 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 510b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • FIGURE 9 shows a UE 600 in accordance with some embodiments.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • 3GPP 3rd Generation Partnership Project
  • NB-IoT narrow band internet of things
  • MTC machine type communication
  • eMTC enhanced MTC
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • D2D device-to-device
  • DSRC Dedicated Short-Range Communication
  • V2V vehicle-to-vehicle
  • V2I vehicle-to-infrastructure
  • V2X vehicle-to-everything
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale
  • the UE 600 includes processing circuitry 602 that is operatively coupled via a bus 604 to an input/output interface 606, a power source 608, a memory 610, a communication interface 612, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in FIGURE 9. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 602 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 610.
  • the processing circuitry 602 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 602 may include multiple central processing units (CPUs).
  • the input/output interface 606 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 600.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • USB Universal Serial Bus
  • the power source 608 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 608 may further include power circuitry for delivering power from the power source 608 itself, and/or an external power source, to the various parts of the UE 600 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 608.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 608 to make the power suitable for the respective components of the UE 600 to which power is supplied.
  • the memory 610 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 610 includes one or more application programs 614, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 616.
  • the memory 610 may store, for use by the UE 600, any of a variety of various operating systems or combinations of operating systems.
  • the memory 610 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • RAID redundant array of independent disks
  • HD-DVD high-density digital versatile disc
  • HDDS holographic digital data storage
  • DIMM external mini-dual in-line memory module
  • SDRAM synchronous dynamic random access memory
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • eUICC embedded UICC
  • iUICC integrated UICC
  • SIM card removable UICC commonly known as ‘SIM card.’
  • the memory 610 may allow the UE 600 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 610, which may be or comprise a device-readable storage medium.
  • the processing circuitry 602 may be configured to communicate with an access network or other network using the communication interface 612.
  • the communication interface 612 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 622.
  • the communication interface 612 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 618 and/or a receiver 620 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 618 and receiver 620 may be coupled to one or more antennas (e.g., antenna 622) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 612 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • GPS global positioning system
  • Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • CDMA Code Division Multiplexing Access
  • WCDMA Wideband Code Division Multiple Access
  • GSM Global System for Mobile Communications
  • LTE Long Term Evolution
  • NR New Radio
  • UMTS Universal Mobile Telecommunications System
  • WiMax Worldwide Interoperability for Microwave Access
  • TCP/IP transmission control protocol/internet protocol
  • SONET synchronous optical networking
  • ATM Asynchronous Transfer Mode
  • HTTP Hypertext Transfer Protocol
  • a UE may provide an output of data captured by its sensors, through its communication interface 612, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm performing a medical procedure according to the received input.
  • a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • examples of such an IoT device are a device which is, or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, etc.
• AR Augmented Reality
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • any number of UEs may be used together with respect to a single use case.
  • a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
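• To make the drone example above concrete, the following is a minimal sketch, under assumed function names, of the loop in which the first UE reports the sensed speed and applies the throttle adjustment returned by the controller UE; it is illustrative only and not a claimed mechanism.

```python
def controller_decide(target_speed: float, reported_speed: float) -> float:
    """Remote-controller UE: a simple proportional throttle adjustment."""
    return 0.1 * (target_speed - reported_speed)

def drone_control_step(read_speed_sensor, set_throttle, target_speed: float) -> None:
    """Drone-side UE: report the sensed speed, then actuate the returned command."""
    reported = read_speed_sensor()                          # obtained through the speed sensor
    adjustment = controller_decide(target_speed, reported)  # returned over the wireless link
    set_throttle(adjustment)                                # actuator increases/decreases the speed

# Example with stand-in sensor/actuator callables:
drone_control_step(lambda: 12.0,
                   lambda d: print(f"throttle change: {d:+.2f}"),
                   target_speed=15.0)
```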
  • FIGURE 10 shows a network node 700 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • APs access points
  • BSs base stations
  • eNBs evolved Node Bs
  • gNBs NR NodeBs
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • RRUs remote radio units
  • RRHs Remote Radio Heads
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • DAS distributed antenna system
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • MSR multi-standard radio
  • RNCs radio network controllers
  • BSCs base station controllers
  • BTSs base transceiver stations
• O&M Operation and Maintenance
  • OSS Operations Support System
  • SON Self-Organizing Network
  • positioning nodes e.g., Evolved Serving Mobile Location Centers (E-SMLCs)
  • the network node 700 includes a processing circuitry 702, a memory 704, a communication interface 706, and a power source 708.
  • the network node 700 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • the network node 700 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 700 may be configured to support multiple radio access technologies (RATs).
  • RATs radio access technologies
  • some components may be duplicated (e.g., separate memory 704 for different RATs) and some components may be reused (e.g., a same antenna 710 may be shared by different RATs).
  • the network node 700 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 700, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 700.
  • RFID Radio Frequency Identification
• the processing circuitry 702 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 700 components, such as the memory 704, to provide network node 700 functionality.
  • the processing circuitry 702 includes a system on a chip (SOC). In some embodiments, the processing circuitry 702 includes one or more of radio frequency (RF) transceiver circuitry 712 and baseband processing circuitry 714. In some embodiments, the radio frequency (RF) transceiver circuitry 712 and the baseband processing circuitry 714 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 712 and baseband processing circuitry 714 may be on the same chip or set of chips, boards, or units.
  • SOC system on a chip
  • the memory 704 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 702.
  • the memory 704 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 702 and utilized by the network node 700.
  • the memory 704 may be used to store any calculations made by the processing circuitry 702 and/or any data received via the communication interface 706.
• the processing circuitry 702 and memory 704 are integrated.
  • the communication interface 706 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 706 comprises port(s)/terminal(s) 716 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 706 also includes radio front-end circuitry 718 that may be coupled to, or in certain embodiments a part of, the antenna 710. Radio front-end circuitry 718 comprises filters 720 and amplifiers 722.
  • the radio front-end circuitry 718 may be connected to an antenna 710 and processing circuitry 702.
  • the radio front-end circuitry may be configured to condition signals communicated between antenna 710 and processing circuitry 702.
  • the radio front-end circuitry 718 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 718 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 720 and/or amplifiers 722.
  • the radio signal may then be transmitted via the antenna 710.
  • the antenna 710 may collect radio signals which are then converted into digital data by the radio front-end circuitry 718.
  • the digital data may be passed to the processing circuitry 702.
  • the communication interface may comprise different components and/or different combinations of components.
• the network node 700 does not include separate radio front-end circuitry 718; instead, the processing circuitry 702 includes radio front-end circuitry and is connected to the antenna 710.
  • all or some of the RF transceiver circuitry 712 is part of the communication interface 706.
  • the communication interface 706 includes one or more ports or terminals 716, the radio front-end circuitry 718, and the RF transceiver circuitry 712, as part of a radio unit (not shown), and the communication interface 706 communicates with the baseband processing circuitry 714, which is part of a digital unit (not shown).
  • the antenna 710 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 710 may be coupled to the radio front-end circuitry 718 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 710 is separate from the network node 700 and connectable to the network node 700 through an interface or port.
  • the antenna 710, communication interface 706, and/or the processing circuitry 702 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 710, the communication interface 706, and/or the processing circuitry 702 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 708 provides power to the various components of network node 700 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 708 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 700 with power for performing the functionality described herein.
  • the network node 700 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 708.
  • the power source 708 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 700 may include additional components beyond those shown in FIGURE 10 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 700 may include user interface equipment to allow input of information into the network node 700 and to allow output of information from the network node 700. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 700.
  • FIGURE 11 is a block diagram of a host 800, which may be an embodiment of the host 516 of FIGURE 8, in accordance with various aspects described herein.
• the host 800 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm.
  • the host 800 may provide one or more services to one or more UEs.
  • the host 800 includes processing circuitry 802 that is operatively coupled via a bus 804 to an input/output interface 806, a network interface 808, a power source 810, and a memory 812.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as FIGURES 6 and 7, such that the descriptions thereof are generally applicable to the corresponding components of host 800.
  • the memory 812 may include one or more computer programs including one or more host application programs 814 and data 816, which may include user data, e.g., data generated by a UE for the host 800 or data generated by the host 800 for a UE.
  • Embodiments of the host 800 may utilize only a subset or all of the components shown.
• the host application programs 814 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • the host application programs 814 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network.
  • the host 800 may select and/or indicate a different host for over-the-top services for a UE.
  • the host application programs 814 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
  • HLS HTTP Live Streaming
  • RTMP Real-Time Messaging Protocol
  • RTSP Real-Time Streaming Protocol
  • MPEG-DASH Dynamic Adaptive Streaming over HTTP
  • FIGURE 12 is a block diagram illustrating a virtualization environment 900 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 900 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • VMs virtual machines
  • the virtual node does not require radio connectivity (e.g., a core network node or host)
  • the node may be entirely virtualized.
  • Applications 902 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 900 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 904 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 906 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 908a and 908b (one or more of which may be generally referred to as VMs 908), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 906 may present a virtual operating platform that appears like networking hardware to the VMs 908.
  • the VMs 908 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 906.
• Different embodiments of the instance of a virtual appliance 902 may be implemented on one or more of the VMs 908, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • NFV network function virtualization
  • a VM 908 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
• Each of the VMs 908, and that part of hardware 904 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 908 on top of the hardware 904 and corresponds to the application 902.
  • Hardware 904 may be implemented in a standalone network node with generic or specific components. Hardware 904 may implement some functions via virtualization. Alternatively, hardware 904 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 910, which, among others, oversees lifecycle management of applications 902.
  • hardware 904 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be provided with the use of a control system 912 which may alternatively be used for communication between hardware nodes and radio units.
  • FIGURE 13 shows a communication diagram of a host 1002 communicating via a network node 1004 with a UE 1006 over a partially wireless connection in accordance with some embodiments.
• Like host 800, embodiments of host 1002 include hardware, such as a communication interface, processing circuitry, and memory.
  • the host 1002 also includes software, which is stored in or accessible by the host 1002 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as the UE 1006 connecting via an over-the-top (OTT) connection 1050 extending between the UE 1006 and host 1002.
  • OTT over-the-top
  • a host application may provide user data which is transmitted using the OTT connection 1050.
  • the network node 1004 includes hardware enabling it to communicate with the host 1002 and UE 1006.
  • the connection 1060 may be direct or pass through a core network (like core network 506 of FIGURE 8) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • the UE 1006 includes hardware and software, which is stored in or accessible by UE 1006 and executable by the UE’s processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 1006 with the support of the host 1002.
  • an executing host application may communicate with the executing client application via the OTT connection 1050 terminating at the UE 1006 and host 1002.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 1050 may transfer both the request data and the user data.
• the UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1050.
  • the OTT connection 1050 may extend via a connection 1060 between the host 1002 and the network node 1004 and via a wireless connection 1070 between the network node 1004 and the UE 1006 to provide the connection between the host 1002 and the UE 1006.
  • the connection 1060 and wireless connection 1070, over which the OTT connection 1050 may be provided, have been drawn abstractly to illustrate the communication between the host 1002 and the UE 1006 via the network node 1004, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 1002 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 1006.
  • the user data is associated with a UE 1006 that shares data with the host 1002 without explicit human interaction.
  • the host 1002 initiates a transmission carrying the user data towards the UE 1006.
  • the host 1002 may initiate the transmission responsive to a request transmitted by the UE 1006.
  • the request may be caused by human interaction with the UE 1006 or by operation of the client application executing on the UE 1006.
  • the transmission may pass via the network node 1004, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the network node 1004 transmits to the UE 1006 the user data that was carried in the transmission that the host 1002 initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the UE 1006 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1006 associated with the host application executed by the host 1002.
  • the UE 1006 executes a client application which provides user data to the host 1002.
  • the user data may be provided in reaction or response to the data received from the host 1002.
  • the UE 1006 may provide user data, which may be performed by executing the client application.
• the client application may further consider user input received from the user via an input/output interface of the UE 1006. Regardless of the specific manner in which the user data was provided, the UE 1006 initiates, in step 1018, transmission of the user data towards the host 1002 via the network node 1004.
  • the network node 1004 receives user data from the UE 1006 and initiates transmission of the received user data towards the host 1002.
  • the host 1002 receives the user data carried in the transmission initiated by the UE 1006.
• One or more of the various embodiments improve the performance of OTT services provided to the UE 1006 using the OTT connection 1050, in which the wireless connection 1070 forms the last segment. More precisely, the teachings of these embodiments may improve one or more of, for example, data rate, latency, and/or power consumption and, thereby, provide benefits such as, for example, reduced user waiting time, relaxed restriction on file size, improved content resolution, better responsiveness, and/or extended battery lifetime.
  • factory status information may be collected and analyzed by the host 1002.
  • the host 1002 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 1002 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 1002 may store surveillance video uploaded by a UE.
  • the host 1002 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • the host 1002 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1002 and/or UE 1006.
  • sensors may be deployed in or in association with other devices through which the OTT connection 1050 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 1050 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 1004. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 1002.
  • the measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1050 while monitoring propagation times, errors, etc.
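• As one way to picture such a measurement, the sketch below sends timestamped 'dummy' messages towards an echo endpoint and records round-trip times; the endpoint, message format, and probe count are illustrative assumptions rather than details of the embodiments.

```python
import socket
import time

def measure_rtt(host: str, port: int, probes: int = 5) -> list:
    """Send small dummy messages and record round-trip times (in seconds) for each probe."""
    rtts = []
    with socket.create_connection((host, port), timeout=2.0) as sock:
        for i in range(probes):
            payload = f"dummy-{i}-{time.time()}".encode()
            start = time.monotonic()
            sock.sendall(payload)
            sock.recv(len(payload))   # assumes the peer echoes the dummy message back
            rtts.append(time.monotonic() - start)
    return rtts

# Example (assuming an echo service is reachable over the OTT connection):
# print(measure_rtt("echo.example.net", 7))
```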
  • FIGURE 14 illustrates a method 1100 by a first radio node for concealed FL, according to certain embodiments.
• the method begins at step 1102, when the first radio node transmits, to one or more other radio nodes that are located remotely from the first radio node, a local training package for performing local training and/or generating a local model.
  • the first radio node receives one or more local models from the one or more other radio nodes. Based on the one or more local models received from the one or more other radio nodes, the first radio node generates a master model, at step 1106.
  • the first radio node transmits the master model to the one or more other radio nodes.
  • the first radio node is operating as a training node and the one or more other radio nodes are operating as learning nodes.
  • the local training package comprises a software package that can be executed on a platform at/by the other radio nodes.
  • the first radio node transmits, to the one or more other radio nodes, at least one policy for performing the local training.
  • the at least one policy for performing the local training is transmitted to the one or more radio nodes with the local training package.
  • the at least one policy for performing the local training is transmitted to the one or more radio nodes separately from the local training package.
  • the at least one policy for performing the local training is concealed from the one or more radio nodes.
  • the operations associated with the local training package for performing the local training and/or generating the local model are concealed from the one or more radio nodes.
  • the one or more local models are received in concealed form.
  • the master model is transmitted in concealed form.
  • the first radio node transmits, to the one or more other radio nodes, a pre-processing package for performing data pre-processing at the respective radio node.
  • the data pre-processing is performed prior to generating the local model, and the output of performing the data pre-processing is the input for the local training package.
  • the pre-processing package comprises a software package that can be executed on a platform at/by the other radio nodes.
  • the first radio node transmits, to the one or more other radio nodes, at least one policy for performing the data pre-processing.
  • the at least one policy for performing the data preprocessing is transmitted to the one or more radio nodes with the pre-processing package. In a further particular embodiment, the at least one policy for performing the data preprocessing is transmitted to the one or more radio nodes separately from the pre-processing package.
  • the at least one policy for performing the data preprocessing is concealed from the one or more radio nodes.
  • the operations associated with the pre-processing package for performing the data pre-processing are concealed from the one or more radio nodes.
• the one or more radio nodes include a plurality of radio nodes.
  • receiving the one or more local models includes receiving a plurality of local models, wherein each local model is associated with a respective one of the plurality of radio nodes.
  • the master model is generated based on the plurality of local models.
  • the first radio node is a user equipment (UE).
  • UE user equipment
  • the first radio node provides user data and forwards the user data to a host via the transmission to a network node.
  • the first radio node is a network node.
  • the first radio node obtains user data and forwards the user data to a host or a user equipment.
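• To illustrate how the master model of FIGURE 14 could be generated from the received local models, the following is a minimal sketch of weighted model averaging (in the spirit of federated averaging). The function and variable names are illustrative assumptions; the embodiments do not prescribe a specific aggregation algorithm.

```python
import numpy as np

def aggregate_local_models(local_models, sample_counts):
    """Generate a master model from per-node local models (cf. step 1106).

    local_models : list of dicts mapping parameter name -> np.ndarray
    sample_counts: number of training samples each node used, so that nodes
                   with more local data contribute proportionally more.
    """
    total = float(sum(sample_counts))
    master = {}
    for name in local_models[0]:
        # Weighted average of the corresponding parameter across all local models.
        master[name] = sum((n / total) * model[name]
                           for model, n in zip(local_models, sample_counts))
    return master

# Example: two learning nodes report local models of the same shape.
node_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
node_b = {"w": np.array([3.0, 0.0]), "b": np.array([1.5])}
master_model = aggregate_local_models([node_a, node_b], sample_counts=[100, 300])
# master_model["w"] == [2.5, 0.5]; this master model would then be transmitted
# back to the other radio nodes (cf. step 1108).
```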
  • FIGURE 15 illustrates a method 1200 by a second radio node for concealed FL, according to certain embodiments.
  • the method begins at step 1202 when the second radio node receives, from a first radio node, a local training package for performing local training to generate a local model. Based on the local training package, the second radio node performs the local training to generate the local model, at step 1204.
  • the second radio node transmits, to the first radio node, the local model.
  • the second radio node receives, from the first radio node, a master model that is generated based on the local model and at least one other local model associated with at least one other radio node.
  • the second radio node stores the master model.
  • the second radio node is operating as a learning node and the first radio node is operating as a training node.
  • the local training package comprises a software package that can be executed on a platform at/by the second radio node.
  • the second radio node receives, from the first radio node, at least one policy for performing the local training.
  • the at least one policy for performing the local training is received with the local training package.
  • the at least one policy for performing the local training is received separately from the local training package.
  • the at least one policy for performing the local training is concealed from the second radio node.
  • operations associated with the local training package for performing the local training and/or generating the local model are concealed.
  • the one or more local models are transmitted in concealed form.
  • the master model is received in concealed form.
  • the second radio node receives, from the first radio node, a pre-processing package for performing data pre-processing at the second radio node.
  • the data pre-processing is performed prior to generating the local model, and the output of performing the data pre-processing is the input for the local training package.
  • the pre-processing package comprises a software package that can be executed on a platform at/by the second radio node.
  • the second radio node receives, from the first radio node, at least one policy for performing the data pre-processing.
  • the at least one policy for performing the data preprocessing is received with the pre-processing package.
  • the at least one policy for performing the data preprocessing is received separately from the pre-processing package.
  • the at least one policy for performing the data preprocessing is concealed from the second radio node.
  • operations associated with the pre-processing package for performing the data pre-processing are concealed from the second radio node.
  • the second radio node is a UE.
  • the UE provides user data and forwards the user data to a host via the transmission to a network node.
  • the second radio node is a network node.
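• As a complement to FIGURE 15, the sketch below shows, under assumed names, how a learning node might refine the received master model on data that never leaves the node and return only the resulting local model; plain gradient descent on a linear model is used purely for illustration.

```python
import numpy as np

def local_training(master_weights, X, y, learning_rate=0.01, epochs=50):
    """Train a local model on data held only by this node (cf. step 1204).

    master_weights : starting point received from the training node.
    X, y           : local feature matrix and targets that stay at the node.
    Returns the updated local model to be transmitted back (cf. step 1206).
    """
    w = master_weights.copy()
    for _ in range(epochs):
        # Gradient of the mean squared error for a linear model y ≈ X @ w.
        grad = (2.0 / len(y)) * X.T @ (X @ w - y)
        w -= learning_rate * grad
    return w

# Example: the node refines an all-zero master model on its private data.
rng = np.random.default_rng(0)
X_local = rng.normal(size=(32, 3))
y_local = X_local @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=32)
local_model = local_training(master_weights=np.zeros(3), X=X_local, y=y_local)
```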
  • FIGURE 16 illustrates an example method 1300 by a first radio node operating as a master trainer for concealed learning, according to certain embodiments.
  • the method begins at step 1302 when the first radio node transmits, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages.
  • the first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.
  • the concealed format is an encrypted format.
  • the concealed format prevents any third party from decrypting the one or more software packages.
  • the one or more software packages comprise one or more Open Container Initiative (OCI) packages.
  • OCI Open Container Initiative
  • transmitting the master model to the second radio node includes transmitting a version of the software package that is updated based on the master model or transmitting a portion of the software package that is updated based on the master model.
  • the one or more software packages include at least one policy for performing the local training of and/or generating of the local model and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.
  • the first radio node transmits information to the second radio node operating as the local trainer.
  • the information is transmitted in a concealed format that prevents the second radio node from decrypting the information.
  • the information includes at least one training policy for performing the local training of and/or generating of the local model and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.
  • the information is communicated between the one or more software packages and a platform operating on the one or more other radio nodes via at least one API.
  • the platform is unable to decrypt the one or more software packages and/or the information.
  • the information comprises the master model.
  • the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.
  • the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.
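• The training and data pre-processing policies listed above can be represented as simple configuration structures. The sketch below shows one hypothetical encoding of such policies and how part of a pre-processing policy might be applied locally; all field names are illustrative assumptions rather than a format defined by the embodiments.

```python
import numpy as np

# Hypothetical training policy covering the parameters listed above.
training_policy = {
    "batch_size": 64,
    "learning_rate": 1e-3,
    "optimizer": "sgd",                            # optimization algorithm for minimizing a cost function
    "regularization": {"type": "l2", "weight": 1e-4},
    "max_tree_depth": 6,                           # only relevant if the local model is a decision tree
}

# Hypothetical data pre-processing policy covering the operations listed above.
preprocessing_policy = {
    "impute_missing": "mean",
    "feature_scaling": "standardize",
    "polynomial_degree": 2,
    "quantization_bins": 16,
}

def apply_preprocessing(X, policy):
    """Apply a subset of the pre-processing policy to a local feature matrix X."""
    if policy.get("impute_missing") == "mean":
        col_means = np.nanmean(X, axis=0)
        X = np.where(np.isnan(X), col_means, X)              # imputation of missing values
    if policy.get("feature_scaling") == "standardize":
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # feature scaling
    return X
```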
  • the information is based on the master model.
  • the first radio node when receiving the local model, receives an updated version of the one or more software packages that includes the local model.
  • the first radio node transmits, to a third radio node operating as an additional local trainer, the one or more software packages for performing local training of and/or generating a local model, the one or more software packages transmitted in the concealed format.
  • the first radio node receives, from the third radio node operating as the additional local trainer, a second local model in the concealed format that only the first radio node operating as the master trainer is able to decrypt.
  • the first radio node transmits the master model to the third radio node operating as the additional local trainer, and the master model is generated based on the first local model and the second local model.
  • the second radio node and the third radio node are located at different locations.
• At least one of the software package, the first local model, and the master model comprises an AI model.
• the first radio node operating as the master trainer is associated with at least one of: an Operation & Maintenance node or system, an SMO node or system, a Non-RT RIC, a Near-RT RIC, a Core Network node, a gNB, and a gNB-CU.
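• To make the concealed format of FIGURE 16 concrete, the sketch below uses symmetric encryption (Fernet, from the Python cryptography library) as one possible realization: the master trainer encrypts the training package before transmission, and only holders of the corresponding keys can decrypt it, so the hosting platform at the local trainer cannot. This is a minimal sketch of one assumed realization, not the specific concealment scheme of the embodiments.

```python
from cryptography.fernet import Fernet

# Keys are assumed to be provisioned out of band, e.g. into a trusted execution
# context at the local trainer; the hosting platform itself never sees them.
package_key = Fernet.generate_key()   # protects the software package and policies
model_key = Fernet.generate_key()     # protects local models sent back to the master

# Master-trainer side: conceal the local training package before transmission.
training_package = b"<serialized OCI-style training package and policies>"
concealed_package = Fernet(package_key).encrypt(training_package)

# Local-trainer side (inside the context holding package_key): reveal and run it.
revealed_package = Fernet(package_key).decrypt(concealed_package)

# The local trainer returns its local model concealed so that only the master
# trainer is able to decrypt it.
local_model_bytes = b"<serialized local model>"
concealed_local_model = Fernet(model_key).encrypt(local_model_bytes)
recovered_by_master = Fernet(model_key).decrypt(concealed_local_model)
assert recovered_by_master == local_model_bytes
```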
  • FIGURE 17 illustrates an example method 1400 by a second radio node operating as a local trainer, according to certain embodiments.
• the method begins at step 1402 when the second radio node receives, from a first radio node operating as a master trainer 120, one or more software packages for performing local training of and/or generating of a local model.
  • the one or more software packages is received in a concealed format that prevents the second radio node from decrypting the one or more software packages.
• the second radio node uses the one or more software packages to perform the local training of and/or the generating of the local model.
  • the second radio node transmits the local model to the first radio node in a concealed format that the master trainer is able to decrypt.
• the second radio node receives, from the first radio node, a master model that is generated based on the local model.
  • the concealed format is an encrypted format.
  • the concealed format prevents any third party from decrypting the one or more software packages.
  • the one or more software packages comprise one or more OCI packages.
  • the second radio node when receiving the master model from the first radio node, receives a version of the software package that is updated based on the master model or a portion of the software package that is updated based on the master model.
• the one or more software packages include at least one policy for performing the local training of and/or generating of the local model and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.
  • the second radio node receives information from the first radio node operating as the master trainer.
  • the information is received in a concealed format that prevents the second radio node from decrypting the information, and the information includes at least one training policy for performing the local training of and/or generating of the local model and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.
  • the information is communicated between the one or more software packages and a platform operating on the one or more other radio nodes via at least one API.
  • the platform is unable to decrypt the one or more software packages and/or the information.
  • the information comprises the master model.
  • the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.
  • the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.
  • the information is based on the master model.
  • the second radio node when transmitting the local model, transmits an updated version of the one or more software packages that includes the local model.
• At least one of the one or more software packages, the local model, and the master model comprises an AI model.
  • the second radio node operating as the local trainer is associated with at least one of a gNB, a UE, and a Near-RT RIC.
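• The API between the one or more software packages and the hosting platform, mentioned above, can be thought of as exchanging opaque blobs: the platform forwards concealed payloads without being able to interpret them, while the package decrypts and acts on them internally. The sketch below is a hypothetical illustration of such an interface; the class and method names are assumptions, not an API defined by the embodiments.

```python
class LocalTrainerPlatform:
    """Hosting platform at the local trainer: it only moves opaque byte blobs."""

    def __init__(self, package):
        self.package = package  # the concealed training-package runtime it hosts

    def deliver_from_master(self, concealed_blob: bytes) -> None:
        # The platform cannot decrypt the blob; it simply hands it to the package.
        self.package.on_master_message(concealed_blob)

    def send_to_master(self, concealed_blob: bytes) -> None:
        # Placeholder for the uplink transport towards the master trainer.
        print(f"forwarding {len(concealed_blob)} concealed bytes to the master trainer")


class ConcealedTrainingPackage:
    """Runs inside the software package, which holds keys the platform never sees."""

    def __init__(self, decrypt, encrypt):
        self._decrypt = decrypt   # e.g. Fernet(package_key).decrypt
        self._encrypt = encrypt   # e.g. Fernet(model_key).encrypt

    def on_master_message(self, concealed_blob: bytes) -> None:
        revealed = self._decrypt(concealed_blob)
        # ... perform local training / apply policies according to the revealed content ...

    def return_local_model(self, platform: "LocalTrainerPlatform", local_model_bytes: bytes) -> None:
        platform.send_to_master(self._encrypt(local_model_bytes))
```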
• Although computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
• Some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
  • Example Embodiment Al A method by a UE for concealed FL, the method comprising: any of the UE steps, features, or functions described above, either alone or in combination with other steps, features, or functions described above.
  • Example Embodiment A2 The method of the previous embodiment, further comprising one or more additional UE steps, features or functions described above.
  • Example Embodiment A3 The method of any of the previous embodiments, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the network node.
• Example Embodiment B1 A method performed by a network node for concealed FL, the method comprising: any of the network node steps, features, or functions described above, either alone or in combination with other steps, features, or functions described above.
  • Example Embodiment B2 The method of the previous embodiment, further comprising one or more additional network node steps, features or functions described above.
  • Example Embodiment B3 The method of any of the previous embodiments, further comprising: obtaining user data; and forwarding the user data to a host or a UE.
  • Group C Example Embodiments
• Example Embodiment C1 A method by a first radio node for concealed FL, the method comprising: transmitting, to one or more other radio nodes (e.g., at least a second radio node) that are located remotely from the first radio node, a local training package for performing local training and/or generating a local model; receiving one or more local models from the one or more other radio nodes; based on the one or more local models received from the one or more other radio nodes, generating a master model; and transmitting the master model to the one or more other radio nodes.
• Example Embodiment C2 The method of Example Embodiment C1, further comprising transmitting, to the one or more other radio nodes, at least one policy for performing the local training.
  • Example Embodiment C3 The method of Example Embodiment C2, wherein the at least one policy for performing the local training is transmitted to the one or more radio nodes with the local training package.
  • Example Embodiment C4 The method of Example Embodiment C2, wherein the at least one policy for performing the local training is transmitted to the one or more radio nodes separately from the local training package.
  • Example Embodiment C5 The method of any one of Example Embodiments C2 to C4, wherein the at least one policy for performing the local training is concealed from the one or more radio nodes.
• Example Embodiment C6 The method of any one of Example Embodiments C1 to C5, further comprising transmitting, to the one or more other radio nodes, a pre-processing package for performing data pre-processing at the respective radio node, the data pre-processing performed prior to generating the local model, wherein the output of performing the data pre-processing is the input for the local training package.
  • Example Embodiment C7 The method of Example Embodiment C6, further comprising transmitting, to the one or more other radio nodes, at least one policy for performing the data pre-processing.
  • Example Embodiment C8 The method of Example Embodiment C7, wherein the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes with the pre-processing package.
  • Example Embodiment C9. The method of Example Embodiment C7, wherein the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes separately from the pre-processing package.
• Example Embodiment C10 The method of any one of Example Embodiments C7 to C9, wherein the at least one policy for performing the data pre-processing is concealed from the one or more radio nodes.
• Example Embodiment C11. The method of any one of Example Embodiments C6 to C10, wherein operations associated with the pre-processing package for performing the data pre-processing are concealed from the one or more radio nodes.
• Example Embodiment C12 The method of any one of Example Embodiments C1 to C11, wherein: the one or more radio nodes comprise a plurality of radio nodes, and receiving the one or more local models comprises receiving a plurality of local models, wherein a local model is associated with a respective one of the plurality of radio nodes; and the master model is generated based on the plurality of local models.
• Example Embodiment C13 The method of any one of Example Embodiments C1 to C12, wherein operations associated with the local training package for performing the local training and/or generating the local model are concealed from the one or more radio nodes.
• Example Embodiment C14 The method of any one of Example Embodiments C1 to C13, wherein the radio node is a user equipment (UE).
  • UE user equipment
• Example Embodiment C15 The method of Example Embodiment C14, further comprising: providing user data; and forwarding the user data to a host via the transmission to the network node.
• Example Embodiment C16 The method of any one of Example Embodiments C1 to C13, wherein the radio node is a network node.
• Example Embodiment C17 The method of Example Embodiment C16, further comprising: obtaining user data; and forwarding the user data to a host or a user equipment.
• Example Embodiment C18 A radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments C1 to C17.
• Example Embodiment C19 A radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments C1 to C17.
• Example Embodiment C20 A computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments C1 to C17.
• Example Embodiment C21 A computer program product comprising a computer program, the computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments C1 to C17.
• Example Embodiment C22 A non-transitory computer readable medium storing instructions which when executed by a computer perform any of the methods of Example Embodiments C1 to C17.
• Example Embodiment D1 A method by a second radio node for concealed FL, the method comprising: receiving, from a first radio node, a local training package for performing local training to generate a local model; based on the local training package, performing the local training to generate the local model; transmitting, to the first radio node, the local model; receiving, from the first radio node, a master model that is generated based on the local model and at least one other local model associated with at least one other radio node; and storing the master model.
• Example Embodiment D2 The method of Example Embodiment D1, further comprising receiving, from the first radio node, at least one policy for performing the local training.
  • Example Embodiment D3 The method of Example Embodiment D2, wherein the at least one policy for performing the local training is received with the local training package.
  • Example Embodiment D4 The method of Example Embodiment D2, wherein the at least one policy for performing the local training is received separately from the local training package.
• Example Embodiment D5 The method of any one of Example Embodiments D2 to D4, wherein the at least one policy for performing the local training is concealed from the second radio node.
• Example Embodiment D6 The method of any one of Example Embodiments D1 to D5, further comprising receiving, from the first radio node, a pre-processing package for performing data pre-processing at the second radio node, the data pre-processing performed prior to generating the local model, wherein the output of performing the data pre-processing is the input for the local training package.
  • Example Embodiment D7 The method of Example Embodiment D6, further comprising receiving, from the first radio node, at least one policy for performing the data preprocessing.
  • Example Embodiment D8 The method of Example Embodiment D7, wherein the at least one policy for performing the data pre-processing is received with the pre-processing package.
  • Example Embodiment D9 The method of Example Embodiment D7, wherein the at least one policy for performing the data pre-processing is received separately from the preprocessing package.
• Example Embodiment D10 The method of any one of Example Embodiments D7 to D9, wherein the at least one policy for performing the data pre-processing is concealed from the second radio node.
• Example Embodiment D11 The method of any one of Example Embodiments D6 to D10, wherein operations associated with the pre-processing package for performing the data pre-processing are concealed from the second radio node.
• Example Embodiment D12 The method of any one of Example Embodiments D1 to D11, wherein operations associated with the local training package for performing the local training and/or generating the local model are concealed from the second radio node.
• Example Embodiment D13 The method of any one of Example Embodiments D1 to D12, wherein the second radio node is a UE.
• Example Embodiment D14 The method of Example Embodiment D13, further comprising: providing user data; and forwarding the user data to a host via the transmission to the network node.
• Example Embodiment D15 The method of any one of Example Embodiments D1 to D12, wherein the second radio node is a network node.
• Example Embodiment D16 The method of Example Embodiment D15, further comprising: obtaining user data; and forwarding the user data to a host or a user equipment.
• Example Embodiment D17 A second radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments D1 to D16.
• Example Embodiment D18 A second radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments D1 to D16.
• Example Embodiment D19. A computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments D1 to D16.
• Example Embodiment D20 A computer program product comprising a computer program, the computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments D1 to D16.
• Example Embodiment D21 A non-transitory computer readable medium storing instructions which when executed by a computer perform any of the methods of Example Embodiments D1 to D16.
• Example Embodiment E1 A UE for concealed FL, the UE comprising: processing circuitry configured to perform any of the steps of any of the Group A, C, and D Example Embodiments; and power supply circuitry configured to supply power to the processing circuitry.
  • Example Embodiment E2 A network node for concealed FL, the network node comprising: processing circuitry configured to perform any of the steps of any of the Group B, C, and D Example Embodiments; power supply circuitry configured to supply power to the processing circuitry.
  • Example Embodiment E3 A UE for concealed FL, the UE comprising: an antenna configured to send and receive wireless signals; radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE.
  • Example Embodiment E4 A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a cellular network for transmission to a UE, wherein the UE comprises a communication interface and processing circuitry, the communication interface and processing circuitry of the UE being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments to receive the user data from the host.
  • Example Embodiment E5 The host of the previous Example Embodiment, wherein the cellular network further includes a network node configured to communicate with the UE to transmit the user data to the UE from the host.
  • Example Embodiment E6 The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.
  • Example Embodiment E7 A method implemented by a host operating in a communication system that further includes a network node and a UE, the method comprising: providing user data for the UE; and initiating a transmission carrying the user data to the UE via a cellular network comprising the network node, wherein the UE performs any of the operations of any of the Group A embodiments to receive the user data from the host.
  • Example Embodiment E8 The method of the previous Example Embodiment, further comprising: at the host, executing a host application associated with a client application executing on the UE to receive the user data from the UE.
  • Example Embodiment E9 The method of the previous Example Embodiment, further comprising: at the host, transmitting input data to the client application executing on the UE, the input data being provided by executing the host application, wherein the user data is provided by the client application in response to the input data from the host application.
  • Example Embodiment E10 A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a cellular network for transmission to a UE, wherein the UE comprises a communication interface and processing circuitry, the communication interface and processing circuitry of the UE being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments to transmit the user data to the host.
  • Example Embodiment E11 The host of the previous Example Embodiment, wherein the cellular network further includes a network node configured to communicate with the UE to transmit the user data from the UE to the host.
  • Example Embodiment E12 The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.
  • Example Embodiment E13 A method implemented by a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: at the host, receiving user data transmitted to the host via the network node by the UE, wherein the UE performs any of the steps of any of the Group A, C, and D Example Embodiments to transmit the user data to the host.
  • Example Embodiment E14 The method of the previous Example Embodiment, further comprising: at the host, executing a host application associated with a client application executing on the UE to receive the user data from the UE.
  • Example Embodiment E15 The method of the previous Example Embodiment, further comprising: at the host, transmitting input data to the client application executing on the UE, the input data being provided by executing the host application, wherein the user data is provided by the client application in response to the input data from the host application.
  • Example Embodiment E16 A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a network node in a cellular network for transmission to a UE, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.
  • Example Embodiment E17 The host of the previous Example Embodiment, wherein: the processing circuitry of the host is configured to execute a host application that provides the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application to receive the transmission of user data from the host.
  • Example Embodiment E18 A method implemented in a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: providing user data for the UE; and initiating a transmission carrying the user data to the UE via a cellular network comprising the network node, wherein the network node performs any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.
  • Example Embodiment E19 The method of the previous Example Embodiment, further comprising, at the network node, transmitting the user data provided by the host for the UE.
  • Example Embodiment E20 The method of any of the previous 2 Example Embodiments, wherein the user data is provided at the host by executing a host application that interacts with a client application executing on the UE, the client application being associated with the host application.
  • Example Embodiment E21 A communication system configured to provide an over-the-top service, the communication system comprising: a host comprising: processing circuitry configured to provide user data for a user equipment (UE), the user data being associated with the over-the-top service; and a network interface configured to initiate transmission of the user data toward a cellular network node for transmission to the UE, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.
  • Example Embodiment E22 The communication system of the previous Example Embodiment, further comprising: the network node; and/or the user equipment.
  • Example Embodiment E23 A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to initiate receipt of user data; and a network interface configured to receive the user data from a network node in a cellular network, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to receive the user data from a UE for the host.
  • Example Embodiment E24 The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.
  • Example Embodiment E25 The host of any of the previous 2 Example Embodiments, wherein the initiating receipt of the user data comprises requesting the user data.
  • Example Embodiment E26. A method implemented by a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: at the host, initiating receipt of user data from the UE, the user data originating from a transmission which the network node has received from the UE, wherein the network node performs any of the steps of any of the Group B, C, and D Example Embodiments to receive the user data from the UE for the host.
  • Example Embodiment E27 The method of the previous Example Embodiment, further comprising at the network node, transmitting the received user data to the host.
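
For illustration only, the concealed data pre-processing of Example Embodiments D6 to D11 can be pictured with the minimal Python sketch below. It is not the claimed method: the class name PreprocessingPackage, the clipping/normalization policy, and the use of Fernet symmetric encryption (from the third-party cryptography library) are assumptions chosen for brevity. In a real deployment the concealment could instead rely on code obfuscation, a trusted execution environment, or similar means; the point shown is that the second radio node can apply the package to its local data without being able to read the policy the package carries.

    import json
    import numpy as np
    from cryptography.fernet import Fernet  # symmetric authenticated encryption


    class PreprocessingPackage:
        """Shipped by the master training agent; the local node runs it as a black box."""

        def __init__(self, key: bytes, concealed_policy: bytes):
            self._cipher = Fernet(key)                 # key embedded by the master
            self._concealed_policy = concealed_policy  # policy hidden from the local node

        def apply(self, samples: np.ndarray) -> np.ndarray:
            # The policy is only ever deciphered inside the package.
            policy = json.loads(self._cipher.decrypt(self._concealed_policy))
            clipped = np.clip(samples, policy["clip_min"], policy["clip_max"])
            return (clipped - policy["mean"]) / policy["std"]


    # Master side: conceal the policy, then ship the package to the local node.
    key = Fernet.generate_key()
    policy = {"clip_min": -3.0, "clip_max": 3.0, "mean": 0.5, "std": 2.0}
    package = PreprocessingPackage(key, Fernet(key).encrypt(json.dumps(policy).encode()))

    # Local (second radio node) side: pre-process local data without seeing the policy.
    print(package.apply(np.array([-5.0, 0.5, 4.2])))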

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method (1300) performed by a first radio node operating as a master training agent for concealed learning comprises transmitting (1302), to a second radio node operating as a local training agent, one or more packages for performing local training and/or generating a local model. The one or more packages are transmitted in a concealed format that prevents the second radio node from deciphering the one or more packages. The first radio node receives (1304) the local model from the second radio node. The local model is received in a concealed format that only the first radio node operating as the master training agent is able to decipher. Based at least on the local model received from the second radio node, the first radio node generates (1306) a master model and transmits (1308) the master model to the second radio node.
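
The round summarized above can be pictured, purely for illustration, with the minimal Python sketch below. It is a toy model, not the claimed method: the flat numpy parameter vector, the single least-squares gradient step, the class names TrainingPackage and MasterAgent, and the use of Fernet symmetric encryption are all assumptions. The sketch further assumes the local training agent treats the received package as opaque (as it would be with obfuscated code or a trusted execution environment), so it can execute the package on its local data but cannot decipher the package contents or the concealed local model it returns; only the master training agent holds the key needed to decipher and aggregate the local models.

    import numpy as np
    from cryptography.fernet import Fernet


    class TrainingPackage:
        """Sent by the master in a concealed format; executed, not inspected, by the local node."""

        def __init__(self, key: bytes, concealed_weights: bytes, lr: float = 0.1):
            self._cipher = Fernet(key)
            self._concealed_weights = concealed_weights
            self._lr = lr

        def run(self, X: np.ndarray, y: np.ndarray) -> bytes:
            # Local training happens inside the package, on the node's own data.
            w = np.frombuffer(self._cipher.decrypt(self._concealed_weights))
            grad = X.T @ (X @ w - y) / len(y)             # one least-squares gradient step
            w_local = w - self._lr * grad
            return self._cipher.encrypt(w_local.tobytes())  # concealed local model


    class MasterAgent:
        def __init__(self, dim: int):
            self._key = Fernet.generate_key()             # never shared in the clear
            self._cipher = Fernet(self._key)
            self.master_model = np.zeros(dim)

        def make_package(self) -> TrainingPackage:
            concealed = self._cipher.encrypt(self.master_model.tobytes())
            return TrainingPackage(self._key, concealed)

        def aggregate(self, concealed_local_models):
            # Only the master can decipher the local models; average them (FedAvg-style).
            locals_ = [np.frombuffer(self._cipher.decrypt(m)) for m in concealed_local_models]
            self.master_model = np.mean(locals_, axis=0)
            return self.master_model


    # One round with two local training agents.
    rng = np.random.default_rng(0)
    master = MasterAgent(dim=3)
    concealed_models = []
    for _ in range(2):
        X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
        concealed_models.append(master.make_package().run(X, y))
    print(master.aggregate(concealed_models))  # updated master model, sent back to the local nodes
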
PCT/SE2023/050074 2022-01-28 2023-01-27 Apprentissage dissimulé WO2023146461A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263304047P 2022-01-28 2022-01-28
US63/304,047 2022-01-28

Publications (1)

Publication Number Publication Date
WO2023146461A1 true WO2023146461A1 (fr) 2023-08-03

Family

ID=85172408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2023/050074 WO2023146461A1 (fr) 2022-01-28 2023-01-27 Apprentissage dissimulé

Country Status (1)

Country Link
WO (1) WO2023146461A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018111270A1 (fr) * 2016-12-15 2018-06-21 Schlumberger Technology Corporation Systèmes et procédés pour générer, déployer, découvrir et gérer des progiciels de modèle d'apprentissage machine
WO2021029797A1 (fr) * 2019-08-15 2021-02-18 Telefonaktiebolaget Lm Ericsson (Publ) Nœuds de réseau et procédés de gestion de modèles d'apprentissage automatique dans un réseau de communication
WO2021056043A1 (fr) * 2019-09-23 2021-04-01 Presagen Pty Ltd Système d'entrainement d'apprentissage machine / d'intelligence artificielle (ia) décentralisée
US20210312336A1 (en) * 2020-04-03 2021-10-07 International Business Machines Corporation Federated learning of machine learning model features

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
B. MCMAHAN ET AL., FEDERATED LEARNING OF DEEP NETWORKS USING MODEL AVERAGING
BO LIU ET AL: "When Machine Learning Meets Privacy: A Survey and Outlook", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 November 2020 (2020-11-24), XP081821133 *
H. B. MCMAHAN ET AL.: "Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS, 2017
QUOC-VIET PHAM ET AL: "A Survey of Multi-Access Edge Computing in 5G and Beyond: Fundamentals, Technology Integration, and State-of-the-Art", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 June 2019 (2019-06-20), XP081378326 *

Similar Documents

Publication Publication Date Title
WO2023191682A1 (fr) Gestion de modèles d'intelligence artificielle/d'apprentissage machine entre des nœuds radio sans fil
WO2023203240A1 (fr) Cas d'utilisation du découpage du réseau pour l'accès sans fil fixe (fwa)
WO2023022642A1 (fr) Signalisation de surchauffe prédite d'ue
WO2022255930A1 (fr) Procédés et appareil prenant en charge une configuration de vlan ethernet dynamique dans un système de cinquième génération
WO2023146461A1 (fr) Apprentissage dissimulé
WO2024125362A1 (fr) Procédé et appareil de commande de liaison de communication entre dispositifs de communication
US20230039795A1 (en) Identifying a user equipment, ue, for subsequent network reestablishment after a radio link failure during an initial network establishment attempt
WO2023230993A1 (fr) Procédé et appareil pour un élément de veille et un élément actif dans une grappe
EP4381812A1 (fr) Approches de signalisation pour plmn de catastrophe
WO2023147870A1 (fr) Prédiction de variable de réponse dans un réseau de communication
WO2023104353A1 (fr) Prise en charge configurable de ressources génériques de gestionnaire d'infrastructure virtualisé
WO2024063692A1 (fr) Gestion de signalisation de positionnement associée à un dispositif de communication au moyen d'une fonction de gestion d'accès et de mobilité locale
WO2024075129A1 (fr) Gestion d'agents séquentiels dans une infrastructure cognitive
WO2023061980A1 (fr) Optimisation d'architecture basée sur un service 5gc de sélection de saut suivant en itinérance qui est un mandataire de protection de périphérie de sécurité (sepp)
WO2024038340A1 (fr) Connexions en mode relais dans un réseau de communication
WO2023214378A1 (fr) Détection depuis de le sol et évitement d'objets aériens pour un emplacement
WO2023166448A1 (fr) Rapport de mesurage b1/a4 optimisé
WO2024096805A1 (fr) Communication basée sur un partage d'identifiant de configuration de réseau
WO2023088903A1 (fr) Indication de disponibilité pour l'utilisation de ressources logicielles d'un domaine temporel et d'un domaine fréquentiel à accès et raccordement intégrés
WO2024117960A1 (fr) Filtre de liste de bandes de fréquences appliquées prédéfinies
WO2023187678A1 (fr) Gestion de modèle de machine d'équipement utilisateur assistée par réseau d'apprentissage
WO2024072302A1 (fr) Mappage de ressources pour une liaison montante basée sur l'ia
WO2023239287A1 (fr) Apprentissage machine permettant une optimisation d'un réseau d'accès radio
WO2023057849A1 (fr) Ré- entraînement de modèle d'apprentissage automatique (ml) dans un réseau cœur 5g
WO2023209566A1 (fr) Gestion de partitions et de priorités d'accès aléatoire

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23703373

Country of ref document: EP

Kind code of ref document: A1