WO2023147870A1 - Response variable prediction in a communication network - Google Patents


Info

Publication number
WO2023147870A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP2022/052712
Other languages
French (fr)
Inventor
Jalil TAGHIA
Valentin Kulyk
Selim ICKIN
Mats Folkesson
Jörgen GUSTAFSSON
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2022/052712 priority Critical patent/WO2023147870A1/en
Publication of WO2023147870A1 publication Critical patent/WO2023147870A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Definitions

  • the present application relates to response variable prediction, such as may be usable for prediction of a response variable in a communication network.
  • Model-based prediction exploits a model for predicting a response variable. Training of the model involves adapting the model as needed for the model’s response variable predictions to align with training data. That is, the model is adapted so that its predictions “fit” well to the training data. How well the model’s predictions fit with the training data is typically represented with a so-called loss function that computes the loss between the predictions and the training data. In this case, the objective of model training is to minimize the loss.
  • Training a model in this way proves challenging, though, if a local solution to loss minimization is to make predictions that generally repeat past predictions. In this case, training the model to repeat past predictions achieves a low loss locally but jeopardizes the model’s ability to predict changes in the response variable. A model that lacks the ability to predict response variable changes is degenerate and risks making predictions that are misleading.
  • For example, a model that predicts weather can be trained to minimize loss by predicting the weather for tomorrow to be the same as the weather for today. Indeed, over the course of a year, such predictions in fact minimize loss since many consecutive days will have similar weather. But the resulting model will be degenerate because it will not be able to predict weather changes.
  • Model degeneracy proves quite problematic in some contexts, such as where the response variable relates to a communication network. For example, if the response variable reflects a performance indicator of the communication network or an alarm about poor performance, a degenerate model’s predictions of the response variable would be unreliable and jeopardize the communication network’s performance.
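The persistence failure mode described above is easy to reproduce. The following sketch is illustrative only; the series, the noise level, and the use of mean squared error as the loss are assumptions made for this example, not part of the embodiments. It shows that a degenerate model that merely repeats yesterday's observation attains a low loss on a slowly varying series, even though its predicted day-over-day change is identically zero:

```python
import numpy as np

# Illustrative only: a slowly varying daily series (amplitude and noise
# levels are assumptions made for this example).
rng = np.random.default_rng(0)
t = np.arange(365)
temperature = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, 365)

# Degenerate "persistence" model: predict tomorrow to equal today.
persistence = temperature[:-1]
target = temperature[1:]
mse = np.mean((persistence - target) ** 2)

# The loss is small because consecutive days are similar, yet the model's
# predicted day-over-day change is identically zero: it can never forecast
# a change in the response variable.
predicted_change = persistence - temperature[:-1]   # always 0
actual_change = np.diff(temperature)
```

The small `mse` relative to the variance of the series is exactly the local minimum discussed above: the model fits well on average yet cannot anticipate any change.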
  • Some embodiments herein apply regularization to a loss function to reduce model degeneracy. Some embodiments in this regard exploit regularization to penalize response variable predictions by the model that would repeat past predictions. The regularized loss function thereby discourages repeat predictions. By discouraging repeat predictions in this way, some embodiments advantageously mitigate model degeneracy and improve prediction reliability. In the context of a communication network, improving prediction reliability may advantageously improve the network’s performance. More particularly, embodiments herein include a method comprising computing a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model. In some embodiments, computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable. The method also comprises adapting the model based on the regularized loss.
  • said computing comprises penalizing predictions of the response variable to a greater extent the greater the extent to which the predictions repeat past predictions of the response variable.
  • said computing comprises computing the regularized loss as a function of a prediction repeat penalty.
  • the prediction repeat penalty is a negative loss between predictions of the response variable and past predictions of the response variable.
  • the prediction repeat penalty comprises Σ_{i=1..T} −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W), where, for each time window i among T time windows, −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is the negative loss between predictions of the response variable over the time window i and past predictions of the response variable over a previous time window i − 1.
  • said computing comprises scaling the prediction repeat penalty by a prediction repeat hyperparameter.
  • said computing comprises computing the regularized loss also as a function of a prediction inaccuracy loss.
  • the prediction inaccuracy loss is a loss between predictions of the response variable and observations of the response variable.
  • the prediction inaccuracy loss comprises ℓ(Ŷ_0:W, Y_0:W) + Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W), where, for each time window i among T time windows, ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) is a loss between predictions of the response variable over the time window i and observations of the response variable over the time window i.
  • said computing comprises scaling at least a portion of the prediction inaccuracy loss by a prediction inaccuracy hyperparameter.
  • said computing comprises computing the regularized loss as ℓ_reg = ℓ(Ŷ_0:W, Y_0:W) + (1 − β)·Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) − β·Σ_{i=1..T} ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W). In this case, ℓ_reg is the regularized loss.
  • (1 − β) is a prediction inaccuracy hyperparameter.
  • ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) is a loss between predictions of the response variable over the time window i and observations of the response variable over the time window i.
  • β is a prediction repeat hyperparameter.
  • for each time window i among T time windows, −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is a negative loss between predictions of the response variable over the time window i and past predictions of the response variable over a previous time window i − 1.
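For illustration only, the regularized loss defined above can be sketched as follows, with mean squared error standing in for the generic loss ℓ; the function name, the flat array layout, and the choice of loss are assumptions made for the example, not part of the embodiments:

```python
import numpy as np

def regularized_loss(preds, obs, W, T, beta,
                     loss=lambda a, b: np.mean((a - b) ** 2)):
    """Regularized loss over T + 1 time windows of width W.

    preds, obs: arrays of length (T + 1) * W holding predictions and
    observations; beta is the prediction repeat hyperparameter.
    """
    # Prediction inaccuracy loss: initial window unscaled, subsequent
    # windows scaled by (1 - beta).
    l_reg = loss(preds[0:W], obs[0:W])
    for i in range(1, T + 1):
        lo, hi = i * W, (i + 1) * W
        l_reg += (1 - beta) * loss(preds[lo:hi], obs[lo:hi])
        # Prediction repeat penalty: subtract beta times the loss between
        # this window's predictions and the previous window's predictions,
        # so repeating predictions leaves l_reg larger than it would
        # otherwise be.
        l_reg -= beta * loss(preds[lo - W:lo], preds[lo:hi])
    return l_reg
```

With beta = 0 this reduces to an ordinary accumulated loss; as beta grows, predictions that merely repeat the previous window's predictions are increasingly disfavored.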
  • adapting the model comprises adapting the model as needed to minimize the regularized loss.
  • the method further comprises predicting the response variable using the adapted model.
  • said computing and adapting is performed for each of multiple iterations as part of an iterative procedure to train the model.
  • the method further comprises adapting how the regularized loss is computed between iterations as needed for predictions of the response variable by the model to converge with observations of the response variable in a training dataset.
  • the response variable is a performance indicator for the communication network or a service-level metric in the communication network.
  • Other embodiments herein include equipment configured to compute a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model.
  • computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable.
  • the equipment is also configured to adapt the model based on the regularized loss.
  • the equipment is configured to perform the steps described above.
  • a carrier containing the computer program is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • Other embodiments herein include equipment comprising processing circuitry configured to compute a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model.
  • computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable.
  • the processing circuitry is also configured to adapt the model based on the regularized loss.
  • the processing circuitry is configured to perform the steps described above.
  • the present invention is not limited to the above features and advantages. Indeed, those skilled in the art will recognize additional features and advantages upon reading the following detailed description, and upon viewing the accompanying drawings.
  • Figure 1 is a block diagram of equipment for loss regularization according to some embodiments.
  • Figure 2 shows graphs of regularized loss versus unregularized loss according to some embodiments.
  • Figure 3 is a block diagram of regularized loss computation according to some embodiments.
  • Figure 4 is a block diagram of equipment for loss regularization according to other embodiments.
  • Figure 5 is a block diagram of equipment for loss regularization according to yet other embodiments.
  • Figure 6 is a block diagram of a loss regulator according to some embodiments.
  • Figure 7 is a logic flow diagram of steps performed by a loss regulator according to some embodiments.
  • Figure 8 is a logic flow diagram of a method performed by equipment according to some embodiments.
  • Figure 9 is a block diagram of equipment according to some embodiments.
  • Figure 10 is a block diagram of a communication system in accordance with some embodiments.
  • Figure 11 is a block diagram of a user equipment according to some embodiments.
  • Figure 12 is a block diagram of a network node according to some embodiments.
  • Figure 13 is a block diagram of a host according to some embodiments.
  • Figure 14 is a block diagram of a virtualization environment according to some embodiments.
  • Figure 15 is a block diagram of a host communicating via a network node with a UE over a partially wireless connection in accordance with some embodiments.
  • Figure 1 shows a communication network 10 according to some embodiments.
  • the communication network 10 provides communication service to one or more communication devices 12.
  • the communication network 10 is a wireless communication network that provides wireless communication service to one or more wireless communication devices.
  • Figure 1 further shows a response variable 14 in the communication network 10, e.g., as generated by one or more network nodes in the communication network 10.
  • the response variable 14 may for example be a performance indicator for the communication network 10.
  • the performance indicator may indicate the value of a performance metric characterizing performance of the communication network 10 as a whole, performance of an individual cell or base station in the communication network 10, or performance of an individual communication device 12 served by the communication network 10.
  • the performance indicator may for instance include (i) a key performance indicator measuring utilization of transmission resources in the communication network 10; (ii) a key performance indicator measuring availability of the communication network 10 to users; (iii) a key performance indicator measuring whether and/or how well services requested by users can be accessed; (iv) a key performance indicator measuring energy consumption by the communication network 10; or (v) a key performance indicator measuring call drops in the communication network 10.
  • the response variable 14 may be a service-level metric in the communication network 10, e.g., so as to indicate the value of a metric that characterizes a service provided by the communication network 10 or a cell in the communication network 10.
  • the service-level metric may for example reflect the number of users in a cell or otherwise indicate cell load.
  • Figure 1 shows a model 16 that is trained to make predictions 18 of the response variable 14 over time. Making predictions 18 of the response variable 14 in this way, the model 16 may also be referred to as a forecast model, with the predictions 18 over time constituting a sort of “forecast” of the response variable 14. Regardless of the particular terminology, the model 16 in some embodiments may be implemented as a machine learning (ML) model so that the model 16 can be trained via machine learning.
  • Equipment 15 in Figure 1 is configured to train the model 16.
  • the equipment 15 as shown includes a model adapter 17 that adapts the model 16 as needed for the predictions 18 of the model 16 to fit well with training data 20.
  • the training data 20 in this regard includes observations 22 of the response variable 14, e.g., known, real-world observations 22 exploited for training.
  • Training the model 16 may for instance be an iterative process whereby the equipment 15 iteratively adapts the model 16 over multiple rounds of training until the predictions 18 of the model 16 acceptably converge with the observations 22 from the training data 20. How well the model’s predictions 18 fit with the observations 22 for convergence is represented by the so-called loss between the observations 22 and the predictions 18.
  • the equipment 15 in some embodiments thereby trains the model 16 by adapting the model 16 as needed to minimize the loss between the observations 22 and the predictions 18.
  • the equipment 15 adapts the model 16 based on a regularized loss 24 between the observations 22 and the predictions 18 of the response variable 14 over time.
  • the regularized loss 24 is regularized in the sense that it is computed in a way that penalizes certain predictions 18. By penalizing certain predictions 18, the regularized loss 24 attributable to those predictions 18 is larger so as to discourage them.
  • the equipment 15 more particularly penalizes predictions 18 based on an extent to which the predictions 18 repeat past predictions of the response variable 14. In some embodiments, then, the equipment 15 penalizes predictions 18 of the response variable 14 to a greater extent the greater the extent to which the predictions 18 repeat past predictions of the response variable 14.
  • the equipment 15 computes the regularized loss 24 in a way that artificially increases the loss when predictions are repeated over time, so as to thereby discourage repeat predictions and/or encourage independent predictions. Training the model 16 based on such regularized loss 24 means the trained model will be less prone to degeneracy.
  • embodiments herein may exploit regularization for reducing model degeneracy in this way, in addition to or as an alternative to exploiting regularization for reducing model overfitting.
  • exploiting regularization for reducing model degeneracy involves penalizing repeat predictions.
  • exploiting regularization for reducing model overfitting involves penalizing complex models.
  • Figure 2 shows a graphical depiction of the advantages realized from training the model 16 based on the regularized loss 24 as compared to an unregularized loss.
  • if the model 16 were to be trained based on an unregularized loss 13, the resulting predictions 11 from the model 16 would tend to repeat past predictions.
  • although the unregularized loss 13 would be low, the model 16 would not predict changes in observations 22 of the response variable 14 well so as to track response variable trends.
  • when the model 16 is instead trained based on the regularized loss 24, the resulting predictions 18 from the model 16 are more independent from one another. The model 16 thereby predicts changes in observations 22 of the response variable 14 well so as to track response variable trends.
  • the equipment 15 may train the model 16 in this way through use of a regularized loss computer 19.
  • the regularized loss computer 19 computes the regularized loss 24 between the observations 22 of the response variable 14 and the predictions 18 of the response variable 14 made by the model 16.
  • the equipment 15 further includes a prediction repeat penalizer 21 that computes a prediction repeat penalty 26.
  • the regularized loss computer 19 computes the regularized loss 24 as a function of this prediction repeat penalty 26.
  • the impact of computing the regularized loss 24 as a function of the prediction repeat penalty 26 is that predictions 18 which repeat past predictions of the response variable 14 are penalized in the loss computation, i.e., repeating predictions means the regularized loss 24 will be greater than it would have been otherwise.
  • the prediction repeat penalty 26 takes the form of a negative loss between predictions and past predictions, e.g., where this negative loss is added to the loss otherwise computed. In this case, if predictions are repeated to a relatively small extent, the prediction repeat penalty 26 will be a relatively large negative loss that operates to decrease the regularized loss 24 to a relatively large degree. On the other hand, if predictions are repeated to a relatively large extent, the prediction repeat penalty 26 will be a relatively small negative loss that operates to decrease the regularized loss 24 to a relatively small degree. Effectively, then, the prediction repeat penalty 26 is a penalty in the sense that it reduces the regularized loss 24 to a lesser extent the more predictions are repetitive, i.e., the penalty for repeating predictions is a larger regularized loss than would have otherwise been realized.
  • Figure 3 shows one example implementation.
  • the model 16 provides predictions 18 of the response variable 14 over multiple time windows.
  • Predictions 18 of the response variable 14 over time window 0 are represented as Ŷ_0:W.
  • predictions 18 of the response variable 14 over time window 1 are represented as Ŷ_W:2W, and so on. Accordingly, −ℓ(Ŷ_0:W, Ŷ_W:2W) is the negative loss between the predictions 18 over time window 1 and the past predictions over time window 0.
  • the prediction repeat penalty 26 may be represented in some embodiments as the sum of these negative losses over the time windows, i.e., Σ_{i=1..T} −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W), where, for each time window i among T time windows, −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is the negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window (i − 1).
  • the prediction repeat penalty 26 applies on a time window basis. So, if one prediction 18 in a time window does not repeat a past prediction, the window of predictions might still be penalized if the overall trend within that time window is to repeat past predictions.
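The window-wise prediction repeat penalty 26 described above may be sketched as follows; this is illustrative only, with mean squared error standing in for the generic loss ℓ and the function name an assumption:

```python
import numpy as np

def prediction_repeat_penalty(preds, W, T,
                              loss=lambda a, b: np.mean((a - b) ** 2)):
    """Sum over time windows i = 1..T of the negative loss between the
    window's predictions and the previous window's predictions."""
    penalty = 0.0
    for i in range(1, T + 1):
        prev_window = preds[(i - 1) * W : i * W]
        cur_window = preds[i * W : (i + 1) * W]
        penalty -= loss(prev_window, cur_window)
    return penalty
```

A fully repetitive window contributes zero, i.e., no reduction of the regularized loss and hence maximal penalty, while a window whose predictions differ substantially from the previous window contributes a larger negative value. A single non-repeating prediction inside an otherwise repetitive window only slightly enlarges the magnitude of that window's contribution, so the window is still penalized overall.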
  • the prediction repeat penalty 26 in some embodiments may be scaled, e.g., so as to scale the impact that the prediction repeat penalty 26 has on the regularized loss 24 as part of training the model 16.
  • Figure 4 for example shows that equipment 15 in some embodiments further includes a scaler 25 interposed between the prediction repeat penalizer 21 and the regularized loss computer 19.
  • the scaler 25 scales the prediction repeat penalty 26 by a prediction repeat hyperparameter 27 to obtain a scaled prediction repeat penalty 26S.
  • the regularized loss computer 19 then computes the regularized loss 24 as a function of this scaled prediction repeat penalty 26S.
  • the equipment 15 scales the prediction repeat penalty 26 to adapt how the regularized loss 24 is computed between iterations of model training.
  • the equipment 15 in this regard may adapt the prediction repeat hyperparameter 27 as needed in order for predictions 18 of the response variable 14 to converge with observations 22 of the response variable 14 in the training dataset 20.
  • Figure 5 illustrates additional details of regularized loss computation according to still other embodiments.
  • the regularized loss computer 19 further includes a prediction inaccuracy loss computer 30.
  • the prediction inaccuracy loss computer 30 computes a prediction inaccuracy loss 31 as a loss between predictions 18 of the response variable 14 and observations 22 of the response variable 14.
  • the prediction inaccuracy loss computer 30 computes the prediction inaccuracy loss 31 as ℓ(Ŷ_0:W, Y_0:W) + Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W).
  • the regularized loss computer 19 also includes a scaler 32 that scales at least a portion of the prediction inaccuracy loss 31 by a prediction inaccuracy hyperparameter 35.
  • the resulting scaled prediction inaccuracy loss is ℓ(Ŷ_0:W, Y_0:W) + (1 − β)·Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W), where the latter term of the prediction inaccuracy loss is scaled by (1 − β), with (1 − β) being the prediction inaccuracy hyperparameter 35.
  • the regularized loss computer 19 may adapt this hyperparameter 35 as needed between iterations of model training.
  • the regularized loss computer 19 adds the prediction inaccuracy loss 31 , as scaled, to the scaled prediction repeat penalty 26S, to obtain the regularized loss 24.
  • the regularized loss 24 is computed as: ℓ_reg = ℓ(Ŷ_0:W, Y_0:W) + (1 − β)·Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) − β·Σ_{i=1..T} ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W), where ℓ_reg is the regularized loss 24, ℓ(Ŷ_0:W, Y_0:W) is the loss between predictions 18 of the response variable 14 over an initial time window and observations 22 of the response variable 14 over the initial time window, (1 − β) is the prediction inaccuracy hyperparameter 35, and, for each time window i among T time windows, ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) is the loss between predictions 18 over the time window i and observations 22 over the time window i, β is the prediction repeat hyperparameter 27, and −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is the negative loss between predictions 18 over the time window i and past predictions over a previous time window i − 1.
  • although the regularized loss 24 above is computed based on the loss between predictions from consecutive time windows, such need not be the case.
  • in other embodiments, the regularized loss 24 is computed based on the loss between predictions over the course of any number of time windows, e.g., according to −ℓ(Ŷ_(i−X)W:(i−X+1)W, Ŷ_iW:(i+1)W), where X is adaptable.
  • the regularized loss 24 may be computed by weighting predictions from different time frames differently, e.g., by weighting a more recent time window differently than a less recent time window.
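Where the generalized expression is abbreviated in the source text, one plausible reading, combining an adaptable lag with per-window weights, may be sketched as follows; the exact form, the function name, and the use of mean squared error are all assumptions:

```python
import numpy as np

def repeat_penalty_lagged(preds, W, T, lags=(1,), weights=None,
                          loss=lambda a, b: np.mean((a - b) ** 2)):
    """Hypothetical generalization: compare each window's predictions with
    windows up to max(lags) windows back, optionally weighting more recent
    windows differently (the exact expression is abbreviated in the text,
    so this is one possible reading, not the disclosed formula)."""
    if weights is None:
        weights = [1.0] * len(lags)
    penalty = 0.0
    for i in range(1, T + 1):
        cur = preds[i * W : (i + 1) * W]
        for lag, w in zip(lags, weights):
            if i - lag >= 0:
                past = preds[(i - lag) * W : (i - lag + 1) * W]
                penalty -= w * loss(past, cur)
    return penalty
```

With `lags=(1,)` and unit weights this reduces to the consecutive-window penalty above; unequal weights realize the idea of weighting a more recent time window differently than a less recent one.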
  • the regularized loss computer 19 may implement a loss regulator 40 as shown in Figure 6.
  • the loss regulator 40 as shown takes as its inputs: (i) the loss function which computes the loss between the predictions 18 of the response variable 14 and the observations 22 of the response variable 14; and (ii) the time window W, also referred to as the forecast window.
  • the loss regulator 40 then computes the regularized loss ℓ_reg, e.g., according to any of the formulas described above.
  • Figure 7 shows the training procedure according to some embodiments.
  • Step 0 is to initialize the loss regulator 40 with a user-defined value of β, the prediction repeat hyperparameter 27.
  • Step 1 is to feed the loss function to the loss regulator 40, which outputs the regularized loss ℓ_reg.
  • Step 2 is to replace the loss of the model 16 with the regularized loss ℓ_reg computed in Step 1. If convergence results, training is complete. Otherwise, if there is no convergence, Steps 1 and 2 are repeated in an iterative fashion.
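Steps 0 through 2 above may be sketched end to end as follows. This is illustrative only: the toy linear forecaster, the finite-difference gradients, and the convergence test are assumptions made for the example, not part of the embodiments:

```python
import numpy as np

# Step 0: user-defined prediction repeat hyperparameter beta, plus
# illustrative window width W and horizon T.
rng = np.random.default_rng(1)
W, T, beta = 4, 2, 0.2
n = (T + 1) * W
obs = np.sin(np.linspace(0, 3, n)) + rng.normal(0, 0.05, n)
x = np.linspace(0, 1, n)        # a single explanatory feature
theta = np.zeros(2)             # toy model: preds = theta[0] + theta[1] * x

def l_reg(theta):
    """Regularized loss of the toy model (MSE stands in for ell)."""
    preds = theta[0] + theta[1] * x
    out = np.mean((preds[:W] - obs[:W]) ** 2)
    for i in range(1, T + 1):
        lo, hi = i * W, (i + 1) * W
        out += (1 - beta) * np.mean((preds[lo:hi] - obs[lo:hi]) ** 2)
        out -= beta * np.mean((preds[lo - W:lo] - preds[lo:hi]) ** 2)
    return out

# Steps 1-2, repeated: train on the regularized loss via finite-difference
# gradient descent until the loss stops improving (convergence).
lr, eps, prev = 0.1, 1e-6, np.inf
for _ in range(500):
    grad = np.array([(l_reg(theta + eps * np.eye(2)[j]) - l_reg(theta)) / eps
                     for j in range(2)])
    theta -= lr * grad
    cur = l_reg(theta)
    if abs(prev - cur) < 1e-9:
        break
    prev = cur
```

Any gradient-based trainer could stand in for the hand-rolled loop; the point is only that the model's ordinary loss has been replaced by ℓ_reg before training.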
  • the loss regulator 40 in some embodiments takes as its input the loss function of the model 16 and modifies the loss such that the modified loss is less prone to degeneracy. Some embodiments therefore exploit a loss regulator 40 which penalizes degenerate solutions and encourages novelty aspects of the model 16.
  • embodiments may advantageously provide a solution to the problem of model degeneracy that is general and does not depend on the choice of the machine learning model or the loss function used for the construction of the model 16. Moreover, embodiments herein help avoid degenerate solutions and at the same time help reduce the degeneracy problem.
  • the response variable 14 herein is a KPI in the communication network 10.
  • the KPI may for example be the prbUtilization, which is a function of the allocated physical resource blocks and the total available physical resource blocks.
  • Other example KPIs, amongst many, are accessibility, retainability, call drops, and energy consumption.
  • although prediction models for such KPIs may learn the overall trend across different times of the time interval, during hours when there is a significant shift from very low traffic to very high traffic (or vice versa), the models may tend to repeat the previous actual value. In those cases, embodiments herein exploit regularization in the learning process to penalize learning from very near prior data points.
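As a concrete illustration of the prbUtilization KPI mentioned above: one common definition is the ratio of allocated to total available physical resource blocks, though the exact definition used here is an assumption, as is the function name:

```python
def prb_utilization(allocated_prbs: int, available_prbs: int) -> float:
    """Fraction of available physical resource blocks (PRBs) that were
    allocated during a measurement period, in [0, 1]."""
    if available_prbs <= 0:
        raise ValueError("available_prbs must be positive")
    return allocated_prbs / available_prbs
```

A response variable of this form stays within [0, 1] and exhibits strong daily traffic cycles, which is precisely the setting in which repeat-prediction degeneracy arises.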
  • the response variable 14 may be unrelated to a communication network 10.
  • the response variable 14 may for example be related to (e.g., characterize performance of) augmented reality, virtual reality, internet-of- senses, a brain-computer interface, or any other context in which response variable prediction is useful.
  • the equipment 15 in Figure 1 is deployed in the communication network 10. In other embodiments, though, the equipment 15 may be deployed outside of the communication network 10, e.g., in the cloud. In still other embodiments, the equipment 15 may be or be part of a communication device 12.
  • Figure 8 depicts a method in accordance with particular embodiments.
  • the method comprises computing a regularized loss 24 between observations 22 of a response variable 14 over time (e.g., in a communication network 10) and predictions 18 of the response variable 14 over time as predicted by a model 16 (Block 100).
  • the response variable 14 is a performance indicator for the communication network 10 or a service-level metric in the communication network 10.
  • computing the regularized loss 24 comprises penalizing predictions 18 of the response variable 14 based on an extent to which the predictions 18 repeat past predictions of the response variable 14.
  • such computing comprises penalizing predictions 18 of the response variable 14 to a greater extent the greater the extent to which the predictions 18 repeat past predictions of the response variable 14.
  • the method also comprises adapting the model 16 based on the regularized loss 24 (Block 110).
  • adaptation may comprise adapting the model 16 as needed to minimize the regularized loss 24.
  • this computing (Block 100) and adapting (Block 110) is performed for each of multiple iterations as part of an iterative procedure to train the model 16.
  • the method further may also comprise adapting how the regularized loss 24 is computed between iterations, e.g., as needed for predictions of the response variable 14 by the model 16 to converge with observations 22 of the response variable 14 in a training dataset.
  • the method further comprises predicting the response variable 14 using the adapted model 16 (Block 120).
  • said computing comprises computing the regularized loss 24 as a function of a prediction repeat penalty 26.
  • the prediction repeat penalty 26 is a negative loss between predictions 18 of the response variable 14 and past predictions of the response variable 14.
  • the prediction repeat penalty 26 comprises Σ_{i=1..T} −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W), where, for each time window i among T time windows, −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is a negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window i − 1.
  • said computing comprises scaling the prediction repeat penalty 26 by a prediction repeat hyperparameter 27.
  • said computing comprises computing the regularized loss 24 also as a function of a prediction inaccuracy loss 31 .
  • the prediction inaccuracy loss 31 is a loss between predictions 18 of the response variable 14 and observations 22 of the response variable 14.
  • the prediction inaccuracy loss 31 comprises ℓ(Ŷ_0:W, Y_0:W) + Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W), where, for each time window i among T time windows, ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) is a loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i.
  • said computing comprises scaling at least a portion of the prediction inaccuracy loss 31 by a prediction inaccuracy hyperparameter 35.
  • in some embodiments, the regularized loss 24 is computed as ℓ_reg = ℓ(Ŷ_0:W, Y_0:W) + (1 − β)·Σ_{i=1..T} ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) − β·Σ_{i=1..T} ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W). In this case, ℓ_reg is the regularized loss 24.
  • ℓ(Ŷ_0:W, Y_0:W) is a loss between predictions 18 of the response variable 14 over an initial time window and observations 22 of the response variable 14 over the initial time window.
  • (1 − β) is a prediction inaccuracy hyperparameter 35.
  • for each time window i among T time windows, ℓ(Ŷ_iW:(i+1)W, Y_iW:(i+1)W) is a loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i.
  • β is a prediction repeat hyperparameter 27.
  • for each time window i among T time windows, −ℓ(Ŷ_(i−1)W:iW, Ŷ_iW:(i+1)W) is a negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window i − 1.
  • Embodiments herein also include corresponding apparatuses.
  • Embodiments herein for instance include equipment 14 configured to perform any of the steps of any of the embodiments described above for the equipment 14.
  • Embodiments also include equipment 14 comprising processing circuitry and power supply circuitry.
  • the processing circuitry is configured to perform any of the steps of any of the embodiments described above for the equipment 14.
  • the power supply circuitry is configured to supply power to the equipment 14.
  • Embodiments further include equipment 14 comprising processing circuitry.
  • the processing circuitry is configured to perform any of the steps of any of the embodiments described above for the equipment 14.
  • the equipment 14 further comprises communication circuitry.
  • Embodiments further include equipment 14 comprising processing circuitry and memory.
  • the memory contains instructions executable by the processing circuitry whereby the equipment 14 is configured to perform any of the steps of any of the embodiments described above for the equipment 14.
  • the apparatuses described above may perform the methods herein and any other processing by implementing any functional means, modules, units, or circuitry.
  • the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures.
  • the circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory.
  • the circuitry may include one or more microprocessor or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
  • the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
  • Figure 9 for example illustrates equipment 14 as implemented in accordance with one or more embodiments.
  • the equipment 14 includes processing circuitry 210 and communication circuitry 220.
  • the communication circuitry 220 is configured to transmit and/or receive information to and/or from one or more other nodes, e.g., via any communication technology.
  • the processing circuitry 210 is configured to perform processing described above, e.g., in Figure 8, such as by executing instructions stored in memory 230.
  • the processing circuitry 210 in this regard may implement certain functional means, units, or modules.
  • a computer program comprises instructions which, when executed on at least one processor of equipment 14, cause the equipment 14 to carry out any of the respective processing described above.
  • a computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
  • Embodiments further include a carrier containing such a computer program.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of equipment 14, cause the equipment 14 to perform as described above.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by equipment 14.
  • This computer program product may be stored on a computer readable recording medium.
  • Figure 10 shows an example of a communication system 1000 in accordance with some embodiments.
  • the communication system 1000 includes a telecommunication network 1002 that includes an access network 1004, such as a radio access network (RAN), and a core network 1006, which includes one or more core network nodes 1008.
  • the access network 1004 includes one or more access network nodes, such as network nodes 1010a and 1010b (one or more of which may be generally referred to as network nodes 1010), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point.
  • the network nodes 1010 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 1012a, 1012b, 1012c, and 1012d (one or more of which may be generally referred to as UEs 1012) to the core network 1006 over one or more wireless connections.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 1000 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 1000 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 1012 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 1010 and other communication devices.
  • the network nodes 1010 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 1012 and/or with other network nodes or equipment in the telecommunication network 1002 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 1002.
  • the core network 1006 connects the network nodes 1010 to one or more hosts, such as host 1016. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 1006 includes one or more core network nodes (e.g., core network node 1008) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1008.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 1016 may be under the ownership or control of a service provider other than an operator or provider of the access network 1004 and/or the telecommunication network 1002, and may be operated by the service provider or on behalf of the service provider.
  • the host 1016 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 1000 of Figure 10 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
  • the telecommunication network 1002 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 1002 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 1002. For example, the telecommunications network 1002 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
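The slice/service selection just described can be illustrated with a toy policy. This is a hedged sketch only: the function name, the latency/rate/device-count thresholds, and the default fallback are all hypothetical illustrations chosen here, not values taken from the embodiments or from any 3GPP specification.

```python
def select_service_type(latency_budget_ms: float, peak_rate_mbps: float,
                        device_count: int) -> str:
    """Pick a slice/service type from coarse UE requirements (illustrative only)."""
    if latency_budget_ms <= 1.0:
        return "URLLC"   # ultra-reliable low-latency communication
    if peak_rate_mbps >= 100.0:
        return "eMBB"    # enhanced mobile broadband
    if device_count >= 10_000:
        return "mMTC"    # massive machine-type communication / massive IoT
    return "eMBB"        # hypothetical default for ordinary broadband traffic
```

A real network would derive the slice from subscription data and standardized slice identifiers rather than raw thresholds; the sketch only shows the one-UE-to-one-service-type mapping the paragraph describes.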
  • the UEs 1012 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 1004 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 1004.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • the hub 1014 communicates with the access network 1004 to facilitate indirect communication between one or more UEs (e.g., UE 1012c and/or 1012d) and network nodes (e.g., network node 1010b).
  • the hub 1014 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 1014 may be a broadband router enabling access to the core network 1006 for the UEs.
  • the hub 1014 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • Commands or instructions may be received from the UEs, network nodes 1010, or by executable code, script, process, or other instructions in the hub 1014.
  • the hub 1014 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 1014 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 1014 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 1014 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 1014 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
  • the hub 1014 may have a constant/persistent or intermittent connection to the network node 1010b.
  • the hub 1014 may also allow for a different communication scheme and/or schedule between the hub 1014 and UEs (e.g., UE 1012c and/or 1012d), and between the hub 1014 and the core network 1006.
  • the hub 1014 is connected to the core network 1006 and/or one or more UEs via a wired connection.
  • the hub 1014 may be configured to connect to an M2M service provider over the access network 1004 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 1010 while still connected via the hub 1014 via a wired or wireless connection.
  • the hub 1014 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1010b.
  • the hub 1014 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 1010b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Other examples include UEs identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, a human user.
  • the UE 1100 includes processing circuitry 1102 that is operatively coupled via a bus 1104 to an input/output interface 1106, a power source 1108, a memory 1110, a communication interface 1112, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 11.
  • the level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 1102 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 1110.
  • the processing circuitry 1102 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 1102 may include multiple central processing units (CPUs).
  • the input/output interface 1106 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 1100.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • the power source 1108 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 1108 may further include power circuitry for delivering power from the power source 1108 itself, and/or an external power source, to the various parts of the UE 1100 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 1108.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 1108 to make the power suitable for the respective components of the UE 1100 to which power is supplied.
  • the memory 1110 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 1110 includes one or more application programs 1114, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1116.
  • the memory 1110 may store, for use by the UE 1100, any of a variety of various operating systems or combinations of operating systems.
  • the memory 1110 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • the memory 1110 may allow the UE 1100 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 1110, which may be or comprise a device-readable storage medium.
  • the processing circuitry 1102 may be configured to communicate with an access network or other network using the communication interface 1112.
  • the communication interface 1112 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1122.
  • the communication interface 1112 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 1118 and/or a receiver 1120 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 1118 and receiver 1120 may be coupled to one or more antennas (e.g., antenna 1122) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 1112 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • a UE may provide an output of data captured by its sensors, through its communication interface 1112, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
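The reporting modes listed above (periodic, randomized to even out load, event-triggered, on request) can be sketched as a simple policy object. This is an illustrative sketch under stated assumptions: the class name, parameters, and report format are hypothetical and do not come from the embodiments.

```python
import random
from typing import Optional


class SensorReporter:
    """Toy UE-side reporting policy (illustrative only)."""

    def __init__(self, period_s: float = 900.0, jitter_s: float = 60.0):
        self.period_s = period_s   # e.g., report roughly every 15 minutes
        self.jitter_s = jitter_s   # randomization to even out load from many sensors
        self._last_report = 0.0

    def due_periodic(self, now: float) -> bool:
        """Periodic reporting with random jitter around the nominal period."""
        jitter = random.uniform(-self.jitter_s, self.jitter_s)
        return now - self._last_report >= self.period_s + jitter

    def report(self, reading: float, now: float, *,
               triggered: bool = False, requested: bool = False) -> Optional[dict]:
        """Emit a report if an event trigger, a request, or the schedule says so."""
        if triggered or requested or self.due_periodic(now):
            self._last_report = now
            return {"value": reading, "time": now}
        return None
```

A continuous-stream UE (e.g., live video) would bypass such a policy entirely and transmit in every cycle; the sketch covers only the discrete reporting modes.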
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • In response to the received wireless input, the states of the actuator, the motor, or the switch may change.
  • For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.
  • a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • Examples of an IoT device include a device which is, or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, etc.
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • any number of UEs may be used together with respect to a single use case.
  • a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
  • Figure 12 shows a network node 1200 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
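The power-based categorization above can be made concrete with a small helper. Note the dBm thresholds below are hypothetical placeholders chosen purely for illustration; the source does not assign numeric boundaries to the femto/pico/micro/macro categories.

```python
def base_station_category(tx_power_dbm: float) -> str:
    """Categorize a base station by transmit power level (illustrative thresholds)."""
    if tx_power_dbm < 20:
        return "femto"   # smallest coverage, e.g., residential cells
    if tx_power_dbm < 30:
        return "pico"    # small indoor or hotspot cells
    if tx_power_dbm < 40:
        return "micro"   # street-level coverage
    return "macro"       # wide-area coverage
```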
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • the network node 1200 includes processing circuitry 1202, a memory 1204, a communication interface 1206, and a power source 1208.
  • the network node 1200 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • the network node 1200 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 1200 may be configured to support multiple radio access technologies (RATs).
  • some components may be duplicated (e.g., separate memory 1204 for different RATs) and some components may be reused (e.g., a same antenna 1210 may be shared by different RATs).
  • the network node 1200 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1200, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1200.
  • the processing circuitry 1202 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1200 components, such as the memory 1204, network node 1200 functionality.
  • the processing circuitry 1202 includes a system on a chip (SOC). In some embodiments, the processing circuitry 1202 includes one or more of radio frequency (RF) transceiver circuitry 1212 and baseband processing circuitry 1214. In some embodiments, the radio frequency (RF) transceiver circuitry 1212 and the baseband processing circuitry 1214 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1212 and baseband processing circuitry 1214 may be on the same chip or set of chips, boards, or units.
  • the memory 1204 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1202.
  • the memory 1204 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1202 and utilized by the network node 1200.
  • the memory 1204 may be used to store any calculations made by the processing circuitry 1202 and/or any data received via the communication interface 1206.
  • the processing circuitry 1202 and memory 1204 are integrated.
  • the communication interface 1206 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1206 comprises port(s)/terminal(s) 1216 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 1206 also includes radio front-end circuitry 1218 that may be coupled to, or in certain embodiments a part of, the antenna 1210. Radio front-end circuitry 1218 comprises filters 1220 and amplifiers 1222.
  • the radio front-end circuitry 1218 may be connected to an antenna 1210 and processing circuitry 1202.
  • the radio front-end circuitry may be configured to condition signals communicated between antenna 1210 and processing circuitry 1202.
  • the radio front-end circuitry 1218 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 1218 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1220 and/or amplifiers 1222.
  • the radio signal may then be transmitted via the antenna 1210.
  • the antenna 1210 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1218.
  • the digital data may be passed to the processing circuitry 1202.
  • the communication interface may comprise different components and/or different combinations of components.
  • the network node 1200 does not include separate radio front-end circuitry 1218, instead, the processing circuitry 1202 includes radio front-end circuitry and is connected to the antenna 1210.
  • all or some of the RF transceiver circuitry 1212 is part of the communication interface 1206.
  • the communication interface 1206 includes one or more ports or terminals 1216, the radio front-end circuitry 1218, and the RF transceiver circuitry 1212, as part of a radio unit (not shown), and the communication interface 1206 communicates with the baseband processing circuitry 1214, which is part of a digital unit (not shown).
  • the antenna 1210 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 1210 may be coupled to the radio front-end circuitry 1218 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 1210 is separate from the network node 1200 and connectable to the network node 1200 through an interface or port.
  • the antenna 1210, communication interface 1206, and/or the processing circuitry 1202 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 1210, the communication interface 1206, and/or the processing circuitry 1202 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 1208 provides power to the various components of network node 1200 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 1208 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1200 with power for performing the functionality described herein.
  • the network node 1200 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1208.
  • the power source 1208 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 1200 may include additional components beyond those shown in Figure 12 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 1200 may include user interface equipment to allow input of information into the network node 1200 and to allow output of information from the network node 1200. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1200.
  • FIG 13 is a block diagram of a host 1300, which may be an embodiment of the host 1016 of Figure 10, in accordance with various aspects described herein.
  • the host 1300 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm.
  • the host 1300 may provide one or more services to one or more UEs.
  • the host 1300 includes processing circuitry 1302 that is operatively coupled via a bus 1304 to an input/output interface 1306, a network interface 1308, a power source 1310, and a memory 1312.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 11 and 12, such that the descriptions thereof are generally applicable to the corresponding components of host 1300.
  • the memory 1312 may include one or more computer programs including one or more host application programs 1314 and data 1316, which may include user data, e.g., data generated by a UE for the host 1300 or data generated by the host 1300 for a UE.
  • Embodiments of the host 1300 may utilize only a subset or all of the components shown.
  • the host application programs 1314 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • the host application programs 1314 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network.
  • the host 1300 may select and/or indicate a different host for over-the-top services for a UE.
  • the host application programs 1314 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
  • FIG 14 is a block diagram illustrating a virtualization environment 1400 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1400 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • if the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized.
  • Applications 1402 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1400 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 1404 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1406 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1408a and 1408b (one or more of which may be generally referred to as VMs 1408), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 1406 may present a virtual operating platform that appears like networking hardware to the VMs 1408.
  • the VMs 1408 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1406.
  • Different embodiments of the instance of a virtual appliance 1402 may be implemented on one or more of VMs 1408, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 1408 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 1408, together with the part of the hardware 1404 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 1408 on top of the hardware 1404 and corresponds to the application 1402.
  • Hardware 1404 may be implemented in a standalone network node with generic or specific components. Hardware 1404 may implement some functions via virtualization. Alternatively, hardware 1404 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1410, which, among others, oversees lifecycle management of applications 1402.
  • hardware 1404 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be provided with the use of a control system 1412 which may alternatively be used for communication between hardware nodes and radio units.
  • Figure 15 shows a communication diagram of a host 1502 communicating via a network node 1504 with a UE 1506 over a partially wireless connection in accordance with some embodiments.
  • Like host 1300, embodiments of host 1502 include hardware, such as a communication interface, processing circuitry, and memory.
  • the host 1502 also includes software, which is stored in or accessible by the host 1502 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as the UE 1506 connecting via an over-the-top (OTT) connection 1550 extending between the UE 1506 and host 1502. In providing the service to the remote user, a host application may provide user data which is transmitted using the OTT connection 1550.
  • the network node 1504 includes hardware enabling it to communicate with the host 1502 and UE 1506.
  • connection 1560 may be direct or pass through a core network (like core network 1006 of Figure 10) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • the UE 1506 includes hardware and software, which is stored in or accessible by UE 1506 and executable by the UE’s processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 1506 with the support of the host 1502.
  • an executing host application may communicate with the executing client application via the OTT connection 1550 terminating at the UE 1506 and host 1502.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 1550 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1550.
  • the OTT connection 1550 may extend via a connection 1560 between the host 1502 and the network node 1504 and via a wireless connection 1570 between the network node 1504 and the UE 1506 to provide the connection between the host 1502 and the UE 1506.
  • the connection 1560 and wireless connection 1570, over which the OTT connection 1550 may be provided, have been drawn abstractly to illustrate the communication between the host 1502 and the UE 1506 via the network node 1504, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 1502 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 1506.
  • the user data is associated with a UE 1506 that shares data with the host 1502 without explicit human interaction.
  • the host 1502 initiates a transmission carrying the user data towards the UE 1506.
  • the host 1502 may initiate the transmission responsive to a request transmitted by the UE 1506. The request may be caused by human interaction with the UE 1506 or by operation of the client application executing on the UE 1506.
  • the transmission may pass via the network node 1504, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1512, the network node 1504 transmits to the UE 1506 the user data that was carried in the transmission that the host 1502 initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the UE 1506 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1506 associated with the host application executed by the host 1502. In some examples, the UE 1506 executes a client application which provides user data to the host 1502. The user data may be provided in reaction or response to the data received from the host 1502.
  • the UE 1506 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of the UE 1506.
  • the UE 1506 initiates, in step 1518, transmission of the user data towards the host 1502 via the network node 1504.
  • the network node 1504 receives user data from the UE 1506 and initiates transmission of the received user data towards the host 1502.
  • the host 1502 receives the user data carried in the transmission initiated by the UE 1506.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 1506 using the OTT connection 1550, in which the wireless connection 1570 forms the last segment.
  • factory status information may be collected and analyzed by the host 1502.
  • the host 1502 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 1502 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 1502 may store surveillance video uploaded by a UE.
  • the host 1502 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • the host 1502 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1502 and/or UE 1506.
  • sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1550 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 1550 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 1504. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 1502.
  • the measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1550 while monitoring propagation times, errors, etc.
  • Although computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
  • some or all of the functionality may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.

Abstract

Equipment computes a regularized loss (24) between observations (22) of a response variable (14) over time in a communication network (10) and predictions (18) of the response variable (14) over time as predicted by a model (16). Computing the regularized loss (24) comprises penalizing predictions (18) of the response variable (14) based on an extent to which the predictions (18) repeat past predictions of the response variable (14). The equipment adapts the model (16) based on the regularized loss (24).

Description

RESPONSE VARIABLE PREDICTION IN A COMMUNICATION NETWORK
TECHNICAL FIELD
The present application relates to response variable prediction, such as may be usable for prediction of a response variable in a communication network.
BACKGROUND
Model-based prediction exploits a model for predicting a response variable. Training of the model involves adapting the model as needed for the model’s response variable predictions to align with training data. That is, the model is adapted so that its predictions "fit” well to the training data. How well the model’s predictions fit with the training data is typically represented with a so-called loss function that computes the loss between the predictions and the training data. In this case, the objective of model training is to minimize the loss.
Training a model in this way proves challenging, though, if a local solution to loss minimization is to make predictions that generally repeat past predictions. In this case, training the model to repeat past predictions achieves a low loss locally but jeopardizes the model’s ability to predict changes in the response variable. A model that lacks the ability to predict response variable changes is degenerate and risks making predictions that are misleading.
For example, a model that predicts weather can be trained to minimize loss by predicting the weather for tomorrow to be the same as the weather for today. Indeed, over the course of a year, such predictions in fact minimize loss since many consecutive days will have similar weather. But the resulting model will be degenerate because it will not be able to predict weather changes.
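This degeneracy is easy to reproduce numerically. In the sketch below (synthetic data and a mean-squared-error loss, both assumed purely for illustration), a trivial persistence predictor that repeats today's value for tomorrow already attains a loss far below the variance of the series, despite never being able to anticipate a change:

```python
import numpy as np

# Synthetic "daily temperature" series: a seasonal sine plus noise.
rng = np.random.default_rng(0)
t = np.arange(365)
weather = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, 365)

# Persistence model: predict tomorrow's value to be today's value.
persistence_pred = weather[:-1]

# Mean-squared-error loss of the persistence predictions.
loss = np.mean((persistence_pred - weather[1:]) ** 2)

# For comparison: the loss of always predicting the series mean,
# which equals the variance of the series.
baseline = np.mean((weather[1:] - weather[1:].mean()) ** 2)
```

Because consecutive days are highly correlated, `loss` is an order of magnitude smaller than `baseline`, so loss minimization alone rewards the degenerate repeat-the-past strategy.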
Challenges therefore exist in training a model to make response variable predictions while also avoiding degeneracy of the model. Model degeneracy proves quite problematic in some contexts, such as where the response variable relates to a communication network. For example, if the response variable reflects a performance indicator of the communication network or an alarm about poor performance, a degenerate model’s predictions of the response variable would be unreliable and jeopardize the communication network’s performance.
SUMMARY
Some embodiments herein apply regularization to a loss function to reduce model degeneracy. Some embodiments in this regard exploit regularization to penalize response variable predictions by the model that would repeat past predictions. The regularized loss function thereby discourages repeat predictions. By discouraging repeat predictions in this way, some embodiments advantageously mitigate model degeneracy and improve prediction reliability. In the context of a communication network, improving prediction reliability may advantageously improve the network’s performance. More particularly, embodiments herein include a method comprising computing a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model. In some embodiments, computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable. The method also comprises adapting the model based on the regularized loss.
In some embodiments, said computing comprises penalizing predictions of the response variable to a greater extent the greater the extent to which the predictions repeat past predictions of the response variable.
In some embodiments, said computing comprises computing the regularized loss as a function of a prediction repeat penalty. In some embodiments, the prediction repeat penalty is a negative loss between predictions of the response variable and past predictions of the response variable. In one or more of these embodiments, the prediction repeat penalty comprises

−Σ_{i=1}^{T} ℒ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}).

For each time window i among T time windows, −ℒ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) is a negative loss between predictions of the response variable over the time window i and past predictions of the response variable over a previous time window i − 1. In one or more of these embodiments, said computing comprises scaling the prediction repeat penalty by a prediction repeat hyperparameter. In some embodiments, said computing comprises computing the regularized loss also as a function of a prediction inaccuracy loss. In some embodiments, the prediction inaccuracy loss is a loss between predictions of the response variable and observations of the response variable. In one or more of these embodiments, the prediction inaccuracy loss comprises

ℒ(Ŷ_{0:w}, Y_{0:w}) + Σ_{i=1}^{T} ℒ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}).

For each time window i among T time windows, ℒ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) is a loss between predictions of the response variable over the time window i and observations of the response variable over the time window i. In one or more of these embodiments, said computing comprises scaling at least a portion of the prediction inaccuracy loss by a prediction inaccuracy hyperparameter.
In some embodiments, said computing comprises computing the regularized loss as

ℒ_reg = ℒ(Ŷ_{0:w}, Y_{0:w}) + Σ_{i=1}^{T} [ (1 − β) ℒ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) − β ℒ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) ].

In this case, ℒ_reg is the regularized loss. In some embodiments, ℒ(Ŷ_{0:w}, Y_{0:w}) is a loss between predictions of the response variable over an initial time window and observations of the response variable over the initial time window. In some embodiments, (1 − β) is a prediction inaccuracy hyperparameter. In some embodiments, for each time window i among T time windows, ℒ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) is a loss between predictions of the response variable over the time window i and observations of the response variable over the time window i. In some embodiments, β is a prediction repeat hyperparameter. In some embodiments, for each time window i among T time windows, −ℒ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) is a negative loss between predictions of the response variable over the time window i and past predictions of the response variable over a previous time window i − 1.
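As a concrete illustration, a regularized loss of this form can be computed as follows, assuming a mean-squared-error base loss ℒ (the disclosure leaves the base loss generic); the function and variable names are illustrative only:

```python
import numpy as np

def regularized_loss(y_pred, y_obs, w, beta):
    """Sketch of the regularized loss L_reg for predictions y_pred and
    observations y_obs split into consecutive windows of length w.
    Mean squared error is assumed as the base loss for illustration."""
    T = len(y_pred) // w - 1  # number of windows after the initial one
    mse = lambda a, b: np.mean((a - b) ** 2)

    # Loss over the initial window: L(Y_hat_{0:w}, Y_{0:w})
    loss = mse(y_pred[0:w], y_obs[0:w])
    for i in range(1, T + 1):
        cur = slice(i * w, (i + 1) * w)     # window i
        prev = slice((i - 1) * w, i * w)    # window i - 1
        # Prediction inaccuracy term, scaled by (1 - beta)
        loss += (1 - beta) * mse(y_pred[cur], y_obs[cur])
        # Prediction repeat penalty: negative loss between current
        # predictions and past predictions, scaled by beta
        loss -= beta * mse(y_pred[cur], y_pred[prev])
    return loss
```

With β = 0 the expression reduces to the ordinary windowed prediction inaccuracy loss; with β > 0, predictions that merely repeat the previous window's predictions forfeit the subtracted repeat term and therefore incur a higher regularized loss than predictions that change between windows.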
In some embodiments, adapting the model comprises adapting the model as needed to minimize the regularized loss.
In some embodiments, the method further comprises predicting the response variable using the adapted model.
In some embodiments, said computing and adapting is performed for each of multiple iterations as part of an iterative procedure to train the model. In this case, the method further comprises adapting how the regularized loss is computed between iterations as needed for predictions of the response variable by the model to converge with observations of the response variable in a training dataset.
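One hedged sketch of such an iterative procedure, using a trivial one-parameter model and a simple annealing heuristic for the prediction repeat hyperparameter β (both are assumptions for illustration, not prescribed by the disclosure):

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def reg_loss(y_pred, y_obs, w, beta):
    # Regularized loss over T windows of length w (MSE assumed as base loss)
    T = len(y_pred) // w - 1
    loss = mse(y_pred[:w], y_obs[:w])
    for i in range(1, T + 1):
        cur, prev = slice(i * w, (i + 1) * w), slice((i - 1) * w, i * w)
        loss += (1 - beta) * mse(y_pred[cur], y_obs[cur])  # inaccuracy term
        loss -= beta * mse(y_pred[cur], y_pred[prev])      # repeat penalty
    return loss

def train(x, y_obs, w, iters=200, beta=0.4, lr=0.01):
    """Illustrative iterative training of a one-parameter model
    y_hat = a * x by gradient descent on the regularized loss.
    Between iterations, how the loss is computed is adapted (here,
    beta is halved) whenever the fit to the observations in the
    training data stops improving -- an assumed convergence heuristic."""
    a, prev_fit, eps = 0.0, np.inf, 1e-5
    for _ in range(iters):
        # Finite-difference gradient of the regularized loss w.r.t. a
        g = (reg_loss((a + eps) * x, y_obs, w, beta)
             - reg_loss((a - eps) * x, y_obs, w, beta)) / (2 * eps)
        a -= lr * g                  # adapt the model based on the loss
        fit = mse(a * x, y_obs)      # unregularized fit to training data
        if fit > prev_fit:
            beta *= 0.5              # adapt how the loss is computed
        prev_fit = fit
    return a, beta
```

In this sketch the repeat penalty initially pulls the parameter away from the value that best fits the observations; shrinking β between iterations lets the predictions converge back toward the training data, mirroring the adaptation described above.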
In some embodiments, the response variable is a performance indicator for the communication network or a service-level metric in the communication network.
Other embodiments herein include equipment configured to compute a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model. In some embodiments, computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable. The equipment is also configured to adapt the model based on the regularized loss.
In some embodiments, the equipment is configured to perform the steps described above.
Other embodiments herein include a computer program comprising instructions which, when executed by at least one processor of equipment, causes the equipment to perform the steps described above. In one or more of these embodiments, a carrier containing the computer program is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
Other embodiments herein include equipment comprising processing circuitry configured to compute a regularized loss between observations of a response variable over time in a communication network and predictions of the response variable over time as predicted by a model. In some embodiments, computing the regularized loss comprises penalizing predictions of the response variable based on an extent to which the predictions repeat past predictions of the response variable. The processing circuitry is also configured to adapt the model based on the regularized loss.
In some embodiments, the processing circuitry is configured to perform the steps described above. Of course, the present invention is not limited to the above features and advantages. Indeed, those skilled in the art will recognize additional features and advantages upon reading the following detailed description, and upon viewing the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram of equipment for loss regularization according to some embodiments.
Figure 2 shows graphs of regularized loss versus unregularized loss according to some embodiments.
Figure 3 is a block diagram of regularized loss computation according to some embodiments.
Figure 4 is a block diagram of equipment for loss regularization according to other embodiments.
Figure 5 is a block diagram of equipment for loss regularization according to yet other embodiments.
Figure 6 is a block diagram of a loss regulator according to some embodiments.
Figure 7 is a logic flow diagram of steps performed by a loss regulator according to some embodiments.
Figure 8 is a logic flow diagram of a method performed by equipment according to some embodiments.
Figure 9 is a block diagram of equipment according to some embodiments.
Figure 10 is a block diagram of a communication system in accordance with some embodiments.
Figure 11 is a block diagram of a user equipment according to some embodiments.
Figure 12 is a block diagram of a network node according to some embodiments.
Figure 13 is a block diagram of a host according to some embodiments.
Figure 14 is a block diagram of a virtualization environment according to some embodiments.
Figure 15 is a block diagram of a host communicating via a network node with a UE over a partially wireless connection in accordance with some embodiments.
DETAILED DESCRIPTION
Figure 1 shows a communication network 10 according to some embodiments. The communication network 10 provides communication service to one or more communication devices 12. In one embodiment, for example, the communication network 10 is a wireless communication network that provides wireless communication service to one or more wireless communication devices.
Figure 1 further shows a response variable 14 in the communication network 10, e.g., as generated by one or more network nodes in the communication network 10. The response variable 14 may for example be a performance indicator for the communication network 10. As such, the performance indicator may indicate the value of a performance metric characterizing performance of the communication network 10 as a whole, performance of an individual cell or base station in the communication network 10, or performance of an individual communication device 12 served by the communication network 10. Examples of the performance indicator may for instance include (i) a key performance indicator measuring utilization of transmission resources in the communication network 10; (ii) a key performance indicator measuring availability of the communication network 10 to users; (iii) a key performance indicator measuring whether and/or how well services requested by users can be accessed; (iv) a key performance indicator measuring energy consumption by the communication network 10; or (v) a key performance indicator measuring call drops in the communication network 10. As another example, the response variable 14 may be a service-level metric in the communication network 10, e.g., so as to indicate the value of a metric that characterizes a service provided by the communication network 10 or a cell in the communication network 10. The service-level metric may for example reflect the number of users in a cell or otherwise indicate cell load.
No matter the particular nature of the response variable 14, Figure 1 shows a model 16 that is trained to make predictions 18 of the response variable 14 over time. Making predictions 18 of the response variable 14 in this way, the model 16 may also be referred to as a forecast model, with the predictions 18 over time constituting a sort of “forecast” of the response variable 14. Regardless of the particular terminology, the model 16 in some embodiments may be implemented as a machine learning (ML) model so that the model 16 can be trained via machine learning.
Equipment 15 in Figure 1 is configured to train the model 16. The equipment 15 as shown includes a model adapter 17 that adapts the model 16 as needed for the predictions 18 of the model 16 to fit well with training data 20. The training data 20 in this regard includes observations 22 of the response variable 14, e.g., known, real-world observations 22 exploited for training. Training the model 16 may for instance be an iterative process whereby the equipment 15 iteratively adapts the model 16 over multiple rounds of training until the predictions 18 of the model 16 acceptably converge with the observations 22 from the training data 20. How well the model’s predictions 18 fit with the observations 22 for convergence is represented by the so-called loss between the observations 22 and the predictions 18. The equipment 15 in some embodiments thereby trains the model 16 by adapting the model 16 as needed to minimize the loss between the observations 22 and the predictions 18.
Notably, though, the equipment 15 adapts the model 16 based on a regularized loss 24 between the observations 22 and the predictions 18 of the response variable 14 over time. The regularized loss 24 is regularized in the sense that it is computed in a way that penalizes certain predictions 18. By penalizing certain predictions 18, the regularized loss 24 attributable to those predictions 18 is larger so as to discourage them. According to embodiments herein, the equipment 15 more particularly penalizes predictions 18 based on an extent to which the predictions 18 repeat past predictions of the response variable 14. In some embodiments, then, the equipment 15 penalizes predictions 18 of the response variable 14 to a greater extent the greater the extent to which the predictions 18 repeat past predictions of the response variable 14. Accordingly, in these and other embodiments, the equipment 15 computes the regularized loss 24 in a way that artificially increases the loss when predictions are repeated over time, so as to thereby discourage repeat predictions and/or encourage independent predictions. Training the model 16 based on such regularized loss 24 means the trained model will be less prone to degeneracy.
Note that embodiments herein may exploit regularization for reducing model degeneracy in this way, in addition to or instead of exploiting regularization for reducing model overfitting. In this regard, exploiting regularization for reducing model degeneracy involves penalizing repeat predictions, whereas exploiting regularization for reducing model overfitting involves penalizing complex models.
Figure 2 shows a graphical depiction of the advantages realized from training the model 16 based on the regularized loss 24 as compared to an unregularized loss. As shown, if the model 16 were to be trained based on an unregularized loss 13, the resulting predictions 11 from the model 16 would tend to repeat past predictions. Although the unregularized loss 13 would be low, the model 16 would not predict changes in observations 22 of the response variable 14 well so as to track response variable trends. By contrast, by training the model 16 based on the regularized loss 24, the resulting predictions 18 from the model 16 are more independent from one another. The model 16 thereby predicts changes in observations 22 of the response variable 14 well so as to track response variable trends.
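To make the contrast in Figure 2 concrete, the following toy sketch (with illustrative numbers not taken from the disclosure, and assuming a mean-squared-error loss) shows how a degenerate persistence forecast that merely repeats the previous observation can still achieve a low unregularized loss on a slowly varying signal, even though it never anticipates a change:

```python
import numpy as np

# Slowly rising observations of the response variable.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Degenerate "persistence" forecast: each prediction merely repeats the
# previous observation, so the forecast lags the signal by one step and
# never anticipates a change.
persistence = np.array([1.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# Plain (unregularized) mean squared error of the persistence forecast.
unregularized_loss = float(np.mean((obs - persistence) ** 2))
```

Because the plain mean squared error here is well below the variance of the signal, an unregularized training objective would hardly discourage this degenerate forecast.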
Returning to Figure 1 , the equipment 15 may train the model 16 in this way through use of a regularized loss computer 19. The regularized loss computer 19 computes the regularized loss 24 between the observations 22 of the response variable 14 and the predictions 18 of the response variable 14 made by the model 16. The equipment 15 further includes a prediction repeat penalizer 21 that computes a prediction repeat penalty 26. The regularized loss computer 19 computes the regularized loss 24 as a function of this prediction repeat penalty 26. The impact of computing the regularized loss 24 as a function of the prediction repeat penalty 26 is that predictions 18 which repeat past predictions of the response variable 14 are penalized in the loss computation, i.e., repeating predictions means the regularized loss 24 will be greater than it would have been otherwise.
In some embodiments, the prediction repeat penalty 26 takes the form of a negative loss between predictions and past predictions, e.g., where this negative loss is added to the loss otherwise computed. In this case, if predictions are repeated to a relatively small extent, the prediction repeat penalty 26 will be a relatively large negative loss that operates to decrease the regularized loss 24 to a relatively large degree. On the other hand, if predictions are repeated to a relatively large extent, the prediction repeat penalty 26 will be a relatively small negative loss that operates to decrease the regularized loss 24 to a relatively small degree. Effectively, then, the prediction repeat penalty 26 is a penalty in the sense that it reduces the regularized loss 24 to a lesser extent the more predictions are repetitive, i.e., the penalty for repeating predictions is a larger regularized loss than would have otherwise been realized.
Figure 3 shows one example implementation. As shown, the model 16 provides predictions 18 of the response variable 14 over multiple time windows. The time window between time i=0 and time i=1 may be referred to as time window 0W:1W, or simply time window 0. The time window between time i=1 and time i=2 may be referred to as time window 1W:2W, or simply time window 1, etc. Predictions 18 of the response variable 14 over time window 0 are represented as Y_{0W:1W}, predictions 18 of the response variable 14 over time window 1 are represented as Y_{1W:2W}, and so on. Accordingly, −ℓ(Y_{0W:1W}, Y_{1W:2W}) represents the negative loss between predictions 18 over time window 1 and past predictions 18 over time window 0, −ℓ(Y_{1W:2W}, Y_{2W:3W}) represents the negative loss between predictions 18 over time window 2 and past predictions 18 over time window 1, etc. Generally, then, the prediction repeat penalty 26 may be represented in some embodiments as the sum of these negative losses over the time windows,

−Σ_{i=1}^{T} ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W})

where, for each time window i among T time windows, −ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W}) is the negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window (i − 1). Note that, in these embodiments, the prediction repeat penalty 26 applies on a time window basis. So, even if one prediction 18 in a time window does not repeat a past prediction, the window of predictions might still be penalized if the overall trend within that time window is to repeat past predictions.
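Assuming a mean-squared-error base loss ℓ, the window-wise prediction repeat penalty 26 described above might be sketched as follows (the function and variable names are illustrative, not part of the disclosure):

```python
import numpy as np

def prediction_repeat_penalty(preds: np.ndarray, W: int) -> float:
    """Sum over T time windows of the negative loss between each window
    of predictions and the previous window, i.e.
    -sum_i l(Y_{(i-1)W:iW}, Y_{iW:(i+1)W}), with mean squared error as l."""
    T = len(preds) // W - 1  # number of (previous, current) window pairs
    penalty = 0.0
    for i in range(1, T + 1):
        prev_window = preds[(i - 1) * W : i * W]
        curr_window = preds[i * W : (i + 1) * W]
        penalty -= float(np.mean((prev_window - curr_window) ** 2))
    return penalty

# Repeating predictions earn a penalty of zero (no reduction of the loss),
# while varying predictions earn a large negative penalty that lowers the
# regularized loss, as described above.
repeating = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
varying = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```

With window width W=2, the repeating sequence contributes nothing negative to the loss, while the varying sequence contributes a substantial negative term, so the repeating sequence ends up with the larger regularized loss.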
No matter how the prediction repeat penalty 26 is computed, though, the prediction repeat penalty 26 in some embodiments may be scaled, e.g., so as to scale the impact that the prediction repeat penalty 26 has on the regularized loss 24 as part of training the model 16. Figure 4 for example shows that equipment 15 in some embodiments further includes a scaler 25 interposed between the prediction repeat penalizer 21 and the regularized loss computer 19. In this case, the scaler 25 scales the prediction repeat penalty 26 by a prediction repeat hyperparameter 27 to obtain a scaled prediction repeat penalty 26S. The regularized loss computer 19 then computes the regularized loss 24 as a function of this scaled prediction repeat penalty 26S.
In some embodiments, for example, the equipment 15 scales the prediction repeat penalty 26 to adapt how the regularized loss 24 is computed between iterations of model training. The equipment 15 in this regard may adapt the prediction repeat hyperparameter 27 as needed in order for predictions 18 of the response variable 14 to converge with observations 22 of the response variable 14 in the training dataset 20. The larger the value of the prediction repeat hyperparameter 27, the greater the impact of the prediction repeat penalty 26 on the regularized loss 24, and the more strongly the equipment 15 encourages the model 16 to make predictions different from and/or independent of past predictions, e.g., by escaping local convergence solutions that cause model degeneracy.
Figure 5 illustrates additional details of regularized loss computation according to still other embodiments. As shown, the regularized loss computer 19 further includes a prediction inaccuracy loss computer 30. The prediction inaccuracy loss computer 30 computes a prediction inaccuracy loss 31 as a loss between predictions 18 of the response variable 14 and observations 22 of the response variable 14. In some embodiments, for example, the prediction inaccuracy loss computer 30 computes the prediction inaccuracy loss 31 as

ℓ(Y_{0:W}, Ŷ_{0:W}) + Σ_{i=1}^{T} ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W})

Here, for each time window i among T time windows, ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) is the loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i.
In some embodiments as shown, the regularized loss computer 19 also includes a scaler 32 that scales at least a portion of the prediction inaccuracy loss 31 by a prediction inaccuracy hyperparameter 35. In some embodiments, for example, the resulting scaled prediction inaccuracy loss is

ℓ(Y_{0:W}, Ŷ_{0:W}) + (1 − β) Σ_{i=1}^{T} ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W})

where the latter term of the prediction inaccuracy loss is scaled by (1 − β), with (1 − β) being the prediction inaccuracy hyperparameter 35. As with the prediction repeat hyperparameter 27, the regularized loss computer 19 may adapt this hyperparameter 35 as needed between iterations of model training.
In any event, as shown, the regularized loss computer 19 adds the prediction inaccuracy loss 31, as scaled, to the scaled prediction repeat penalty 26S, to obtain the regularized loss 24. In one or more embodiments, then, the regularized loss 24 is computed as:

ℓ_reg = ℓ(Y_{0:W}, Ŷ_{0:W}) + (1 − β) Σ_{i=1}^{T} ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) − β Σ_{i=1}^{T} ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W})

where ℓ_reg is the regularized loss 24, ℓ(Y_{0:W}, Ŷ_{0:W}) is the loss between predictions 18 of the response variable 14 over an initial time window and observations 22 of the response variable 14 over the initial time window, (1 − β) is the prediction inaccuracy hyperparameter 35, for each time window i among T time windows, ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) is the loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i, −ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W}) is a negative loss between predictions 18 of the response variable 14 over the time window i and past predictions 18 of the response variable 14 over a previous time window i − 1, and β is the prediction repeat hyperparameter 27. In some embodiments, 0 < β < 0.5. For larger values of β, the equipment 15 will apply higher regularization. In one embodiment, β = 0.25 is a good starting point for training, and the best value can be identified via a hyperparameter search.
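As a hedged illustration only, the regularized loss 24 might be computed as follows, assuming a mean-squared-error base loss ℓ and equal-length prediction windows (the names and toy data are illustrative, not part of the disclosure):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean-squared-error base loss l."""
    return float(np.mean((a - b) ** 2))

def regularized_loss(y_pred: np.ndarray, y_obs: np.ndarray,
                     W: int, beta: float = 0.25) -> float:
    """l_reg = l(Y_{0:W}, Yhat_{0:W})
             + (1 - beta) * sum_i l(Y_{iW:(i+1)W}, Yhat_{iW:(i+1)W})
             - beta * sum_i l(Y_{(i-1)W:iW}, Y_{iW:(i+1)W})."""
    T = len(y_pred) // W - 1
    loss = mse(y_pred[:W], y_obs[:W])  # initial-window term
    for i in range(1, T + 1):
        pred_i = y_pred[i * W : (i + 1) * W]
        obs_i = y_obs[i * W : (i + 1) * W]
        pred_prev = y_pred[(i - 1) * W : i * W]
        loss += (1 - beta) * mse(pred_i, obs_i)  # prediction inaccuracy term
        loss -= beta * mse(pred_prev, pred_i)    # prediction repeat penalty
    return loss

# Toy data: observations step from 0 to 1 between two windows of width 2.
obs = np.array([0.0, 0.0, 1.0, 1.0])
tracking = np.array([0.0, 0.0, 1.0, 1.0])   # follows the step
repeating = np.array([0.0, 0.0, 0.0, 0.0])  # repeats the first window
```

With this toy data, predictions that track the step change receive a lower regularized loss than predictions that repeat the first window, illustrating how the penalty discourages degeneracy.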
Note that, although the above description exemplified the regularized loss 24 as being computed based on loss between predictions from consecutive time windows, such need not be the case. In other embodiments, the regularized loss 24 is computed based on loss between predictions over the course of any number of time windows, e.g., according to:

−Σ_{i=λ}^{T} ℓ(Y_{(i−λ)W:(i−λ+1)W}, Y_{iW:(i+1)W})

where λ is adaptable. Alternatively or additionally, the regularized loss 24 may be computed by weighting predictions from different time frames differently, e.g., by weighting a more recent time window differently than a less recent time window.
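A multi-lag variant of the repeat penalty with per-lag weights might then be sketched as follows; both the interface and the weighting scheme are assumptions for illustration, not the disclosed implementation:

```python
import numpy as np

def lagged_repeat_penalty(y_pred: np.ndarray, W: int,
                          lag_weights: dict[int, float]) -> float:
    """Repeat penalty generalized beyond consecutive windows: for each lag
    k in lag_weights, compare window i with window i - k and weight that
    lag's negative loss by lag_weights[k] (assumed form, MSE base loss)."""
    n_windows = len(y_pred) // W
    penalty = 0.0
    for lag, weight in lag_weights.items():
        for i in range(lag, n_windows):
            older = y_pred[(i - lag) * W : (i - lag + 1) * W]
            newer = y_pred[i * W : (i + 1) * W]
            penalty -= weight * float(np.mean((older - newer) ** 2))
    return penalty

# Three windows of width 2 that step upward by 1 per window.
steps = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
```

Setting lag_weights to {1: 1.0} recovers the consecutive-window penalty, while spreading weights over several lags lets more distant windows contribute, e.g., with more recent windows weighted more heavily.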
In some embodiments, the regularized loss computer 19 may implement a loss regulator 40 as shown in Figure 6. The loss regulator 40 as shown takes as its inputs: (i) the loss function ℓ, which computes the loss between the predictions 18 of the response variable 14 and the observations 22 of the response variable 14; and (ii) the time window W, also referred to as the forecast window. The loss regulator 40 then computes the regularized loss ℓ_reg, e.g., according to any of the formulas described above.
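Reflecting the inputs of Figure 6 (a base loss function and the forecast window W), the loss regulator 40 might be sketched as a higher-order function that wraps any base loss into the regularized loss; the interface shown is an assumption, not the patented implementation:

```python
import numpy as np
from typing import Callable

# A base loss takes (predictions, observations) for one window.
LossFn = Callable[[np.ndarray, np.ndarray], float]

def make_loss_regulator(base_loss: LossFn, W: int, beta: float = 0.25) -> LossFn:
    """Wrap a base loss l into the regularized loss l_reg, given the
    forecast window W and the prediction repeat hyperparameter beta."""
    def l_reg(y_pred: np.ndarray, y_obs: np.ndarray) -> float:
        T = len(y_pred) // W - 1
        total = base_loss(y_pred[:W], y_obs[:W])
        for i in range(1, T + 1):
            pred_i = y_pred[i * W : (i + 1) * W]
            # Accuracy term, scaled by the inaccuracy hyperparameter (1 - beta).
            total += (1 - beta) * base_loss(pred_i, y_obs[i * W : (i + 1) * W])
            # Repeat penalty, scaled by the repeat hyperparameter beta.
            total -= beta * base_loss(y_pred[(i - 1) * W : i * W], pred_i)
        return total
    return l_reg

def mse_loss(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

# Build the regulator once with a user-defined beta, then reuse it.
regulated = make_loss_regulator(mse_loss, W=1, beta=0.25)
value = regulated(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
```

The wrapped loss can then replace the model's own loss on every training iteration, which matches the regulator's role of modifying an arbitrary base loss without depending on the model's internals.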
Figure 7 shows the training procedure according to some embodiments. As shown, Step 0 is to initialize the loss regulator 40 with a user-defined value of β, the prediction repeat hyperparameter 27. Step 1 is to feed the loss function to the loss regulator 40, which outputs the regularized loss ℓ_reg. Step 2 is to replace the loss of the model 16 with the regularized loss ℓ_reg computed in Step 1. If convergence results, training is complete. Otherwise, if there is no convergence, Steps 1 and 2 are repeated in an iterative fashion.
Generally, then, the loss regulator 40 in some embodiments takes as its input the loss function of the model 16 and modifies the loss such that the modified loss is less prone to degeneracy. Some embodiments therefore exploit a loss regulator 40 which penalizes degenerate solutions and encourages novelty aspects of the model 16.
These and other embodiments may advantageously provide a solution to the problem of model degeneracy that is general and does not depend on the choice of the machine learning model or the loss function used for the construction of the model 16. Moreover, embodiments herein help avoid degenerate solutions and at the same time help reduce the degeneracy problem.
Consider now a practical use case in the context of the communication network 10. Predicting what the key performance indicator (KPI) value of a network cell will be hours in advance may facilitate planning the network accordingly. Indeed, predicting the KPI in this way may enable the base station to be re-configured when needed, in order to avoid capacity shortages. Capacity shortages often cause degradation in quality of experience (QoE) which might in the long run impact operator revenue.
According to some embodiments in this regard, the response variable 14 herein is a KPI in the communication network 10. The KPI may for example be prbUtilization, which is a function of the allocated physical resource blocks and the total available physical resource blocks. Other example KPIs, amongst many, are accessibility, retainability, call drops, and energy consumption. Although prediction models for such KPIs may learn the overall trend across different parts of the time interval, during the hours when there is a significant shift from very low traffic to very high traffic (or vice versa), the models may tend to repeat their previous actual value. In those cases, embodiments herein exploit regularization in the learning process to penalize learning from very near prior data points.
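As a simple illustration of this KPI, prbUtilization could be computed as the allocated fraction of physical resource blocks; the exact formula is an assumption here, since the text only states that the KPI is a function of the two quantities:

```python
def prb_utilization(allocated_prbs: int, total_prbs: int) -> float:
    """prbUtilization KPI, assumed here to be the fraction of available
    physical resource blocks (PRBs) that are currently allocated."""
    if total_prbs <= 0:
        raise ValueError("total_prbs must be positive")
    return allocated_prbs / total_prbs
```

A forecast model trained with the regularized loss would then predict this per-cell quantity hours ahead, so that the base station can be re-configured before utilization approaches capacity.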
Note that, although embodiments herein have been described as applicable in the context of a communication network 10, the embodiments may alternatively or additionally be applied in other contexts. That is, in other embodiments, the response variable 14 may be unrelated to a communication network 10. The response variable 14 may for example be related to (e.g., characterize performance of) augmented reality, virtual reality, internet-of-senses, a brain-computer interface, or any other context in which response variable prediction is useful.
In some embodiments, the equipment 15 in Figure 1 is deployed in the communication network 10. In other embodiments, though, the equipment 15 may be deployed outside of the communication network 10, e.g., in the cloud. In still other embodiments, the equipment 15 may be or be part of a communication device 12.
In view of the modifications and variations herein, Figure 8 depicts a method in accordance with particular embodiments. The method comprises computing a regularized loss 24 between observations 22 of a response variable 14 over time (e.g., in a communication network 10) and predictions 18 of the response variable 14 over time as predicted by a model 16 (Block 100). In some embodiments, for example, the response variable 14 is a performance indicator for the communication network 10 or a service-level metric in the communication network 10. Regardless, in some embodiments, computing the regularized loss 24 comprises penalizing predictions 18 of the response variable 14 based on an extent to which the predictions 18 repeat past predictions of the response variable 14. In some embodiments, for instance, such computing comprises penalizing predictions 18 of the response variable 14 to a greater extent the greater the extent to which the predictions 18 repeat past predictions of the response variable 14.
As shown, the method also comprises adapting the model 16 based on the regularized loss 24 (Block 110). For example, such adaptation may comprise adapting the model 16 as needed to minimize the regularized loss 24. In some embodiments, this computing (Block 100) and adapting (Block 110) is performed for each of multiple iterations as part of an iterative procedure to train the model 16. In this case, the method may also comprise adapting how the regularized loss 24 is computed between iterations, e.g., as needed for predictions of the response variable 14 by the model 16 to converge with observations 22 of the response variable 14 in a training dataset.
In any event, in some embodiments, the method further comprises predicting the response variable 14 using the adapted model 16 (Block 120).
More particularly, in some embodiments, said computing comprises computing the regularized loss 24 as a function of a prediction repeat penalty 26. In some embodiments, the prediction repeat penalty 26 is a negative loss between predictions 18 of the response variable 14 and past predictions of the response variable 14. In one or more of these embodiments, the prediction repeat penalty 26 comprises

−Σ_{i=1}^{T} ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W})

For each time window i among T time windows, −ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W}) is a negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window i − 1. In one or more of these embodiments, said computing comprises scaling the prediction repeat penalty 26 by a prediction repeat hyperparameter 27. In some embodiments, said computing comprises computing the regularized loss 24 also as a function of a prediction inaccuracy loss 31. In some embodiments, the prediction inaccuracy loss 31 is a loss between predictions 18 of the response variable 14 and observations 22 of the response variable 14. In one or more of these embodiments, the prediction inaccuracy loss 31 comprises

ℓ(Y_{0:W}, Ŷ_{0:W}) + Σ_{i=1}^{T} ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W})

For each time window i among T time windows, ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) is a loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i. In one or more of these embodiments, said computing comprises scaling at least a portion of the prediction inaccuracy loss 31 by a prediction inaccuracy hyperparameter 35.
In some embodiments, the regularized loss 24 is computed as

ℓ_reg = ℓ(Y_{0:W}, Ŷ_{0:W}) + (1 − β) Σ_{i=1}^{T} ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) − β Σ_{i=1}^{T} ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W})

In this case, ℓ_reg is the regularized loss 24. In some embodiments, ℓ(Y_{0:W}, Ŷ_{0:W}) is a loss between predictions 18 of the response variable 14 over an initial time window and observations 22 of the response variable 14 over the initial time window. In some embodiments, (1 − β) is a prediction inaccuracy hyperparameter 35. In some embodiments, for each time window i among T time windows, ℓ(Y_{iW:(i+1)W}, Ŷ_{iW:(i+1)W}) is a loss between predictions 18 of the response variable 14 over the time window i and observations 22 of the response variable 14 over the time window i. In some embodiments, β is a prediction repeat hyperparameter 27. In some embodiments, for each time window i among T time windows, −ℓ(Y_{(i−1)W:iW}, Y_{iW:(i+1)W}) is a negative loss between predictions 18 of the response variable 14 over the time window i and past predictions of the response variable 14 over a previous time window i − 1.
Embodiments herein also include corresponding apparatuses. Embodiments herein for instance include equipment 15 configured to perform any of the steps of any of the embodiments described above for the equipment 15.
Embodiments also include equipment 15 comprising processing circuitry and power supply circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the equipment 15. The power supply circuitry is configured to supply power to the equipment 15.
Embodiments further include equipment 15 comprising processing circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the equipment 15. In some embodiments, the equipment 15 further comprises communication circuitry.
Embodiments further include equipment 15 comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the equipment 15 is configured to perform any of the steps of any of the embodiments described above for the equipment 15.
More particularly, the apparatuses described above may perform the methods herein and any other processing by implementing any functional means, modules, units, or circuitry. In one embodiment, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In embodiments that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
Figure 9 for example illustrates equipment 15 as implemented in accordance with one or more embodiments. As shown, the equipment 15 includes processing circuitry 210 and communication circuitry 220. The communication circuitry 220 is configured to transmit and/or receive information to and/or from one or more other nodes, e.g., via any communication technology. The processing circuitry 210 is configured to perform processing described above, e.g., in Figure 8, such as by executing instructions stored in memory 230. The processing circuitry 210 in this regard may implement certain functional means, units, or modules.
Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs.
A computer program comprises instructions which, when executed on at least one processor of equipment 15, cause the equipment 15 to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of equipment 15, cause the equipment 15 to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by equipment 15. This computer program product may be stored on a computer readable recording medium.
Figure 10 shows an example of a communication system 1000 in accordance with some embodiments.
In the example, the communication system 1000 includes a telecommunication network 1002 that includes an access network 1004, such as a radio access network (RAN), and a core network 1006, which includes one or more core network nodes 1008. The access network 1004 includes one or more access network nodes, such as network nodes 1010a and 1010b (one or more of which may be generally referred to as network nodes 1010), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point. The network nodes 1010 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 1012a, 1012b, 1012c, and 1012d (one or more of which may be generally referred to as UEs 1012) to the core network 1006 over one or more wireless connections.
Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 1000 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. The communication system 1000 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
The UEs 1012 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 1010 and other communication devices. Similarly, the network nodes 1010 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 1012 and/or with other network nodes or equipment in the telecommunication network 1002 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 1002.
In the depicted example, the core network 1006 connects the network nodes 1010 to one or more hosts, such as host 1016. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. The core network 1006 includes one or more core network nodes (e.g., core network node 1008) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1008. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
The host 1016 may be under the ownership or control of a service provider other than an operator or provider of the access network 1004 and/or the telecommunication network 1002, and may be operated by the service provider or on behalf of the service provider. The host 1016 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
As a whole, the communication system 1000 of Figure 10 enables connectivity between the UEs, network nodes, and hosts. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
In some examples, the telecommunication network 1002 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 1002 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 1002. For example, the telecommunication network 1002 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
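The slicing described above can be illustrated with a minimal sketch. The standardized Slice/Service Type (SST) values (1 = eMBB, 2 = URLLC, 3 = mMTC/Massive IoT) come from 3GPP TS 23.501; the selection logic itself is a hypothetical simplification for illustration, not the standardized slice-selection procedure and not part of the described embodiments:

```python
# Standardized Slice/Service Type (SST) values per 3GPP TS 23.501.
SST = {"eMBB": 1, "URLLC": 2, "mMTC": 3}

def select_slice(latency_ms: float, device_count: int) -> int:
    """Pick a slice type from coarse service requirements (illustrative only)."""
    if latency_ms <= 1.0:
        return SST["URLLC"]   # ultra-reliable low-latency traffic
    if device_count > 10_000:
        return SST["mMTC"]    # massive numbers of low-rate IoT devices
    return SST["eMBB"]        # default: enhanced mobile broadband

print(select_slice(0.5, 10))  # -> 2 (URLLC)
```

A real network would base this decision on subscription data and Network Slice Selection Assistance Information rather than two scalar inputs; the sketch only shows that different UEs can be mapped to different logical networks over the same physical infrastructure.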
In some examples, the UEs 1012 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 1004 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 1004. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
In the example, the hub 1014 communicates with the access network 1004 to facilitate indirect communication between one or more UEs (e.g., UE 1012c and/or 1012d) and network nodes (e.g., network node 1010b). In some examples, the hub 1014 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs. For example, the hub 1014 may be a broadband router enabling access to the core network 1006 for the UEs. As another example, the hub 1014 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 1010, or by executable code, script, process, or other instructions in the hub 1014. As another example, the hub 1014 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, the hub 1014 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 1014 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 1014 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. In still another example, the hub 1014 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices. The hub 1014 may have a constant/persistent or intermittent connection to the network node 1010b. The hub 1014 may also allow for a different communication scheme and/or schedule between the hub 1014 and UEs (e.g., UE 1012c and/or 1012d), and between the hub 1014 and the core network 1006. In other examples, the hub 1014 is connected to the core network 1006 and/or one or more UEs via a wired connection.
Moreover, the hub 1014 may be configured to connect to an M2M service provider over the access network 1004 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with the network nodes 1010 while remaining connected to the hub 1014 via a wired or wireless connection. In some embodiments, the hub 1014 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1010b. In other embodiments, the hub 1014 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 1010b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
Figure 11 shows a UE 1100 in accordance with some embodiments. As used herein, a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
The UE 1100 includes processing circuitry 1102 that is operatively coupled via a bus 1104 to an input/output interface 1106, a power source 1108, a memory 1110, a communication interface 1112, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in Figure 11. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
The processing circuitry 1102 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 1110. The processing circuitry 1102 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 1102 may include multiple central processing units (CPUs).
In the example, the input/output interface 1106 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into the UE 1100. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
In some embodiments, the power source 1108 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. The power source 1108 may further include power circuitry for delivering power from the power source 1108 itself, and/or an external power source, to the various parts of the UE 1100 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 1108. Power circuitry may perform any formatting, converting, or other modification to the power from the power source 1108 to make the power suitable for the respective components of the UE 1100 to which power is supplied.
The memory 1110 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, the memory 1110 includes one or more application programs 1114, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1116. The memory 1110 may store, for use by the UE 1100, any of a variety of operating systems or combinations of operating systems.
The memory 1110 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’ The memory 1110 may allow the UE 1100 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied as or in the memory 1110, which may be or comprise a device-readable storage medium.
The processing circuitry 1102 may be configured to communicate with an access network or other network using the communication interface 1112. The communication interface 1112 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1122. The communication interface 1112 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 1118 and/or a receiver 1120 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, the transmitter 1118 and receiver 1120 may be coupled to one or more antennas (e.g., antenna 1122) and may share circuit components, software or firmware, or alternatively be implemented separately.
In the illustrated embodiment, communication functions of the communication interface 1112 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 1112, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected, an alert is sent), in response to a request (e.g., a user-initiated request), or a continuous stream (e.g., a live video feed of a patient).
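The reporting modes enumerated above (periodic, event-triggered, and on-request) can be sketched as a simple policy. The class name, parameter names, and thresholds below are illustrative assumptions, not part of the described embodiments:

```python
class SensorReporter:
    """Minimal sketch of a UE sensor-reporting policy (illustrative only)."""

    def __init__(self, period_s: int = 900, moisture_threshold: float = 0.8):
        self.period_s = period_s                  # e.g., once every 15 minutes
        self.moisture_threshold = moisture_threshold
        self.elapsed_s = 0

    def tick(self, dt_s: int, moisture: float, requested: bool = False):
        """Return a report dict when a request, trigger, or period condition holds."""
        self.elapsed_s += dt_s
        if requested:                             # response to a user-initiated request
            return {"reason": "request", "moisture": moisture}
        if moisture >= self.moisture_threshold:   # triggering event: moisture detected
            return {"reason": "alert", "moisture": moisture}
        if self.elapsed_s >= self.period_s:       # periodic report
            self.elapsed_s = 0
            return {"reason": "periodic", "moisture": moisture}
        return None                               # nothing to send this tick
```

In a real UE, the resulting report would be transmitted through the communication interface to a network node, possibly relayed via another UE or a hub.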
As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input, the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.
A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software in dependence on the intended application of the IoT device in addition to other components as described in relation to the UE 1100 shown in Figure 11.
As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuator.
Figure 12 shows a network node 1200 in accordance with some embodiments. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
The network node 1200 includes processing circuitry 1202, a memory 1204, a communication interface 1206, and a power source 1208. The network node 1200 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node 1200 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, the network node 1200 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 1204 for different RATs) and some components may be reused (e.g., a same antenna 1210 may be shared by different RATs). The network node 1200 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1200, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1200.
The processing circuitry 1202 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1200 components, such as the memory 1204, network node 1200 functionality.
In some embodiments, the processing circuitry 1202 includes a system on a chip (SOC). In some embodiments, the processing circuitry 1202 includes one or more of radio frequency (RF) transceiver circuitry 1212 and baseband processing circuitry 1214. In some embodiments, the radio frequency (RF) transceiver circuitry 1212 and the baseband processing circuitry 1214 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1212 and baseband processing circuitry 1214 may be on the same chip or set of chips, boards, or units.
The memory 1204 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1202. The memory 1204 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1202 and utilized by the network node 1200. The memory 1204 may be used to store any calculations made by the processing circuitry 1202 and/or any data received via the communication interface 1206. In some embodiments, the processing circuitry 1202 and memory 1204 are integrated.
The communication interface 1206 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1206 comprises port(s)/terminal(s) 1216 to send and receive data, for example to and from a network over a wired connection. The communication interface 1206 also includes radio front-end circuitry 1218 that may be coupled to, or in certain embodiments a part of, the antenna 1210. Radio front-end circuitry 1218 comprises filters 1220 and amplifiers 1222. The radio front-end circuitry 1218 may be connected to an antenna 1210 and processing circuitry 1202. The radio front-end circuitry may be configured to condition signals communicated between antenna 1210 and processing circuitry 1202. The radio front-end circuitry 1218 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 1218 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1220 and/or amplifiers 1222. The radio signal may then be transmitted via the antenna 1210. Similarly, when receiving data, the antenna 1210 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1218. The digital data may be passed to the processing circuitry 1202. In other embodiments, the communication interface may comprise different components and/or different combinations of components.
In certain alternative embodiments, the network node 1200 does not include separate radio front-end circuitry 1218; instead, the processing circuitry 1202 includes radio front-end circuitry and is connected to the antenna 1210. Similarly, in some embodiments, all or some of the RF transceiver circuitry 1212 is part of the communication interface 1206. In still other embodiments, the communication interface 1206 includes one or more ports or terminals 1216, the radio front-end circuitry 1218, and the RF transceiver circuitry 1212, as part of a radio unit (not shown), and the communication interface 1206 communicates with the baseband processing circuitry 1214, which is part of a digital unit (not shown).
The antenna 1210 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna 1210 may be coupled to the radio front-end circuitry 1218 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, the antenna 1210 is separate from the network node 1200 and connectable to the network node 1200 through an interface or port.
The antenna 1210, communication interface 1206, and/or the processing circuitry 1202 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 1210, the communication interface 1206, and/or the processing circuitry 1202 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
The power source 1208 provides power to the various components of network node 1200 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source 1208 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1200 with power for performing the functionality described herein. For example, the network node 1200 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1208. As a further example, the power source 1208 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
Embodiments of the network node 1200 may include additional components beyond those shown in Figure 12 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node 1200 may include user interface equipment to allow input of information into the network node 1200 and to allow output of information from the network node 1200. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1200.
Figure 13 is a block diagram of a host 1300, which may be an embodiment of the host 1016 of Figure 10, in accordance with various aspects described herein. As used herein, the host 1300 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm. The host 1300 may provide one or more services to one or more UEs.
The host 1300 includes processing circuitry 1302 that is operatively coupled via a bus 1304 to an input/output interface 1306, a network interface 1308, a power source 1310, and a memory 1312. Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 11 and 12, such that the descriptions thereof are generally applicable to the corresponding components of host 1300.
The memory 1312 may include one or more computer programs including one or more host application programs 1314 and data 1316, which may include user data, e.g., data generated by a UE for the host 1300 or data generated by the host 1300 for a UE. Embodiments of the host 1300 may utilize only a subset or all of the components shown. The host application programs 1314 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). The host application programs 1314 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 1300 may select and/or indicate a different host for over-the-top services for a UE. The host application programs 1314 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
Figure 14 is a block diagram illustrating a virtualization environment 1400 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1400 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), then the node may be entirely virtualized.
Applications 1402 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1400 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
Hardware 1404 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1406 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1408a and 1408b (one or more of which may be generally referred to as VMs 1408), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 1406 may present a virtual operating platform that appears like networking hardware to the VMs 1408.
The VMs 1408 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1406. Different embodiments of the instance of a virtual appliance 1402 may be implemented on one or more of VMs 1408, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
In the context of NFV, a VM 1408 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 1408, together with that part of hardware 1404 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 1408 on top of the hardware 1404 and corresponds to the application 1402.
Hardware 1404 may be implemented in a standalone network node with generic or specific components. Hardware 1404 may implement some functions via virtualization. Alternatively, hardware 1404 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1410, which, among others, oversees lifecycle management of applications 1402. In some embodiments, hardware 1404 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 1412 which may alternatively be used for communication between hardware nodes and radio units.
Figure 15 shows a communication diagram of a host 1502 communicating via a network node 1504 with a UE 1506 over a partially wireless connection in accordance with some embodiments. Example implementations, in accordance with various embodiments, of the UE (such as a UE 1012a of Figure 10 and/or UE 1100 of Figure 11), network node (such as network node 1010a of Figure 10 and/or network node 1200 of Figure 12), and host (such as host 1016 of Figure 10 and/or host 1300 of Figure 13) discussed in the preceding paragraphs will now be described with reference to Figure 15.
Like host 1300, embodiments of host 1502 include hardware, such as a communication interface, processing circuitry, and memory. The host 1502 also includes software, which is stored in or accessible by the host 1502 and executable by the processing circuitry. The software includes a host application that may be operable to provide a service to a remote user, such as the UE 1506 connecting via an over-the-top (OTT) connection 1550 extending between the UE 1506 and host 1502. In providing the service to the remote user, a host application may provide user data which is transmitted using the OTT connection 1550. The network node 1504 includes hardware enabling it to communicate with the host 1502 and UE 1506. The connection 1560 may be direct or pass through a core network (like core network 1006 of Figure 10) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks. For example, an intermediate network may be a backbone network or the Internet.
The UE 1506 includes hardware and software, which is stored in or accessible by UE 1506 and executable by the UE’s processing circuitry. The software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 1506 with the support of the host 1502. In the host 1502, an executing host application may communicate with the executing client application via the OTT connection 1550 terminating at the UE 1506 and host 1502. In providing the service to the user, the UE's client application may receive request data from the host's host application and provide user data in response to the request data. The OTT connection 1550 may transfer both the request data and the user data. The UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1550.
The OTT connection 1550 may extend via a connection 1560 between the host 1502 and the network node 1504 and via a wireless connection 1570 between the network node 1504 and the UE 1506 to provide the connection between the host 1502 and the UE 1506. The connection 1560 and wireless connection 1570, over which the OTT connection 1550 may be provided, have been drawn abstractly to illustrate the communication between the host 1502 and the UE 1506 via the network node 1504, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
As an example of transmitting data via the OTT connection 1550, in step 1508, the host 1502 provides user data, which may be performed by executing a host application. In some embodiments, the user data is associated with a particular human user interacting with the UE 1506. In other embodiments, the user data is associated with a UE 1506 that shares data with the host 1502 without explicit human interaction. In step 1510, the host 1502 initiates a transmission carrying the user data towards the UE 1506. The host 1502 may initiate the transmission responsive to a request transmitted by the UE 1506. The request may be caused by human interaction with the UE 1506 or by operation of the client application executing on the UE 1506. The transmission may pass via the network node 1504, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1512, the network node 1504 transmits to the UE 1506 the user data that was carried in the transmission that the host 1502 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1514, the UE 1506 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1506 associated with the host application executed by the host 1502. In some examples, the UE 1506 executes a client application which provides user data to the host 1502. The user data may be provided in reaction or response to the data received from the host 1502. Accordingly, in step 1516, the UE 1506 may provide user data, which may be performed by executing the client application. In providing the user data, the client application may further consider user input received from the user via an input/output interface of the UE 1506. Regardless of the specific manner in which the user data was provided, the UE 1506 initiates, in step 1518, transmission of the user data towards the host 1502 via the network node 1504. 
In step 1520, in accordance with the teachings of the embodiments described throughout this disclosure, the network node 1504 receives user data from the UE 1506 and initiates transmission of the received user data towards the host 1502. In step 1522, the host 1502 receives the user data carried in the transmission initiated by the UE 1506.
One or more of the various embodiments improve the performance of OTT services provided to the UE 1506 using the OTT connection 1550, in which the wireless connection 1570 forms the last segment.
In an example scenario, factory status information may be collected and analyzed by the host 1502. As another example, the host 1502 may process audio and video data which may have been retrieved from a UE for use in creating maps. As another example, the host 1502 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights). As another example, the host 1502 may store surveillance video uploaded by a UE. As another example, the host 1502 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs. As other examples, the host 1502 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
In some examples, a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection 1550 between the host 1502 and UE 1506, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1502 and/or UE 1506. In some embodiments, sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1550 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 1550 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 1504. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 1502. The measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1550 while monitoring propagation times, errors, etc.
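The measurement procedure described above, timing empty or "dummy" messages sent over the OTT connection while monitoring propagation times, can be sketched as follows. This is an illustrative sketch only: the function names, the statistics chosen, and the threshold policy are assumptions for illustration and are not specified by the embodiments.

```python
from statistics import mean

def summarize_probes(samples):
    """Summarize round-trip times from 'dummy' message probes.

    samples: list of (t_sent, t_received) timestamp pairs, one per probe.
    """
    rtts = [received - sent for sent, received in samples]
    return {"mean_rtt": mean(rtts), "max_rtt": max(rtts)}

def needs_reconfiguration(stats, mean_threshold):
    # Hypothetical policy: trigger reconfiguration of the OTT connection
    # (e.g. retransmission settings or preferred routing) when the mean
    # round-trip time degrades past a threshold.
    return stats["mean_rtt"] > mean_threshold
```

A monitoring function in the host or UE software could feed such a summary back into the optional network functionality for reconfiguring the OTT connection in response to variations in the measurement results.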
Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
Notably, modifications and other embodiments of the disclosed invention(s) will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention(s) is/are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of this disclosure. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

What is claimed is:
1. A method comprising: computing (100) a regularized loss (24) between observations (22) of a response variable (14) over time in a communication network (10) and predictions (18) of the response variable (14) over time as predicted by a model (16), wherein computing the regularized loss (24) comprises penalizing predictions (18) of the response variable (14) based on an extent to which the predictions (18) repeat past predictions of the response variable (14); and adapting (110) the model (16) based on the regularized loss (24).
2. The method of claim 1, wherein said computing comprises penalizing predictions (18) of the response variable (14) to a greater extent the greater the extent to which the predictions (18) repeat past predictions of the response variable (14).
3. The method of any of claims 1-2, wherein said computing comprises computing the regularized loss (24) as a function of a prediction repeat penalty, wherein the prediction repeat penalty is a negative loss between predictions (18) of the response variable (14) and past predictions of the response variable (14).
4. The method of claim 3, wherein the prediction repeat penalty comprises Σ_{i=1}^{T} −ℓ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}), wherein, for each time window i among T time windows, −ℓ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) is a negative loss between predictions (18) of the response variable (14) over the time window i and past predictions of the response variable (14) over a previous time window (i − 1).
5. The method of any of claims 3-4, wherein said computing comprises scaling the prediction repeat penalty by a prediction repeat hyperparameter.
6. The method of any of claims 3-5, wherein said computing comprises computing the regularized loss (24) also as a function of a prediction inaccuracy loss, wherein the prediction inaccuracy loss is a loss between predictions (18) of the response variable (14) and observations (22) of the response variable (14).
7. The method of claim 6, wherein the prediction inaccuracy loss comprises ℓ(Ŷ_{0:w}, Y_{0:w}) + Σ_{i=1}^{T} ℓ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}), wherein, for each time window i among T time windows, ℓ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) is a loss between predictions (18) of the response variable (14) over the time window i and observations (22) of the response variable (14) over the time window i.
8. The method of any of claims 6-7, wherein said computing comprises scaling at least a portion of the prediction inaccuracy loss by a prediction inaccuracy hyperparameter.
9. The method of any of claims 1-8, wherein said computing comprises computing the regularized loss (24) as:

L_reg = ℓ(Ŷ_{0:w}, Y_{0:w}) + Σ_{i=1}^{T} [ (1 − β) · ℓ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) − β · ℓ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) ]

wherein L_reg is the regularized loss (24); wherein ℓ(Ŷ_{0:w}, Y_{0:w}) is a loss between predictions (18) of the response variable (14) over an initial time window and observations (22) of the response variable (14) over the initial time window; wherein (1 − β) is a prediction inaccuracy hyperparameter; wherein, for each time window i among T time windows, ℓ(Ŷ_{iw:(i+1)w}, Y_{iw:(i+1)w}) is a loss between predictions (18) of the response variable (14) over the time window i and observations (22) of the response variable (14) over the time window i; wherein β is a prediction repeat hyperparameter; and wherein, for each time window i among T time windows, −ℓ(Ŷ_{iw:(i+1)w}, Ŷ_{(i−1)w:iw}) is a negative loss between predictions (18) of the response variable (14) over the time window i and past predictions of the response variable (14) over a previous time window i − 1.
10. The method of any of claims 1-9, wherein adapting the model (16) comprises adapting the model (16) as needed to minimize the regularized loss (24).
11. The method of any of claims 1-10, further comprising predicting (120) the response variable (14) using the adapted model (16).
12. The method of any of claims 1-11, wherein said computing and adapting are performed for each of multiple iterations as part of an iterative procedure to train the model (16), wherein the method further comprises adapting how the regularized loss (24) is computed between iterations as needed for predictions (18) of the response variable (14) by the model (16) to converge with observations (22) of the response variable (14) in a training dataset.
13. The method of any of claims 1-12, wherein the response variable (14) is a performance indicator for the communication network (10) or a service-level metric in the communication network (10).
14. Equipment configured to: compute a regularized loss (24) between observations (22) of a response variable (14) over time in a communication network (10) and predictions (18) of the response variable (14) over time as predicted by a model (16), wherein computing the regularized loss (24) comprises penalizing predictions (18) of the response variable (14) based on an extent to which the predictions (18) repeat past predictions of the response variable (14); and adapt the model (16) based on the regularized loss (24).
15. The equipment of claim 14, configured to perform the method of any of claims 2-13.
16. A computer program comprising instructions which, when executed by at least one processor of equipment, cause the equipment to perform the method of any of claims 1-13.
17. A carrier containing the computer program of claim 16, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
18. Equipment comprising processing circuitry (210) configured to: compute a regularized loss (24) between observations (22) of a response variable (14) over time in a communication network (10) and predictions (18) of the response variable (14) over time as predicted by a model (16), wherein computing the regularized loss (24) comprises penalizing predictions (18) of the response variable (14) based on an extent to which the predictions (18) repeat past predictions of the response variable (14); and adapt the model (16) based on the regularized loss (24).
19. The equipment of claim 18, the processing circuitry (210) configured to perform the method of any of claims 2-13.
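The regularized loss of claims 1-9 can be sketched as follows. This is a minimal sketch under stated assumptions: the claims leave the loss function ℓ open, so mean squared error is an assumed choice here, and the function name, signature, and window bookkeeping are illustrative rather than part of the claimed method.

```python
import numpy as np

def regularized_loss(preds, obs, w, T, beta, loss=None):
    """Regularized loss over an initial window of width w plus T subsequent
    windows of width w, as in claim 9.

    beta weights the prediction repeat penalty; (1 - beta) weights the
    per-window prediction inaccuracy loss. MSE is an assumed choice of loss.
    """
    if loss is None:
        loss = lambda a, b: float(np.mean((a - b) ** 2))
    # Initial-window prediction inaccuracy term
    total = loss(preds[:w], obs[:w])
    for i in range(1, T + 1):
        cur = slice(i * w, (i + 1) * w)
        prev = slice((i - 1) * w, i * w)
        # Prediction inaccuracy over window i
        total += (1 - beta) * loss(preds[cur], obs[cur])
        # Prediction repeat penalty: when the model merely repeats its
        # previous window, this subtracted term is near zero, so the
        # overall loss stays higher, penalizing the repetition.
        total -= beta * loss(preds[cur], preds[prev])
    return total
```

With beta = 0 this reduces to a plain windowed loss; with beta > 0, a model that minimizes it is rewarded for predictions that differ from its own previous window, consistent with adapting the model (step 110) to minimize the regularized loss (24).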
PCT/EP2022/052712 2022-02-04 2022-02-04 Response variable prediction in a communication network WO2023147870A1 (en)

Publications (1)

Publication Number: WO2023147870A1; Publication Date: 2023-08-10


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096312A1 (en) * 2015-12-04 2017-06-08 Google Inc. Regularization of machine learning models
WO2021001085A1 (en) * 2019-06-30 2021-01-07 Telefonaktiebolaget Lm Ericsson (Publ) Estimating quality metric for latency sensitive traffic flows in communication networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOLDT MARTIN ET AL: "Alarm Prediction in Cellular Base Stations Using Data-Driven Methods", IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, IEEE, USA, vol. 18, no. 2, 18 January 2021 (2021-01-18), pages 1925 - 1933, XP011860314, DOI: 10.1109/TNSM.2021.3052093 *
XU PENG ET AL: "A Novel Repetition Normalized Adversarial Reward for Headline Generation", ICASSP 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 12 May 2019 (2019-05-12), pages 7325 - 7329, XP033565787, DOI: 10.1109/ICASSP.2019.8683236 *

