WO2018072846A1 - Intermediate data between network functions - Google Patents

Intermediate data between network functions Download PDF

Info

Publication number
WO2018072846A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
neuron
data point
point
interface
Prior art date
Application number
PCT/EP2016/075404
Other languages
French (fr)
Inventor
Kimmo Kalervo Hatonen
Pekka Korja
Juha YLI-PENTTILÄ
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to EP16785154.2A priority Critical patent/EP3529699A1/en
Priority to PCT/EP2016/075404 priority patent/WO2018072846A1/en
Publication of WO2018072846A1 publication Critical patent/WO2018072846A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Definitions

  • the invention relates to data exchange between network functions.
  • FIG. 1 illustrates an exemplified wireless communication system
  • Figures 2 and 3 illustrate block diagrams
  • Figure 4 illustrates an example of a network slicing abstraction model with an exemplified function interaction model
  • Figure 12 illustrates an example of an intermediate data stream
  • Figure 13 is a schematic block diagram of an apparatus.
  • Embodiments and examples described herein may be implemented in any communications system, wired or wireless, such as in at least one of the following: Universal Mobile Telecommunication System (UMTS, 3G) based on basic wideband code division multiple access (W-CDMA), high-speed packet access (HSPA), Long Term Evolution (LTE), LTE-Advanced, LTE-Advanced Pro, fifth generation (5G) system, beyond 5G, and/or wireless local area networks (WLAN) based on IEEE 802.11 and/or IEEE 802.15 specifications.
  • UMTS Universal Mobile Telecommunication System
  • W-CDMA wideband code division multiple access
  • HSPA high-speed packet access
  • LTE Long Term Evolution
  • WLAN wireless local area network
  • the embodiments are not, however, restricted to the systems given as an example, but a person skilled in the art may apply the solution to other communication systems provided with the necessary properties.
  • 5G has been envisaged to use multiple-input-multiple-output (MIMO) multi-antenna transmission techniques, more base stations or access nodes than the current network deployments of LTE, by using a so-called small cell concept including macro sites operating in co-operation with smaller local area access nodes, such as local ultra-dense deployment of small cells, and perhaps also employing a variety of radio technologies for better coverage and enhanced data rates.
  • MIMO multiple-input-multiple-output
  • 5G will likely comprise more than one radio access technology (RAT), each optimized for certain use cases and/or spectrum.
  • 5G system may also incorporate both cellular (3GPP) and non-cellular (e.g. IEEE) technologies.
  • 5G mobile communications will have a wider range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications, including vehicular safety, different sensors and real-time control.
  • 5G is expected to have multiple radio interfaces, including, apart from earlier deployed frequencies below 6GHz, also higher, that is cmWave and mmWave frequencies, and also being capable of integrating with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE.
  • 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as inter-RI operability between cmWave and mmWave).
  • One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
  • NFV network functions virtualization
  • a virtualized network function may comprise one or more virtual machines running computer program codes using standard or general type servers instead of customized hardware. Cloud computing or cloud data storage may also be utilized.
  • In radio communications this may mean node operations being carried out, at least partly, in a server, host or node operationally coupled to a remote radio head. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts.
  • An extremely general architecture of an exemplifying system 100 to which embodiments of the invention may be applied is illustrated in Figure 1.
  • Figure 1 is a simplified system architecture only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown. It is apparent to a person skilled in the art that the system may comprise any number of the illustrated elements and functional entities.
  • a cellular communication system 100 formed by one or more cellular radio access networks, such as the Long Term Evolution (LTE), the LTE-Advanced (LTE-A) of the 3rd Generation Partnership Project (3GPP), or the predicted future 5G solutions, are typically composed of one or more network nodes that may be of different type.
  • a base station 110 such as an evolved NodeB (eNB)
  • eNB evolved NodeB
  • MME mobility management entity
  • a base station 110 providing a wide area, medium range or local area coverage 101 for terminal devices 140, for example for the terminal devices to obtain wireless access to data in other networks 103 such as the Internet, either directly or via a core network 102 comprising devices (apparatuses) for different purposes, such as a mobility management entity (MME) 120 providing control plane function for mobility between different access networks and controlling high-level operations of the terminal devices.
  • devices, also called network nodes and apparatuses, in the access network and/or in the core network may be configured to support streaming intermediate data.
  • the devices (network nodes, apparatuses) illustrated in Figure 1 comprise distributed many-to-many (N2N) organized data structure units 111, 121 (N2N-u), each N2N unit providing an execution environment to N2N instances, as will be described in more detail below.
  • N2N instance may be called a data neuron, a data cell or a data instance. It could also be implemented, for example, as an enhanced/upgraded data bus or a data pool or a data stream.
  • the term N2N instance will generally be used herein as a synonym for a data neuron, and for the other alternatives.
  • the N2N units may be configured to instantiate N2N instances (data neurons) whenever a need is identified, and an N2N instance may be maintained also when it is empty.
  • the need may be identified when network function(s) are instantiated whereby the system instantiates also N2N instances needed for intermediate data between the network functions.
  • Another example includes dynamic instantiation during an execution of a service, when a network function or the system detects the need.
  • a further example is a combination of the two examples, i.e. at the begin- ning of the execution of the service and dynamically during the execution.
  • Intermediate data means herein one or more outputs of one or more network functions that are to be used as one or more inputs by one or more other network functions.
  • An output of a network function is called herein a data point.
  • a network function refers herein to any processing function, including different sub-functions, in a network.
  • Examples of a network function include but are not limited to user plane functions and control plane functions, such as a mobility management function with its sub-functions relating to session mapper, mobility, policy and authentication, a session management function with its sub-functions relating to gateway logic and software defined network logic, and a context manager function.
  • a network function can be seen as an entity, or a functional block, within a network infrastructure, the network function providing a particular capability to support communication or a distinct instance of a service, and having a defined functional behavior and defined interfaces.
  • the network functions may be for the same purpose but may differ from each other by having programmable properties and attributes that take into account different environments and system set-ups.
  • a network function may be implemented as a network node (element) on a dedicated hardware, or as a computer program instance running on a dedicated hardware, or as a virtualized function instantiated on an appropriate platform.
  • FIG. 2 illustrates an example of an N2N instance 200 suitable for streaming intermediate data within an execution environment.
  • the N2N instance 200 comprises an inbound data interface 211 for inserting (receiving) data outputs 221 to temporarily store them in a memory area 220 in an ordered data structure, and an outbound data interface 212 via which the data outputs 221 are fetched.
  • the ordered data structure may be a queue that uses FIFO (first-in-first-out) principles, or a stack that uses LIFO (last-in-first-out) principles. Although in the illustrated example there is one data point, it should be appreciated that the memory area 220 may comprise from zero to a plurality of data points 221.
  • the inbound data interface 211 may be called an input data interface, and correspondingly the outbound data interface 212 may be called an output data interface.
  • the data interfaces may be implemented using any application programming interface (API), such as a uniform interface for representational state transfer (REST API), a socket API, and a data stream API.
  • API application programming interface
  • the data interfaces are interfaces providing direct memory access, the inbound for inserting a data point to the memory area 220 and the outbound for fetching a data point directly from the memory area 220.
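For illustration only, the ordered memory area and the two data interfaces described above could be sketched roughly as follows (the class and method names are hypothetical, not part of the disclosure):

```python
from collections import deque

class N2NInstance:
    """Minimal sketch of an N2N instance (data neuron) with an ordered
    memory area and inbound/outbound data interfaces."""

    def __init__(self, order="FIFO"):
        self._memory = deque()   # memory area holding data points
        self._order = order      # "FIFO" (queue) or "LIFO" (stack)

    def insert(self, data_point):
        """Inbound data interface: insert a data point into the memory area."""
        self._memory.append(data_point)

    def fetch(self):
        """Outbound data interface: fetch the next data point according
        to the ordering principle in use."""
        if not self._memory:
            return None                     # zero data points stored
        if self._order == "FIFO":
            return self._memory.popleft()   # first-in-first-out
        return self._memory.pop()           # last-in-first-out

fifo = N2NInstance("FIFO")
fifo.insert("a"); fifo.insert("b")
print(fifo.fetch())  # a  -- queue order
lifo = N2NInstance("LIFO")
lifo.insert("a"); lifo.insert("b")
print(lifo.fetch())  # b  -- stack order
```

A real implementation could equally expose these interfaces as a REST API, socket API or data stream API, or as direct memory access, as the description notes.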
  • An N2N instance 200 may comprise data points of the same type, i.e. be a data point type-specific data instance. If such data instances are in use, the data instance may receive data points from several network functions whose outputs are of the same type. The basic principle is that all outputs of a network function that are of one type will end up in the same N2N instance. If a network function has outputs of two or more different types, the outputs of different types will end up in different N2N instances. However, an N2N instance 200 may also comprise data points of different types, for example a sub-set of all possible types. If such a solution is used, outputs of different types from a network function may end up in the same N2N instance, or in different N2N instances, depending on network function settings and/or on whether the different types are within the sub-set of an N2N instance.
  • For horizontal scaling there may be a plurality of parallel N2N instances sharing the load, i.e. data points for the same N2N instance may end up in different parallel N2N instances.
  • the parallel N2N instances may have a separate load balancing function between the network function(s) and the parallel N2N instances.
  • a data point may comprise attribute(s) or be associated with attributes.
  • an attribute may be a value inside a data point that the N2N instance can dig out of the data point, or a kind of metadata on top of a data point, or a value coming with a data point.
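Purely as an illustration of these three possibilities, a data point carrying attributes might be modelled as follows (the structure and names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    """Hypothetical data point: an attribute may be a value inside the
    payload that an N2N instance can dig out, metadata riding on top of
    the data point, or a value coming along with the data point."""
    payload: dict                                   # values inside the data point
    metadata: dict = field(default_factory=dict)    # metadata on top
    extra: dict = field(default_factory=dict)       # values coming with it

    def attribute(self, name):
        # look for the attribute in all three places, payload first
        for source in (self.payload, self.metadata, self.extra):
            if name in source:
                return source[name]
        return None

dp = DataPoint(payload={"type": "mobility"}, metadata={"ttl": 5})
print(dp.attribute("type"))  # mobility
print(dp.attribute("ttl"))   # 5
```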
  • Figure 3 illustrates another example of an N2N instance 300.
  • the N2N instance 300 is suitable for streaming intermediate data within an execution environment and also with other execution environments.
  • the N2N instance may be data point type-specific, or contain data points of different types, as described above.
  • For functions in the same execution environment the N2N instance 300 comprises an internal inbound data interface 311a for inserting (receiving) data outputs 321 from functions to temporarily store them in a memory area 320 in an ordered data structure, and an internal outbound data interface 312a via which functions in the same execution environment may fetch the data outputs 321.
  • the internal data interfaces correspond to data interfaces described above with Figure 2.
  • the N2N instance 300 comprises an external inbound data interface 311b for receiving (inserting) data outputs 321 from other execution environments and an external outbound data interface 312b for delivering data outputs received via the internal inbound data interface 311a to other execution environments.
  • the external data interfaces may be called inbound/outbound synchronization interfaces, inbound/outbound forwarding data interfaces, inbound/outbound sharing interfaces or inbound/outbound data axons.
  • the external data interfaces may be of any data interface type mentioned above. Further, they, or one of them, may be a publish-subscribe API.
  • the memory area comprises, in an additional information area 322, information on connected N2N instances.
  • the information on connected N2N instances includes those N2N instances that have subscribed to data points inserted in the N2N instance. However, such information may be maintained somewhere else.
  • the information on connected N2N instances is set to contain information on those N2N instances whereto data points, or their memory addresses, are to be sent. It should be appreciated that instead of, or in addition to, N2N instances, their execution environments may be addressed. There are no limitations on how the connected N2N instances are determined.
  • the connected N2N instances may be determined using neighbor relationship, or there may be a separate N2N instance for virtual machines serving as a specific type of a device in a cloud, the separate N2N instance sharing information with or via a corresponding separate N2N instance in another cloud.
  • the neighbor relationship may be defined by the communication system, wherein connected N2N instances may be determined based on some relationship between network functions or network elements, or corresponding virtual machines, using the principle that there are, or may be with a certain probability, network function processes that will need a data point from a network function inserting outputs to the N2N instance.
  • each virtual machine comprises one or more N2N instances, realized by a fast cache for example, and the sharing is with neighboring virtual machines.
  • In scenario B, for a radio access network cloud having an extremely fast cloud server, for example, N2N instances are implemented in a separate virtual machine.
  • the data point "a" is inserted in an N2N instance in the separate virtual machine, and is retrievable therefrom by all functions in the radio access cloud.
  • Sharing between radio access network clouds may be organized by having in a radio access network cloud a separate virtual machine with a dedicated N2N instance for sharing data points with corresponding separate virtual machines in neighboring radio access network clouds.
  • the dedicated N2N instance may forward data points received from other clouds to all, or some of the N2N instances in the virtual machines, and the dedicated N2N instance may be determined to be a neighboring N2N instance to all or some of the N2N instances in the same radio access network cloud. If the radio access network cloud implements scenario B, the dedicated N2N instance may be integrated to N2N instances in the separate virtual machine.
  • the additional information 322 may comprise rules for return order, rules for life-times, etc. Examples of different rules will be described in more detail below with Figures 9 to 11.
  • the memory area 320 may comprise from zero to a plurality of data points 321. Further, it should be appreciated that there may be no additional information, or that the additional information does not comprise one or more rules, or does not comprise the information on connected N2N instances.
  • Further examples of N2N instances include an N2N instance configured to comprise an internal inbound interface and an external outbound interface, or vice versa an external inbound interface and an internal outbound interface; an N2N instance having only external interfaces (inbound and outbound); an N2N instance having both internal interfaces and one of the external interfaces; and vice versa an N2N instance having both external interfaces and one of the internal interfaces.
  • Figure 4 illustrates an example of a function interaction model for 5G, the model using N2N instances.
  • the model 400 comprises a service layer 401 and a network slice instance layer 402 mapped towards a resource layer.
  • the service layer 401 and the network slice instance layer 402 are network slicing service abstraction layers.
  • the service layer 401 comprises a plurality of different service instances 411a, 411b.
  • the network slice instance layer 402 comprises a plurality of slice instances 421a, 421b, 421c into which service instances are sliced.
  • the slice instances are in turn sliced into sub-network instances 422a, 422b in the network slice instance layer.
  • the resource layer 403 comprises a plurality of execution environments 430, 430'.
  • the execution environments 430, 430' comprise a plurality of network functions 431a, 431b, 431c, 431d (only the network functions of one execution environment are illustrated).
  • the network functions' 431a, 431b, 431c, 431d outputs are inserted to N2N instances 432-1, 432-2, 432-m in a function interaction model 432.
  • the network functions 431a, 431b, 431c, 431d may fetch inputs from the N2N instances 432-1, 432-2, 432-m.
  • the function interaction models 432, 432' in different execution environments are configured to share the outputs (data portion), illustrated by a thick line between the two function interaction models.
  • a data point inserted into N2N instance 432-2 may be shared with N2N instance 432'-q.
  • the resource layer further comprises a plurality of interface functions 433a, 433b, 433c.
  • the interface functions have a dual role since they provide point-to-point legacy interfaces to and from external functions 434a, 434b, 434c, such as a radio access network (RAN) function, connectivity function and a session and subscription data function (S&S data), so that the external functions may also insert data points and/or fetch data points from the N2N instances.
  • RAN radio access network
  • S&S data session and subscription data function
  • Figures 5 to 8 illustrate different N2N instance functionalities, or N2N unit functionality, during data point insertion.
  • the N2N instance (data neuron) illustrated in Figure 2, or any N2N instance (data neuron) comprising one inbound interface, may be implemented as described in Figure 5, and the N2N instance (data neuron) illustrated in Figure 3 may be implemented using any of the functionalities described with Figures 5 to 8.
  • the term "subscribing N2N instance" means a connected (sharing) N2N instance whereto data points are to be forwarded, regardless of whether or not a publish-subscribe paradigm is used.
  • When a data point is received in block 501 via an inbound interface, the data point is inserted in block 502 into a memory area according to the order used.
  • the insertion time/insertion order may define the order data points are stored.
  • If the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point in block 502.
  • When a data point is received in block 601 via an inbound interface, its processing depends on whether or not it was received (block 602) via an internal interface. If the inbound interface was the internal inbound interface (block 602: yes), subscribing N2N instances are determined in block 603, sending copies of the data point via an external outbound data interface is caused in block 604, and the data point is inserted in block 605 according to the order used, as described above. Naturally, if the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point and copied to the copy of the data point that will be sent.
  • Otherwise (block 602: no), the process proceeds directly to block 605 to insert the received data point according to the order used.
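The branching of the Figure 6 flow might be sketched as follows (the names are hypothetical; the subscriber lookup and the external send are supplied by the caller):

```python
def handle_inbound(instance, data_point, via_internal, subscribers, send_external):
    """Sketch of the Figure 6 insertion flow: a data point received via the
    internal inbound interface is copied to subscribing N2N instances before
    being inserted; a data point from the external inbound interface is
    inserted directly."""
    if via_internal:                          # block 602: yes
        for sub in subscribers(instance):     # block 603: determine subscribers
            send_external(sub, data_point)    # block 604: send a copy externally
    instance.append(data_point)               # block 605: insert per the order used

sent = []
memory = []
handle_inbound(memory, "x", True,
               subscribers=lambda _: ["n2n-a", "n2n-b"],
               send_external=lambda sub, dp: sent.append((sub, dp)))
print(memory)  # ['x']
print(sent)    # [('n2n-a', 'x'), ('n2n-b', 'x')]
```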
  • Figure 7 illustrates an example in which some of the N2N instances with which data points are shared have access to the memory area and are close enough, whereas some of the N2N instances do not have access and/or are not close enough. Further, in the illustrated example it is assumed that shared data points may be further shared, the further sharing being indicated by forward information.
  • When a data point is received in block 701 via an inbound interface, its processing depends on whether or not it was received (block 702) via an internal interface. If the inbound interface was the internal inbound interface (block 702: yes), it is checked in block 703 whether this data point should be shared more than once. The information may be received with the data point, for example by means of an indication of the number of times it should be shared. It should be appreciated that this check may be omitted if the N2N instance is instantiated with a rule to "share always X number of times".
  • If the data point should be shared more than once (block 703: yes), the number of times the data point has been shared (t#) is set in block 704 to one, and the maximum number of sharings (t-a) is set in block 704 to the received maximum number, or to the maximum number determined during instantiation of the N2N instance.
  • Then subscribing N2N instances with memory access are determined in block 705, and sending a memory reference to the point whereto the data point will be (or is) inserted via an external outbound data interface is caused in block 706 (with or without forward information).
  • If the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point and copied to the copy of the data point that will be sent.
  • If the inbound interface was the external inbound interface (block 702: no), it is checked in block 710 whether the data point was received with forward information. If yes, it is checked in block 711 whether the number of times the data point has been shared is smaller than the maximum. If yes, the number of times the data point has been shared is incremented by one in block 712 and then the process proceeds to block 705 to determine subscribing N2N instances.
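The re-sharing bookkeeping of Figure 7 (blocks 703 to 712) might be sketched as follows (the function and variable names are hypothetical; t# and t-a are plain integers here):

```python
def should_forward(received_externally, has_forward_info, times_shared, max_shares):
    """Sketch of the Figure 7 re-sharing check: a data point arriving via the
    external inbound interface is forwarded onward only if it carries forward
    information and its share count is still below the maximum (blocks 710-712).
    Returns a (forward?, updated share count) pair."""
    if not received_externally:
        # an internal insertion starts the count at one (block 704)
        return True, 1
    if has_forward_info and times_shared < max_shares:   # blocks 710-711
        return True, times_shared + 1                    # block 712: increment
    return False, times_shared

print(should_forward(False, False, 0, 3))   # (True, 1)   first sharing
print(should_forward(True, True, 1, 3))     # (True, 2)   forwarded again
print(should_forward(True, True, 3, 3))     # (False, 3)  maximum reached
print(should_forward(True, False, 1, 3))    # (False, 1)  no forward information
```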
  • If (or when) a memory reference is received in block 801 via an external inbound interface, it is treated as if a data point had been received, i.e. it is inserted in block 802 into the N2N instance according to the order used.
  • the memory reference acts as a flag that something is readable and retrievable (fetchable).
  • Figures 9 to 11 illustrate different N2N instance functionalities, or N2N unit functionality, during data point removal.
  • Any N2N instance (data neuron), such as those illustrated in Figures 2 and 3, may be implemented to use any of the functionalities described with Figures 9 to 11.
  • It is assumed that the N2N instance has been instantiated earlier, depicted only in Figure 9 by the dashed line block 900, and that the N2N instance contains a data point.
  • If the N2N instance has not been instantiated, it does not exist and nothing can be fetched; retrieval (fetching) of a data point will fail.
  • the simplest order may be to use FIFO if the data structure is a queue, or LIFO if the data structure is a stack, together with the insertion time of data points.
  • the order may be based on an attribute or an attribute combination that may be given as a rule.
  • the rule, or rules, may utilize installation flags used during an N2N instance installation (i.e. creation).
  • a rule specified by installation flags may be: sort(inc <att0>, <att1>, ...) or sort(dec <att0>, <att1>, ...).
  • Still a further example comprises a rule set in which each attribute is given an order of its own: sort(<att0> inc, <att1> dec, <att2> inc, ...) and next(dec, <att0>, <att1>, ...).
  • An example of such a rule includes next(inc <att0>, <att1>, ...) or next(dec <att0>, <att1>, ...).
  • A still further possibility is to use indexing.
  • If indexing is maintained and a rule specified by the installation flag index(<att0>, <att1>, ...) is used, i.e. an index on an attribute that can be used to fetch data points, by get(<att:val>) for example, the indexing provides the order.
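As one illustrative reading of a per-attribute sort rule such as sort(<att0> inc, <att1> dec), assuming data points are represented as plain dictionaries (the helper name is hypothetical):

```python
# Hypothetical interpretation of the installation-flag sort rule: a list of
# (attribute, direction) pairs applied to data points represented as dicts.
def apply_sort_rule(data_points, rule):
    """rule: list of (attribute, direction) pairs, direction 'inc' or 'dec'."""
    ordered = list(data_points)
    # sort by the least significant attribute first; Python's stable sort
    # then preserves that order when sorting by more significant attributes
    for att, direction in reversed(rule):
        ordered.sort(key=lambda dp: dp[att], reverse=(direction == "dec"))
    return ordered

points = [{"att0": 2, "att1": "b"}, {"att0": 1, "att1": "a"}, {"att0": 1, "att1": "c"}]
print(apply_sort_rule(points, [("att0", "inc"), ("att1", "dec")]))
# [{'att0': 1, 'att1': 'c'}, {'att0': 1, 'att1': 'a'}, {'att0': 2, 'att1': 'b'}]
```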
  • Further, time-to-live information may be utilized. For example, each data point may have the same time after which the data point will be removed unless it is fetched earlier, or data points may have data point-specific time-to-live information that may be set with an installation time flag. It is even possible to have data points that shall be kept until they are explicitly deleted. Examples of time-related rules specified by an installation time flag include del('-1') for "delete when fetched", del('<time>') for a time-to-live from insertion, and del(no) for a data point that will be kept until explicitly deleted.
  • a rule of an N2N instance is applied to all data points inserted to the N2N instance.
  • an N2N instance may be configured to monitor (block 1001), data point-specifically, whether or not the storing time for a data point has expired. If the storing time expires (block 1001: yes) before the data point has been fetched, the data point is removed from the memory in block 1002. Otherwise the monitoring is continued (block 1001: no).
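The time-related removal rules (del('-1'), del('<time>'), del(no)) and the expiry check of block 1001 might be interpreted as follows (the string encoding of the rules and the function name are hypothetical):

```python
import time

def expired(inserted_at, rule, now=None, fetched=False):
    """Sketch of the time-related removal rules set by installation time
    flags: del('-1') = delete when fetched, del('<time>') = time-to-live
    from insertion, del(no) = keep until explicitly deleted."""
    now = time.time() if now is None else now
    if rule == "del('-1')":
        return fetched                  # removed only once it has been fetched
    if rule == "del(no)":
        return False                    # kept until explicitly deleted
    # otherwise the rule carries a time-to-live in seconds, e.g. del('30')
    ttl = float(rule[len("del('"):-len("')")])
    return (now - inserted_at) > ttl    # block 1001: storing time expired?

print(expired(0, "del('30')", now=31))   # True  -- storing time has expired
print(expired(0, "del('30')", now=10))   # False -- monitoring continues
print(expired(0, "del(no)", now=1e9))    # False -- explicit deletion only
```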
  • Figure 11 illustrates a further example in which a data point will be needed as an input to more than one network function.
  • a data point that is the first one according to a fetching order is determined in block 1102, as described above, and sending the data point via the outbound data interface is caused in block 1103.
  • a number of requests for the data point, r-n, is incremented in block 1104 by one (the starting value for r-n is zero).
  • the threshold can be one, two, three, etc.; in other words, any value may be used for the threshold. If the number of requests exceeds the threshold (block 1105: yes), the data point is removed in block 1106 from the N2N instance. If the number of requests does not exceed the threshold (block 1105: no), storing the data point is continued in block 1107.
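The request-counting removal of Figure 11 might be sketched as follows (the names are hypothetical; the N2N instance is a plain list here):

```python
def fetch_with_threshold(instance, request_counts, threshold):
    """Sketch of the Figure 11 removal policy: the first data point per the
    fetching order is returned on each request; it is removed from the N2N
    instance only after the number of requests for it exceeds a threshold
    (blocks 1102-1107)."""
    data_point = instance[0]                                   # block 1102
    request_counts[data_point] = request_counts.get(data_point, 0) + 1  # block 1104
    if request_counts[data_point] > threshold:                 # block 1105
        instance.pop(0)                                        # block 1106: remove
    return data_point                                          # block 1103: send

n2n = ["dp1", "dp2"]
counts = {}
print(fetch_with_threshold(n2n, counts, threshold=2))  # dp1 (1st request, kept)
print(fetch_with_threshold(n2n, counts, threshold=2))  # dp1 (2nd request, kept)
print(fetch_with_threshold(n2n, counts, threshold=2))  # dp1 (3rd request, removed)
print(n2n)  # ['dp2']
```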
  • Figure 12 illustrates an example of an intermediate data stream.
  • N2N instances are instantiated at the beginning.
  • N2N instances 1201, 1202, 1203, 1204, 1205 in an execution environment 1200 may interact via interface drivers dr1, dr2 with legacy network functions or legacy network elements, such as a base station eNodeB in LTE (not illustrated in Figure 12), and provide the intermediate data stream to network functions F1, F2, F3 and F4.
  • F1 may be a mobility management function, F2 a session management function, F3 a context management function, etc.
  • arrows with a solid line depict insertion of a data point,
  • arrows with a dashed line depict fetching of a data point, and
  • arrows with a dot-and-dash line depict interactions with the legacy functions or legacy network elements.
  • A terminal device may have sent an attach request or a service request, to mention just a couple of examples, to a legacy eNodeB, which will send the message towards a mobility management entity in a core network, message 12-1 depicting the message sent towards the mobility management entity.
  • the interface driver dr1 is configured to insert the request as a data point 1-1 to an N2N instance 1201, called herein a first N2N instance, wherefrom in the illustrated example the network function F3 fetches the data point 1-1. Then the network function F3 inserts its output, i.e. a data point 2-1, to another N2N instance 1202, called herein a second N2N instance.
  • the data point 2-1 is in turn fetched by the network function F1, which inserts first a data point 2-2 to the second N2N instance 1202 and then another data point 3-1 to another N2N instance 1203, called herein a third N2N instance.
  • the data point 2-2 is fetched by the network function F2 that inserts a data point 4-1 to a further N2N instance 1204, called herein a fourth N2N instance.
  • the network function F3 fetches both the data point 3-1 and the data point 4-1 and then inserts its output as a data point 5-1 to still another N2N instance 1205, called herein a fifth N2N instance.
  • the interface driver dr2 fetches the data point 5-1, which may be sent in message 12-2 as a response to a request received in message 12-1.
  • the network function F2 inserts another data point 4-2 to the fourth N2N instance 1204, fetched by the network function F4.
  • the network function F4 inserts a data point 2-3 to the second N2N instance 1202. The illustrated process ends when the network function F1 fetches the data point 2-3.
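The Figure 12 walk-through can be replayed, for illustration only, with the N2N instances as simple queues (the helper names are hypothetical):

```python
# Hypothetical replay of the Figure 12 stream: network functions exchange
# data points only via named N2N instances, never via direct interfaces.
n2n = {name: [] for name in ("first", "second", "third", "fourth", "fifth")}

def insert(instance, dp): n2n[instance].append(dp)
def fetch(instance): return n2n[instance].pop(0)

insert("first", "1-1")                              # driver dr1 inserts the request
fetch("first")                                      # F3 fetches 1-1 ...
insert("second", "2-1")                             # ... and inserts its output 2-1
fetch("second")                                     # F1 fetches 2-1 ...
insert("second", "2-2"); insert("third", "3-1")     # ... then inserts 2-2 and 3-1
fetch("second"); insert("fourth", "4-1")            # F2 fetches 2-2, inserts 4-1
fetch("third"); fetch("fourth")                     # F3 fetches both 3-1 and 4-1 ...
insert("fifth", "5-1")                              # ... and inserts 5-1
print(fetch("fifth"))  # 5-1 -- driver dr2 fetches the response (message 12-2)
```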
  • any of the N2N instances may be configured to share (receive and/or send) data points.
  • the N2N instances are maintained even when they are empty.
  • the N2N instances may be maintained basically forever or deleted in response to a specific event occurring.
  • Each N2N instance in the execution environment may share the same "maintenance rule", or one or more of them may have N2N instance specific maintenance rules.
  • an N2N instance may be associated with a life-time, expiry of which causes deletion of the N2N instance.
  • in still another example, when all network functions associated with an N2N instance, and/or with an execution environment and/or with a service, are processed (i.e. are not running any more), the N2N instance or the N2N instances are deleted. Still a further example includes an additional rule that all connected N2N instances wherefrom data points may be received should be deleted before or simultaneously with the N2N instance. It should be appreciated that the above described list of examples is not an exhaustive list, and that the examples may be combined with each other.
  • using N2N instances with the disclosed limited interfaces enables the N2N instances to be used with network functions so that there is no need to define interfaces between different network functions. This facilitates flexible network function deployment, ease of interfacing, flexible chaining, and co-location of network functions.
  • the solution separates intermediate data communication (inputs/outputs) between network functions from data storage that provides data persistency and can mediate data from network functions to management functions and analytic engines.
  • This separation makes it possible to separate low latency data streams with small data points from big background information streams, for example. Thanks to the separation, a low latency data point and a background information chunk are not in the same execution queue. That results in more efficient data throughput during the life cycle of the data.
  • compared with a conventional data bus, the N2N instance, i.e. the data neuron, can be seen as an enhanced/upgraded version: there is no need for a subscribing network function to listen to everything that is inserted and pick up the items that are relevant to the subscribing function, as is the case with a conventional data bus or a data pool, or with separate data filters that perform the listening and picking up between a data bus and network functions or applications.
  • a device (network node, apparatus) configured to support streaming of intermediate data, i.e. configured to provide an execution environment based at least partly on what is disclosed above with any of Figures 1 to 12, including implementing one or more functions/operations described above with an embodiment/example, for example by means of any of Figures 2 to 12, comprises not only prior art means, but also means for implementing the one or more functions/operations of a corresponding functionality described with an embodiment/example, for example by means of any of Figures 2 to 12, and it may comprise separate means for each separate function/operation, or means may be configured to perform two or more functions/operations.
  • one or more of the means and/or the execution environment/N2N unit, or its sub-units, or data neurons (i.e. N2N instances) described above may be implemented in hardware (one or more devices), firmware (one or more devices), software (one or more modules), or combinations thereof.
  • the apparatus(es) of embodiments may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, logic gates, other electronic units designed to perform the functions described herein by means of Figures 1 to 12, or a combination thereof.
  • the implementation can be carried out through modules of at least one chipset (e.g. procedures, functions, and so on) that perform the functions described herein.
  • the memory unit may be implemented within the processor or externally to the processor. In the latter case, it can be communicatively coupled to the processor via various means, as is known in the art. Additionally, the components of the systems described herein may be rearranged and/or complemented by additional components in order to facilitate the achievements of the various aspects, etc., described with regard thereto, and they are not limited to the precise configurations set forth in the given figures, as will be appreciated by one skilled in the art.
  • FIG. 13 illustrates an apparatus (device) configured to carry out the functions described above with an example/examples to provide an execution environment.
  • Each apparatus may comprise one or more communication control circuitry, such as at least one processor 1302, and at least one memory 1304, including one or more algorithms 1303, such as a computer program code (software) wherein the at least one memory and the computer program code (software) are configured, with the at least one processor, to cause the apparatus to carry out any one of the exemplified functionalities of the terminal device.
  • the memory 1304 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the apparatus may further comprise different interfaces 1301, such as one or more communication interfaces (TX/RX) comprising hardware and/or soft- ware for realizing communication connectivity according to one or more communication protocols.
  • the communication interface may provide the apparatus with communication capabilities to communicate in the cellular communication system and enable communication between different network nodes and between terminal devices and different network nodes, for example.
  • the communication interface may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitries and one or more antennas.
  • At least one of the communication control circuitries in the apparatus 1300 is configured to provide the execution environment/N2N unit, or one or more of its sub-units, and to carry out functionalities described above by means of any of Figures 2 to 12 by one or more circuitries.
  • circuitry refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • This definition of 'circuitry' applies to all uses of this term in this application.
  • the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
  • the term 'circuitry' would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
  • the at least one processor, the memory, and the computer program code form processing means or comprise one or more computer program code portions for carrying out one or more operations according to any one of the embodiments/examples of Figures 4 to 11 or operations thereof.
  • Embodiments as described may also be carried out in the form of a computer process defined by a computer program or portions thereof. Embodiments of the methods described in connection with Figures 4 to 11 may be carried out by executing at least one portion of a computer program comprising corresponding instructions.
  • the computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, which may be any entity or device capable of carrying the program.
  • the computer program may be stored on a computer program distribution medium readable by a computer or a processor.
  • the computer program medium may be, for example but not limited to, a record medium, computer memory, read-only memory, electrical carrier signal, telecommunications signal, or a software distribution package.
  • the computer program medium may be a non-transitory medium. Coding of software for carrying out the embodiments as shown and described is well within the scope of a person of ordinary skill in the art.

Abstract

A mechanism to stream intermediate data between network functions is disclosed. The mechanism is based on having in a memory area one or more data neurons for data points in a data structure, a data point being an output of a first network function, the output being usable as an input to at least one second network function. When a data point is received via an inbound data interface of a data neuron, the data point is inserted to the data neuron. When a request is received via an outbound data interface of the data neuron, sending, via the outbound data interface, of a data point that is, according to a fetching order, the first data point in the data neuron is caused, and the data point is removed from the data neuron.

Description

INTERMEDIATE DATA BETWEEN NETWORK FUNCTIONS
TECHNICAL FIELD
The invention relates to data exchange between network functions.
BACKGROUND
In recent years the phenomenal growth of mobile services and proliferation of smart phones and tablets have increased a demand for higher network capacity. It is anticipated that ultra-low or low latency applications, such as tactile Internet, augmented reality, factory automation, and self-driving cars, will be the next success. Further, it is expected that network systems will consist of multiple physical and/or virtual network functions. To achieve the required ultra-low latency, information exchange between the network functions should happen in a smooth manner.
BRIEF DESCRIPTION
According to an aspect, there is provided the subject matter of the independent claims. Some embodiments are defined in the dependent claims.
One or more examples of implementations are set forth in more detail in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
In the following some embodiments will be described with reference to the attached drawings, in which
Figure 1 illustrates an exemplified wireless communication system; Figures 2 and 3 illustrate block diagrams;
Figure 4 illustrates an example of a network slicing abstraction model with an exemplified function interaction model;
Figures 5 to 11 illustrate exemplified processes;
Figure 12 illustrates an example of an intermediate data stream; and
Figure 13 is a schematic block diagram of an apparatus.
DETAILED DESCRIPTION OF SOME EMBODIMENTS
The following embodiments are exemplifying. Although the specification may refer to "an", "one", or "some" embodiment(s) and/or example(s) in several locations of the text, this does not necessarily mean that each reference is made to the same embodiment(s) or example(s), or that a particular feature only applies to a single embodiment and/or example. Single features of different embodiments and/or examples may also be combined to provide other embodiments and/or examples.
Embodiments and examples described herein may be implemented in any communications system, wired or wireless, such as in at least one of the following: Universal Mobile Telecommunication System (UMTS, 3G) based on basic wideband-code division multiple access (W-CDMA), high-speed packet access (HSPA), Long Term Evolution (LTE), LTE-Advanced, LTE-Advanced Pro, fifth generation (5G) system, beyond 5G, and/or wireless local area networks (WLAN) based on IEEE 802.11 specifications and/or IEEE 802.15 specifications. The embodiments are not, however, restricted to the systems given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties. One example of a suitable communications system is the 5G system, as listed above.
5G has been envisaged to use multiple-input-multiple-output (MIMO) multi-antenna transmission techniques, more base stations or access nodes than the current network deployments of LTE, by using a so-called small cell concept including macro sites operating in co-operation with smaller local area access nodes, such as local ultra-dense deployment of small cells, and perhaps also employing a variety of radio technologies for better coverage and enhanced data rates. 5G will likely be comprised of more than one radio access technology (RAT), each optimized for certain use cases and/or spectrum. 5G system may also incorporate both cellular (3GPP) and non-cellular (e.g. IEEE) technologies. 5G mobile communications will have a wider range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications, including vehicular safety, different sensors and real-time control. 5G is expected to have multiple radio interfaces, including, apart from earlier deployed frequencies below 6GHz, also higher, that is cmWave and mmWave frequencies, and also being capable of integrating with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as inter-RI operability between cmWave and mmWave). One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
It should be appreciated that future networks will most probably utilize network functions virtualization (NFV) which is a network architecture concept that proposes virtualizing network functions (network node functions) into "building blocks" or entities that may be operationally connected or linked together to provide services. A virtualized network function (VNF) may comprise one or more virtual machines running computer program codes using standard or general type servers instead of customized hardware. Cloud computing or cloud data storage may also be utilized. In radio communications this may mean node operations to be carried out, at least partly, in a server, host or node operationally coupled to a remote radio head. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent. Some other technology advancements probably to be used are Software-Defined Networking (SDN), Big Data, and all-IP, which may change the way networks are being constructed and managed. For example, one or more of the below described network node functionalities may be migrated to any corresponding abstraction or apparatus or device. Therefore, all words and expressions should be interpreted broadly and they are intended to illustrate, not to restrict, the embodiment.
An extremely general architecture of an exemplifying system 100 to which embodiments of the invention may be applied is illustrated in Figure 1. Figure 1 is a simplified system architecture only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown. It is apparent to a person skilled in the art that the system may comprise any number of the illustrated elements and functional entities.
Referring to Figure 1, a cellular communication system 100, formed by one or more cellular radio access networks, such as the Long Term Evolution (LTE), the LTE-Advanced (LTE-A) of the 3rd Generation Partnership Project (3GPP), or the predicted future 5G solutions, is typically composed of one or more network nodes that may be of different type. An example of such network nodes is a base station 110, such as an evolved NodeB (eNB), providing a wide area, medium range or local area coverage 101 for terminal devices 140, for example for the terminal devices to obtain wireless access to data in other networks 103 such as the Internet, either directly or via a core network 102 comprising devices (apparatuses) for different purposes, such as a mobility management entity (MME) 120 providing control plane function for mobility between different access networks and controlling high-level operations of the terminal devices. In order to provide a modular, scalable, low latency solution, devices, also called network nodes and apparatuses, in the access network and/or devices, also called network nodes and apparatuses, in the core network may be configured to support streaming intermediate data. For that purpose the devices (network nodes, apparatuses) illustrated in Figure 1 comprise distributed many-to-many (N2N) organized data structure units 111, 121 (N2N-u), each N2N unit to provide an execution environment to N2N instances, as will be described in more detail below. An N2N instance may be called a data neuron, a data cell or a data instance. It could also be implemented, for example, as an enhanced/upgraded data bus or a data pool or a data stream. Below the term N2N instance will be generally used as a synonym to a data neuron, and to the other alternatives. The N2N units may be configured to instantiate N2N instances (data neurons) whenever a need is identified, and an N2N instance may be maintained also when it is empty.
There are no restrictions on how a need is identified. For example, the need may be identified when network function(s) are instantiated, whereby the system instantiates also the N2N instances needed for intermediate data between the network functions. Another example includes dynamic instantiation during an execution of a service, when a network function or the system detects the need. Naturally a further example is a combination of the two examples, i.e. at the beginning of the execution of the service and dynamically during the execution.
Intermediate data means herein one or more outputs of one or more network functions that are to be used as one or more inputs by one or more other network functions. An output of a network function is called herein a data point.
A network function refers herein to any processing function, including different sub-functions, in a network. Examples of a network function include, but are not limited to, user plane functions and control plane functions, such as a mobility management function with its sub-functions relating to session mapper, mobility, policy and authentication, a session management function with its sub-functions relating to gateway logic and software defined network logic, and a context manager function. In other words, a network function can be seen as an entity, or a functional block, within a network infrastructure, the network function providing a particular capability to support communication or a distinct instance of a service and having a defined functional behavior and defined interfaces. The network functions may be for the same purpose but they may differentiate from each other by having programmable properties and attributes that may take into account different environments and system set-ups.
As described above, a network function may be implemented as a network node (element) on a dedicated hardware, or as a computer program instance running on a dedicated hardware, or as a virtualized function instantiated on an appropriate platform.
Figure 2 illustrates an example of an N2N instance 200 suitable for streaming intermediate data within an execution environment. The N2N instance 200 comprises an inbound data interface 211 for inserting (receiving) data outputs 221 to temporarily store them in a memory area 220 in an ordered data structure, and an outbound data interface 212 via which the data outputs 221 are fetched.
The ordered data structure may be a queue that uses FIFO (first-in-first- out) principles, or a stack that uses LIFO (last-in-first-out) principles. Although in the illustrated example there is one data point it should be appreciated that the memory area 220 may comprise from zero to plurality of data points 221.
The inbound data interface 211 may be called an input data interface, and correspondingly the outbound data interface 212 may be called an output data interface. The data interfaces may be implemented using any application programming interface (API), such as a uniform interface for representational state transfer (REST API), a socket API, and a data stream API. At the simplest the data interfaces are interfaces providing direct memory access, the inbound for inserting a data point to the memory area 220 and the outbound for fetching a data point directly from the memory area 220.
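Purely as an illustrative sketch (the class and method names are assumptions, not part of the disclosure), an N2N instance with the two data interfaces and a FIFO-ordered memory area could be modelled as follows; a LIFO stack would expose the same interfaces:

```python
from collections import deque

class DataNeuron:
    """Sketch of an N2N instance (data neuron).

    Data points are kept in an ordered data structure; a FIFO queue
    is assumed here, but a LIFO stack would work the same way.
    """

    def __init__(self):
        self._points = deque()  # memory area for data points

    def insert(self, data_point):
        """Inbound data interface: insert a data point."""
        self._points.append(data_point)

    def fetch(self):
        """Outbound data interface: return and remove the data point
        that is, according to the fetching order, the first one."""
        if not self._points:
            return None  # an empty neuron is maintained, not deleted
        return self._points.popleft()
```

A fetched data point is removed from the neuron, so each data point is delivered once via the outbound data interface.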
An N2N instance 200 may comprise data points of the same type, i.e. be a data point type specific data instance. If such data instances are in use, the data instance may receive data points from several network functions whose outputs are of the same type. The basic principle is that outputs from a network function having all outputs of one type will end up in the same N2N instance. If a network function has outputs of two or more different types, the outputs of different types will end up in different N2N instances. However, an N2N instance 200 may comprise data points of different types, for example a sub-set of all possible types. If such a solution is used, outputs of different types from a network function may end up in the same N2N instance, or in different N2N instances, depending on network function settings and/or whether the different types are within the sub-set of an N2N instance.
Still a further arrangement that may be used, for example for network functions producing data points at a very fast speed, is so-called horizontal scaling. In horizontal scaling there may be a plurality of parallel N2N instances sharing the load, i.e. data points from the same network function may end up in different parallel N2N instances. The parallel N2N instances may have a separate load balancing function between the network function(s) and the parallel N2N instances.
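A separate load balancing function in front of parallel N2N instances could, purely as a sketch, look like the following; round-robin is an assumed policy here, and the names are illustrative, not mandated by the description above:

```python
import itertools
from collections import deque

class DataNeuron:
    """Minimal FIFO sketch of an N2N instance."""
    def __init__(self):
        self._points = deque()
    def insert(self, point):
        self._points.append(point)
    def fetch(self):
        return self._points.popleft() if self._points else None

class LoadBalancer:
    """Sketch of a separate load balancing function that spreads
    data points over parallel N2N instances (horizontal scaling)."""

    def __init__(self, parallel_neurons):
        # Round-robin over the parallel instances; any other
        # load-sharing policy could be used instead.
        self._cycle = itertools.cycle(parallel_neurons)

    def insert(self, data_point):
        next(self._cycle).insert(data_point)
```

With two parallel neurons, successive data points alternate between them, so neither queue grows at the full producer rate.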
There are no restrictions on what constitutes a data point, i.e. an output of a network function or network sub-function. For example, although not illustrated in Figure 2 or Figure 3, a data point may comprise attribute(s) or be associated with attributes. In other words, an attribute may be a value inside a data point that the N2N instance can dig from the data point, or a kind of metadata on top of a data point, or a value coming with a data point.
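As an illustrative sketch only (the class and field names are assumptions), a data point carrying attributes, either as values inside the data point or as metadata on top of it, could be modelled as:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class DataPoint:
    """Sketch of a data point: an output of a network function.

    Attributes may be values inside the payload that an N2N instance
    can dig out, or metadata carried on top of the payload.
    """
    payload: Any
    attributes: Dict[str, Any] = field(default_factory=dict)

def dig_attribute(point: DataPoint, name: str):
    """Return an attribute, looking first at the metadata on top of
    the data point and then inside the payload (if it is a mapping)."""
    if name in point.attributes:
        return point.attributes[name]
    if isinstance(point.payload, dict):
        return point.payload.get(name)
    return None
```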
Figure 3 illustrates another example of an N2N instance 300. The N2N instance 300 is suitable for streaming intermediate data within an execution environment and also with other execution environments. The N2N instance may be data point type -specific, or contain data points of different type, as described above.
For functions in the same execution environment the N2N instance 300 comprises an internal inbound data interface 311a for inserting (receiving) data outputs 321 from functions to temporarily store them in a memory area 320 in an ordered data structure, and an internal outbound data interface 312a via which functions in the same execution environment may fetch the data outputs 321. The internal data interfaces correspond to the data interfaces described above with Figure 2. However, there are further data interfaces: the N2N instance 300 comprises an external inbound data interface 311b for receiving (inserting) data outputs 321 from other execution environments and an external outbound data interface 312b for delivering data outputs received via the internal inbound data interface 311a to other execution environments. The external data interfaces may be called inbound/outbound synchronization interfaces, inbound/outbound forwarding data interfaces, inbound/outbound sharing interfaces or inbound/outbound data axons. The external data interfaces may be of any data interface type mentioned above. Further, they, or one of them, may be a publish-subscribe API.
In the illustrated example, for sharing data points, the memory area comprises in an additional information area 322 information on connected N2N instances. If the publish-subscribe paradigm is used, the information on connected N2N instances includes those N2N instances that have subscribed to data points inserted in the N2N instance. However, such information may be maintained somewhere else. If no publish-subscribe paradigm is used, the information on connected N2N instances is set to contain information on those N2N instances whereto data points, or their memory addresses, are to be sent. It should be appreciated that instead of, or in addition to, N2N instances, their execution environments may be addressed. There are no limitations on how the connected N2N instances are determined. The connected N2N instances may be determined using a neighbor relationship, or there may be a separate N2N instance for virtual machines serving as a specific type of a device in a cloud, the separate N2N instance sharing information with or via a corresponding separate N2N instance in another cloud. The neighbor relationship may be defined by the communication system, wherein connected N2N instances may be determined based on some relationship between network functions or network elements, or corresponding virtual machines, using a principle that there are, or that there may be with certain probability, network function processes that will need a data point from a network function inserting outputs to the N2N instance.
Thinking of a radio access network implemented in one or more clouds, having virtual machines of an evolved node B, following N2N data point sharing scenarios (and connected N2N instance information) can be imagined within a radio access network cloud:
scenario A: each virtual machine comprises one or more N2N instances, realized by a fast cache for example, and the sharing is with neighboring virtual machines. In other words, when a data point "a" is inserted, it is shared with all neighboring N2N instances.
    scenario B for a radio access network cloud, having an extremely fast cloud server, for example: N2N instances are implemented in a separate virtual machine. In other words, the data point "a" is inserted in an N2N instance in the separate virtual machine, and is retrievable therefrom by all functions in the radio access cloud. Sharing between radio access network clouds may be organized by having in a radio access network cloud a separate virtual machine having a dedicated N2N instance for sharing data points with corresponding separate virtual machines in neighboring radio access network clouds. If the radio access network cloud implements scenario A, the dedicated N2N instance may forward data points received from other clouds to all, or some, of the N2N instances in the virtual machines, and the dedicated N2N instance may be determined to be a neighboring N2N instance to all or some of the N2N instances in the same radio access network cloud. If the radio access network cloud implements scenario B, the dedicated N2N instance may be integrated to the N2N instances in the separate virtual machine.
In the illustrated example of Figure 3 the additional information 322 may comprise rules for return order, rules for life-times, etc. Examples of different rules will be described in more detail below with Figures 9 to 11.
It should be appreciated that the memory area 320 may comprise from zero to plurality of data points 321. Further, it should be appreciated that there may be no additional information or that the additional information does not comprise one or more rules, or the additional information does not comprise the information on connected N2N instances.
Further, it should be appreciated that in the memory area of the example illustrated in Figure 2 there may be one or more rules, or other additional information.
Still further possibilities for N2N instances include an N2N instance configured to comprise an internal inbound interface and an external outbound interface, or vice versa to comprise an external inbound interface and an internal outbound interface, an N2N instance having only external interfaces (inbound and outbound), an N2N instance having both internal interfaces and one of the external interfaces, and vice versa an N2N instance having both external interfaces and one of the internal interfaces.
Figure 4 illustrates an example of a function interaction model for 5G, the model using N2N instances.
Referring to Figure 4, the model 400 comprises a service layer 401 and a network slice instance layer 402 mapped towards a resource layer. The service layer 401 and the network slice instance layer 402 are network slicing service abstraction layers. The service layer 401 comprises plurality of different service instances 411a, 411b. The network slice instance layer 402 comprises plurality of slice instances 421a, 421b, 421c to which service instances are sliced. The slice instances are in turn sliced to sub-network instances 422a, 422b in the network slice instance layer.
The resource layer 403 comprises plurality of execution environments 430, 430'. The execution environments 430, 430' comprise plurality of network functions 431a, 431b, 431c, 431d (only network functions of one execution environment are illustrated). The network functions' 431a, 431b, 431c, 431d outputs are inserted to N2N instances 432-1, 432-2, 432-m in a function interaction model 432. Naturally the network functions 431a, 431b, 431c, 431d may fetch inputs from the N2N instances 432-1, 432-2, 432-m. In the illustrated example, the function interaction models 432, 432' in different execution environments are configured to share the outputs (data points), illustrated by a thick line between the two function interaction models. For example, a data point inserted into N2N instance 432-2 (N2N-2) may be shared with N2N instance 432'-q. The resource layer further comprises plurality of interface functions 433a, 433b, 433c. In the illustrated example the interface functions have a dual role since they provide point-to-point legacy interfaces to and from external functions 434a, 434b, 434c, such as a radio access network (RAN) function, connectivity function and a session and subscription data function (S&S data), so that the external functions may also insert data points and/or fetch data points from the N2N instances.
Figures 5 to 8 illustrate different N2N instance functionalities, or N2N unit functionality, during data point insertion. In Figures 5 to 8 it is assumed that the N2N instance has been instantiated earlier, depicted only in Figure 5 by dashed line block 500. The N2N instance (data neuron) illustrated in Figure 2, or any N2N instance (data neuron) comprising one inbound interface, may be implemented as described in Figure 5, and the N2N instance (data neuron) illustrated in Figure 3 may be implemented using any of the functionalities described with Figures 5 to 8. Further, the term "subscribing N2N instance" means a connected (sharing) N2N instance whereto data points are to be forwarded, regardless of whether or not the publish-subscribe paradigm is used.
Referring to Figure 5, when a data point is received in block 501 via an inbound interface, the data point is inserted in block 502 into a memory area according to an order used. For example, the insertion time/insertion order may define the order in which data points are stored. Naturally, if the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point in block 502.
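As an illustration only (not part of the claimed subject matter), the Figure 5 insertion flow may be sketched as below. The class and method names are assumptions introduced for this example; the memory area is modelled as a queue storing each data point together with its attributes in insertion order:

```python
from collections import deque

class DataNeuron:
    """Minimal sketch of an N2N instance (data neuron) with a single
    inbound interface, as in Figure 5. Data points are stored in
    insertion order, together with any attributes they arrive with."""

    def __init__(self):
        self._points = deque()  # memory area; order = insertion order

    def insert(self, data_point, attributes=None):
        # Block 502: store the data point together with any attribute
        # or insertion-flag information it was received with.
        self._points.append((data_point, attributes or {}))

    def __len__(self):
        return len(self._points)

neuron = DataNeuron()
neuron.insert("attach-request", {"ue": 42})
neuron.insert("service-request")
```

The insertion order is preserved by the queue, so a later fetch according to a FIFO rule would return "attach-request" first.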
Referring to Figure 6, when a data point is received in block 601 via an inbound interface, its processing depends on whether or not it was received (block 602) via an internal interface. If the inbound interface was the internal inbound interface (block 602: yes), subscribing N2N instances are determined in block 603, sending copies of the data point via an external outbound data interface is caused in block 604, and the data point is inserted in block 605 according to an order used, as described above. Naturally, if the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point and copied to the copy of the data point that will be sent.
If the inbound interface was the external inbound interface (block 602: no), the process proceeds directly to block 605 to insert the received data point according to the order used.
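The Figure 6 branching may be sketched as below, purely as an illustration; the `SharingNeuron` class and its `receive` method are assumed names, and `subscribers` stands in for the subscribing N2N instances determined in block 603:

```python
class SharingNeuron:
    """Sketch of the Figure 6 flow: a data point arriving via the
    internal inbound interface is copied to subscribing neurons before
    being stored; one arriving via the external inbound interface is
    only stored, which avoids endless re-sharing loops."""

    def __init__(self, subscribers=()):
        self.points = []
        self.subscribers = list(subscribers)

    def receive(self, data_point, internal):
        if internal:  # block 602: yes
            for sub in self.subscribers:  # block 603
                # block 604: send a copy via the external outbound interface
                sub.receive(dict(data_point), internal=False)
        self.points.append(data_point)    # block 605: insert in order used

remote = SharingNeuron()
local = SharingNeuron(subscribers=[remote])
local.receive({"out": "F1-output"}, internal=True)
```

After the call, both the local and the remote neuron hold the data point, but the remote copy was stored without being forwarded further.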
In the above example it was assumed that a copy of the data point is sent to subscribing N2N instances. However, if an N2N instance, or the network functions interacting with a specific N2N instance, have access to the memory of the N2N instance whereto the data point is inserted, and they are physically close enough so that latency requirements will be fulfilled, sending a memory reference to the inserted data point may be caused instead of sending a copy.
Figure 7 illustrates an example in which some of the N2N instances with which data points are shared have access to the memory area and are close enough, whereas some of the N2N instances do not have access and/or are not close enough. Further, in the illustrated example it is assumed that shared data points may be further shared, the further sharing being indicated by forward information.
Referring to Figure 7, when a data point is received in block 701 via an inbound interface, its processing depends on whether or not it was received (block 702) via an internal interface. If the inbound interface was the internal inbound interface (block 702: yes), it is checked in block 703 whether this data point should be shared more than once. The information may be received with the data point, for example by means of an indication of the number of times it should be shared. It should be appreciated that this check may be omitted if the N2N instance is instantiated with a rule that states "share always X number of times". If the data point should be shared more than once (block 703: yes), the number of times the data point has been shared (t#) is set in block 704 to be one, and the maximum number of sharings (t-a) is set in block 704 to be the received maximum number, or the maximum number determined during instantiation of the N2N instance. Then, or if the data point is not supposed to be shared more than once (block 703: no), subscribing N2N instances with memory access are determined in block 705, and sending a memory reference to the point whereto the data point will be (or is) inserted via an external outbound data interface is caused in block 706 (with or without forward information). Subscribing N2N instances without memory access are also determined in block 707, sending copies of the data point (with or without forward information) via an external outbound data interface is caused in block 708, and the data point is inserted in block 709 according to an order used, as described above. Naturally, if the data point is associated with insertion flag information and/or one or more attributes, they are inserted with the data point and copied to the copies of the data point that will be sent.
If the inbound interface was the external inbound interface (block 702: no), it is checked in block 710 whether the data point was received with forward information. If yes, it is checked in block 711 whether the number of times the data point has been shared is smaller than the maximum. If yes, the number of times the data point has been shared is incremented in block 712 by one, and then the process proceeds to block 705 to determine subscribing N2N instances.
If the data point is received without forward information (block 710: no), or the number of times the data point has been shared is not smaller than the maximum (block 711: no), the process proceeds directly to block 709 to insert the received data point according to the order used.
In one implementation, before forwarding a data point received with forward information, the check of block 711 is performed on the updated number of times the data point has been shared, and if the condition t# = t-a is met, the copies are forwarded without forward information.
Naturally the concept of further sharing (blocks 703, 704, 710, 711, 712) may be implemented with the example described in Figure 6, and the blocks 703, 704, 710, 711, 712 relating to further sharing in Figure 7 may be omitted if the concept of further sharing is not implemented.
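The further-sharing rule of Figure 7 may be sketched as below, as an illustration only. The field names `t_num` and `t_max` are assumptions standing for t# and t-a; memory references (blocks 705/706) are omitted so that every subscriber receives a copy:

```python
class ForwardingNeuron:
    """Sketch of the Figure 7 further-sharing rule: a data point may
    carry forward information (t#, times shared so far; t-a, the
    maximum), and a neuron receiving it via the external inbound
    interface re-forwards it only while t# is below t-a."""

    def __init__(self, subscribers=()):
        self.points = []
        self.subscribers = list(subscribers)

    def insert_internal(self, payload, max_shares=1):
        # Blocks 703/704: attach forward information if sharing > once.
        fwd = {"t_num": 1, "t_max": max_shares} if max_shares > 1 else None
        for sub in self.subscribers:           # block 707
            sub.receive_external(payload, fwd)  # block 708
        self.points.append(payload)             # block 709

    def receive_external(self, payload, fwd=None):
        if fwd and fwd["t_num"] < fwd["t_max"]:       # blocks 710/711
            fwd = {"t_num": fwd["t_num"] + 1,          # block 712
                   "t_max": fwd["t_max"]}
            for sub in self.subscribers:
                sub.receive_external(payload, fwd)
        self.points.append(payload)                    # block 709

c = ForwardingNeuron()
b = ForwardingNeuron(subscribers=[c])
a = ForwardingNeuron(subscribers=[b])
a.insert_internal("x", max_shares=2)  # b gets share 1, c gets share 2
```

With `max_shares=2`, the data point reaches b (first share) and is forwarded once more to c, where t# equals t-a and forwarding stops.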
Referring to Figure 8, if (or when) a memory reference is received in block 801 via an external inbound interface, it is treated as if a data point had been received, i.e. it is inserted in block 802 into the N2N instance according to an order used. The memory reference acts as a flag that something is readable and retrievable (fetchable).
Figures 9 to 11 illustrate different N2N instance functionalities, or N2N unit functionality, during data point removal. Any N2N instance (data neuron), such as those illustrated in Figures 2 and 3, may be implemented to use any of the functionalities described with Figures 9 to 11. Further, in Figures 9 to 11 it is assumed that the N2N instance has been instantiated earlier, depicted only in Figure 9 by dashed line block 900, and that the N2N instance contains a data point. Naturally, if the N2N instance has not been instantiated, it does not exist and nothing can be fetched. Further, if the N2N instance has been instantiated but there are no data points, retrieving (fetching) a data point will fail.
Referring to Figure 9, when a request, such as "next", or "get", or "fetch", is received in block 901 via an outbound data interface, a data point that is determined to be the first one according to a fetching order is removed in block 902 from the N2N instance and sending the data point via the outbound data interface is caused in block 903.
As said above, the simplest order (rule) may be to use FIFO if the data structure is a queue, or LIFO if the data structure is a stack, and the insertion time of data points. However, the order (increasing or decreasing) may be based on an attribute or an attribute combination that may be given as a rule. The rule, or rules, may utilize installation flags used during an N2N instance installation (i.e. creation). For example, a rule specified by installation flags may be: sort(inc <att0>, <att1>, ...), or sort(dec <att0>, <att1>, ...). A further example comprises a rule set in which each attribute is given an order of its own: sort(<att0> inc, <att1> dec, <att2> inc, ...) and next(dec, <att0>, <att1>, ...). Further, there may be several possible orders (or rules how to determine the order) and the order to be used may be given by a rule specifying the sorting direction and attribute combination. Examples of such a rule include next(inc <att0>, <att1>, ...), or next(dec <att0>, <att1>, ...). A still further possibility is to use indexing. If indexing is maintained and a rule specified by the installation flag index(<att0>, <att1>, ...) is used, i.e. an index on an attribute that can be used to fetch data points, by get(<att:val>) for example, the indexing provides the order.
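An attribute-based fetching order such as sort(<att0> inc, <att1> dec, ...) may be sketched as below, as an illustration only; the function name and the pair representation of (attribute, direction) flags are assumptions. The sketch relies on sort stability, applying the keys right-to-left so that the leftmost attribute dominates:

```python
import operator

def make_fetch_order(*keys):
    """Sketch of an order rule built from installation flags like
    sort(<att0> inc, <att1> dec, ...): each (attribute, direction)
    pair contributes to the order used when the first data point
    according to the fetching order is determined."""
    def first(points):
        ordered = list(points)
        # Stable sorts applied right-to-left give a multi-key order.
        for att, direction in reversed(keys):
            ordered.sort(key=operator.itemgetter(att),
                         reverse=(direction == "dec"))
        return ordered[0]
    return first

points = [{"prio": 2, "ts": 10}, {"prio": 1, "ts": 30}, {"prio": 1, "ts": 20}]
next_rule = make_fetch_order(("prio", "inc"), ("ts", "dec"))
```

With this rule, the lowest `prio` wins and, among equal priorities, the newest `ts` is fetched first.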
To ensure that data points will be removed from an N2N instance, time-to-live information may be utilized. For example, each data point may have the same time after which the data point will be removed unless it is fetched earlier, or data points may have data point-specific time-to-live information that may be set with an installation time flag. It is even possible to have data points that shall be kept until they are explicitly deleted. Examples of time related rules specified by an installation time flag include del('-1') for "delete when fetched", del('<time>') for a time-to-live from insertion, and del(no) for a data point that will be kept until explicitly deleted.
Typically a rule of an N2N instance is applied to all data points inserted to the N2N instance.
Referring to Figure 10, an N2N instance may be configured to monitor (block 1001), data point-specifically, whether or not the storing time for the data point has expired. If the storing time expires (block 1001: yes) before the data point has been fetched, the data point is removed in block 1002 from the memory. Otherwise the monitoring is continued (block 1001: no).
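The time-to-live flags and the Figure 10 monitoring may be combined in a sketch such as the following, for illustration only; the class name and the explicit `now` parameter (used instead of a real clock, for determinism) are assumptions:

```python
class TimedNeuron:
    """Sketch combining the time-related installation flags with the
    Figure 10 monitoring: ttl=None models del(no) (keep until
    explicitly deleted), while a numeric ttl models del('<time>'),
    a time-to-live counted from insertion."""

    def __init__(self, ttl=None):
        self.ttl = ttl
        self.points = []  # list of (inserted_at, data_point)

    def insert(self, data_point, now):
        self.points.append((now, data_point))

    def sweep(self, now):
        # Figure 10, blocks 1001/1002: remove expired data points.
        if self.ttl is not None:
            self.points = [(t, p) for t, p in self.points
                           if now - t < self.ttl]

n = TimedNeuron(ttl=5.0)
n.insert("a", now=0.0)
n.insert("b", now=3.0)
n.sweep(now=6.0)  # "a" has expired, "b" has not
```

A real implementation would run the sweep periodically, or on a timer per data point, rather than on demand.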
Figure 11 illustrates a further example in which a data point will be needed as an input to more than one network function.
Referring to Figure 11, when a request is received in block 1101 via an outbound data interface, a data point that is the first one according to a fetching order is determined in block 1102, as described above, and sending the data point via the outbound data interface is caused in block 1103. However, in this example the data point is not yet removed from the N2N instance. Instead, the number of requests for the data point, r-n, is incremented in block 1104 by one. (The starting value for r-n is zero.) Then it is checked in block 1105 whether the number of requests r-n exceeds a threshold for the data point. The threshold can be one, two, three, etc. In other words, any value for the threshold may be used. If the number of requests exceeds the threshold (block 1105: yes), the data point is removed in block 1106 from the N2N instance. If the number of requests does not exceed the threshold (block 1105: no), storing the data point is continued in block 1107.
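The Figure 11 flow, in which a data point serves several network functions before removal, may be sketched as below (illustrative names only):

```python
from collections import deque

class MultiFetchNeuron:
    """Sketch of Figure 11: the first data point according to the
    fetching order is sent for each request, but removed only after
    the number of requests r-n exceeds a threshold, so several
    network functions can fetch the same data point."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.points = deque()  # entries: [data_point, r_n]

    def insert(self, data_point):
        self.points.append([data_point, 0])

    def fetch(self):
        entry = self.points[0]          # block 1102: determine first
        entry[1] += 1                   # block 1104: r-n += 1
        if entry[1] > self.threshold:   # block 1105
            self.points.popleft()       # block 1106: remove
        return entry[0]                 # block 1103: send

n = MultiFetchNeuron(threshold=1)
n.insert("shared-point")
first = n.fetch()   # r-n = 1, not above the threshold: point kept
second = n.fetch()  # r-n = 2 > 1: point removed
```

With a threshold of one, the data point survives exactly one extra fetch, i.e. it can feed two network functions.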
Naturally the time monitoring described with Figure 10 may be implemented also with the process described with Figure 11.
Figure 12 illustrates an example of an intermediate data stream. In the example it is assumed that the N2N instances are instantiated at the beginning. However, it is a straightforward process to implement the solution for N2N instances instantiated dynamically during processing.
Referring to Figure 12, in the illustrated example N2N instances 1201, 1202, 1203, 1204, 1205 in an execution environment 1200 may interact via interface drivers dr1, dr2 with legacy network functions or legacy network elements, such as a base station eNodeB in LTE (not illustrated in Figure 12), and provide the intermediate data stream to network functions F1, F2, F3 and F4. F1 may be a mobility management function, F2 a session management function, F3 a context management function, etc. In Figure 12 arrows with a solid line depict insertion of a data point, arrows with a dashed line depict fetching of a data point, and arrows with a dot-and-dash line depict interactions with the legacy functions or legacy network elements.
Referring to Figure 12, a terminal device may have sent an attach request or a service request, just to mention a couple of examples, to a legacy eNodeB, which will send the message towards a mobility management entity in a core network, message 12-1 depicting the message sent towards the mobility management entity.
The interface driver dr1 is configured to insert the request as a data point 1-1 to an N2N instance 1201, called herein a first N2N instance, wherefrom in the illustrated example the network function F3 fetches the data point 1-1. Then the network function F3 inserts its output, i.e. a data point 2-1, to another N2N instance 1202, called herein a second N2N instance. The data point 2-1 is in turn fetched by the network function F1, which first inserts a data point 2-2 to the second N2N instance 1202 and then another data point 3-1 to another N2N instance 1203, called herein a third N2N instance. The data point 2-2 is fetched by the network function F2, which inserts a data point 4-1 to a further N2N instance 1204, called herein a fourth N2N instance. The network function F3 fetches both the data point 3-1 and the data point 4-1 and then inserts its output as a data point 5-1 to still another N2N instance 1205, called herein a fifth N2N instance. The interface driver dr2 fetches the data point 5-1, which may be sent in message 12-2 as a response to the request received in message 12-1. The network function F2 inserts another data point 4-2 to the fourth N2N instance 1204, fetched by the network function F4. The network function F4 inserts a data point 2-3 to the second N2N instance 1202. The illustrated process ends when the network function F1 fetches the data point 2-3.
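The central idea of the stream above, namely that network functions never interface with each other directly but only fetch from and insert into N2N instances, may be sketched as a shortened chain (the function bodies and string payloads below are purely illustrative):

```python
from collections import deque

def run_chain():
    """Abbreviated sketch of the Figure 12 pattern: an interface
    driver inserts a request into one N2N instance, each network
    function fetches its input from one instance and inserts its
    output into another, and a driver fetches the final response."""
    n2n_1, n2n_2, n2n_5 = deque(), deque(), deque()

    n2n_1.append("attach-request")     # interface driver dr1 inserts

    request = n2n_1.popleft()          # F3 fetches from the first instance
    n2n_2.append(f"ctx({request})")    # ... and inserts its output

    ctx = n2n_2.popleft()              # F1 fetches from the second instance
    n2n_5.append(f"response({ctx})")   # ... and inserts the response

    return n2n_5.popleft()             # interface driver dr2 fetches
```

No function in the chain knows which other function produced its input or will consume its output; only the N2N instances are shared.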
It should be appreciated that although not illustrated with Figure 12, any of the N2N instances may be configured to share (receive and/or send) data points.
In the example illustrated in Figure 12, the N2N instances are maintained even when they are empty. The N2N instances may be maintained basically forever or deleted in response to a specific event occurring. Each N2N instance in the execution environment may share the same "maintenance rule", or one or more of them may have N2N instance specific maintenance rules. There are no restrictions on what constitutes a specific event causing one or more N2N instances to be deleted. For example, an N2N instance may be associated with a life-time, expiry of which causes deletion of the N2N instance. Other time related rules (events) include that an N2N instance will be deleted after a certain time period has lapsed after the last data point insertion and/or after a data point was last fetched (or tried to be fetched). Other examples include that when all network functions associated with an N2N instance, and/or with an execution environment and/or with a service, are processed (i.e. are not running any more), the N2N instance or the N2N instances are deleted. Still a further example includes an additional rule that all connected N2N instances wherefrom data points may be received should be deleted before or simultaneously with the N2N instance. It should be appreciated that the above described list of examples is not an exhaustive list, and that the examples may be combined with each other.
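Two of the time-related maintenance rules above may be sketched as a simple predicate, as an illustration only; the dictionary fields (`created`, `last_access`, `rules`) are assumed names introduced for this sketch:

```python
def should_delete(neuron, now):
    """Sketch of per-instance maintenance rules: delete on life-time
    expiry, or after an idle period since the last data point
    insertion or fetch attempt. Rule fields are illustrative."""
    rules = neuron["rules"]
    if "lifetime" in rules and now - neuron["created"] > rules["lifetime"]:
        return True
    if "idle" in rules and now - neuron["last_access"] > rules["idle"]:
        return True
    return False

n = {"created": 0.0, "last_access": 50.0,
     "rules": {"lifetime": 1000.0, "idle": 30.0}}
```

The predicate would be evaluated periodically by the execution environment; the rules can be combined freely, as noted in the text.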
As is evident from the above, the use of N2N instances with the disclosed limited interfaces enables the N2N instances to be used with network functions so that there is no need to define interfaces between different network functions. This facilitates flexible network function deployment, ease of interfacing, flexible chaining, and co-location of network functions.
Further, the solution separates intermediate data communication (inputs/outputs) between network functions from data storage that provides data persistency and can mediate data from network functions to management functions and analytic engines. This separation makes it possible to separate low latency data streams with small data points from big background information streams, for example. Thanks to the separation, a low latency data point and a background information chunk are not in the same execution queue. That results in more efficient data throughput during the data's life cycle.
Compared to a conventional data bus or a data pool or to separate data filters, the N2N instance, i.e. the data neuron, can be seen as an enhanced/upgraded version: there is no need for a subscribing network function to listen to everything that is inserted and pick up the items that are relevant to it, as is the case with a conventional data bus or a data pool or separate data filters that perform the listening and picking up between a data bus and network functions or applications. Although in the above examples it is assumed that local storing of data, meaning storing within the execution environment, is used, it should be appreciated that it may also be possible to use a data memory that is common to a plurality of execution environments/N2N instances or shared by some of the execution environments/N2N instances. In this case the data output received by an N2N instance could, for example, be a reference to the shared memory or to a location within the shared memory, respectively.
The blocks, related functions, and information exchanges described above by means of Figures 2 to 12 are in no absolute chronological order, and some of them may be performed simultaneously or in an order differing from the given one. Naturally similar processes may run in parallel. Other functions can also be executed between them or within them, and other information may be sent. Some of the blocks or part of the blocks can also be left out or replaced by a corresponding block or part of the block.
The techniques and methods described herein may be implemented by various means so that a device (network node, apparatus) configured to support streaming of intermediate data, i.e. configured to provide an execution environment, based at least partly on what is disclosed above with any of Figures 1 to 12, including implementing one or more functions/operations described above with an embodiment/example, for example by means of any of Figures 2 to 12, comprises not only prior art means, but also means for implementing the one or more functions/operations of a corresponding functionality described with an embodiment/example, for example by means of any of Figures 2 to 12, and it may comprise separate means for each separate function/operation, or means may be configured to perform two or more functions/operations. For example, one or more of the means and/or the execution environment/N2N unit, or its sub-units, or data neurons (i.e. N2N instances) described above may be implemented in hardware (one or more devices), firmware (one or more devices), software (one or more modules), or combinations thereof. For a hardware implementation, the apparatus(es) of embodiments may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, logic gates, other electronic units designed to perform the functions described herein by means of Figures 1 to 12, or a combination thereof. For firmware or software, the implementation can be carried out through modules of at least one chipset (e.g. procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory unit and executed by processors. The memory unit may be implemented within the processor or externally to the processor.
In the latter case, it can be communicatively coupled to the processor via various means, as is known in the art. Additionally, the components of the systems described herein may be rearranged and/or complemented by additional components in order to facilitate the achievements of the various aspects, etc., described with regard thereto, and they are not limited to the precise configurations set forth in the given figures, as will be appreciated by one skilled in the art.
Figure 13 illustrates an apparatus (device) configured to carry out the functions described above with an example/examples to provide an execution environment. Each apparatus may comprise one or more communication control circuitry, such as at least one processor 1302, and at least one memory 1304, including one or more algorithms 1303, such as a computer program code (software), wherein the at least one memory and the computer program code (software) are configured, with the at least one processor, to cause the apparatus to carry out any one of the exemplified functionalities of the apparatus.
The memory 1304 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
The apparatus may further comprise different interfaces 1301, such as one or more communication interfaces (TX/RX) comprising hardware and/or software for realizing communication connectivity according to one or more communication protocols. The communication interface may provide the apparatus with communication capabilities to communicate in the cellular communication system and enable communication between different network nodes and between terminal devices and different network nodes, for example. The communication interface may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitries and one or more antennas.
Referring to Figure 13, at least one of the communication control circuitries in the apparatus 1300 is configured to provide the execution environment/N2N unit, or one or more of its sub-units, and to carry out functionalities described above by means of any of Figures 2 to 12 by one or more circuitries. As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
In an embodiment, the at least one processor, the memory, and the computer program code form processing means or comprise one or more computer program code portions for carrying out one or more operations according to any one of the embodiments/examples of Figures 4 to 11 or operations thereof.
Embodiments as described may also be carried out in the form of a computer process defined by a computer program or portions thereof. Embodiments of the methods described in connection with Figures 4 to 11 may be carried out by executing at least one portion of a computer program comprising corresponding instructions. The computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, which may be any entity or device capable of carrying the program. For example, the computer program may be stored on a computer program distribution medium readable by a computer or a processor. The computer program medium may be, for example but not limited to, a record medium, computer memory, read-only memory, electrical carrier signal, telecommunications signal, and software distribution package, for example. The computer program medium may be a non-transitory medium. Coding of software for carrying out the embodiments as shown and described is well within the scope of a person of ordinary skill in the art.
Even though the invention has been described above with reference to an example according to the accompanying drawings, it is clear that the invention is not restricted thereto but may be modified in several ways within the scope of the appended claims. Therefore, all words and expressions should be interpreted broadly and they are intended to illustrate, not to restrict, the embodiment. It will be obvious to a person skilled in the art that, as technology advances, the inventive concept may be implemented in various ways. Further, it is clear to a person skilled in the art that the described embodiments may, but are not required to, be combined with other embodiments in various ways.

Claims

1. A method comprising:
instantiating to a memory area one or more data neurons for data points, a data neuron being a data instance with a data structure and two or more data interfaces, and a data point being an output of a first network function, the output being usable as an input to at least one second network function;
inserting, in response to receiving a data point via an inbound data interface of a data neuron, the data point to the data neuron;
causing, in response to receiving a request via an outbound data interface of the data neuron, sending of a data point, that is according to a fetching order the first data point, via the outbound data interface to a network function that sent the request, and removing the data point from the data neuron.
2. A method as claimed in claim 1, further comprising:
monitoring how much time has lapsed from the insertion of the data point into the data neuron;
removing, in response to the time exceeding a preset threshold, the data point from the data neuron.
3. A method as claimed in claim 1 or 2, further comprising: monitoring how many times the data point has been requested; and removing, in response to the number of requests exceeding a preset threshold, the data point from the data neuron.
4. A method as claimed in any preceding claim, further comprising: providing the data neuron with at least two inbound data interfaces, one internal inbound data interface to insert data points from network functions and one external inbound data interface to insert data points from other data neurons, and/or at least two outbound data interfaces, one internal outbound data interface to receive requests for data points from network functions and one external outbound data interface to forward data points or their copies to one or more other data neurons.
5. A method as claimed in claim 4, further comprising:
determining, in response to receiving a data point via the internal inbound data interface, one or more data neurons whereto data points are to be forwarded, and
causing sending a copy of the data point or, as a data point, a reference to a memory area whereto the data point is inserted in the data neuron, to the determined one or more data neurons via the external outbound data interface.
6. A method as claimed in claim 4 or 5, further comprising:
determining, in response to receiving a data point via the internal inbound data interface, one or more data neurons whereto data points are to be forwarded;
causing sending a copy of the data point to the determined one or more data neurons via the external outbound data interface;
detecting that a data point received via the external inbound data interface includes forward information;
determining how many times the data point has been forwarded; and
removing, in response to the number of forwards exceeding a preset threshold, the data point from the data neuron.
7. A method as claimed in claim 6, wherein the forward information comprises a maximum amount of forwards of the data point and a number indicating how many times the data point has been forwarded, and the method further comprises:
comparing the number with the maximum amount; and
if the number is smaller than the maximum amount:
incrementing the number in the forward information and forwarding the data point to the determined one or more data neurons with the forward information.
8. A method as claimed in claim 4, 5, 6 or 7, wherein the network functions and the data neuron are in the same execution environment and the other data neurons are in one or more other execution environments.

9. A method as claimed in claim 4, 5, 6, 7 or 8, wherein the external data interfaces are data axons.
10. A method as claimed in any preceding claim, wherein the data structure is a queue and the fetching order depends on the insertion time and uses first in first out principles.
11. A method as claimed in any of claims 1 to 9, wherein the data structure is a stack and the fetching order depends on the insertion time and uses last in first out principles.

12. A method as claimed in any preceding claim, wherein the inserted data point comprises or is associated with one or more attributes and/or attribute values and the fetching order is determined using the one or more attributes and/or attribute values in response to the request indicating one or more attributes or attribute values.
13. A method as claimed in any preceding claim, wherein the inserted data point comprises or is associated with one or more attributes and/or attribute values and the data neuron comprises one or more fetching rules based on the attributes.
14. A method as claimed in any preceding claim, wherein the data neuron is a distributed many-to-many instance.
15. A device comprising:
at least one processor, and
at least one memory comprising a computer program code, wherein the processor, the memory, and the computer program code are configured to cause the device to:
instantiate to a memory area one or more data neurons for data points, a data neuron being a data instance with a data structure and two or more data interfaces, and a data point being an output of a first network function, the output being usable as an input to at least one second network function;
insert, in response to receiving a data point via an inbound data interface of a data neuron, the data point to the data neuron;
send, in response to receiving a request via an outbound data interface of the data neuron, a data point, that is according to a fetching order the first data point, via the outbound data interface to a network function that sent the request and to remove the data point from the data neuron.
16. A device as claimed in claim 15, wherein the processor, the memory, and the computer program code are further configured to cause the device to:
monitor how much time has lapsed from the insertion of the data point into the data neuron; and
remove, in response to the time exceeding a preset threshold, the data point from the data neuron.
17. A device as claimed in claim 15 or 16, wherein the processor, the memory, and the computer program code are further configured to cause the device to:
monitor how many times the data point has been requested; and remove, in response to the number of requests exceeding a preset threshold, the data point from the data neuron.
18. A device as claimed in claim 15, 16 or 17, wherein the processor, the memory, and the computer program code are further configured to cause the device to provide the data neuron with at least two inbound data interfaces, one internal inbound data interface to insert data points from network functions running in the device and one external inbound data interface to insert data points from other devices, and/or at least two outbound data interfaces, one internal outbound data interface to receive requests for data points from network functions in the device and one external outbound data interface to forward data points or their copies to one or more data neurons in one or more other devices.
19. A device as claimed in claim 18, wherein the processor, the memory, and the computer program code are further configured to cause the device to:
determine, in response to receiving a data point via the internal inbound data interface, one or more data neurons whereto data points are to be forwarded, and
cause sending a copy of the data point or, as a data point, a reference to a memory area whereto the data point is inserted in the data neuron, to the determined one or more data neurons via the external outbound data interface.
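The dual-interface forwarding of claims 18 and 19 can be sketched as follows: a data point arriving on the internal inbound interface is also pushed, either as a copy or as a reference to its memory area, to neurons in other devices via the external outbound interface. The in-process `remote_neurons` list stands in for whatever transport a real deployment would use; all names are assumptions:

```python
class ForwardingNeuron:
    """Sketch of a neuron with internal and external interfaces."""

    def __init__(self, remote_neurons, forward_by_reference=False):
        self.points = []
        self.remote_neurons = remote_neurons  # external outbound targets
        self.forward_by_reference = forward_by_reference

    def insert_internal(self, point):
        # internal inbound interface: data point from a local network function
        self.points.append(point)
        index = len(self.points) - 1
        for remote in self.remote_neurons:
            if self.forward_by_reference:
                # forward, as a data point, a reference to the memory
                # area whereto the point was inserted
                remote.insert_external(("ref", id(self), index))
            else:
                # forward a copy of the data point itself
                remote.insert_external(point)

    def insert_external(self, point):
        # external inbound interface: data point from another device
        self.points.append(point)
```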
20. A device comprising means for carrying out the method according to any one of claims 1 to 14.
21. A non-transitory computer readable media having stored thereon instructions that, when executed by a computing device, cause the computing device to:
instantiate to a memory area of the computing device one or more data neurons for data points, a data neuron being a data instance with a data structure and two or more data interfaces, and a data point being an output of a first network function, the output being usable as an input to at least one second network function;
insert, in response to receiving a data point via an inbound data interface of a data neuron, the data point to the data neuron;
send, in response to receiving a request via an outbound data interface of the data neuron, a data point that is, according to a fetching order, the first data point, via the outbound data interface to a network function that sent the request, and remove the data point from the data neuron.
22. A non-transitory computer readable media as claimed in claim 21, having stored thereon further instructions that, when executed by a computing device, cause the computing device further to:
monitor how much time has elapsed since the insertion of the data point into the data neuron; and
remove, in response to the time exceeding a preset threshold, the data point from the data neuron.
23. A non-transitory computer readable media as claimed in claim 21 or 22, having stored thereon further instructions that, when executed by a computing device, cause the computing device further to:
monitor how many times the data point has been requested; and
remove, in response to the number of requests exceeding a preset threshold, the data point from the data neuron.
24. A non-transitory computer readable media as claimed in claim 21, 22 or 23, having stored thereon further instructions that, when executed by a computing device, cause the computing device further to provide the data neuron with at least two inbound data interfaces, one internal inbound data interface to insert data points from network functions in the execution environment to which the data neuron is instantiated and one external inbound data interface to insert data points from other execution environments, and/or at least two outbound data interfaces, one internal outbound data interface to receive requests for data points from network functions in the execution environment to which the data neuron is instantiated and one external outbound data interface to forward data points or their copies to one or more data neurons in one or more other execution environments.
25. A non-transitory computer readable media as claimed in claim 24, having stored thereon further instructions that, when executed by a computing device, cause the computing device further to:
determine, in response to receiving a data point via the internal inbound data interface, one or more data neurons in one or more other execution environments whereto data points are to be forwarded, and
cause sending a copy of the data point or, as a data point, a reference to a memory area whereto the data point is inserted in the data neuron, to the determined one or more data neurons in the one or more other execution environments via the external outbound data interface.
26. A computer program product comprising program instructions that configure an intermediate network node to perform the steps of a method as claimed in any of claims 1 to 14 when the program is run.
PCT/EP2016/075404 2016-10-21 2016-10-21 Intermediate data between network functions WO2018072846A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16785154.2A EP3529699A1 (en) 2016-10-21 2016-10-21 Intermediate data between network functions
PCT/EP2016/075404 WO2018072846A1 (en) 2016-10-21 2016-10-21 Intermediate data between network functions


Publications (1)

Publication Number Publication Date
WO2018072846A1 true WO2018072846A1 (en) 2018-04-26

Family

ID=57190020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/075404 WO2018072846A1 (en) 2016-10-21 2016-10-21 Intermediate data between network functions

Country Status (2)

Country Link
EP (1) EP3529699A1 (en)
WO (1) WO2018072846A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070237153A1 (en) * 2002-12-20 2007-10-11 Slaughter Gregory L Topology and routing model for a computer network
US20140046882A1 (en) * 2006-04-06 2014-02-13 Samuel F. Wood Packet data neural network system and method


Also Published As

Publication number Publication date
EP3529699A1 (en) 2019-08-28

Similar Documents

Publication Publication Date Title
CN107295609B (en) Network slice processing method and device, terminal and base station
US10511506B2 (en) Method and device for managing virtualized network function
CN110505073B (en) Mobile edge calculation method and device
US11902113B2 (en) Systems and methods for zero-touch deployment of network slices and network slice assurance services
CA3111098A1 (en) Data transmission method and apparatus
CN108353380A (en) Data routing in cellular communication system
US10904092B2 (en) Polymorphic virtualized network function
EP4164282A1 (en) Communication prediction-based energy saving method and apparatus
US11172336B2 (en) Logical radio network
CN116018851A (en) Operator control of user equipment behavior in registering and deregistering with network slices and in establishing and releasing PDU sessions in a communication system
US11132353B2 (en) Network component, network switch, central office, base station, data storage, method and apparatus for managing data, computer program, machine readable storage, and machine readable medium
EP3529699A1 (en) Intermediate data between network functions
EP3243298B1 (en) Control of self-organizing network functions
CN114189893A (en) O-RAN capability opening method, communication system, device and storage medium
CN114765790A (en) IAB node switching method, device and equipment
WO2020063521A1 (en) Capability reporting method and apparatus
US10003657B2 (en) Data transmission processing method and apparatus
WO2023241429A1 (en) Communication method and apparatus
CN110572860A (en) Method for selecting access network, core network device and storage medium
US20230224221A1 (en) Processing chaining in virtualized networks
US20240022989A1 (en) Provisioning user equipment route selection policies with service assurance
CN116963038B (en) Data processing method based on O-RAN equipment and O-RAN equipment
WO2023240592A1 (en) Apparatus, methods, and computer programs
US20230363039A1 (en) Providing adaptive transition between an inactive state and an idle state
CN114006707B (en) East-west firewall configuration method, device and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16785154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2016785154

Country of ref document: EP

Effective date: 20190521