WO2021034308A1 - Adaptive push streaming with user entity feedback - Google Patents


Info

Publication number: WO2021034308A1
Authority: WIPO (PCT)
Prior art keywords: consumer, producer, data stream, data, node
Application number: PCT/US2019/046956
Other languages: French (fr)
Inventors: Aytac Azgin, Ravishankar Ravindran
Original assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority application: PCT/US2019/046956
Publication: WO2021034308A1


Classifications

    • H04L65/80: Responding to QoS (network arrangements, protocols or services for supporting real-time applications in data packet communication)
    • H04L65/612: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
    • H04L65/613: Network streaming of media packets for supporting one-way streaming services, for the control of the source by the destination
    • H04L65/765: Media network packet handling, intermediate
    • H04N21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • This application is related to mobile internet of things (IoT) communication systems and, in particular, to programmable edge computing systems that implement adaptive push streaming among nodes in the network.
  • Next generation mobile IoT applications (such as autonomous vehicle (AV) or drone systems) have strict service requirements.
  • High bandwidth streams are delivered over wireless links (e.g., wireless wide area network (WWAN)) and require low latency to trigger quick response or actuation by/at the end devices hosting the applications.
  • Typical bandwidth requirements may range from hundreds of Mbps to potentially tens of Gbps (depending on the quality of the streams, such as whether full versus compressed or dynamic high-definition frames are used), and latency requirements may range from 10-100 ms, depending on frame rate and contextual requirements.
  • the wireless channel is also susceptible to impairments triggered by mobility, noise, path loss, and fading, etc., thereby leading to varying wireless link capacities over time.
  • In Dynamic Adaptive Streaming over HTTP (DASH), end hosts make short-duration data stream requests (for instance, for segments a few seconds long) based on observed quality levels, typically using bandwidth measurements and buffer monitoring at the end hosts.
  • the server in such a system is stateless.
  • the server initiates the process by sending a special "push promise" frame. Upon receiving this frame, the HTTP client does not send out a request until the response is pushed to the client completely. The client then retrieves the response from the browser cache.
  • k-push, which is a full-push approach
  • the server pushes k video segments after response to a request.
  • the server is not responsible for video rate adaptation.
  • the server pushes the segment at the same quality level as the first (lead) segment. This approach may deteriorate network adaptability and may lead to an over-push problem, where the network resources are wasted.
  • In adaptive push, the server varies the parameter k dynamically to solve the above problems.
  • This approach uses additional control messages for video rate adaptation.
  • the client increases k and puts a cap on it according to resource availability (starting small, and incrementing at a larger rate while k is small and at a smaller rate once it is large).
  • Adaptive push implements a push-directive header for the lead segment request (responded to with a PushAck field).
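The adaptive-k behavior described above (start small, grow quickly while k is small and slowly once it is large, and back off under resource pressure) might be sketched as follows; the specific step sizes, cap, and congestion signal are illustrative assumptions, not values from the scheme itself.

```python
def update_push_count(k: int, k_cap: int, congested: bool) -> int:
    """Adjust k, the number of segments pushed per request (sketch)."""
    if congested:
        # Back off to avoid the over-push problem when resources are scarce.
        return max(1, k // 2)
    # Start small; increment at a larger rate while k is small,
    # at a smaller rate once k is large, and cap by resource availability.
    step = 2 if k < 4 else 1
    return min(k + step, k_cap)
```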
  • Another dimension to streaming is the location of the server, especially since it may involve on-demand processing of content (for instance, transcoding based on received requests).
  • edge processing becomes of critical importance, as it may effectively support low-latency offloading of high-bandwidth streams. This is important for applications involving the timely processing of video frames, for instance object detection through dynamic regions-of-interest encoding, as edge processing may help reduce bandwidth requirements and the perceived latency of the offloading pipeline.
  • Pipeline streaming and inference processes are used for further latency reduction through parallel streams.
  • an adaptive push streaming system may provide an adaptive client/server push using minimal client signaling.
  • the described adaptive push streaming system also may adapt at a faster time scale to fit the requirements of next generation IoT networks.
  • a Multi-Producer Multi-Consumer (MPMC) content delivery system with multiple edge servers is described that provides tight delay requirements using a new edge transport protocol.
  • Network nodes become part of an intelligent transport by implementing a decision process at bottleneck nodes, such as the access points, that gives higher priority to streams targeting next generation mobile IoT end hosts.
  • an adaptive push streaming method for use in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers.
  • the method includes a processor receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node and the processor receiving a network quality measurement from a Consumer node that has subscribed to the at least one stream of data.
  • the processor determines, from the network quality measurement from the Consumer node, transmission characteristics of the at least one stream to be sent to the Consumer node via the network and pushes the at least one data stream to the Consumer node via the network.
  • an adaptive push streaming system for a network comprising a plurality of Producer nodes, Consumer nodes, and edge servers.
  • the system includes a Producer node that creates at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node and at least one Consumer node that subscribes to the at least one stream of data and periodically provides a network quality measurement for the at least one Consumer node to the network.
  • the system also includes a Producer-side edge server that receives the at least one stream from the Producer node and the network quality measurement for the at least one Consumer node, determines an appropriate bit-rate at which to send the at least one stream to the at least one Consumer node via the network, and pushes the at least one stream of data to the at least one Consumer node via the network.
  • a non-transitory computer-readable medium that stores computer instructions for providing adaptive push streaming in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers, that when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node; receiving a network quality measurement from at least one Consumer node that has subscribed to the at least one stream of data; determining from the network quality measurement from the at least one Consumer node a bit- rate at which to send the at least one stream to the at least one Consumer node via the network; and pushing the at least one data stream to the at least one Consumer node via the network.
  • the processor receives multiple streams of data from the Producer node and other Producer nodes in the network and prioritizes the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and with what characteristics to transmit the data stream.
  • the processor multicasts a data stream to multiple Consumer nodes or unicasts the data stream to the Consumer node according to network quality measurements for Consumer nodes that have subscribed to the data stream.
  • the processor pushes the data stream to an edge server that establishes a dynamic multicast of the data stream to multiple Consumer nodes.
  • the processor determines from the network quality measurement from the Consumer node whether to compress a data stream before transmitting the data stream to the Consumer node.
  • a notification message is received from the Consumer node including at least an identification of the data stream using the contextualized name to which the Consumer node has subscribed and the network quality measurement.
  • a notification message is received from an access point including information relating to network usage at the access point and modifying transmission of the data stream based on information in the notification message from the access point.
  • the processor determines an optimal transmission policy based on available network resources and resources requested by the multiple Consumer nodes and pushes the data stream to each of the multiple Consumer nodes as at least one of a multicast and a unicast data stream.
  • the processor estimates a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream, estimates acceptable latency values for each Consumer node subscribed to the data stream, determines a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream, and pushes the data stream to each Consumer node subscribed to the data stream.
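The per-Consumer estimation steps above (supported bit-rate, acceptable latency, transcoding level) can be illustrated with a small sketch; the bitrate ladder, field names, and the fixed transcoding delay are hypothetical choices made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ConsumerFeedback:
    consumer_id: str
    supported_bitrate_kbps: int  # estimated from the network quality measurement
    max_latency_ms: int          # acceptable latency for this Consumer

# Hypothetical transcoding ladder, in kbps.
LADDER = [500, 1500, 4000, 10000]

def select_level(fb: ConsumerFeedback, transcode_delay_ms: int = 20) -> int:
    """Pick the highest ladder rate the Consumer's link supports while
    still meeting its acceptable latency (illustrative rule)."""
    if transcode_delay_ms > fb.max_latency_ms:
        return LADDER[0]  # no latency budget for transcoding: send the base level
    feasible = [rate for rate in LADDER if rate <= fb.supported_bitrate_kbps]
    return feasible[-1] if feasible else LADDER[0]
```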
  • the method may be performed and the instructions on the computer readable media may be processed by the apparatus, and further features of the method and instructions on the computer readable media result from the functionality of the apparatus. Also, the explanations provided for each aspect and its implementation apply equally to the other aspects and the corresponding implementations. The different embodiments may be implemented in hardware, software, or any combination thereof. Also, any one of the foregoing examples may be combined with any one or more of the other foregoing examples to create a new embodiment within the scope of the present disclosure.
  • FIG. 1 illustrates an edge transport system for implementing edge computing and storage transport in a sample embodiment.
  • FIG. 2 illustrates an edge server to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs).
  • FIG. 3 illustrates a client (Producer) to edge servers to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs).
  • FIG. 4 illustrates an example of push-based adaptive streaming using multiple edge servers with each edge server hosting multiple edge control points (ECPs) and each ECP servicing a single client.
  • FIG. 5A illustrates enhancements of the data transport in sample embodiments through the use of transport proxy.
  • FIG. 5B illustrates a sample scheduling policy based on a latency requirement in sample embodiments.
  • FIG. 6 illustrates constraints on bandwidth in a sample embodiment of the transport system.
  • FIG. 7 illustrates the example of FIG. 6 for an application scenario where the points of access act as a transport proxy.
  • FIG. 8 illustrates further details of the application scenario where the points of access act as a transport proxy as in FIG. 7.
  • FIG. 9 illustrates an embodiment of push-based streaming in a client (Producer)-to-client (Consumer) scenario.
  • FIG. 10 illustrates an embodiment of push-based streaming in a server-to-client scenario.
  • FIG. 11 illustrates a flow chart of a method of implementing adaptive push streaming in the processor of an edge computer in a sample embodiment.
  • FIG. 12 is a block diagram illustrating circuitry for performing the methods according to sample embodiments.
  • The functions or algorithms described herein may be implemented in software in one embodiment.
  • the software may include computer executable instructions stored on computer readable media or computer readable storage device such as one or more non-transitory memories or other type of hardware- based storage devices, either local or networked.
  • The functions or algorithms may be organized as modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions may be performed in one or more modules as desired, and the embodiments described are merely examples.
  • the software may be executed on a digital signal processor, ASIC, microprocessor, or other type of processor operating on a computer system, such as a personal computer, server or other computer system, turning such computer system into a specifically programmed machine.
  • a Multi-Producer Multi-Consumer (MPMC) system is provided with multiple edge servers.
  • the edge servers may or may not modify the received content from the Producers (e.g., stereo video or 3D point cloud streams from autonomous vehicles) to provide to Consumers (e.g., other autonomous vehicles that may have subscribed to the stereo video or 3D point cloud streams from the Producers).
  • When the edge servers process the received content from the Producers, the edge servers should have sufficient resources available to them, not just for regular transcoding operations, but also for additional stream data processing (for instance, video processing functions that include merging multiple point clouds or stereo views to generate a detailed unobstructed point cloud or stereo view based on a particular Consumer's point of view).
  • the edge servers may or may not have access to a wireless link at the access network to control transmission rates towards the Consumers (as they may not have been installed at the access points or the base stations).
  • the Producer may either be the producer application at the edge server or the end host (or both), depending on the implementation scenario.
  • Producers in the adaptive push system described herein may not have timely access to that information to make a decision on behalf of clients (or Consumers) on what rate of data stream to push towards these clients.
  • Producers in the adaptive push system described herein may have limited processing capability and bandwidth availability (with respect to the amount of data being generated by them to help with autonomous decision making, and the transmission channel being wireless) to create multiple streams at different rates.
  • the timeframe for making decisions or adapting to changes in rate needs to be very small.
  • Edge servers may be implemented to install such functions in sample embodiments.
  • the edge servers used in the described applications also may have the capability to help clients quickly recover from failures or losses due to being closer to the clients and offering a caching service.
  • the systems and methods described herein maximize the decision accuracy at the end hosts given bandwidth/processing/storage constraints of/at the system components.
  • the decision policy is expressed as selecting the data stream with the highest quality (e.g., bitrate and bandwidth) supported by the Consumer's wireless link.
  • the decision involves what to send (type and features of the stream, i.e., full/dynamic/compressed), when to send (scheduling to meet the deadline with high probability), and at what rate (individual versus joint). Delay variations mostly occur at the access network due to varying channel characteristics and the access point being a bottleneck point.
  • feedback is received from clients on a regular basis to more accurately reflect the latest channel conditions (such as average bandwidth availability based on Channel Quality Indicator (CQI) and Point of Access (PoA) requirements).
  • the edge server is considered to be part of (and hence managed by) the access network. Accordingly, received feedback at the edge server may also include notifications from the access point that are not directly related to user feedback but complement it. For instance, information on aggregate network usage at the PoA may be provided together with individual client-driven observations/requirements associated with all similar-type clients connected to the same PoA.
  • the Producer regularly receives an update on the supported rates by the Consumers subscribing to its content stream.
  • the update includes the Consumer ID and Consumer requirements.
  • These updates may be sent over a vehicle to vehicle (V2V) interface directly from the Consumer to the Producer, or through the Edge Control Points (ECPs) at the edge servers indirectly, in which case, the ECP at the Producer-side edge server may aggregate the information received from ECPs at the Consumer-side edge servers.
  • the Producer may determine the optimal transmission policy based on its available resources and the requested resources. At least two cases are supported:
  • Case I: The Producer may choose to send at the maximum rate supported by the receiving Consumers to optimize its bandwidth use, in which case the ECPs at the Consumer-side edge servers may transcode the received Producer streams accordingly; or
  • Case II: The Producer may create a scalable stream that is delivered to each receiving Consumer at the proper rate matching its requirements (in case edge server use is limited in regard to transcoding).
  • the levels for scalable coding may be determined to maximize the overall decision accuracy at receiving hosts under the Producer’s limitations.
  • decision accuracy may reflect how accurately the Consumers identify potential obstacles or localize them and events associated with them.
  • Decision accuracy depends on the quality of the received streams, with full high-definition streams offering the highest accuracy, while lowering the stream rate results in lower accuracy.
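The choice between Case I and Case II above can be expressed as a simple policy function; the rate units and the one-layer-per-distinct-rate rule are assumptions made for illustration.

```python
def choose_policy(consumer_rates_kbps, edge_can_transcode, producer_uplink_kbps):
    """Case I: one stream at the maximum Consumer-supported rate,
    with Consumer-side ECPs transcoding down.
    Case II: a scalable stream with one layer per distinct supported
    rate (used when edge transcoding is limited)."""
    if edge_can_transcode:
        rate = min(max(consumer_rates_kbps), producer_uplink_kbps)
        return ("case-1-single-stream", [rate])
    layers = sorted({r for r in consumer_rates_kbps if r <= producer_uplink_kbps})
    return ("case-2-scalable", layers or [min(consumer_rates_kbps)])
```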
  • the Producer may initially send its content to its edge server using push-based streaming by specifying connection ID, name stream ID, scalable coding level, and named content.
  • the Producer-side edge server then may propagate the requested content to each Consumer through the Consumer's edge server.
  • the streams may be multicast to other edge servers or sent to each server as unicast data streams.
  • the techniques described herein also enable caching for quick recovery at the Consumer.
  • the systems and methods described herein also include a notification/update feature where a notification message (sessionID, name stream ID, ResourceMetrics ⁇ Latency, BW ⁇ ) is provided.
  • the name stream ID (NS-ID) is assumed to be a contextualized name offering information on the application scenario and potentially other metrics associated with the scenario.
  • Multiple name streams may be transferred between a user entity (UE) and its ECP under the same session ID.
  • a notification is used to alert the Producer side (for which the initial target is the Consumer-side edge server) of the supported rate by the Consumer end host. It may be an adaptation metric (rate of increase/decrease) or the actual bandwidth value.
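The notification message described above (sessionID, name stream ID, ResourceMetrics{Latency, BW}) might be modeled as follows; the field types, units, and the example NS-ID are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ResourceMetrics:
    latency_ms: float        # observed latency measurement or requirement
    bandwidth_kbps: float    # supported rate, or an adaptation metric

@dataclass
class Notification:
    session_id: str          # session between the UE and its ECP
    ns_id: str               # contextualized name stream ID (NS-ID)
    metrics: ResourceMetrics

# Example: a Consumer reporting the rate it supports for a point-cloud
# stream (the NS-ID shown is a hypothetical contextualized name).
note = Notification("session-42", "/av/pointcloud/hd", ResourceMetrics(15.0, 8000.0))
```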
  • the application scenarios in accordance with the adaptive push streaming methods described herein have strict latency requirements.
  • a very fast rate adaptation scheme is required to ensure that the Producer adapts its transmission rate to the Consumer's current experience at a few-millisecond-level granularity.
  • the frequency of notification messages is not constant, due to aperiodic changes in network quality measures and service requirements, and there are multiple sources for the received notification message(s).
  • the network quality is measured, but link quality may also be measured.
  • the wireless capacity may be measured by the Consumer and then modified by the AP, if allowed/supported, based on more accurate estimates. Other information may also be included in the measurement, targeting congestion avoidance at the bottleneck nodes.
  • wireless capacity or link quality are only examples.
  • a dynamic multicast may be established to support efficient bandwidth use in the network resulting from the delivery of high-bandwidth streams among nearby hosts.
  • a transport proxy is implemented at the access points, which may help with name-based prioritized scheduling, resource reservation, and improved notification for more accurate available resource (e.g., bandwidth) estimates.
  • streams associated with autonomous driving, etc. may be assigned higher priorities, and within them, sub-classifications may be provided based on, for example, name and data stream requirements.
  • Naming helps with better decision making, as more context may be included within the names. Latency estimations and expected decision (or actuation) timeframes may be incorporated within contextual names to help with scheduling. This approach is desirable due to the mobility of hosts, different placement strategies associated with edge servers (and the varying distance with respect to end hosts), and the use of non-optimal edge servers without timely migrations.
  • Streams targeting the same end host may arrive at different times to the access point with differing deadlines. Scheduling policies may be used to account for such dynamic arrivals to optimize decision-making. Additional control signaling (e.g., ping messages in-between edge servers and end points, and among edge servers) may be used to measure the round-trip times (RTTs) on a regular basis.
  • RTT measurements cover multiple network segments: end host to access points, end host to edge server, access point to edge server, and edge server to edge server.
  • each forwarded content may be named to include expected latency of delivery from the point of reception to the end host.
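A deadline-aware scheduler at the access point, using the expected delivery latency carried in the contextual name as described above, might look like this earliest-deadline-first sketch; the way latency is passed alongside the name is a hypothetical encoding.

```python
import heapq

class DeadlineScheduler:
    """Earliest-deadline-first queue for streams arriving at an access point.
    Deadline = arrival time + expected delivery latency taken from the
    contextualized name (sketch)."""
    def __init__(self):
        self._queue = []

    def enqueue(self, now_ms, name, expected_latency_ms, payload):
        # Streams targeting the same end host may arrive with differing
        # deadlines; order them by absolute deadline.
        heapq.heappush(self._queue, (now_ms + expected_latency_ms, name, payload))

    def next_to_send(self):
        return heapq.heappop(self._queue) if self._queue else None
```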
  • Producers may reduce the rate they transmit data. If consuming end hosts perform processing on the data, for instance, to merge point clouds, rather than the edge server, then the Producer end host may reduce the rate it transmits data. On the other hand, if some processing is done at the edge server, then the Producer end host may have no choice but to transmit at the maximum rate available to it based on data it generates.
  • Next generation mobile IoT systems such as autonomous vehicles, mobile robotic and drone systems demand new ways to integrate programmable edge computing and networking resources to manage and control them in real time.
  • a new transport architecture for edge networks to handle low latency and high bandwidth real-time data streams from mobile IoT systems that also require services to handle mobility, service migration, data replication and reliability has been described by Ravindran, et al. in “Method and Apparatus for a Low Latency and Reliable Name Based Transport Protocol,” Application No. PCT/US2019/026583, filed April 9, 2019.
  • the data-centric edge transport protocol described therein enables, for example, autonomous vehicles to share their sensors’ point clouds with each other over a managed edge infrastructure.
  • the ETRA system facilitates IoT deployment by providing functionality to enable efficient named data stream (NS) sharing among distributed Producers, Consumers, and edge services, per-session caches in the transport layer for operating over an end-to-end session even during mobility, a service-scoped transport layer resolution function that resolves an NS to end point identifiers, and a cache migration system that is used when edge service migration is needed.
  • the ETRA system architecture will be described herein as a transport architecture in a sample embodiment, although it will be appreciated that other transport architectures may be used in other embodiments.
  • FIG. 1 illustrates the ETRA system architecture 100 for implementing edge computing and storage transport in a sample embodiment.
  • the ETRA system architecture 100 is a three-tier architecture that includes, at the lowest level, the user entity (UE) 110, which has, in the case of a network suitable for autonomous vehicles, computing resources to process its sensor data and to execute several inference tasks related to object detection, tracking, localization and mapping.
  • the UE 110 may be an autonomous vehicle (AV) that is both a Consumer and a Producer (AV-CP) of data streams.
  • the ETRA system architecture 100 includes an edge computing platform comprising one or more edge servers 120 having one or more edge control points (ECPs) 122 and which is placed no farther than the central office (CO).
  • the edge computing platform 120 offers the benefits of both offloading the local computing of the UEs 110 and generating a complementary 'network view' that may be fused with crowdsourced input from other UEs 110 in its vicinity and other static resources around its location, and input into the UE's inference engine in real time to improve the decision making of the UE's inference engine.
  • a one-to-one relationship between an ECP 122 and the UE 110 may be assumed considering the need for performance and predictability towards the workloads.
  • the UEs 110 access the ECPs 122 via points of access 130.
  • the ETRA system architecture 100 includes a central cloud computing system 140 that is involved in longer time scale computing and network provisioning to sustain the edge services.
  • an AV-CP (UE 110) communicates over the ETRA system 100 with the central cloud computing system 140 that, in turn, communicates over an internet protocol (IP) network or information centric network (ICN) with other network nodes to process and transport data streams.
  • the ETRA system architecture 100 depicted therein is optimized for quick distribution of high bandwidth sensor data from mobile IoT systems to edge computing devices and from the edge computing devices to contextually relevant IoT devices.
  • a secure transport-level publish/subscribe (pub/sub) design carrying named data units (NDUs) with scalable transport caches is modified to incorporate pub/sub model dissemination with a name-based pull application programming interface (API).
  • Subscription tables are maintained at designated nodes (servers that push content), which identify end hosts that subscribed to the content.
  • Consumers subscribe at the Consumer-side edge server.
  • the Consumer-side edge server uses the information on namespaces to then subscribe at the Producer-side edge servers.
  • the Producer or Producer-side edge servers update a resolution or namespace mapping database on how to discover content under its namespaces.
  • the mappings are updated regularly in a scenario with mobile publishers; e.g., after handover, a Producer may migrate to a different edge server, in which case the subscriptions need to be updated to point to the internet protocol (IP) address of the new edge server location.
  • a sub API is used by the Consumers to subscribe to a named data stream (NS), while a push API enables securely pushing an NS to multiple Consumers.
  • a pull API is mainly used to recover named data after an application detects packet loss (or missing data).
  • These APIs operate over sessions between an ECP 122 and a UE 110 (UE-ECP) or between respective ECPs 122 (ECP-ECP).
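The sub/push/pull APIs and the subscription tables described above can be sketched minimally as follows; the method names, the list-of-hosts return value, and the per-session cache keyed by sequence number are illustrative assumptions.

```python
from collections import defaultdict

class PushNode:
    """Sketch of a designated node (a server that pushes content): a
    subscription table maps a name stream (NS) to subscribed hosts, new
    named data units (NDUs) are pushed to all of them, and pull requests
    are served from a cache for recovery after packet loss."""
    def __init__(self):
        self.subs = defaultdict(set)   # ns_id -> set of subscribed host IDs
        self.cache = {}                # (ns_id, seq) -> data

    def subscribe(self, ns_id, host_id):
        self.subs[ns_id].add(host_id)

    def push(self, ns_id, seq, data):
        self.cache[(ns_id, seq)] = data
        return sorted(self.subs[ns_id])   # hosts the NDU is delivered to

    def pull(self, ns_id, seq):
        return self.cache.get((ns_id, seq))  # loss recovery
```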
  • an ECP 122 may be used to process data that is contextualized to its serviced UE 110, or the ECP 122 may act as an NS relay to other ECPs 122 that may effectively aid with the creation of an overlay multicast tree.
  • Mobility of the UE 110 may be handled using a set of control functions in the transport layer that coordinate to update the binding between the UE's new network address and the persistent session state (i.e., connection-ID and security context).
  • the control functions may include: (i) probing by the ECP 122 (towards the UE 110) to detect UE 110 reachability over the session lifetime; (ii) mobility-event-driven signaling by the UE 110 (towards the ECP 122) whenever the UE 110 changes its point of access 130; (iii) re-registering the NS prefixes by a UE transport layer using a name stream resolution function (NSRF) that enables resolution of non-local NSs using an external service-scoped name stream resolution system (NSRS); and (iv) re-resolving by the ECP 122 the UE's new point of access 130 using its NSRF anytime during the session.
  • These functions enable an ECP 122 to redirect an NS to the UE 110 at its current point of access 130.
  • a session cache in the ECP 122 may be used by the UE 110 to recover any lost data. In this case, the recovery is handled by the UE's Consumer application.
  • the ETRA system architecture 100 also provides service migration with respect to the migration of the ECP services to a new host, which may be triggered due to mobility or lack of resource availability (i.e., computation or bandwidth).
• the service migration is handled by transport layer control functions that aid with the migration.
• a control function uses a probing function to ensure that the session round trip time (RTT) remains within the threshold required by the application. If the transport session RTT violates this threshold, session migration may be initiated.
• a new host for the ECP 122 may be determined, the session context (connection ID, security context) and the session caches may be restored at the new host, services may be bootstrapped to allow the application components of the ECP 122 to operate over the restored session state, and migration control functions may inform the UE 110 of the new network address.
• control functions also update the binding of the NS prefixes using the NSRF to allow the UE 110 to re-resolve the ECP's new host during the session.
• These functionalities are enabled using the ETRA functional components shown in FIG. 1.
• four types of functional modules may be provided: session manager 150, Consumer 152, Producer 154, and discovery module 156, all of which may be processed by data processor 158.
• the UE 110 also includes the discovery function to contextually discover the set of nearby UEs 110 that may operate over machine-to-machine interfaces; the discovered set is also notified to the ECP 122, which may then subscribe to sensor data coming from ECPs 122 serving those UEs 110.
• the machine-to-machine discovery functions could be contextually driven by information-centric network (ICN) architectures. Accordingly, Consumers may directly subscribe to the NSs generated by their peers (i.e., nearby UEs 110) from the ECP 122.
  • NS name discovery by the UE 110 depends on the role of the ECP 122, which may act as a relay point or as a data processor.
• In the former scenario, names may be inferred using the naming schema design chosen by the IoT system application alone, while in the latter scenario, another round of name discovery may take place between a UE 110 and its ECP 122, identifying the content based on the current binding between them.
  • the latter scenario may necessitate content management due to service migration using, for example, expiry policies, while at the same time taking advantage of the ECP’s computing capability towards data contextualization to suit the dynamic requirements of the UE 110.
• the ETRA system architecture 100 includes two high level transport functions including session transport (ST) functions 160 that manage a session between the end point and the edge service, and common transport (CT) functions 170 that are generic functions or modules to help with pub/sub, data policy management, mobility, and state migration control functions.
• the ST functions include the Session Caching and Registration Function 162 that supports the use of transport layer caches and the policy associated with session caching during the session's lifetime.
• Cached data also has a shareability context managed by the CT's access control function, depending on its source.
  • instance data from UE 110 may be usable at multiple points, whereas processed data from the ECP 122 that is contextualized to a specific UE 110 may not be shareable.
• the registration function allows the NS prefixes to be registered for resolution to a host address by remote Consumers. The registration action is driven by the NS policy and is also used during events such as mobility or service migration.
• Flow and Congestion Control 164 may be evolved considering the data characteristics (i.e., time series nature, temporal constraints, mission critical features). These features use push streaming to avoid any data blocking with the aid of application aware scheduling, receiver driven network prioritization, and grant based flow control to handle congestion in access network scenarios.
  • Application/Context aware scheduling 166 complements receiver driven flow and congestion control by using the UE 110 based feedback on wireless connectivity to prioritize NDUs for transmissions. More specifically, the transport layer multiplexes the NS from the various sessions based on the application requirements. The sessions are managed by Session Management Functions 168 using, for example, Connection ID and security context.
• the CT functions include the Local Resource Directory 171 that represents the local database of NSs coming from local/remote Producers.
  • the Name Stream Resolution Function (NSRF) 172 enables resolution of non-local NS using an external name stream resolution system (NSRS), whose design and implementation may be influenced by application requirements.
  • Mobility Control Functions 173 help with end-to-end probing and update signaling between the UE 110 and ECP 122 whenever the network address changes.
• State Migration Control Functions 174 include control/data migration functions between two ECPs 122.
  • Access Control and Authentication 175 manages the access policy for the local session-based caches at the host. Any ECP 122 request to a UE’s data stream requires authentication before granting access to it.
• Inter-Session Data Exchange 176 helps with low-overhead copying of data between sessions, enabling the relay function with the ECP 122.
• In one example, an augmented vehicular reality (AVR) service is provided.
  • an autonomous vehicle (AV) service provider has provisioned sufficient resources at the edge server 120 for the ECP services 122 to serve a fleet of AVs (UE) 110.
  • Each AV may have a Producer application 154 and a Consumer application 152.
  • the AVR service may be invoked, for example, based on a UE 110 request to obtain the view of the front AV(s).
• an application for the AVR service discovers the closest ECP 122, where the discovery function may be handled as part of the edge platform services to interconnect the UE 110 with the edge server 120.
  • the session managers 150 at both ends establish a secure connection using service level authentication, thereby also creating the security context at the transport layer. Consumers at either end may issue requests to discover the NS and subscribe to them or derive the names of the NSs based on a pre-existing knowledge of the naming schema by the session manager 150.
• the processed crowdsourced data for AV1 from ECP1 may be named /<AV1-ID>/<ECP1>/<AVR-view>/, to which the other UEs 110 may subscribe.
  • an end point subscribes to the NS, over which the respective Producers publish data, including the shareability attributes of the streamed data. Published data then may be pushed over the secure transport session.
  • the transport layer may apply scheduling policies on the data from multiple sessions before handling the data over the network layer.
• Once the data reaches the peer side, it is decrypted and sent to the consuming application for further processing.
  • Named data at the ECP end is saved in the cache making it available to other ECPs 122 and to the UE 110 (for recovery).
  • NS-IDs may be registered to a service scoped NSRS to help with discovery by the other ECPs 122.
• AV1 may communicate its discovery (of AV2) to ECP1 over its secure data channel, after which ECP1 requests AV2's NS.
• ECP1's Consumer resolves ECP2's namespace using the NSRS by invoking the host level NSRF.
• ECP1's Consumer starts to establish a secure session with ECP2 by sending a secure session request to ECP2.
• ECP1's Consumer may initiate a request for AV2's NS, which is then authenticated by ECP2's session manager 150 to determine access to AV2's NS. Once the request is authenticated, the request is delivered to ECP2's Consumer, which then multicasts the stream over that session.
• Once the data arrives over ECP1's transport session, it is cached and sent to the Consumer and the data processor 158 for AV1's consumption. Specifically, data is first sent to ECP1's Producer (which multiplexes the named data objects of multiple AVs' NSs). At this point, due to the availability of AV2's NS at ECP1, this availability could also be registered to the NSRS.
  • Services leverage the session-probe primitive from the transport layer to track the UE 110 mobility during the lifetime of a session.
  • the UE-ECP API uses an application specific RTT threshold of x to decide if an ECP service migration is required or not.
• the need for service migration is twofold: (i) to migrate the security/connection context associated with a session (i.e., host level encryption and authentication keys) and (ii) to migrate the cache state to the new ECP 122.
• the objective is to allow the UE 110 connected to the new point of access 130 to immediately start sending/receiving data without further session negotiations.
• Service migration is triggered once the ECP 122 determines that the RTT threshold is violated (for a certain duration), which is primarily driven by the UE 110 mobility. If, after a handover, the UE-ECP connection continues to satisfy the RTT threshold, then the same ECP 122 may continue to serve the UE 110. In this case, the session cache in the ECP 122 may help applications to recover data lost during the transition. Otherwise, the current ECP 122 may use an operator driven network service to identify the next closest edge server 120 with sufficient compute resources to host the ECP 122 workload associated with the UE 110, after which the service containers are orchestrated.
• the transport session state is transferred securely so that the future data from the UE 110 may be properly authenticated and decrypted before handing the data to the ECP 122.
  • Consumer and Producer caches may be transferred to the new ECP 122.
  • the UE 110 may resolve the new ECP 122.
• Once the UE 110 learns the new ECP's IP address, it may continue to use the same session context (i.e., connection-ID and shared encryption key) between the Consumer and Producer of the UE/ECP.
• the new association to the NS may be renamed /<AV1-ID>/<ECP2>/<AVR-view>/.
• FIG. 2 illustrates an edge server 120 to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs) such as /AV/Lidar/.
  • the namespace of the AV-NS may also include resource constraints such as latency and bandwidth.
• a UE 110 discovers other vehicles using a discovery request 200 in the form of Discovery{{UE-ID, NameStream-ID}, {UE-ID, NS-ID}, ...}, which may include information on name changes for Lidar, cameras, etc. when the UE 110 is an autonomous vehicle, as in the illustrated example.
• the UE 110 may also receive the data in the form of a PeerUpdate{UE ID, NameStream ID, resource metrics}.
• Upon discovery of other vehicles or data streams, the UE 110 sends an edge notification/update message 210 in the form of Notification{Session ID, NameStream ID, Resource Metrics} to the ECP 122 to notify the ECP 122 of the discovered vehicle and its associated data streams.
• the notification may provide updated information including /<AV1-ID>/<Lidar>/ to identify new vehicle AV1 and its data stream "Lidar."
  • the notifications 210 may include update information on multiple NSs from a Producer 154 of the UE 110 including Lidar, camera, and other data from the UE 110.
  • the Consumer application 152 of the UE 110 may also periodically or in response to event triggers update the wireless capacity measurement (e.g., Channel Quality Indicator (CQI), Signal-to-Noise Ratio (SNR), or actual bandwidth estimates) to the ECP service.
  • the server-side Producer may further provide a data stream to a Consumer at 220 and update the Producer at 230.
• One selection criterion could be for the Producer and/or ECP 122 to choose the bitrate that minimizes |f(Mt) - Bt| (utilize the highest rate streams at the maximum supported rate) for the wireless capacity measurement Mt and bitrate Bt.
  • the initial transmission rate thus may be determined by the Producer 154 of the UE 110, with or without knowledge of the bitrate limitations of the Consumer applications 152, and the transmission rate to the respective Consumers 152 may be determined by the ECP 122 based on the wireless capacity measurement data provided by the respective Consumers serviced by the ECP 122 and the bitrate of the data provided by the Producer application 154.
  • the goal of the ECP 122 would be to select bitrates that minimize delay and enhance the user experience given the constraints in the wireless transmission system at any given time.
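The selection criterion above can be sketched as follows; this is a minimal illustration in which the mapping f from the wireless capacity measurement Mt to a sustainable bitrate, and the candidate bitrate set, are placeholder assumptions rather than anything specified in this disclosure.

```python
def select_bitrate(m_t, available_bitrates, f=lambda m: m):
    """Pick the bitrate B_t that minimizes |f(M_t) - B_t|.

    m_t: wireless capacity measurement reported by the Consumer
         (e.g., a bandwidth estimate in Mbit/s).
    available_bitrates: bitrates the Producer/ECP can stream at.
    f: maps the measurement to an estimated sustainable bitrate
       (identity by default; a real mapping is deployment specific).
    """
    target = f(m_t)
    return min(available_bitrates, key=lambda b: abs(target - b))

# A Consumer reporting ~7 Mbit/s is served the closest supported rate.
print(select_bitrate(7.2, [1, 2.5, 5, 8, 16]))  # -> 8
```

In practice the ECP would re-run this selection whenever a notification updates Mt, so the pushed stream tracks the Consumer's wireless conditions.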
  • the ECP 122 may also collect crowd sourced point clouds from multiple vehicles and process that data into a consolidated bitstream to multiple Consumers 152.
  • FIG. 3 illustrates a client (Producer) to edge servers 120 to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs) such as /AV/Lidar/.
  • the Consumer and Producer side ECPs 122 need not be at the same location and may be hosted at the same or different edge servers 120.
  • the Consumers 152 and Producer 154 of the UEs 110 may update the wireless capacity measurement (e.g., Channel Quality Indicator (CQI), Signal-to-Noise Ratio (SNR), or actual bandwidth estimates) to its associated ECP service to share with the AV service. As in the previous example, these measurements are used to predict the supported bitrates at the Consumer side.
  • CQI Channel Quality Indicator
  • SNR Signal-to-Noise Ratio
  • the end-host Producer 154 makes the initial choice of what bitrate stream(s) to send to the server-side Consumer 152 at 300.
  • the ECP 122 may make the final choice of what to send to the end-host Consumer 152 based on the wireless capacity measurement data. For example, different Consumers 152 may request data at different bitrates at 310 due to transmission conditions, hardware limitations, etc.
  • the ECP 122 on the Producer side may decide how to best transmit the requested data stream by determining what transcoding to apply, what bitrates to use, etc.
  • the ECP 122 on the Consumer side may similarly decide how to best transmit the requested data stream to the Consumer 152.
  • the performance criteria may include the decision accuracy to minimize erroneous actuation events where, for example, a lower bit stream may result in a higher error for object detection and localization inferencing.
  • Different end hosts may use different edge servers, so in such a case there may be edge to edge traffic as indicated at 320.
• the bitrate may be chosen that minimizes |f(Mt) - Bt| (utilize the highest rate streams at the maximum supported rate) for the wireless capacity measurement Mt and bitrate Bt.
• the Producer side edge server 120 may multicast or unicast different rate streams at 330 according to the Consumer side requirements received from the Consumer side ECPs 122.
• As in the example of FIG. 2, the Producer 154 may select the initial bitrate, but the goal of each ECP 122 would be to select bitrates that minimize delay and enhance the user experience given the constraints in the wireless transmission system at any given time.
  • the ECPs 122 may also collect crowd sourced point clouds from multiple vehicles and process data from multiple Producers 154 into a consolidated bitstream using compression and the like and provide the merged bitstream to multiple Consumers 152.
• a notification message {sessionID, NameStream ID, ResourceMetrics{Latency, BW}} is provided by a Producer 154 of a UE 110 to its associated ECP 122.
• The ID of the named data stream (NS-ID) is assumed to be a contextualized name offering information on the application scenario, and potentially other metrics associated with the scenario.
  • Multiple name streams may be transferred between a UE 110 and its ECP 122 under the same session ID.
  • a notification from the Consumer also may be used to alert the Producer side of the supported rate by the Consumer end host.
  • the supported rate may be an adaptation metric (rate of increase/decrease) or the actual value (e.g., bandwidth or latency).
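The notification message could be modeled as a simple record; the field names and types below are illustrative assumptions, since the disclosure specifies the message contents {sessionID, NameStream ID, ResourceMetrics{Latency, BW}} but not a concrete encoding.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    """Notification{sessionID, NameStream ID, ResourceMetrics{Latency, BW}}.

    Field names are illustrative; a deployment would define its own wire
    format. bw_mbps may carry either an actual value or an adaptation
    delta, per the supported-rate discussion above.
    """
    session_id: str
    ns_id: str          # contextualized name, e.g. "/<AV1-ID>/<Lidar>/"
    latency_ms: float   # resource metric: latency
    bw_mbps: float      # resource metric: bandwidth

n = Notification("sess-42", "/<AV1-ID>/<Lidar>/", latency_ms=20.0, bw_mbps=8.0)
print(n.ns_id)
```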
  • the Producer may be the edge server or the end host (or both), depending on the implementation scenario.
• Such embodiments are particularly useful for application scenarios that have strict latency requirements and in which the data being transmitted is important data that needs to be received in a very short time frame (e.g., LIDAR data from an AV).
  • the very fast rate adaptation scheme described herein may be used to ensure that the Producer adapts its transmission rate to the Consumer’s current experience at a few millisecond-level granularity.
• the frequency of the notification messages need not be periodic or constant, as there are multiple sources for the notification message.
  • the notification messages are instead used to provide updates when new UEs are discovered, the network characteristics change, and the like.
• the end host Consumer and access point thus may receive the requested data streams with the supported functionality at optimized data rates.
• FIG. 4 provides another example of push-based adaptive streaming using multiple edge servers 120A and 120B, with each edge server hosting multiple ECPs 122 (ECP1, ECP2, and ECP3 on the first edge server (Edge 1) and ECP4 and ECP5 on the second edge server (Edge 2)) running on virtual machines or containers, and each ECP 122 servicing a single client.
• FIG. 4 illustrates techniques for pushing a data stream from a single Producer (e.g., P2) to multiple Consumers (e.g., C1, C2, C3) hosted on the same/different shared/unshared edge servers 120A, 120B.
• Such push-based adaptive streaming may be used to establish a dynamic multicast to support efficient bandwidth use in the network resulting from the delivery of high-bandwidth streams among nearby hosts.
  • the Consumer side ECPs subscribe to namespaces at remote ECPs 122 or at a shared ECP (SECP) 400.
  • An SECP 400 enables aggregation of data transfer, processing, etc. by combining the data transfer and processing of two or more ECPs 122 requesting contextualized data streams from the same Producer.
  • the network quality measurements may be a combination of that from multiple Consumer nodes for which the ECPs are hosted at the same server.
  • a shared ECP may request at the highest supported rate from the Producer- side edge server and transcode to differing Consumer needs itself, rather than asking the Producer-side edge server or ECP to do that.
  • a generalized ECP process with access to data stream requests through the hosted ECPs at the same edge node may generate an SECP 400 when there are multiple ECPs 122 targeting the same content.
  • Tradeoffs to consider for creation of an SECP 400 include transcoding efficiency, bandwidth efficiency, etc.
• Once created, related ECPs are informed, and the local resolution database is updated with the SECP's information so that any other ECP 122 targeting the same content may direct its requests to this SECP 400.
  • the SECP 400 creates transport sessions to Consumer-side and Producer-side ECPs 122.
  • the SECP 400 acts as a multicast proxy to receive a single stream from the Producer-side, while acting as a resource manager for the Consumer-side ECPs 122 by transcoding at desired rates.
• When there are multiple Consumers requesting data streams at similar rates, use of the SECP 400 leads to one-time transcoding, whereas the lack of an SECP 400 may require transcoding multiple times at each Consumer-side ECP 122. It is possible to have multiple SECPs 400 corresponding to different transcoding levels for a group of Consumer-side ECPs requesting the same content at the same or different rates. In this case, Consumer requests may be grouped into different rate categories for transcoding purposes, and each of these rate categories may be managed by a single SECP 400.
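The rate-category grouping described above might be sketched as follows; the grouping policy (lowest transcoding level that satisfies each request) and all identifiers are illustrative assumptions, since the disclosure leaves the grouping rule open.

```python
def group_consumer_rates(requested_rates, categories):
    """Group Consumer rate requests into transcoding categories so each
    category is served by a single SECP (one transcode per category).

    requested_rates: {consumer_id: requested bitrate}
    categories: sorted list of transcoding levels an SECP can produce.
    Each Consumer maps to the lowest level that satisfies its request,
    or the maximum level if none does (assumed policy).
    """
    groups = {}
    for consumer, rate in requested_rates.items():
        level = next((c for c in categories if c >= rate), categories[-1])
        groups.setdefault(level, []).append(consumer)
    return groups

# C1 and C2 share one transcode; C3 needs a higher rate category.
print(group_consumer_rates({"C1": 3, "C2": 4, "C3": 9}, [5, 10, 20]))
```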
  • FIG. 4 illustrates an example where P2 provides a data stream NS(P2) of /ID(P2)/Lidar at a transcoding level Lmax of the possible transcoding levels 1, 2, and max.
  • This transcoding may be represented as /NS(P2)/data(Lmax), where Lmax is the maximum available data rate.
  • P2 may provide the same data stream at different specified quality levels to different Consumers.
• P2 provides the stream NS(P2) of /ID(P2)/Lidar to its ECP4.
• ECP4 provides a remote push of the stream /NS(P2)/data(Lmax) to the SECP 400 on edge server 120A.
• the SECP 400 transcodes the received data stream to provide a first stream /NS(P2)/data(L1) to ECP1 to push to C1 and a second stream /NS(P2)/data(L2) to ECP2 to push to C2.
• C1 and C2 may not receive the data stream NS(P2) at the bitrate Lmax but may receive the data stream at bitrates L1 and L2, respectively.
• the SECP 400 processes the data stream NS(P2) to provide the data to the respective Consumers C1 and C2 at the bitrates suitable to C1 and C2. In this fashion, the SECP 400 may adapt the data stream based on the Consumer needs to reduce bandwidth usage.
• Use of the SECP 400 means that only one data stream is sent to edge server 120A at the Lmax quality level for the common stream, thereby saving the establishment of a second data stream to edge server 120A and thus providing a dynamic multicast capability.
  • the transcoding may be modified on the fly in response to notifications received from the respective Producers and Consumers.
  • ECP4 may also locally push the data stream NS(P2) to ECP5 for providing to Consumer C3, in this case at the data rate Lmax.
• ECP3 may also locally push a data stream /NS(P1)/data(Lmax) from Producer P1 to local Consumers C1 and C2.
  • the edge servers 120A and 120B also may maintain a UE-ECP mapping table 410 that keeps track of the sessions between the respective UEs 110 and the ECPs 122.
  • An NS-to-ECP mapping table 420 may also be used to keep track of which local and remote ECPs 122 are receiving the data streams under a given NS provided by the respective Producers. These tables 410 and 420 are particularly helpful during ECP migration or to provide an improved multicast targeting particular Consumers.
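As a rough sketch of how the two mapping tables 410 and 420 might be rebound during ECP migration, all identifiers below are hypothetical and the table layout is an assumption:

```python
# UE-ECP mapping table 410: which ECP session serves which UE.
ue_to_ecp = {"UE-AV1": "ECP1", "UE-AV2": "ECP4"}

# NS-to-ECP mapping table 420: which local/remote ECPs receive each NS.
ns_to_ecps = {"/NS(P2)/": ["ECP1", "ECP2", "ECP5"]}

def migrate_ecp(old_ecp, new_ecp):
    """On ECP migration, rebind both tables so active sessions and NS
    delivery follow the migrated workload to its new host."""
    for ue, ecp in ue_to_ecp.items():
        if ecp == old_ecp:
            ue_to_ecp[ue] = new_ecp
    for ns, ecps in ns_to_ecps.items():
        ns_to_ecps[ns] = [new_ecp if e == old_ecp else e for e in ecps]

migrate_ecp("ECP1", "ECP2")
print(ue_to_ecp["UE-AV1"])  # -> ECP2
```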
• each Producer may provide different transcoding labels (L{1, 2, max}) for the given example (and 1, 2, ..., max more generally) to enable the Consumers to subscribe to lower bandwidth versions or quality levels of the data stream and/or upgrade to higher bandwidth versions or quality levels by specifying the transcoding level.
  • the transcoding level may be specified at the Producer or may be implemented at the ECP 122 to enable multi-casting of the data streams at different rates to different Consumers.
• FIG. 5A illustrates enhancements of the data transport in sample embodiments through the use of a transport proxy.
• a transport proxy is implemented at the access points 130 (PoA1 and PoA2), which may help with name-based prioritized scheduling, resource reservation, and improved notification for more accurate bandwidth estimates.
• The transport proxy may also be used to track data streams for sessions and to schedule controlled access at the points of attachment or access points 130 based on the prioritized scheduling.
• a data stream is provided from Producer P1 via PoA1 to ECP1 and through a wired channel to ECP2.
• a notification path N may provide update notifications from the PoA2 to the ECP2 represented as (PoA->ECP){Session ID, NameStream ID, Resource Metrics}.
  • the PoA2 may also extract the data stream header at P to determine how to prioritize streams addressed to the same/different Consumers (intra-session, among streams targeting AV-like scenarios, and inter-session prioritization, and with respect to other scenarios).
• the PoA2 may also update the data stream header to include information on bandwidth availability, as the PoA2 has access to a current network view (i.e., number of users, active bandwidth use, aggregate requirements, etc.).
• each PoA 130 may provide name-based prioritization at the access points by maintaining a PoA Priority Table 500 stored at the edge server 120 that includes the client ID (Ci) 510, contextualized session name/ID (S(i,j)) 520, and Priority Index (P(i,j)) 530.
  • the session ID i represents the user ID while j represents the stream of the multiple streams of a particular client.
  • each client may be the recipient of multiple streams.
• i and j represent multiple data streams from user i where the data streams are represented by different levels of prioritization (L1, L2, ..., Lmax) to prioritize data streams from the same Producer.
  • P(i,j) may be a localized parameter that is decided based on the data stream input and local constraints.
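A minimal sketch of the Priority Table 500 and the P(i,j) lookup follows; the entries are illustrative, and the static table stands in for the localized decision process described above.

```python
# Priority Table 500: client ID, contextualized session name S(i, j),
# and Priority Index P(i, j). All values below are hypothetical.
priority_table = {
    # (client i, stream j): (session name, priority index)
    (1, 1): ("/<AV1-ID>/<Lidar>/", 1),   # timing-critical: highest priority
    (1, 2): ("/<AV1-ID>/<Camera>/", 2),
    (2, 1): ("/<AV2-ID>/<Lidar>/", 1),
}

def priority_of(i, j):
    """Return P(i, j); in a real PoA this is a localized parameter
    decided from the data stream input and local constraints."""
    return priority_table[(i, j)][1]

print(priority_of(1, 2))  # -> 2
```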
  • the session-based contextualized stream information may be carried in an extension header within the packet header as part of the Next Generation (NG) Transport, such as latency constraints, and may be signaled as new transport at the IP layer for the PoA to extract and use to offer prioritized delivery on autonomous vehicle or drone like systems at the access network.
  • sessions including data streams from applications of autonomous vehicles, drones, etc. may be given higher priority, which is used for making decisions on rate allocation and scheduling (upstream and downstream).
  • the sessions carry contextualized names that are authenticated to validate the use of such names, ensuring consistency of scheduling for push data targeting.
  • the data streams may have classifications and sub-classifications based on name and data stream requirements. Such naming helps with better decision making, as more context may be included with the data streams.
  • joint prioritization based on session-type and latency also may be used. Latency estimations and expected decision (or actuation) timeframes may be incorporated within contextual names to help with scheduling. Latency estimation is desirable due to the mobility of the hosts, different placement strategies associated with edge servers (and the varying distance with respect to end hosts), and the use of non-optimal edge servers without timely migrations.
  • Data streams targeting the same end host may arrive at different times to the access point with differing deadlines. As a result, scheduling policies may be used to account for such dynamicity to optimize decision making.
• Additional control signaling (e.g., ping messages between the edge server and end points, and among edge servers) may be used to obtain RTT measurements.
  • the RTT measurements may cover multiple network segments: end host to access points, end host to edge server, access point to edge server, and edge server to edge server.
  • each forwarded content may be named to include expected latency of delivery from the point of reception to the end host.
  • priority-based queues may be used with latency-based ordering within the priority queues or a sub queue within the priority queue.
  • vehicle Lidar data may be given priority for queue entries and decisions as Lidar data may be timing critical.
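The priority-based queues with latency-based ordering might be sketched as follows, assuming a lower priority index is more important and ties are broken by each packet's expected delivery deadline (both are assumptions, since the disclosure does not fix a concrete ordering):

```python
import heapq

class PrioritizedLatencyQueue:
    """Priority queue with latency-based ordering within each priority
    class, as suggested for PoA scheduling. Timing-critical entries
    (e.g., vehicle Lidar data) drain before less urgent ones."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps heap comparisons total

    def push(self, priority, deadline_ms, packet):
        heapq.heappush(self._heap, (priority, deadline_ms, self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[3]

q = PrioritizedLatencyQueue()
q.push(2, 50, "camera-frame")
q.push(1, 90, "lidar-late")
q.push(1, 40, "lidar-urgent")
print(q.pop())  # -> lidar-urgent (highest priority, earliest deadline)
```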
  • the PoAs 130 know how many data streams are provided therethrough and how much bandwidth is needed to avoid a bottleneck at the PoA. Thus, more timing information may be provided on the ECP side to adapt the data streams to mitigate the bottleneck at the PoA 130.
  • the edge server 120 may also include a Priority Table 500 to similarly prioritize data traffic at the edge server 120 in the case where there are multiple ECPs 120 that provide data streams.
• the prioritization of the data streams is useful in several contexts.
  • a Consumer end host may be the recipient of multiple streams from multiple Producers (e.g., multiple cars in front in its lane or in opposite lane).
• a dynamic prioritization scheme enables a decision to be made as to the supported rate per session. A determination on such rates may involve the impact of received data streams on the decision process, which may be provided by the edge server that has access to all of the data streams. The decision may be made at the edge server 120 and updated if necessary by the Consumer.
  • the following procedure may be used to update the data stream rates:
  • FIG. 5B illustrates a sample scheduling policy based on a latency requirement in sample embodiments.
  • the scheduling policy may queue these data streams based on latency and rate requirements to ensure the timely delivery of each at the allowed rates.
• With access points acting as transport proxies, more granular expected latency measures may be calculated and updated through these access points 130. For example, RTT estimations between the UE 110 and PoA 130 and between the PoA 130 and ECPs 122 may be calculated and updated.
  • an initial packet from the Producer may be named T-NS-> ... /Lat::90.
  • the same packet may be named T-NS-> .../Lat::82.5.
• the same packet may be named T-NS-> .../Lat::72.5 before delivery.
  • the latency measures are probabilistic rather than deterministic and that the latency measures may change over time.
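The hop-by-hop annotation in the example above (Lat::90 -> Lat::82.5 -> Lat::72.5) suggests each forwarding point subtracts the latency consumed on the preceding segment; the following is a sketch under that inferred rule, with the name syntax taken from the example.

```python
def update_latency_annotation(name, segment_latency_ms):
    """Rewrite the Lat:: field of a named packet at a forwarding point,
    reducing the remaining expected delivery latency by the (probabilistic)
    latency estimate of the segment just traversed."""
    prefix, _, lat = name.rpartition("/Lat::")
    remaining = float(lat) - segment_latency_ms
    return f"{prefix}/Lat::{remaining:g}"

name = "/T-NS/.../Lat::90"
name = update_latency_annotation(name, 7.5)   # -> /T-NS/.../Lat::82.5
name = update_latency_annotation(name, 10.0)  # -> /T-NS/.../Lat::72.5
print(name)
```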
  • FIG. 6 illustrates constraints on bandwidth in a sample embodiment of the data transport system.
• FIG. 6 illustrates multiple options for the data streams: (i) providing a single high quality data stream from Producer P2 to an ECP 122, where the data stream is multicast to different Consumers C1 and C2 via data streams 600 and 610, respectively, through different ECPs 122 at the same/different quality levels depending on the ECP processing capabilities (in which case there would be a single stream from P2 to its ECP), and (ii) providing multiple quality data streams corresponding to the same observation to Consumers with different quality requirements/restrictions.
  • the respective uplinks and downlinks may have different bandwidth requirements.
• the downlink bandwidth requirement may be represented as WD(i,j) for data stream i and Producer j.
• the uplink bandwidth requirement may be represented as WU(i,j) for data stream i and Producer j.
• a data stream is uplinked from P1 (WU(1)) to its ECP 122.
• a data stream may be multicast by uplinking WU(2,2) from P2 via ECP3 to ECP2 for downlink to C1 (WD(2,1)) and by uplinking WU(2,1) from P2 via ECP3 to ECP4 for downlink to C2 (WD(2,2)).
  • each uplink and/or downlink may have different bandwidth requirements.
  • the ECPs may keep track of the bandwidth requirements from each Producer and to each Consumer and make adjustments on the fly as appropriate to transport the data stream.
  • the end hosts make decisions regarding the available bandwidth, processing, and storage constraints at the respective system components to optimize the data stream transport efficiency.
• the decision policy may be expressed as selecting the data stream with the highest available data rate supported by the Consumer's wireless link.
  • the decision involves what to send (full/dynamic/compressed), when to send (scheduling to meet deadline with high probability), and at what rate (individual versus joint). Delay variations mostly occur at the access network and are accounted for by the received notifications. Feedback is received on a regular basis to more accurately reflect the latest channel conditions (such as average bandwidth availability based on CQI and PoA requirements).
• the edge server is considered to be part of (and hence managed by) the access network.
• the received feedback also may include notifications from the access point that are not directly related to the user feedback but complement it.
  • information on network usage at the PoA may also be provided.
  • the location of a vehicle may be considered when selecting an ECP link to prioritize. The closer data streams may be transmitted at a higher rate.
  • the decision process depends on what data streams are being received at moments in time. The determinations change on the fly as the channel conditions change and as notifications are received from the end hosts or from the access points.
  • FIG. 7 illustrates the example of FIG. 6 for an application scenario where the points of access act as a transport proxy and, accordingly, may apply prioritization to match bandwidth and latency requirements for AV-like streams with their quality of service needs.
  • control sessions are established between C1-ECP2, C1-PoA2, and PoA2-ECP2. It will be appreciated that with PoA2 and ECP2 managed by the same administrative domain, a single session might suffice to carry all control signaling between clients connected to PoA2 and ECP2.
  • C1 sends a notification to PoA2 (towards ECP2) with its Channel Quality Indicator (CQI) state, which enables PoA2 to modify the notification message to ECP2 to reflect the current CQI state.
  • PoA2 determines the supported rate based on C1's CQI and its current state and updates C1's notification before sending a new notification to ECP2 including identifying information on C1. Additionally, PoA2 may send a batch notification message to merge channels including information on all connected end hosts using the service to the edge, which may then be multicast to all ECPs for all Consumers. ECP2 also requests streams and provides estimates on the supported delivery rate to C1.
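The batch-notification step can be sketched as below. The CQI-to-rate mapping and message field names are hypothetical; a real PoA would also fold in its scheduler state and active load.

```python
# Sketch of a PoA merging per-Consumer CQI reports into a single
# PoA->ECP batch notification carrying derived supported rates.

def supported_rate_from_cqi(cqi, max_rate=100.0, cqi_max=15):
    """Map a Channel Quality Indicator to an estimated supported rate
    using a simple linear model (an assumption for illustration)."""
    return max_rate * min(cqi, cqi_max) / cqi_max

def batch_notification(session_id, reports):
    """Merge (consumer_id, cqi) reports into one notification message."""
    return {
        "session_id": session_id,
        "hosts": [
            {"consumer_id": cid, "supported_rate": supported_rate_from_cqi(cqi)}
            for cid, cqi in reports
        ],
    }
```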
  • the delivery rate information is shared by ECP2 to ECP3, which informs the Producer P2 of the rate at (4) and optionally sends the data stream at an optimal rate that is adapted based on experience with the Consumer's link quality.
  • the data traffic may be adapted by upgrading or downgrading at the Producer and/or the Consumer to satisfy the expected Consumer experience.
  • the updates are also provided to the UE-to-ECP mapping table 410 and to the NS-to-ECP mapping table 420 as described above with respect to FIG. 4.
  • FIG. 8 illustrates further details of the application scenario where the points of access 130 act as a transport proxy as in FIG. 7.
  • FIG. 8 further illustrates the update notifications: Notification path (PoA->ECP) {Session ID, NameStream ID, Resource Metrics} that update ECP2 with the updated channel requirements of C1.
  • PoA2 extracts header data (e.g., resource metrics) to determine how to prioritize streams addressed to the same/different Consumers and may update the header to include information on bandwidth availability, as the PoA2 has access to the current network view (i.e., number of users, active bandwidth usage, aggregate requirements, etc.).
  • the priority information for the respective data streams may be stored in a PoA Priority Table 500 as described above with respect to FIG. 5A.
  • FIG. 9 illustrates an embodiment of push-based streaming in a client-to-client scenario.
  • Producer P2 is aware of the set of Consumers C1, C2, C3 requesting its content.
  • Each Consumer is identified by Consumer ID (C(i)), name stream ID (NS(j)) and a set of resource metrics (A) that are tracked in table 900 of the Producer P2.
  • Awareness of Consumers helps with stream generation and delivery through the Producer by enabling the Producer to adapt the rates according to the Consumer needs where the Producer and the Consumer are expected to be within discovery range.
  • the Producer regularly receives at (1) an update on the supported rates by the Consumers subscribing to its content stream.
  • the update may be provided directly from the Consumer to the Producer over a vehicle to vehicle interface (e.g., C3 to P2) or indirectly via the ECPs (e.g., from C1, C2 via SECP 400 and ECP4 and from C3 via ECP3 and ECP4).
  • the update includes the Consumer ID and the updated Consumer requirements.
  • the Producer P2 determines the optimal transmission policy based on its available resources and the resources requested by the Consumers, according to one of the following cases:
  • Case I: The Producer chooses to send at the maximum rate supported by the receiving Consumers to optimize its bandwidth use, where the edge server transcodes as appropriate; or
  • Case II: The Producer creates a scalable stream that is delivered to each receiving Consumer at the proper rate matching the Consumer's requirements, as when the edge computing resources are limited in regard to transcoding.
  • the Producer may determine the levels for scalable coding to maximize the overall decision accuracy at receiving hosts under the Producer’s transmission limitations.
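The choice between the two cases can be sketched as follows. The function and field names are assumptions for illustration; the availability of edge transcoding resources is the deciding input.

```python
# Sketch of the Case I / Case II policy choice: with edge transcoding
# available, send one stream at the maximum supported rate (Case I);
# otherwise build a scalable stream with one coding layer per distinct
# Consumer rate (Case II).

def choose_transmission_policy(consumer_rates, edge_can_transcode):
    if edge_can_transcode:
        # Case I: single stream; the edge server transcodes down per Consumer.
        return {"case": "I", "rates": [max(consumer_rates)]}
    # Case II: scalable layers matching each distinct Consumer requirement.
    return {"case": "II", "rates": sorted(set(consumer_rates))}
```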
  • the Producer may send its content to its edge server (Edge3 including ECP4) using a push data stream including the connection ID, the named stream ID, the scalable coding level, and the named content.
  • the edge server Edge3 may propagate the requested data to each Consumer through each Consumer’s edge server.
  • the pushed data stream may be multicast to other edge servers or sent to each edge server as unicast streams. It will be appreciated that this approach enables caching for quick recovery at the Consumer as described with respect to the ETRA system architecture 100.
  • FIG. 10 illustrates an embodiment of push-based streaming in a server-to-client scenario.
  • the synchronization requirement is not too strict, and it is possible to use host-to-host synchronization among AVs, PoAs, and ECPs 122 to synchronize their clocks.
  • latency requirements may be relative measures (e.g., a combination of expiration time by the Producer and RTTs).
  • the Producer streams are pushed towards these ECPs 122 through the Producer's ECP 122 (e.g., ECP3) at the maximum supported rate by the Producer (W3 ≤ Wmax).
  • the Consumers (C1, C2) update their ECPs 122 (ECP1, ECP2) with their bandwidth and latency requirements (which may depend on the application scenario, e.g., frame generation rate), and other performance measures and estimates.
  • the edge server 120 (Edge1) estimates the supported rate by each Consumer, Wc, requesting the content from the same Producer.
  • SECP 400 may help with achieving optimal performance. For example, if multiple end hosts require similar rate streams, transcoding once would be sufficient, rather than repeating the transcoding multiple times for different end hosts.
  • the Producer side ECP 122 may send the data stream at a maximum rate, and the SECP 400 transcodes based on the requirements of each Consumer request received from its ECPs 122.
  • the edge server 120 (Edge1) may also estimate the acceptable latency values for the content received from the Producer for each Consumer. Edge1 may further determine the transcoding level (from a set of available levels) to meet the delivery deadline for the data stream while ensuring the best service quality for different Producers (e.g., vehicles) at different rates. Prior estimates based on previous observations may be used to minimize decision timing (a mapping database may be used). For a Consumer with maximum supported rate Wc(i), the transcoded rate WT(i) should be less than Wc(i).
  • the latency requirements (e.g., based on the speed of a vehicle at the setup of the connection) may be expressed in two ways:
  • Synchronized clock case: current time + latency(transcoding) + latency(transmission) + latency(scheduling) + latency(propagation) ≤ deadline(usability);
  • Relative latency case: latency(transcoding) + latency(transmission) + latency(scheduling) + latency(propagation) ≤ expected(delivery deadline).
  • Edge1 chooses the transcoding level based on these requirements and sends the data stream within the identified time constraint as the vehicles move.
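The two latency checks and the transcoding-level choice can be sketched together as below. The per-level latency figures are hypothetical inputs, and Wc is the Consumer's maximum supported rate; this is an assumption-laden sketch, not the patented selection procedure.

```python
# Sketch of the latency-budget test and transcoding-level selection.

def meets_deadline(transcode, transmit, schedule, propagate, budget):
    """Relative-latency case: total pipeline latency fits the expected
    delivery deadline. For the synchronized-clock case, pass
    budget = deadline(usability) - current_time."""
    return transcode + transmit + schedule + propagate <= budget

def choose_transcoding_level(levels, budget, wc):
    """Pick the highest-rate level whose rate is below Wc and whose total
    latency fits the budget. levels: list of tuples
    (rate, transcode, transmit, schedule, propagate)."""
    feasible = [lv for lv in levels
                if lv[0] < wc and meets_deadline(*lv[1:], budget)]
    return max(feasible, key=lambda lv: lv[0]) if feasible else None
```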
  • a periodic and/or dynamic update of these requirements may be provided through control message signaling from the Consumer side.
  • the adaptation rate depends on bandwidth variations, or expectations based on movement patterns, etc., and is carefully chosen while keeping in mind the tradeoffs associated with signaling overhead versus estimation accuracy.
  • Edge1 further shares the content with the matching edge computing nodes (e.g., ECP1 and ECP2 having sessions with the corresponding clients), assuming a common edge computing node (e.g., Edge1) is used by multiple ECPs 122 requesting the same content.
  • the edge computing node Edge1 pushes the data stream downstream to C1, C2 using Push() packets Push(connection ID, named stream ID, named content) at bandwidths W1 and W2 of C1 and C2, respectively.
  • W1 and W2 are less than or equal to W3. This approach also enables caching for quick recovery at the Consumer.
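The Push() packet fields and the downlink rate cap can be sketched as below; the dataclass encoding is an assumption for illustration, not a wire format from the disclosure.

```python
# Sketch of the Push(connection ID, named stream ID, named content) packet
# and the rate cap: a Consumer's downlink push rate (W1, W2) never exceeds
# the rate W3 at which the Producer-side stream is received.

from dataclasses import dataclass

@dataclass
class PushPacket:
    connection_id: str
    named_stream_id: str
    named_content: bytes

def downlink_rate(consumer_rate, producer_rate):
    """Enforce W1, W2 <= W3."""
    return min(consumer_rate, producer_rate)
```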
  • the Producers may reduce the rate they transmit data. Also, if the end hosts perform processing on the data, for instance to merge point clouds, rather than the edge server, then the Producer end host also may reduce the rate it transmits data. On the other hand, if some data processing is performed at the edge server, then the Producer end host may have no choice but to transmit at the maximum rate available to the edge server based on the type and amount of data that the Producer generates.
  • FIG. 11 illustrates a flow chart of methods of implementing adaptive push streaming in a network having multiple Producer nodes, multiple Consumer nodes, and multiple edge servers in a sample embodiment.
  • the methods described herein may be implemented in edge network servicing autonomous vehicles and/or drones.
  • the adaptive push streaming method is implemented by software on an edge server; however, it will be appreciated that the software may also be implemented in a Producer and/or a Consumer node in further sample embodiments.
  • the adaptive push streaming method is implemented on a processor that performs operations including receiving (1100) from a Producer node one or more streams of data having a contextualized name including resource constraints.
  • the processor further receives (1110) a wireless capacity measurement from a Consumer node that has subscribed to at least one of the streams of data. From the wireless capacity measurement from the Consumer node, the processor determines (1120) a bit-rate at which to send the at least one stream to the Consumer node via the network and pushes (1130) the at least one data stream to the Consumer node via the network.
  • when the processor receives multiple streams of data from the Producer node and other Producer nodes in the network, the processor optionally may further prioritize (1140) the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and at what rate to transmit the data stream.
  • the processor optionally may further multicast a data stream to multiple Consumer nodes or unicast the data stream to the Consumer node (1150).
  • the multicast may be a dynamic multicast of the data stream to multiple Consumer nodes where the transmission characteristics of the multicast change on the fly according to channel conditions.
  • the multicasting or unicasting may further include estimating a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream, estimating acceptable latency values for each Consumer node subscribed to the data stream, determining a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream, and pushing the data stream to each Consumer node subscribed to the data stream. Also, in order to accommodate the transmission characteristics determined from the wireless capacity measurement from the Consumer node, the processor may compress a data stream before transmitting the data stream to the Consumer node.
  • a notification message may optionally be received from an access point (1160) including information relating to network usage at the access point whereby the processor may modify transmission of the data stream based on information in the notification message from the access point.
  • the modified data stream may then be pushed to the one or more subscribing Consumer nodes (1130).
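One iteration of the flow described above (receive 1100, receive feedback 1110, determine rate 1120, push 1130, with optional compression) can be sketched as follows. The stream/feedback shapes and the compression threshold are assumptions for illustration only.

```python
# Non-authoritative sketch of one adaptive-push iteration for one Consumer.

def adaptive_push_step(stream, feedback, compress_threshold=0.5):
    """stream: dict with 'name' (contextualized name) and 'rate' (source rate).
    feedback: dict with 'consumer_id' and 'capacity' (wireless capacity).
    Returns the push decision: target rate and whether to compress first."""
    capacity = feedback["capacity"]
    rate = min(stream["rate"], capacity)               # step 1120
    compress = capacity < stream["rate"] * compress_threshold
    return {                                           # step 1130 payload
        "consumer": feedback["consumer_id"],
        "stream": stream["name"],
        "rate": rate,
        "compressed": compress,
    }
```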
  • the systems and methods described herein thus provide adaptive push-based multi-streaming on contextualized name streams by edge server or Producer end hosts as well as contextualized notification and update messages that carry information relating to dynamic bandwidth resource availability.
  • Receiver driven signaling is used with named stream prioritization at the point of access or service point, and server- to- server signaling provides dynamic bandwidth adaptation at (or through) end hosts.
  • data stream prioritization at points of access is provided through policies shared through the contextualized names.
  • Hierarchical localized edge processing may also be provided for optimization of resources such as bandwidth and/or processing resources.
  • FIG. 12 is a schematic diagram of an example network device 1200 for providing adaptive push streaming as described herein in sample embodiments.
  • network device 1200 may implement an edge node, a Producer, and/or a Consumer in a network domain. Further, the network device 1200 may be configured to implement the techniques described herein, particularly the method illustrated in the scenarios of FIGS. 1-10 and the software embodiment of FIG. 11.
  • the network device 1200 may be configured to implement or support the schemes/features/methods described herein.
  • the features/methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware.
  • network device encompasses a broad range of devices of which network device 1200 is merely an example.
  • Network device 1200 is included for purposes of clarity of discussion but is in no way meant to limit the application of the present disclosure to a particular network device embodiment or class of network device embodiments.
  • the network device 1200 may be a device that communicates electrical and/or optical signals through a network, e.g., a switch, router, bridge, gateway, etc. As shown in FIG. 12, the network device 1200 may comprise transceivers (Tx Rx) 1210, which may be transmitters, receivers, or combinations thereof.
  • Tx Rx 1210 may be coupled to a plurality of downstream ports 1220 (e.g., downstream interfaces) for transmitting and/or receiving frames of data from other nodes and a Tx Rx 1210 may be coupled to a plurality of upstream ports 1250 (e.g., upstream interfaces) for transmitting and/or receiving data frames from other nodes, respectively.
  • a processor 1230 may be coupled to the Tx Rxs 1210 to process the data streams and/or determine which network nodes to send data signals to.
  • the processor 1230 may comprise one or more multi-core processors and/or memory devices 1240, which may function as data stores, buffers, etc.
  • Processor 1230 may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs).
  • the network device 1200 also may comprise a stream processing module 1232, which may be configured to receive and to process data streams as described herein.
  • the stream processing module 1232 may be implemented in a general-purpose processor, a field programmable gate array (FPGA), an ASIC (fixed/programmable), a network processor unit (NPU), a DSP, a microcontroller, etc.
  • the stream processing module 1232 may be implemented in processor 1230 as instructions stored in memory device 1240 (e.g., as a computer program product), which may be executed by processor 1230, and/or implemented in part in the processor 1230 and in part in the memory device 1240.
  • the downstream ports 1220 and/or upstream ports 1250 may contain wireless, electrical and/or optical transmitting and/or receiving components, depending on the embodiment.
  • while the example computing device is illustrated and described as a network node (e.g., edge server), the computing device may be in different forms in different embodiments.
  • the computing device may instead be a smartphone, a tablet, smartwatch, a communications module of an autonomous vehicle, drone, etc., or other computing device including the same or similar elements as illustrated and described with regard to FIG. 12.
  • Devices, such as smartphones, tablets, and smartwatches, are generally collectively referred to as mobile devices or user equipment.
  • although the various data storage elements are illustrated as part of the network node 1200, the storage may also or alternatively include cloud-based storage accessible via a network, such as the Internet, or server-based storage.
  • Memory 1240 may include volatile memory and/or non-volatile memory.
  • Network node 1200 may include - or have access to a computing environment that includes - a variety of computer-readable media, such as volatile memory and non-volatile memory, removable storage devices and non-removable storage devices.
  • Computer storage includes random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions.
  • the network node 1200 may include or have access to a computing environment that includes an input interface, an output interface, and a communication interface.
  • the output interface may include a display device, such as a touchscreen, that also may serve as an input device.
  • the input interface may include one or more of a touchscreen, touchpad, mouse, keyboard, camera, one or more device- specific buttons, one or more sensors integrated within or coupled via wired or wireless data connections to the network node 1200, and other input devices.
  • the network node 1200 may operate in a networked environment using a communication connection to connect to one or more remote computers, such as database servers.
  • the remote computer may include a personal computer (PC), server, router, network PC, a peer device or other common network node, or the like.
  • the communication connection may include a Local Area Network (LAN), a Wide Area Network (WAN), cellular, Wi-Fi, Bluetooth, or other networks.
  • Computer-readable instructions stored on a computer-readable medium are executable by the processor 1230 of the network node 1200, such as the stream processing module 1232.
  • the stream processing module 1232 in some embodiments comprises software that, when executed by the processor 1230, performs network processing operations according to the techniques described herein.
  • a hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device.
  • the terms computer-readable medium and storage device do not include carrier waves to the extent carrier waves are deemed too transitory.
  • Storage may also include networked storage, such as a storage area network (SAN).


Abstract

Adaptive push-based multi-streaming on contextualized name streams is provided by an edge server or Producer end host. Contextualized notification and update messages may be provided to carry information based on dynamic bandwidth resource availability. Receiver driven signaling towards the named stream prioritization is provided at the point of access or service point, and server-to-server signaling for dynamic bandwidth adaptation is provided at (or through) end hosts. Stream prioritization is provided at points of access through policies shared through contextualized names. Hierarchical localized edge processing for resource optimization (targeting bandwidth and/or computing resources) is also provided. Feedback is received on a regular basis to more accurately reflect the latest channel conditions (e.g., average bandwidth availability) and may also include notification from the access point to complement the user feedback. The adaptive push-based approach maximizes the decision accuracy at the end hosts given bandwidth/processing/storage constraints at the system components.

Description

ADAPTIVE PUSH STREAMING WITH USER ENTITY FEEDBACK
TECHNICAL FIELD
[0001] This application is related to mobile internet of things (IoT) communication systems and, in particular, to programmable edge computing systems that implement adaptive push streaming among nodes in the network.
BACKGROUND
[0002] Next generation mobile IoT applications (such as autonomous vehicle (AV) or drone systems) have strict service requirements. High bandwidth streams are delivered over wireless links (e.g., wireless wide area network (WWAN)) and require low latency to trigger quick response or actuation by/at the end devices hosting the applications. Typical bandwidth requirements may range from hundreds of Mbps to potentially 10s of Gbps (depending on the quality of the streams, such as whether full versus compressed or dynamic high definition frames are used) and latency requirements may range from 10-100 ms, depending on frame rate and contextual requirements. The wireless channel is also susceptible to impairments triggered by mobility, noise, path loss, and fading, etc., thereby leading to varying wireless link capacities over time.
[0003] Efficient delivery of high bandwidth data streams over wireless links may be addressed by traditional Dynamic Adaptive Streaming over HTTP (DASH) systems that operate in open source wireless sensor and actuator networks to adjust data rates. DASH is an adaptive bitrate streaming technology where a multimedia file is partitioned into a sequence of segments, each of which may include multiple versions corresponding to varying quality levels, and delivered to a client using Hypertext Transfer Protocol (HTTP). End hosts make short-duration data stream requests (for instance, of segments a few seconds long) based on observed quality levels that typically use bandwidth measurements and buffer monitoring at the end hosts. The server in such a system is stateless. In the case of push-based video streaming over HTTP 2.0, the server initiates the process by sending a special frame “push promise.” Upon receiving this frame, the HTTP client does not send out a request until the response is pushed to the client completely. The client retrieves the response from browser cache.
[0004] On the other hand, in a “k-push” scheme, which is a full-push approach, the server pushes k video segments after response to a request. To preserve the stateless nature of the HTTP server, the server is not responsible for video rate adaptation. The server pushes the segment at the same quality level as the first (lead) segment. This approach may deteriorate network adaptability and may lead to an over-push problem, where the network resources are wasted.
[0005] In “adaptive push,” the server varies the parameter k dynamically to solve the above problems. This approach uses additional control messages for video control adaptation. The client increases k and puts a cap on it according to resource availability (start small, increment at a larger or smaller rate if value is small or large). Adaptive push implements a push-directive header for lead segment request (responded to with a PushAck field).
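The client-side adjustment of k described above can be sketched as below. The exact growth rule and threshold are assumptions for illustration; the scheme only requires that k start small, grow faster while it is small, and stay under a cap set from resource availability.

```python
# Hypothetical sketch of adaptive-push k adjustment: increment aggressively
# while k is small, conservatively when large, never exceeding the cap.

def next_k(k, k_cap, small_threshold=4):
    step = 2 if k < small_threshold else 1
    return min(k + step, k_cap)
```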
[0006] Another dimension to streaming is the location of the server, especially since it may involve on-demand processing of content (for instance, transcoding based on received requests). Due to strict service requirements associated with autonomous vehicle (AV) or drone scenarios, the use of edge processing becomes of critical importance, as it may effectively support low-latency offloading of high-bandwidth streams. This is important for applications involving the timely processing of video frames, for instance object detection through dynamic regions of interest encoding, as edge processing may help reduce bandwidth requirements and the perceived latency of the offloading pipeline. Pipeline streaming and inference processes are used for further latency reduction through parallel streams. However, in cases of strict latency requirement on the delivered high bandwidth streams that have a short lifetime, with the original source of the data stream being another end-host (such as a vehicle or a drone), it is desirable to implement a data push mechanism through an intermediary (such as an edge server). However, in networks such as those used by autonomous vehicles, data from multiple Consumers and Producers will be provided, so the prior art data push approaches noted above are inadequate. For example, to allow for quick recovery due to packet loss, the edge nodes need to be stateful to make decisions on rates.
SUMMARY
[0007] Various examples are now described to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. The Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0008] In sample embodiments, an adaptive push streaming system is described that may provide an adaptive client/server push using minimal client signaling. The described adaptive push streaming system also may adapt at a faster time scale to fit the requirements of next generation IoT networks. A Multi-Producer Multi-Consumer (MPMC) content delivery system with multiple edge servers is described that provides tight delay requirements using a new edge transport protocol. Network nodes become part of an intelligent transport by providing a decision process at bottleneck nodes, such as the access points, that gives higher priority to streams targeting next generation mobile IoT end hosts.
[0009] According to a first aspect of the present disclosure, there is provided an adaptive push streaming method for use in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers. The method includes a processor receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node and the processor receiving a network quality measurement from a Consumer node that has subscribed to the at least one stream of data. The processor determines, from the network quality measurement from the Consumer node, transmission characteristics of the at least one stream to be sent to the Consumer node via the network and pushes the at least one data stream to the Consumer node via the network.
[0010] According to a second aspect of the present disclosure, there is provided an adaptive push streaming system for a network comprising a plurality of Producer nodes, Consumer nodes, and edge servers. The system includes a Producer node that creates at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node and at least one Consumer node that subscribes to the at least one stream of data and periodically provides a network quality measurement for the at least one Consumer node to the network. The system also includes a Producer-side edge server that receives the at least one stream from the Producer node and the network quality measurement for the at least one Consumer node, determines an appropriate bit-rate at which to send the at least one stream to the at least one Consumer node via the network, and pushes the at least one stream of data to the at least one Consumer node via the network.
[0011] According to a third aspect of the present disclosure, there is provided a non-transitory computer-readable medium that stores computer instructions for providing adaptive push streaming in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers, that when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node; receiving a network quality measurement from at least one Consumer node that has subscribed to the at least one stream of data; determining from the network quality measurement from the at least one Consumer node a bit-rate at which to send the at least one stream to the at least one Consumer node via the network; and pushing the at least one data stream to the at least one Consumer node via the network.
[0012] In a first implementation of any of the preceding aspects, the processor receives multiple streams of data from the Producer node and other Producer nodes in the network and prioritizes the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and with what characteristics to transmit the data stream.
[0013] In a second implementation of any of the preceding aspects, the processor multicasts a data stream to multiple Consumer nodes or unicasts the data stream to the Consumer node according to network quality measurements for Consumer nodes that have subscribed to the data stream.
[0014] In a third implementation of any of the preceding aspects, the processor pushes the data stream to an edge server that establishes a dynamic multicast of the data stream to multiple Consumer nodes.
[0015] In a fourth implementation of any of the preceding aspects, the processor determines from the network quality measurement from the Consumer node whether to compress a data stream before transmitting the data stream to the Consumer node.
[0016] In a fifth implementation of any of the preceding aspects, a notification message is received from the Consumer node including at least an identification of the data stream using the contextualized name to which the Consumer node has subscribed and the network quality measurement.
[0017] In a sixth implementation of any of the preceding aspects, a notification message is received from an access point including information relating to network usage at the access point and modifying transmission of the data stream based on information in the notification message from the access point.
[0018] In a seventh implementation of any of the preceding aspects, when multiple Consumer nodes have subscribed to a data stream, the processor determines an optimal transmission policy based on available network resources and resources requested by the multiple Consumer nodes and pushes the data stream to each of the multiple Consumer nodes as at least one of a multicast and a unicast data stream.
[0019] In an eighth implementation of any of the preceding aspects, the processor estimates a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream, estimates acceptable latency values for each Consumer node subscribed to the data stream, determines a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream, and pushes the data stream to each Consumer node subscribed to the data stream.
[0020] The method may be performed and the instructions on the computer readable media may be processed by the apparatus, and further features of the method and instructions on the computer readable media result from the functionality of the apparatus. Also, the explanations provided for each aspect and its implementation apply equally to the other aspects and the corresponding implementations. The different embodiments may be implemented in hardware, software, or any combination thereof. Also, any one of the foregoing examples may be combined with any one or more of the other foregoing examples to create a new embodiment within the scope of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS [0021] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document. [0022] FIG. 1 illustrates an edge transport system for implementing edge computing and storage transport in a sample embodiment.
[0023] FIG. 2 illustrates an edge server to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs).
[0024] FIG. 3 illustrates a client (Producer) to edge servers to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs).
[0025] FIG. 4 illustrates an example of push-based adaptive streaming using multiple edge servers with each edge server hosting multiple edge control points (ECPs) and each ECP servicing a single client.
[0026] FIG. 5A illustrates enhancements of the data transport in sample embodiments through the use of transport proxy.
[0027] FIG. 5B illustrates a sample scheduling policy based on a latency requirement in sample embodiments.
[0028] FIG. 6 illustrates constraints on bandwidth in a sample embodiment of the transport system.
[0029] FIG. 7 illustrates the example of FIG. 6 for an application scenario where the points of access act as a transport proxy.
[0030] FIG. 8 illustrates further details of the application scenario where the points of access act as a transport proxy as in FIG. 7.
[0031] FIG. 9 illustrates an embodiment of push-based streaming in a client (Producer)-to-client (Consumer) scenario.
[0032] FIG. 10 illustrates an embodiment of push-based streaming in a server-to-client scenario.
[0033] FIG. 11 illustrates a flow chart of a method of implementing adaptive push streaming in the processor of an edge computer in a sample embodiment. [0034] FIG. 12 is a block diagram illustrating circuitry for performing the methods according to sample embodiments.
DETAILED DESCRIPTION
[0035] It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods described with respect to FIGS. 1-12 may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0036] The functions or algorithms described herein may be implemented in software in one embodiment. The software may include computer executable instructions stored on computer readable media or a computer readable storage device such as one or more non-transitory memories or other type of hardware-based storage devices, either local or networked. Further, such functions correspond to modules, which may be software, hardware, firmware or any combination thereof. Multiple functions may be performed in one or more modules as desired, and the embodiments described are merely examples. The software may be executed on a digital signal processor, ASIC, microprocessor, or other type of processor operating on a computer system, such as a personal computer, server or other computer system, turning such computer system into a specifically programmed machine.
[0037] The systems and methods described herein provide a solution that enables adaptive client/server push streaming using minimal (which may include in-band) end user device signaling. A Multi-Producer Multi-Consumer (MPMC) system is provided with multiple edge servers. In sample embodiments, depending upon the services enabled, the edge servers may or may not modify the received content from the Producers (e.g., stereo video or 3D point cloud streams from autonomous vehicles) to provide to Consumers (e.g., other autonomous vehicles that may have subscribed to the stereo video or 3D point cloud streams from the Producers). When the edge servers process the received content from the Producers, the edge servers should have sufficient resources available to them, not just for regular transcoding operations, but also for additional stream data processing (for instance, video processing functions that include merging multiple point clouds or stereo views to generate a detailed unobstructed point cloud or stereo view based on a particular Consumer’s point of view). However, the edge servers may or may not have access to a wireless link at the access network to control transmission rates towards the Consumers (as they may not have been installed at the access points or the base stations). In sample embodiments described herein, the Producer may either be the producer application at the edge server or the end host (or both), depending on the implementation scenario.
OVERVIEW
[0038] Unlike a DASH-like system, where the clients make requests based on their bandwidth estimates, due to tighter latency constraints, Producers in the adaptive push system described herein may not have timely access to that information to make a decision on behalf of clients (or Consumers) on what rate of data stream to push towards these clients. Unlike the server-based scenario, Producers in the adaptive push system described herein may have limited processing capability and bandwidth availability (with respect to the amount of data being generated by them to help with autonomous decision making, and the transmission channel being wireless) to create multiple streams at different rates. Moreover, due to the nature of the considered application scenarios, the timeframe for making decisions or adapting to changes in rate needs to be very small. Edge servers may be implemented to install such functions in sample embodiments. The edge servers used in the described applications also may have the capability to help clients quickly recover from failures or losses due to being closer to the clients and offering a caching service.
[0039] The systems and methods described herein maximize the decision accuracy at the end hosts given bandwidth/processing/storage constraints of/at the system components. At a minimum, the decision policy is expressed as selecting the data stream with the highest quality (e.g., bitrate and bandwidth) supported by the Consumer’s wireless link. In general, the decision involves what to send (type and features of the stream, i.e., full/dynamic/compressed), when to send (scheduling to meet deadline with high probability), and at what rate (individual versus joint). Delay variations mostly occur at the access network due to varying channel characteristics and the access point being a bottleneck point. Feedback is received from clients on a regular basis to more accurately reflect the latest channel conditions (as average bandwidth availability based on Channel Quality Indicator (CQI) and Point of Access (PoA) requirements). The edge server is considered to be part of (and hence managed by) the access network. Accordingly, received feedback at the edge server may also include notifications from the access point not directly related to user feedback but to complement the user feedback. For instance, information on aggregate network usage at the PoA may be provided together with individual client-driven observations/requirements associated with all the similar type clients connected to the same PoA.
[0040] The Producer regularly receives an update on the supported rates by the Consumers subscribing to its content stream. The update includes the Consumer ID and Consumer requirements. These updates may be sent over a vehicle to vehicle (V2V) interface directly from the Consumer to the Producer, or through the Edge Control Points (ECPs) at the edge servers indirectly, in which case, the ECP at the Producer-side edge server may aggregate the information received from ECPs at the Consumer-side edge servers. For all the Consumers subscribing to its content, the Producer may determine the optimal transmission policy based on its available resources and the requested resources. At least two cases are supported:
Case I: The Producer may choose to send at the maximum supported rate by the receiving Consumers to optimize its bandwidth use, in which case the ECPs at the Consumer-side edge servers may transcode the received Producer streams accordingly; or
Case II: The Producer may create a scalable stream that is delivered to each receiving Consumer at the proper rate matching their requirements (in case the edge server use is limited in regard to transcoding).
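The choice between the two cases above may be sketched as follows; the function and parameter names are illustrative assumptions, not taken from the disclosure:

```python
# Illustrative sketch of the two transmission policies described above.
def choose_policy(consumer_rates_kbps, edge_can_transcode):
    """Case I: send once at the maximum rate supported among the receiving
    Consumers and let Consumer-side ECPs transcode down. Case II: build a
    scalable (layered) stream with one layer per distinct requested rate."""
    if edge_can_transcode:
        return ("case_I", max(consumer_rates_kbps))
    # Case II: distinct requested rates become scalable-coding layers.
    return ("case_II", sorted(set(consumer_rates_kbps)))
```

For instance, with Consumers requesting 1, 2, and 4 Mbps, Case I yields a single 4 Mbps stream, while Case II yields three scalable layers.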
[0041] Given the supported rates by the Consumer devices, the levels for scalable coding may be determined to maximize the overall decision accuracy at receiving hosts under the Producer’s limitations. Here decision accuracy may reflect how accurately the Consumers identify potential obstacles or localize them and events associated with them. Decision accuracy depends on the quality of the received streams, with full high definition streams offering the highest accuracy, while lowering the stream rate results in lower accuracy. The Producer may initially send its content to its edge server using push-based streaming by specifying connection ID, name stream ID, scalable coding level, and named content. The Producer side edge server then may propagate the requested content to each Consumer through the Consumer’s edge server. Depending on the chosen rates and content scalability, the streams may be multicast to other edge servers or sent to each server as unicast data streams. The techniques described herein also enable caching for quick recovery at the Consumer.
[0042] The systems and methods described herein also include a notification/update feature where a notification message (sessionID, name stream ID, ResourceMetrics{Latency, BW}) is provided. The name stream ID (NS-ID) is assumed to be a contextualized name offering information on the application scenario and potentially other metrics associated with the scenario. Multiple name streams may be transferred between a user entity (UE) and its ECP under the same session ID. A notification is used to alert the Producer side (for which the initial target is the Consumer-side edge server) of the supported rate by the Consumer end host. It may be an adaptation metric (rate of increase/decrease) or the actual bandwidth value.
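The notification message (sessionID, name stream ID, ResourceMetrics{Latency, BW}) may be modeled as follows; the field names are illustrative renderings of the tuple above:

```python
# Minimal sketch of the notification message structure. Field names are
# illustrative; the disclosure only specifies the tuple
# (sessionID, name stream ID, ResourceMetrics{Latency, BW}).
from dataclasses import dataclass

@dataclass
class ResourceMetrics:
    latency_ms: float       # latency metric for the stream
    bandwidth_kbps: float   # supported rate, or an adaptation metric

@dataclass
class Notification:
    session_id: str
    ns_id: str              # contextualized name stream ID, e.g. "/AV1/Lidar/"
    metrics: ResourceMetrics
```

A Consumer-side sender would populate such a record and push it towards the Producer side, with the Consumer-side edge server as the initial target.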
[0043] Unlike existing application scenarios, the application scenarios in accordance with the adaptive push streaming methods described herein have strict latency requirements. A very fast rate adaptation scheme is required to ensure that the Producer adapts its transmission rate to the Consumer’s current experience at a few millisecond-level granularity. Furthermore, the frequency of notification messages is not constant, due to aperiodic changes in network quality measures and service requirements, and there are multiple sources for the received notification message(s). In sample embodiments, the network quality is measured, but link quality may also be measured. Also, the wireless capacity may be measured from the Consumer to be modified by the AP, if allowed/supported, based on more accurate estimates. Other information may also be included in the measurement targeting congestion avoidance at the bottleneck nodes. Thus, wireless capacity or link quality are only examples.
[0044] Also, since the stream from a single Producer may be pushed to multiple Consumers hosted on the same/different shared/unshared edge servers (i.e., ECPs hosted by spatially distributed edge servers), a dynamic multicast may be established to support efficient bandwidth use in the network resulting from the delivery of high-bandwidth streams among nearby hosts. A transport proxy is implemented at the access points, which may help with name-based prioritized scheduling, resource reservation, and improved notification for more accurate available resource (e.g., bandwidth) estimates. At the access point, streams associated with autonomous driving, etc., may be assigned higher priorities, and within them, sub-classifications may be provided based on, for example, name and data stream requirements. Naming helps with better decision making, as more context may be included within the names. Latency estimations and expected decision (or actuation) timeframes may be incorporated within contextual names to help with scheduling. This approach is desirable due to the mobility of hosts, different placement strategies associated with edge servers (and the varying distance with respect to end hosts), and the use of non-optimal edge servers without timely migrations.
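The name-based prioritized scheduling at the transport proxy may be sketched as a two-level ordering, first by stream class and then by deadline; the class names and priority values are illustrative assumptions:

```python
# Hypothetical sketch of name-based prioritized scheduling at the
# access-point transport proxy: streams associated with autonomous driving
# get a higher priority class, and within a class streams are ordered by
# delivery deadline. Class names and values are illustrative only.
import heapq

PRIORITY = {"autonomous-driving": 0, "infotainment": 2}  # lower = served first

def enqueue(queue, name, stream_class, deadline_ms):
    # Tuples compare element-wise, so class dominates, then deadline.
    heapq.heappush(queue, (PRIORITY.get(stream_class, 1), deadline_ms, name))

def next_stream(queue):
    """Pop the name of the stream to forward next."""
    return heapq.heappop(queue)[2]
```

An autonomous-driving stream is thus served before an infotainment stream even if the latter arrived first or has an earlier deadline.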
[0045] Streams targeting the same end host may arrive at different times to the access point with differing deadlines. Scheduling policies may be used to account for such dynamic arrivals to optimize decision-making. Additional control signaling (e.g., ping messages in-between edge servers and end points, and among edge servers) may be used to measure the round-trip times (RTTs) on a regular basis. The RTT measurements cover multiple network segments: end host to access points, end host to edge server, access point to edge server, and edge server to edge server. Based on the delivery latency associated with each stream, each forwarded content may be named to include expected latency of delivery from the point of reception to the end host.
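The per-segment RTT measurements and latency-annotated naming described above may be sketched as follows; the segment labels and the name-component format are illustrative assumptions:

```python
# Sketch (with assumed segment labels and name format): estimate the
# expected delivery latency from the point of reception to the end host
# using regularly measured per-segment RTTs, and embed it in the name.
def expected_latency_ms(rtt_ms_by_segment, path):
    """Sum one-way latency estimates (RTT/2) over the remaining segments."""
    return sum(rtt_ms_by_segment[seg] / 2.0 for seg in path)

def name_with_latency(base_name, latency_ms):
    # Embed the latency estimate as a name component to aid scheduling.
    return f"{base_name}latency={latency_ms:.1f}ms/"
```

A scheduler at the access point could then compare the embedded latency against each stream's deadline when ordering dynamic arrivals.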
[0046] Also, if Consumers cannot support certain delivery rates, Producers may reduce the rate they transmit data. If consuming end hosts perform processing on the data, for instance, to merge point clouds, rather than the edge server, then the Producer end host may reduce the rate it transmits data. On the other hand, if some processing is done at the edge server, then the Producer end host may have no choice but to transmit at the maximum rate available to it based on data it generates.
DATA TRANSPORT PLATFORM
[0047] Next generation mobile IoT systems such as autonomous vehicles, mobile robotic and drone systems demand new ways to integrate programmable edge computing and networking resources to manage and control them in real time. A new transport architecture for edge networks to handle low latency and high bandwidth real-time data streams from mobile IoT systems that also require services to handle mobility, service migration, data replication and reliability has been described by Ravindran, et al. in “Method and Apparatus for a Low Latency and Reliable Name Based Transport Protocol,” Application No. PCT/US2019/026583, filed April 9, 2019. The data-centric edge transport protocol described therein enables, for example, autonomous vehicles to share their sensors’ point clouds with each other over a managed edge infrastructure.
The ETRA system facilitates IoT deployment by providing functionality to enable efficient named data stream (NS) sharing among distributed Producers, Consumers, and edge services, per-session caches in the transport layer for operating over an end-to-end session even during mobility, a service-scoped transport layer resolution function that resolves NS to end point identifiers, and a cache migration system that is used when edge service migration is needed. The ETRA system architecture will be described herein as a transport architecture in a sample embodiment, although it will be appreciated that other transport architectures may be used in other embodiments.
[0048] FIG. 1 illustrates the ETRA system architecture 100 for implementing edge computing and storage transport in a sample embodiment. As illustrated, the ETRA system architecture 100 is a three-tier architecture that includes, at the lowest level, the user entity (UE) 110, which has, in the case of a network suitable for autonomous vehicles, computing resources to process its sensor data and to execute several inference tasks related to object detection, tracking, localization and mapping. In the illustrated embodiment, the UE 110 may be an autonomous vehicle (AV) that is both a Consumer and a Producer (AV-CP) of data streams.
At the second level, the ETRA system architecture 100 includes an edge computing platform comprising one or more edge servers 120 having one or more edge control points (ECPs) 122 and which is placed no farther than the central office (CO). The edge computing platform 120 offers the benefits of both offloading the local computing of the UEs 110 as well as generating a complementary ‘network view’ that may be fused with crowdsourced input from other UEs 110 in its vicinity and other static resources around its location and input into the UE’s inference engine in real-time to improve the decision making of the UE’s inference engine. In this architecture, a one-to-one relationship between an ECP 122 and the UE 110 may be assumed considering the need for performance and predictability towards the workloads. As indicated, the UEs 110 access the ECPs 122 via points of access 130. At the third level, the ETRA system architecture 100 includes a central cloud computing system 140 that is involved in longer time scale computing and network provisioning to sustain the edge services. Using this three-tier architecture, an AV-CP (UE 110) communicates over the ETRA system 100 with the central cloud computing system 140 that, in turn, communicates over an internet protocol (IP) network or information centric network (ICN) with other network nodes to process and transport data streams. [0049] The ETRA system architecture 100 is shown in more detail on the right-hand side of FIG. 1. The ETRA system architecture 100 depicted therein is optimized for quick distribution of high bandwidth sensor data from mobile IoT systems to edge computing devices and from the edge computing devices to contextually relevant IoT devices. A secure transport-level publish/subscribe (pub/sub) design carrying named data units (NDUs) with scalable transport caches is modified to incorporate pub/sub model dissemination with a name-based pull application programming interface (API).
In the pub/sub architecture, one subscribes to content published under certain namespace, and when that content becomes available, it is pushed towards the subscribers. Subscription tables are maintained at designated nodes (servers that push content), which identify end hosts that subscribed to the content.
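The subscription tables maintained at the designated (pushing) nodes may be sketched as a simple namespace-to-subscribers mapping; the class and method names are illustrative:

```python
# Minimal sketch of a subscription table at a designated node: it maps a
# published namespace to the set of end hosts subscribed under it.
# Names are illustrative assumptions.
from collections import defaultdict

class SubscriptionTable:
    def __init__(self):
        self._subs = defaultdict(set)

    def subscribe(self, namespace, host_id):
        self._subs[namespace].add(host_id)

    def subscribers(self, namespace):
        # Content published under `namespace` is pushed to these hosts.
        return sorted(self._subs[namespace])
```

When content becomes available under a namespace, the node iterates over `subscribers(namespace)` and pushes the named data units to each.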
[0050] In the present case, Consumers subscribe at the Consumer-side edge server. The Consumer-side edge server uses the information on namespaces to then subscribe at the Producer-side edge servers. For that to happen, the Producer or Producer-side edge servers update a resolution or namespace mapping database on how to discover content under its namespaces. The mappings are updated regularly, given a scenario with mobile publishers, e.g., after handover, a Producer may migrate to a different edge server, in which the subscriptions need to be updated to point to the internet protocol (IP) address of the new edge server location.
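The resolution (namespace-mapping) updates described above, including re-pointing a namespace to a new edge server after Producer migration, may be sketched as follows; the class shape and addresses are illustrative:

```python
# Sketch of the namespace-mapping (resolution) database: a Producer-side
# edge server registers how to discover content under its namespaces, and
# re-registers after migration so subscribers resolve the new location.
# Class and field names are illustrative assumptions.
class ResolutionService:
    def __init__(self):
        self._map = {}

    def register(self, namespace, edge_server_ip):
        # Overwrites any prior mapping, e.g. after handover/migration.
        self._map[namespace] = edge_server_ip

    def resolve(self, namespace):
        return self._map.get(namespace)
```

A Consumer-side edge server that fails to reach a Producer-side server would call `resolve` again to obtain the post-migration address.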
[0051] In sample embodiments, a sub API is used by the Consumers to subscribe to a named data stream (NS), while a push API enables securely pushing an NS to multiple Consumers. A pull API is mainly used to recover named data after an application detects packet loss (or missing data). These APIs operate over sessions between an ECP 122 and a UE 110 (UE-ECP) or between respective ECPs 122 (ECP-ECP). Thus, an ECP 122 may be used to process data that is contextualized to its serviced UE 110, or the ECP 122 may act as an NS relay to other ECPs 122 that may effectively aid with the creation of an overlay multicast tree.
[0052] Mobility of the UE 110 may be handled using a set of control functions in the transport layer that coordinate to update the binding between the UE’s new network address and the persistent session state (i.e., connection-ID and security context). The control functions may include: (i) probing by the ECP 122 (towards the UE 110) to detect UE 110 reachability over the session lifetime; (ii) mobility event driven signaling by the UE 110 (towards the ECP 122), whenever the UE 110 changes its point of access 130; (iii) re-registering the NS prefixes by a UE transport layer using a name stream resolution function (NSRF) that enables resolution of non-local NSs using an external service-scoped name stream resolution system (NSRS); and (iv) re-resolving by the ECP 122 the UE’s new point of access 130 using its NSRF anytime during the session. These functions enable an ECP 122 to redirect an NS to the UE 110 at its current point of access 130. In addition, once a session is restored, a session cache in the ECP 122 may be used by the UE 110 to recover any lost data. In this case, the recovery is handled by the UE’s Consumer application.
[0053] The ETRA system architecture 100 also provides service migration with respect to the migration of the ECP services to a new host, which may be triggered due to mobility or lack of resource availability (i.e., computation or bandwidth). The service migration is handled by transport layer control functions that aid with the migration. With respect to migration due to mobility, a control function uses a probing function to ensure that the session round trip time (RTT) remains within the threshold required by the application. If the transport session RTT violates this threshold, session migration may be initiated. During this service migration process, a new host for the ECP 122 may be determined, the session context (connection ID, security context) and the session caches may be restored at the new host, services may be bootstrapped to allow the application components of the ECP 122 to operate over the restored session state, and migration control functions may inform the UE 110 of the new network address.
In addition, control functions also update the binding of the NS prefixes using the NSRF to allow the UE 110 to re-resolve the ECP’s new host during the session. [0054] These functionalities are enabled using the ETRA functional components shown in FIG. 1. At the application level, four types of functional modules may be provided: session manager 150, Consumer 152, Producer 154, and discovery module 156, all of which may be processed by data processor 158. While these applications apply to both the UE 110 and the ECP 122, the UE 110 also includes the discovery function to contextually discover the set of nearby UEs 110 that may operate over machine to machine interfaces, also notified to the ECP 122, which may then subscribe to sensor data coming from ECPs 122 serving those UEs 110. The machine to machine discovery functions could be contextually driven by information-centric network (ICN) architectures. Accordingly, Consumers may directly subscribe to the NS generated by its peers (i.e., nearby UEs 110) from the ECP 122. NS name discovery by the UE 110 depends on the role of the ECP 122, which may act as a relay point or as a data processor. In the former case, names may be inferred using the naming schema design chosen by the IoT system application alone, while in the latter scenario, another round of name discovery may take place between a UE 110 and its ECP 122, identifying the content based on the current binding between them. The latter scenario may necessitate content management due to service migration using, for example, expiry policies, while at the same time taking advantage of the ECP’s computing capability towards data contextualization to suit the dynamic requirements of the UE 110.
[0055] The ETRA system architecture 100 includes two high level transport functions including session transport (ST) functions 160 that manage a session between the end point and the edge service, and common transport (CT) functions 170 that are generic functions or modules to help with pub/sub, data policy management, mobility, and state migration control functions. As illustrated in FIG. 1, the ST functions include Session Caching and Registration Function 162 that supports the use of transport layer caches and policy associated with session caching during the session’s lifetime. Cached data also has a shareability context managed by the CT’s access control function, depending on its source. For example, instance data from UE 110 may be usable at multiple points, whereas processed data from the ECP 122 that is contextualized to a specific UE 110 may not be shareable. The registration function allows the NS prefixes to be registered for resolution to a host address by remote Consumers. The registration action is driven by the NS policy and is also used during events such as mobility or service migration. Flow and Congestion Control 164 may be evolved considering the data characteristics (i.e., time series nature, temporal constraints, mission critical features). These features use push streaming to avoid any data blocking with the aid of application aware scheduling, receiver driven network prioritization, and grant based flow control to handle congestion in access network scenarios. Application/Context aware scheduling 166 complements receiver driven flow and congestion control by using the UE 110 based feedback on wireless connectivity to prioritize NDUs for transmissions. More specifically, the transport layer multiplexes the NS from the various sessions based on the application requirements. The sessions are managed by Session Management Functions 168 using, for example, Connection ID and security context.
[0056] As further illustrated in FIG. 1, the CT functions include the Local Resource Directory 171 that represents the local database of NS coming from local/remote Producers. The Name Stream Resolution Function (NSRF) 172 enables resolution of non-local NS using an external name stream resolution system (NSRS), whose design and implementation may be influenced by application requirements. Mobility Control Functions 173 help with end-to-end probing and update signaling between the UE 110 and ECP 122 whenever the network address changes. State Migration Control Functions 174 include control/data migration functions between two ECPs 122. Access Control and Authentication 175 manages the access policy for the local session-based caches at the host. Any ECP 122 request to a UE’s data stream requires authentication before granting access to it. Inter-Session Data Exchange 176 helps with low-overhead based copy of data between sessions, enabling the relay function with the ECP 122.
[0057] With reference to FIG. 1, the following description will consider an embodiment of a distributed application scenario corresponding to augmented vehicular reality (AVR) for autonomous vehicles, where a vehicle gets the view of other vehicles to generate a more detailed view of its environment.
Session Establishment
[0058] In a sample embodiment, it is assumed that an autonomous vehicle (AV) service provider has provisioned sufficient resources at the edge server 120 for the ECP services 122 to serve a fleet of AVs (UE) 110. Each AV may have a Producer application 154 and a Consumer application 152. The AVR service may be invoked, for example, based on a UE 110 request to obtain the view of the front AV(s). Once the AV is connected to the network, an application for the AVR service discovers the closest ECP 122, where the discovery function may be handled as part of the edge platform services to interconnect the UE 110 with the edge server 120. After the closest ECP 122 has been discovered, the session managers 150 at both ends establish a secure connection using service level authentication, thereby also creating the security context at the transport layer. Consumers at either end may issue requests to discover the NS and subscribe to them or derive the names of the NSs based on a pre-existing knowledge of the naming schema by the session manager 150. For example, a LIDAR NS from an AV (AV1) may be named as /<AV1-ID>/<Lidar>/ to which ECP1’s Consumer may subscribe. Similarly, the processed crowdsourced data for AV1 from ECP1 may be named /<AV1-ID>/<ECP1>/<AVR-view>/ to which the other UEs 110 may subscribe.
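The naming schema in this example, with raw sensor streams named under the producing AV and processed data named under both the AV and the processing ECP, may be sketched as follows; the helper function names are illustrative:

```python
# Sketch of the example naming schema: a raw sensor stream is named under
# the producing AV's ID, and processed (crowdsourced) data is named under
# both the AV and the ECP that produced it. Function names are assumptions.
def sensor_ns(av_id, sensor):
    return f"/<{av_id}>/<{sensor}>/"

def processed_ns(av_id, ecp_id, view):
    return f"/<{av_id}>/<{ecp_id}>/<{view}>/"
```

Because the schema is deterministic, a Consumer with pre-existing knowledge of it can derive NS names without a separate discovery round.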
[0059] Next, an end point subscribes to the NS, over which the respective Producers publish data, including the shareability attributes of the streamed data. Published data then may be pushed over the secure transport session. As the data is named contextually, the transport layer may apply scheduling policies on the data from multiple sessions before handing the data over to the network layer. Once the data reaches the peer side, it is decrypted and sent to the consuming application for further processing. Named data at the ECP end is saved in the cache making it available to other ECPs 122 and to the UE 110 (for recovery). NS-IDs may be registered to a service scoped NSRS to help with discovery by the other ECPs 122.
Dynamic Multicast among ECPs
[0060] In the case of two ECP instances (ECP1 and ECP2) serving two different AVs (AV1 and AV2) that are in each other’s proximity, as soon as AV1 discovers AV2 over a vehicle to vehicle interface, AV1 may communicate its discovery to ECP1 over its secure data channel, after which ECP1 requests AV2’s NS. Next, ECP1’s Consumer resolves ECP2’s namespace using NSRS by invoking the host level NSRF. Once ECP2’s namespace is resolved, ECP1’s Consumer starts to establish a secure session with ECP2 by sending a secure session request to ECP2. Once a secure session is established between ECP1 and ECP2, ECP1’s Consumer may initiate a request for AV2’s NS, which is then authenticated by ECP2’s session manager 150 to determine access to AV2’s NS. Once the request is authenticated, the request is delivered to ECP2’s Consumer, which then multicasts the stream over that session.
[0061] Once the data arrives over ECP1’s transport session, it is cached and sent to the Consumer and the data processor 158 for AV1’s consumption. Specifically, data is first sent to ECP1’s Producer (which multiplexes the named data objects of multiple AVs’ NS). At this point, due to the availability of AV2’s NS at ECP1, this availability could be also registered to the NSRS.
Mobility and Service Migration
[0062] Services leverage the session-probe primitive from the transport layer to track the UE 110 mobility during the lifetime of a session. The UE-ECP API uses an application specific RTT threshold of x to decide if an ECP service migration is required or not. Here, the need for service migration is twofold: (i) to migrate the security/connection context associated with a session (i.e., host level encryption and authentication keys) and (ii) to migrate the cache state to the new ECP 122. The objective is to allow the UE 110 connected to the new point of access 130 to immediately start sending/receiving data without further session negotiations.
[0063] Service migration is triggered once the ECP 122 determines that the RTT threshold is violated (for a certain duration), which is primarily driven by the UE 110 mobility. If, after a handover, the UE-ECP connection continues to satisfy the RTT threshold, then the same ECP 122 may continue to serve the UE 110. In this case, the session cache in the ECP 122 may help applications to recover data lost during the transition. Otherwise, the current ECP 122 may use an operator driven network service to identify the next closest edge server 120 with sufficient computing resources to host the ECP 122 workload associated with the UE 110, after which the service containers are orchestrated. Next, the transport session state is transferred securely so that the future data from the UE 110 may be properly authenticated and decrypted before handing the data to the ECP 122. Once the security context is transferred, Consumer and Producer caches may be transferred to the new ECP 122. After migration, the UE 110 may resolve the new ECP 122. Once the UE 110 learns the new ECP’s IP address, it may continue to use the same session context (i.e., connection-ID and shared encryption key) between the Consumer and Producer of the UE/ECP. Also, in the above example, the new association to the NS may be renamed /<AV1-ID>/<ECP2>/<AVR-view>/.
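The RTT-driven migration trigger, which fires only when the threshold is violated for a certain duration, may be sketched as follows; the window length and function names are illustrative assumptions:

```python
# Sketch of the RTT-driven migration trigger: service migration starts only
# after the session RTT has violated the application threshold for a
# sustained duration. The window length is an illustrative assumption.
def should_migrate(rtt_samples_ms, threshold_ms, window=3):
    """True if the last `window` RTT samples all exceed the threshold."""
    recent = rtt_samples_ms[-window:]
    return len(recent) == window and all(r > threshold_ms for r in recent)
```

Requiring several consecutive violations avoids triggering an expensive state and cache migration on a single transient RTT spike, such as one caused by a handover.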
SAMPLE EMBODIMENTS
[0064] The following embodiments are implemented on the ETRA system architecture 100 to provide adaptive push streaming. As will become apparent from the following description, the adaptive push streaming in sample embodiments uses notifications and updates from Producers to keep track of Producer data streams, uses multi-rate adaptation and publishing to optimize the data rates for streams provided to various Consumers, and provides adaptive push streaming based on the available resources at the end hosts and service points.

[0065] FIG. 2 illustrates an edge server 120 to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs) such as /AV/Lidar/. In sample embodiments, the namespace of the AV-NS may also include resource constraints such as latency and bandwidth. In this example, it is assumed that the ECP 122 services each of the UEs 110 via a common point of access 130, where the ECP 122 and the UEs 110 each has a Consumer application 152 and a Producer application 154, and that there is no server-server traffic. In operation, a UE 110 discovers other vehicles using a discovery request 200 in the form of Discovery{{UE-ID, NameStream-ID}, {UE-ID, NS-ID}, ...}, which may include information on name changes for Lidar, cameras, etc. when the UE 110 is an autonomous vehicle, as in the illustrated example. The UE 110 may also receive the data in the form of a PeerUpdate{UE-ID, NameStream-ID, resource metrics}. Upon discovery of other vehicles or data streams, the UE 110 sends an edge notification/update message 210 in the form of Notification{Session-ID, NameStream-ID, Resource Metrics} to the ECP 122 to notify the ECP 122 of the discovered vehicle and its associated data streams.
For example, the notification may provide updated information including /<AV1-ID>/<Lidar>/ to identify new vehicle AV1 and its data stream “Lidar.” The notifications 210 may include update information on multiple NSs from a Producer 154 of the UE 110, including Lidar, camera, and other data from the UE 110.
[0066] The Consumer application 152 of the UE 110 may also, periodically or in response to event triggers, update the wireless capacity measurement (e.g., Channel Quality Indicator (CQI), Signal-to-Noise Ratio (SNR), or actual bandwidth estimates) to the ECP service. As illustrated in FIG. 2, the server-side Producer may further provide a data stream to a Consumer at 220 and update the Producer at 230. One selection criterion could be for the Producer and/or ECP 122 to choose the bitrate that minimizes |f(Mt) - Bt| (utilize the highest rate streams at the maximum supported rate) for the wireless capacity measurement Mt and bitrate Bt. Then, if multiple Consumers request the same data stream (at same/different rates), data streams at different bitrates Bt may be provided from cache 240 to support, for example, a transport layer multicast. The initial transmission rate thus may be determined by the Producer 154 of the UE 110, with or without knowledge of the bitrate limitations of the Consumer applications 152, and the transmission rate to the respective Consumers 152 may be determined by the ECP 122 based on the wireless capacity measurement data provided by the respective Consumers serviced by the ECP 122 and the bitrate of the data provided by the Producer application 154. The goal of the ECP 122 would be to select bitrates that minimize delay and enhance the user experience given the constraints in the wireless transmission system at any given time. The ECP 122 may also collect crowd sourced point clouds from multiple vehicles and process that data into a consolidated bitstream to multiple Consumers 152.
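As a sketch of the selection rule above, the following assumes a caller-supplied mapping f from the wireless measurement Mt to a supportable bitrate; the function name, rate ladder, and fallback behavior are illustrative assumptions rather than part of the described system:

```python
def select_bitrate(available_bitrates, capacity_measurement, f):
    """Pick the published bitrate Bt that minimizes |f(Mt) - Bt|.

    `f` maps a raw wireless measurement (CQI/SNR/bandwidth estimate)
    to an estimated supportable bitrate; its form is not specified in
    the text, so a caller-supplied callable is assumed here.
    """
    supported = f(capacity_measurement)
    # Prefer the highest published rate that does not exceed the
    # supported rate; fall back to the closest rate if none qualifies.
    feasible = [b for b in available_bitrates if b <= supported]
    if feasible:
        return max(feasible)
    return min(available_bitrates, key=lambda b: abs(supported - b))

# Example with a hypothetical identity mapping and rate ladder (kbps):
rates = [500, 1500, 3000, 6000]
print(select_bitrate(rates, 3500, f=lambda m: m))  # -> 3000
```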
[0067] FIG. 3 illustrates a client (Producer) to edge servers 120 to client (Consumer) scenario in which Producers generate contextualized NameStreams (NSs) such as /AV/Lidar/. In this example, the Consumer and Producer side ECPs 122 need not be at the same location and may be hosted at the same or different edge servers 120. The Consumers 152 and Producer 154 of the UEs 110 may update the wireless capacity measurement (e.g., Channel Quality Indicator (CQI), Signal-to-Noise Ratio (SNR), or actual bandwidth estimates) to their associated ECP service to share with the AV service. As in the previous example, these measurements are used to predict the supported bitrates at the Consumer side. The end-host Producer 154 makes the initial choice of what bitrate stream(s) to send to the server-side Consumer 152 at 300. However, the ECP 122 may make the final choice of what to send to the end-host Consumer 152 based on the wireless capacity measurement data. For example, different Consumers 152 may request data at different bitrates at 310 due to transmission conditions, hardware limitations, etc. The ECP 122 on the Producer side may decide how to best transmit the requested data stream by determining what transcoding to apply, what bitrates to use, etc. The ECP 122 on the Consumer side may similarly decide how to best transmit the requested data stream to the Consumer 152. The performance criteria may include the decision accuracy to minimize erroneous actuation events where, for example, a lower bit stream may result in a higher error for object detection and localization inferencing. Different end hosts may use different edge servers, so in such a case there may be edge to edge traffic as indicated at 320. As in the previous example, the bitrate may be chosen that minimizes |f(Mt) - Bt| (utilize the highest rate streams at the maximum supported rate) for the wireless capacity measurement Mt and bitrate Bt.
The Producer side edge server 120 may multicast or unicast different rate streams at 330 according to the Consumer side requirements received from the Consumer side ECPs 122. As in the example of FIG. 2, the Producer 154 may select the initial bitrate but the goal of each ECP 122 would be to select bitrates that minimize delay and enhance the user experience given the constraints in the wireless transmission system at any given time. The ECPs 122 may also collect crowd sourced point clouds from multiple vehicles and process data from multiple Producers 154 into a consolidated bitstream using compression and the like and provide the merged bitstream to multiple Consumers 152.
[0068] As noted with respect to FIG. 2 and FIG. 3, a notification message (sessionID, NameStream ID, ResourceMetrics{Latency, BW}) is provided by a Producer 154 of a UE 110 to its associated ECP 122. In sample embodiments, the ID of the named data stream (NS-ID) is assumed to be a contextualized name offering information on the application scenario, and potentially other metrics associated with the scenario. Multiple name streams may be transferred between a UE 110 and its ECP 122 under the same session ID. A notification from the Consumer also may be used to alert the Producer side of the rate supported by the Consumer end host. The supported rate may be an adaptation metric (rate of increase/decrease) or the actual value (e.g., bandwidth or latency). As noted in the examples of FIG. 2 and FIG. 3, the Producer may be the edge server or the end host (or both), depending on the implementation scenario. Such embodiments are particularly useful for application scenarios that have strict latency requirements and in which the transmitted data is important data that needs to be received in a very short time frame (e.g., LIDAR data from an AV). The very fast rate adaptation scheme described herein may be used to ensure that the Producer adapts its transmission rate to the Consumer’s current experience at a few millisecond-level granularity. The frequency of the notification messages need not be periodic or constant, as there are multiple sources for the notification message. In sample embodiments, the notification messages are instead used to provide updates when new UEs are discovered, the network characteristics change, and the like. The end host Consumer and access point thus may receive the requested data streams with the supported functionality at optimized data rates.
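The notification message described above can be sketched as a simple record; the field types and Python naming are assumptions, since the text only specifies the tuple (sessionID, NameStream ID, ResourceMetrics{Latency, BW}):

```python
from dataclasses import dataclass

@dataclass
class Notification:
    """Event-driven notification between a UE host and its ECP.

    Field names follow the message sketch in the text
    (sessionID, NameStream ID, ResourceMetrics{Latency, BW});
    the concrete types are assumptions.
    """
    session_id: str        # transport session between UE and ECP
    ns_id: str             # contextualized name stream, e.g. "/<AV1-ID>/<Lidar>/"
    latency_ms: float      # latency constraint or estimate
    bandwidth_kbps: float  # supported/required bandwidth

n = Notification("sess-42", "/<AV1-ID>/<Lidar>/", 100.0, 6000.0)
print(n.ns_id)  # -> /<AV1-ID>/<Lidar>/
```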
[0069] FIG. 4 provides another example of push-based adaptive streaming using multiple edge servers 120A and 120B, with each edge server hosting multiple ECPs 122 (ECP1, ECP2, and ECP3 on the first edge server (Edge 1) and ECP4 and ECP5 on the second edge server (Edge 2)) running on virtual machines or containers, and each ECP 122 servicing a single client. FIG. 4 illustrates techniques for pushing a data stream from a single Producer (e.g., P2) to multiple Consumers (e.g., C1, C2, C3) hosted on the same/different shared/unshared edge servers 120A, 120B. Such push-based adaptive streaming may be used to establish a dynamic multicast to support efficient bandwidth use in the network resulting from the delivery of high-bandwidth streams among nearby hosts.
[0070] In this example, the Consumer side ECPs (ECP1, ECP2, and ECP5) subscribe to namespaces at remote ECPs 122 or at a shared ECP (SECP) 400. An SECP 400 enables aggregation of data transfer, processing, etc. by combining the data transfer and processing of two or more ECPs 122 requesting contextualized data streams from the same Producer. For example, the network quality measurements may be a combination of those from multiple Consumer nodes for which the ECPs are hosted at the same server. A shared ECP may request at the highest supported rate from the Producer-side edge server and transcode to differing Consumer needs itself, rather than asking the Producer-side edge server or ECP to do that. In this case, a generalized ECP process with access to data stream requests through the hosted ECPs at the same edge node may generate an SECP 400 when there are multiple ECPs 122 targeting the same content. Tradeoffs to consider for creation of an SECP 400 include transcoding efficiency, bandwidth efficiency, etc. Upon creation, the related ECPs are informed and the local resolution database is updated with the SECP's information so that any other ECP 122 targeting the same content may direct its requests to this SECP 400. The SECP 400 creates transport sessions to Consumer-side and Producer-side ECPs 122. The SECP 400 acts as a multicast proxy to receive a single stream from the Producer-side, while acting as a resource manager for the Consumer-side ECPs 122 by transcoding at desired rates. When there are multiple Consumers requesting data streams at similar rates, use of the SECP 400 leads to one-time transcoding, whereas the lack of an SECP 400 may require transcoding multiple times at each Consumer-side ECP 122. It is possible to have multiple SECPs 400 corresponding to different transcoding levels for a group of Consumer-side ECPs requesting the same content at same or different rates.
In this case, Consumer requests may be grouped into different rate categories for transcoding purposes, and each of these rate categories may be managed by a single SECP 400.
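The grouping of Consumer requests into rate categories, each managed by one SECP, might be sketched as follows; the greedy bucketing strategy and the tolerance threshold are illustrative assumptions, as the text only states that requests at similar rates share one transcoding step:

```python
def group_into_rate_categories(requested_kbps, tolerance_kbps=500):
    """Greedily bucket Consumer rate requests so each bucket can be
    served by one SECP with a single transcoding step.

    The tolerance is a hypothetical knob; requests within it of a
    bucket's highest rate join that bucket.
    """
    buckets = []
    for rate in sorted(requested_kbps, reverse=True):
        for bucket in buckets:
            if bucket[0] - rate <= tolerance_kbps:
                bucket.append(rate)
                break
        else:
            buckets.append([rate])  # new rate category / SECP
    return buckets

print(group_into_rate_categories([6000, 5800, 3000, 2900, 500]))
# -> [[6000, 5800], [3000, 2900], [500]]
```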
[0071] Referring back to FIG. 4, FIG. 4 illustrates an example where P2 provides a data stream NS(P2) of /ID(P2)/Lidar at a transcoding level Lmax of the possible transcoding levels 1, 2, and max. This transcoding may be represented as /NS(P2)/data(Lmax), where Lmax is the maximum available data rate. Thus, P2 may provide the same data stream at different specified quality levels to different Consumers. P2 provides the stream NS(P2) of /ID(P2)/Lidar to its ECP4. In turn, ECP4 provides a remote push of the stream /NS(P2)/data(Lmax) to the SECP 400 on edge server 120A. SECP 400 transcodes the received data stream to provide a first stream /NS(P2)/data(L1) to ECP1 to push to C1 and a second stream /NS(P2)/data(L2) to ECP2 to push to C2. In this case, C1 and C2 may not receive the data stream NS(P2) at the bitrate Lmax but may receive the data stream at bitrates L1 and L2, respectively. SECP 400 processes the data stream NS(P2) to provide the data to the respective Consumers C1 and C2 at the bitrates suitable to C1 and C2. In this fashion, SECP 400 may adapt the data stream based on the Consumer needs to reduce bandwidth usage. Also, use of the SECP 400 means that only one data stream is sent to edge server 120A at the Lmax quality level for the common stream, thereby saving the establishment of a second data stream to edge server 120A and thus providing a dynamic multicast capability. As the quality levels change, the transcoding may be modified on the fly in response to notifications received from the respective Producers and Consumers. ECP4 may also locally push the data stream NS(P2) to ECP5 for providing to Consumer C3, in this case at the data rate Lmax. Also, ECP3 may locally push a data stream /NS(P1)/data(Lmax) from Producer P1 to local Consumers C1 and C2.
[0072] As illustrated in FIG. 4, in sample embodiments the edge servers 120A and 120B also may maintain a UE-ECP mapping table 410 that keeps track of the sessions between the respective UEs 110 and the ECPs 122. An NS-to-ECP mapping table 420 may also be used to keep track of which local and remote ECPs 122 are receiving the data streams under a given NS provided by the respective Producers. These tables 410 and 420 are particularly helpful during ECP migration or to provide an improved multicast targeting particular Consumers.

[0073] As further illustrated in FIG. 4, each Producer (e.g., P2) may provide different transcoding labels (L{1, 2, max}) for the given example (and 1, 2, ..., max more generally) to enable the Consumers to subscribe to lower bandwidth versions or quality levels of the data stream and/or upgrade to higher bandwidth versions or quality levels by specifying the transcoding level. As noted above, the transcoding level may be specified at the Producer or may be implemented at the ECP 122 to enable multi-casting of the data streams at different rates to different Consumers.

[0074] FIG. 5A illustrates enhancements of the data transport in sample embodiments through the use of a transport proxy. As illustrated, a transport proxy is implemented at the access points 130 (PoA1 and PoA2), which may help with name-based prioritized scheduling, resource reservation, and improved notification for more accurate bandwidth estimates. The transport proxy may also be used to track data streams for sessions and to schedule controlled access at the points of attachment or access points 130 based on the prioritized scheduling. In the example illustrated in FIG. 5A, a data stream is provided from Producer P1 via PoA1 to ECP1 and through a wired channel to ECP2. A notification path N may provide update notifications from the PoA2 to the ECP2 represented as (PoA->ECP){Session ID, NameStream ID, Resource Metrics}.
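A minimal sketch of the UE-ECP mapping table 410 and NS-to-ECP mapping table 420 of FIG. 4, including their use during ECP migration, might look as follows; the class and method names are illustrative assumptions:

```python
class EdgeMappings:
    """Sketch of the two per-edge-server tables of FIG. 4.
    Structures and method names are illustrative assumptions."""
    def __init__(self):
        self.ue_to_ecp = {}   # UE id -> ECP id (session mapping, table 410)
        self.ns_to_ecps = {}  # name stream -> set of subscribed ECP ids (table 420)

    def bind_session(self, ue_id, ecp_id):
        self.ue_to_ecp[ue_id] = ecp_id

    def subscribe(self, ns_id, ecp_id):
        self.ns_to_ecps.setdefault(ns_id, set()).add(ecp_id)

    def migrate(self, ue_id, new_ecp_id):
        # During ECP migration, repoint the session and carry the
        # stream subscriptions over to the new ECP.
        old = self.ue_to_ecp.get(ue_id)
        self.ue_to_ecp[ue_id] = new_ecp_id
        for ecps in self.ns_to_ecps.values():
            if old in ecps:
                ecps.discard(old)
                ecps.add(new_ecp_id)

m = EdgeMappings()
m.bind_session("AV1", "ECP1")
m.subscribe("/NS(P2)/", "ECP1")
m.migrate("AV1", "ECP2")
print(m.ns_to_ecps["/NS(P2)/"])  # -> {'ECP2'}
```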
The PoA2 may also extract the data stream header at P to determine how to prioritize streams addressed to the same/different Consumers (intra-session, among streams targeting AV-like scenarios, and inter-session prioritization, with respect to other scenarios). The PoA2 may also update the data stream header to include information on bandwidth availability, as the PoA2 has access to a current network view (i.e., number of users, active bandwidth use, aggregate requirements, etc.).
[0075] As also illustrated in FIG. 5A, each PoA 130 may provide name-based prioritization at the access points by maintaining a PoA Priority Table 500 stored at the edge server 120 that includes the client ID (C(i)) 510, contextualized session name/ID (S(i,j)) 520, and Priority Index (P(i,j)) 530. In this example, for the session ID, i represents the user ID while j represents one of the multiple streams of a particular client. Thus, each client may be the recipient of multiple streams. In the case of the Priority Index, i and j represent multiple data streams from user i, where the data streams are represented by different levels of prioritization (L1, L2, ..., Lmax) to prioritize data streams from the same Producer. P(i,j) may be a localized parameter that is decided based on the data stream input and local constraints. In sample embodiments, the session-based contextualized stream information, such as latency constraints, may be carried in an extension header within the packet header as part of the Next Generation (NG) Transport and may be signaled as new transport at the IP layer for the PoA to extract and use to offer prioritized delivery for autonomous vehicle or drone like systems at the access network.
[0076] At the access points 130, sessions including data streams from applications of autonomous vehicles, drones, etc. may be given higher priority, which is used for making decisions on rate allocation and scheduling (upstream and downstream). The sessions carry contextualized names that are authenticated to validate the use of such names, ensuring consistency of scheduling for push data targeting. The data streams may have classifications and sub-classifications based on name and data stream requirements. Such naming helps with better decision making, as more context may be included with the data streams. Assuming that the latency variations mostly occur at the access network, joint prioritization based on session-type and latency also may be used. Latency estimations and expected decision (or actuation) timeframes may be incorporated within contextual names to help with scheduling. Latency estimation is desirable due to the mobility of the hosts, different placement strategies associated with edge servers (and the varying distance with respect to end hosts), and the use of non-optimal edge servers without timely migrations.
[0077] Data streams targeting the same end host may arrive at different times to the access point with differing deadlines. As a result, scheduling policies may be used to account for such dynamicity to optimize decision making. Additional control signaling (e.g., ping messages between edge servers and end points, and among edge servers) may be used to measure the RTTs on a regular basis. The RTT measurements may cover multiple network segments: end host to access points, end host to edge server, access point to edge server, and edge server to edge server. Based on the delivery latency associated with each data stream, each forwarded content may be named to include the expected latency of delivery from the point of reception to the end host. In sample embodiments, priority-based queues may be used with latency-based ordering within the priority queues or a sub-queue within the priority queue. Thus, in the case of autonomous vehicles, vehicle Lidar data may be given priority for queue entries and decisions as Lidar data may be timing critical.
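The priority-based queues with latency-based ordering described above might be sketched as follows, assuming lower priority numbers dequeue first; the field names and tie-breaking are illustrative assumptions:

```python
import heapq

class PoAScheduler:
    """Priority queue with latency-based ordering inside each
    priority class. Priority values and field names are illustrative
    assumptions (lower numbers dequeue first)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker to keep ordering stable

    def enqueue(self, priority, deadline_ms, packet):
        # Heap orders first by priority class, then by deadline,
        # so timing-critical entries (e.g., Lidar) leave first.
        heapq.heappush(self._heap, (priority, deadline_ms, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[3]

s = PoAScheduler()
s.enqueue(priority=2, deadline_ms=40, packet="camera-frame")
s.enqueue(priority=1, deadline_ms=90, packet="lidar-late")
s.enqueue(priority=1, deadline_ms=30, packet="lidar-urgent")
print(s.dequeue())  # -> lidar-urgent
```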
[0078] It will be appreciated that the PoAs 130 know how many data streams are provided therethrough and how much bandwidth is needed to avoid a bottleneck at the PoA. Thus, more timing information may be provided on the ECP side to adapt the data streams to mitigate the bottleneck at the PoA 130.
Also, the edge server 120 may include a Priority Table 500 to similarly prioritize data traffic at the edge server 120 in the case where there are multiple ECPs 122 that provide data streams.
[0079] The prioritization of the data streams is useful in several contexts. For example, a Consumer end host may be the recipient of multiple streams from multiple Producers (e.g., multiple cars ahead in its lane or in the opposite lane). A dynamic prioritization scheme enables a decision to be made as to the supported rate per session. A determination on such rates may involve the impact of received data streams on the decision process, which may be provided by the edge server that has access to all of the data streams. The decision may be made at the edge server 120 and updated if necessary by the Consumer. After the discovery of a new end host to receive data streams from, the following procedure may be used to update the data stream rates:
(1) Reduce existing downstream session rates by α, while assigning the new data stream a rate of ω, where the relation between α and ω is given by an equation rendered only as an image in the original filing.
(2) After measuring the relative dynamicity of each received stream and its impact on the actuation process, update the priority of sessions.
(3) Calculate the new rates based on the new session-based priorities.

[0080] FIG. 5B illustrates a sample scheduling policy based on a latency requirement in sample embodiments. In FIG. 5B, as the access point (PoA1) receives data streams, the scheduling policy may queue these data streams based on latency and rate requirements to ensure the timely delivery of each at the allowed rates. With access points acting as transport proxies, more granular expected latency measures may be calculated and updated through these access points 130. For example, RTT estimations between UE 110 and PoA 130 and between PoA 130 and ECPs 122 may be calculated and updated. In an example assuming a 100 ms delivery latency requirement and a 20 ms RTT between Producer P1 and ECP2, an initial packet from the Producer may be named T-NS->.../Lat::90. Assuming a 15 ms RTT between ECP1 and ECP2, the same packet may be named T-NS->.../Lat::82.5. Also, assuming a 20 ms RTT between ECP1 and the Consumer C1, the same packet may be named T-NS->.../Lat::72.5 before delivery. It will be appreciated that the latency measures are probabilistic rather than deterministic and that the latency measures may change over time.
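The Lat:: values in the FIG. 5B example follow from deducting half of each segment RTT (the one-way latency) from the remaining budget at each hop, which can be checked as follows:

```python
def remaining_budget(deadline_ms, segment_rtts_ms):
    """Walk the delivery path, deducting the one-way latency (RTT/2)
    of each traversed segment from the usability deadline.
    Reproduces the Lat:: values in the FIG. 5B example."""
    budgets = []
    budget = float(deadline_ms)
    for rtt in segment_rtts_ms:
        budget -= rtt / 2.0
        budgets.append(budget)
    return budgets

# P1->ECP2 (20 ms RTT), ECP2->ECP1 (15 ms), ECP1->C1 (20 ms):
print(remaining_budget(100, [20, 15, 20]))  # -> [90.0, 82.5, 72.5]
```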
[0081] FIG. 6 illustrates constraints on bandwidth in a sample embodiment of the data transport system. FIG. 6 illustrates multiple options for the data streams: (i) providing a single high quality data stream from Producer P2 to an ECP 122
(one of the many containers at the edge server that ideally services a single client), where the data stream is multicast to different Consumers C1 and C2 via data streams 600 and 610, respectively, through different ECPs 122 at the same/different quality levels depending on the ECP processing capabilities (in which case there would be a single stream from P2 to its ECP), and (ii) providing multiple quality data streams corresponding to the same observation to Consumers with different quality requirements/restrictions. It will be appreciated that the respective uplinks and downlinks may have different bandwidth requirements. For example, the downlink bandwidth requirement may be represented as:
WD(i, j) (rendered only as an equation image in the original filing)
for data stream i and Producer j. Similarly, the uplink bandwidth requirement may be represented as:
WU(i, j) (rendered only as an equation image in the original filing)
for data stream i and Producer j.

[0082] In the example of FIG. 6, a data stream is uplinked from P1 (WU(1)) to ECP1, pushed to ECP2, and downlinked to C1 (WD(1,1)). In this example, WU(1) does not necessarily equal WD(1,1). On the other hand, a data stream may be multicast by uplinking WU(2,2) from P2 via ECP3 to ECP2 for downlink to C1 (WD(2,1)) and by uplinking WU(2,1) from P2 via ECP3 to ECP4 for downlink to C2 (WD(2,2)). It will be appreciated that, depending upon the wireless network conditions, each uplink and/or downlink may have different bandwidth requirements. Also, the ECPs may keep track of the bandwidth requirements from each Producer and to each Consumer and make adjustments on the fly as appropriate to transport the data stream.
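Per-stream uplink and downlink bandwidth tracking at an ECP, with WU not necessarily equal to WD as in FIG. 6, might be sketched as follows; capping the downlink rate at the received uplink rate is an illustrative assumption:

```python
class BandwidthTracker:
    """Track per-stream uplink and downlink bandwidth at an ECP.
    Names mirror the WU/WD notation above; the structure itself is
    an illustrative assumption."""
    def __init__(self):
        self.uplink = {}    # stream id -> WU (kbps)
        self.downlink = {}  # (stream id, consumer id) -> WD (kbps)

    def set_uplink(self, stream, kbps):
        self.uplink[stream] = kbps

    def set_downlink(self, stream, consumer, kbps):
        # Assumed cap: a downlink cannot exceed the rate received on
        # the uplink without upscaling.
        self.downlink[(stream, consumer)] = min(kbps, self.uplink.get(stream, kbps))

t = BandwidthTracker()
t.set_uplink("NS1", 6000)
t.set_downlink("NS1", "C1", 8000)
print(t.downlink[("NS1", "C1")])  # -> 6000
```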
[0083] It will be appreciated that the end hosts make decisions regarding the available bandwidth, processing, and storage constraints at the respective system components to optimize the data stream transport efficiency. At a minimum, the decision policy may be expressed as selecting the data stream with the highest available data rate supported by the Consumer’s wireless link. In general, the decision involves what to send (full/dynamic/compressed), when to send (scheduling to meet the deadline with high probability), and at what rate (individual versus joint). Delay variations mostly occur at the access network and are accounted for by the received notifications. Feedback is received on a regular basis to more accurately reflect the latest channel conditions (such as average bandwidth availability based on CQI and PoA requirements). In this case, the edge server is considered to be part of (and hence managed by) the access network. Accordingly, the received feedback also may include notifications from the access point that are not directly related to user feedback but complement the user feedback. For instance, information on network usage at the PoA may also be provided. As an example, the location of a vehicle may be considered when selecting an ECP link to prioritize. The closer data streams may be transmitted at a higher rate. Thus, the decision process depends on what data streams are being received at moments in time. The determinations change on the fly as the channel conditions change and as notifications are received from the end hosts or from the access points.
[0084] FIG. 7 illustrates the example of FIG. 6 for an application scenario where the points of access act as a transport proxy and, accordingly, may apply prioritization to match bandwidth and latency requirements for AV-like streams with their quality of service needs. In the example of FIG. 7, it is assumed that control sessions are established between C1-ECP2, C1-PoA2, and PoA2-ECP2. It will be appreciated that with PoA2 and ECP2 managed by the same administrative domain, a single session might suffice to carry all control signaling between clients connected to PoA2 and ECP2. As illustrated at (1), C1 sends a notification to PoA2 (towards ECP2) with its Channel Quality Indicator (CQI) state, which enables PoA2 to modify the notification message to ECP2 to reflect the current CQI state. At (2), PoA2 determines the supported rate based on C1’s CQI and its current state and updates C1’s notification before sending a new notification to ECP2 including identifying information on C1. Additionally, PoA2 may send a batch notification message to merge channels including information on all connected end hosts using the service to the edge, which may then be multicast to all ECPs for all Consumers. ECP2 also requests streams and provides estimates on the supported delivery rate to C1. At (3), the delivery rate information is shared by ECP2 with ECP3, which informs the Producer P2 of the rate at (4) and optionally sends the data stream at an optimal rate that is adapted based on experience with the Consumer’s link quality. The data traffic may be adapted by upgrading or downgrading at the Producer and/or the Consumer to satisfy the expected Consumer experience. The updates are also provided to the UE-to-ECP mapping table 410 and to the NS-to-ECP mapping table 420 as described above with respect to FIG. 4.
[0085] FIG. 8 illustrates further details of the application scenario where the points of access 130 act as a transport proxy as in FIG. 7. FIG. 8 further illustrates the update notifications: Notification path (PoA->ECP){Session ID, NameStream ID, Resource Metrics} that updates ECP2 with the updated channel requirements of C1. As illustrated at (P), PoA2 extracts header data (e.g., resource metrics) to determine how to prioritize streams addressed to the same/different Consumers and may update the header to include information on bandwidth availability, as the PoA2 has access to the current network view (i.e., number of users, active bandwidth usage, aggregate requirements, etc.). The priority information for the respective data streams may be stored in a PoA Priority Table 500 as described above with respect to FIG. 5A.
[0086] FIG. 9 illustrates an embodiment of push-based streaming in a client-to-client scenario. In this example, through the discovery process, Producer P2 is aware of the set of Consumers C1, C2, C3 requesting its content. Each Consumer is identified by Consumer ID (C(i)), name stream ID (NS(j)), and a set of resource metrics (A) that are tracked in table 900 of the Producer P2. Awareness of Consumers helps with stream generation and delivery through the Producer by enabling the Producer to adapt the rates according to the Consumer needs, where the Producer and the Consumer are expected to be within discovery range. Upon discovery of the Consumers, the Producer regularly receives at (1) an update on the rates supported by the Consumers subscribing to its content stream. As illustrated, the update may be provided directly from the Consumer to the Producer over a vehicle to vehicle interface (e.g., C3 to P2) or indirectly via the ECPs (e.g., from C1, C2 via SECP 400 and ECP4 and from C3 via ECP3 and ECP4). The update includes the Consumer ID and the updated Consumer requirements. At (2), for all the Consumers subscribing to its content, the Producer P2 determines the optimal transmission policy based on its available resources and requested resources based on one of the following cases:
Case I: The Producer chooses to send at the maximum supported rate by the receiving Consumers to optimize its bandwidth use, where the edge server transcodes as appropriate; or
Case II: The Producer creates a scalable stream that is delivered to each receiving Consumer at the proper rate matching the Consumer’s requirements, as when the edge computing resources are limited with regard to transcoding.
Given the Consumers’ bitrates, the Producer may determine the levels for scalable coding to maximize the overall decision accuracy at receiving hosts under the Producer’s transmission limitations.
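The choice between Case I and Case II above might be sketched as follows; the selection heuristic (edge transcoding capability as the deciding factor) and the layer construction are illustrative assumptions:

```python
def choose_transmission_policy(consumer_rates_kbps, edge_can_transcode):
    """Decide between the two cases above. Returns the single rate to
    push (Case I) or the scalable-coding layer rates (Case II).
    The selection heuristic is an illustrative assumption."""
    if edge_can_transcode:
        # Case I: send once at the maximum rate any Consumer supports;
        # the edge server transcodes down for the others.
        return ("case1", max(consumer_rates_kbps))
    # Case II: scalable stream with one layer per distinct Consumer rate.
    return ("case2", sorted(set(consumer_rates_kbps)))

print(choose_transmission_policy([3000, 1500, 3000], True))   # -> ('case1', 3000)
print(choose_transmission_policy([3000, 1500, 3000], False))  # -> ('case2', [1500, 3000])
```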
[0087] At (3), the Producer may send its content to its edge server (Edge3, including ECP4) using a push data stream including the connection ID, the named stream ID, the scalable coding level, and the named content. At (4), the edge server Edge3 may propagate the requested data to each Consumer through each Consumer’s edge server. Depending on the chosen rates and content scalability, the pushed data stream may be multicast to other edge servers or sent to each edge server as unicast streams. It will be appreciated that this approach enables caching for quick recovery at the Consumer as described with respect to the ETRA system architecture 100.
[0088] FIG. 10 illustrates an embodiment of push-based streaming in a server-to-client scenario. In this scenario, the synchronization requirement is not too strict, and it is possible to use host-to-host synchronization among AVs, PoAs, and ECPs 122 to synchronize their clocks. Hence, latency requirements may be relative measures (e.g., a combination of expiration time by the Producer and RTTs). In this scenario, after Consumers subscribe to named streams from the Producer through their ECPs 122 at (0), which may include a common SECP 400 for stream requests targeting the same Producer, the Producer streams are pushed towards these ECPs 122 through the Producer’s ECP 122 (e.g., ECP3) at the maximum rate supported by the Producer (W3 ≤ Wmax). At (1), the Consumers (C1, C2) update their ECPs 122 (ECP1, ECP2) with their bandwidth and latency requirements (which may depend on the application scenario, e.g., frame generation rate), and other performance measures and estimates. At (2), the edge server 120 (Edge1) estimates the supported rate by each Consumer, Wc, requesting the content from the same Producer.
[0089] As described above with respect to FIG. 4, SECP 400 may help with achieving optimal performance. For example, if multiple end hosts require similar rate streams, transcoding once would be sufficient, rather than repeating the transcoding multiple times for different end hosts. Thus, the Producer side ECP 122 may send the data stream at a maximum rate, and the SECP 400 transcodes based on the requirements of each Consumer request received from its ECPs 122. The edge server 120 (Edge1) may also estimate the acceptable latency values for the content received from the Producer for each Consumer. Edge1 may further determine the transcoding level (from a set of available levels) to meet the delivery deadline for the data stream while ensuring the best service quality for different Producers (e.g., vehicles) at different rates. Prior estimates based on previous observations may be used to minimize decision timing (a mapping database may be used). For a Consumer, at its maximum supported rate, Wc(i), the transcoded rate WT(i) should be less than Rc(i).
[0090] In sample embodiments, the latency requirements (e.g., based on the speed of a vehicle at the setup of the connection) may be expressed in two ways:
Synchronized clock case: current time + latency(transcoding) + latency(transmission) + latency(scheduling) + latency(propagation) < deadline(usability); and
Relative latency case: latency(transcoding) + latency(transmission) + latency(scheduling) + latency(propagation) < expected(delivery deadline).
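The two latency cases above reduce to simple feasibility checks. A minimal sketch, with function and parameter names chosen for illustration:

```python
def deadline_met_synchronized(now, transcoding, transmission, scheduling,
                              propagation, usability_deadline):
    """Synchronized-clock case: absolute check against the usability
    deadline, using the current (synchronized) time."""
    return (now + transcoding + transmission + scheduling
            + propagation) < usability_deadline


def deadline_met_relative(transcoding, transmission, scheduling,
                          propagation, expected_delivery_deadline):
    """Relative-latency case: the summed latency budget must stay within
    the expected delivery deadline, with no shared clock required."""
    return (transcoding + transmission + scheduling
            + propagation) < expected_delivery_deadline
```

Edge1 would evaluate such a check for each candidate transcoding level and discard levels whose transcoding latency makes the inequality fail.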
[0091] In either case, Edge1 chooses the transcoding level based on these requirements and sends the data stream within the identified time constraint as the vehicles move. A periodic and/or dynamic update of these requirements may be provided through control message signaling from the Consumer side. The adaptation rate depends on bandwidth variations, or expectations based on movement patterns, etc., and is carefully chosen while keeping in mind the tradeoffs associated with signaling overhead versus estimation accuracy. Edge1 further shares the content with the matching edge computing nodes (e.g., ECP1 and ECP2 having sessions with the corresponding clients), assuming a common edge computing node (e.g., Edge1) is used by multiple ECPs 122 requesting the same content. Otherwise, the ECPs 122 act independently to service their own client’s needs using information for end-user decision-making. Finally, at (3), the edge computing node Edge1 pushes the data stream downstream to C1, C2 using Push() packets Push(connection ID, named stream ID, named content) at bandwidths W1 and W2 of C1 and C2, respectively. In this example, W1 and W2 are less than or equal to W3. This approach also enables caching for quick recovery at the Consumer.
[0092] In each of the embodiments described herein, if Consumers cannot support certain delivery rates, the Producers may reduce the rate at which they transmit data. Also, if the end hosts, rather than the edge server, perform processing on the data (for instance, to merge point clouds), then the Producer end host also may reduce the rate at which it transmits data. On the other hand, if some data processing is performed at the edge server, then the Producer end host may have no choice but to transmit at the maximum rate available to the edge server based on the type and amount of data that the Producer generates.
[0093] FIG. 11 illustrates a flow chart of methods of implementing adaptive push streaming in a network having multiple Producer nodes, multiple Consumer nodes, and multiple edge servers in a sample embodiment. For example, as illustrated in the above sample embodiments, the methods described herein may be implemented in an edge network servicing autonomous vehicles and/or drones. In sample embodiments, the adaptive push streaming method is implemented by software on an edge server; however, it will be appreciated that the software may also be implemented in a Producer and/or a Consumer node in further sample embodiments.
[0094] As illustrated in FIG. 11, the adaptive push streaming method is implemented on a processor that performs operations including receiving (1100) from a Producer node one or more streams of data having a contextualized name including resource constraints. The processor further receives (1110) a wireless capacity measurement from a Consumer node that has subscribed to at least one of the streams of data. From the wireless capacity measurement from the Consumer node, the processor determines (1120) a bit-rate at which to send the at least one stream to the Consumer node via the network and pushes (1130) the at least one data stream to the Consumer node via the network. However, when the processor receives multiple streams of data from the Producer node and other Producer nodes in the network, the processor optionally may further prioritize (1140) the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and at what rate to transmit the data stream.
[0095] Consistent with the prioritization and according to wireless capacity measurements for Consumer nodes that have subscribed to the data stream, the processor optionally may further multicast a data stream to multiple Consumer nodes or unicast the data stream to the Consumer node (1150). The multicast may be a dynamic multicast of the data stream to multiple Consumer nodes where the transmission characteristics of the multicast change on the fly according to channel conditions.
The multicasting or unicasting may further include estimating a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream, estimating acceptable latency values for each Consumer node subscribed to the data stream, determining a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream, and pushing the data stream to each Consumer node subscribed to the data stream. Also, in order to accommodate the transmission characteristics determined from the wireless capacity measurement from the Consumer node, the processor may compress a data stream before transmitting the data stream to the Consumer node.
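The per-Consumer rate determination and multicast/unicast decision described above might be sketched as follows. Grouping Consumers whose capped rates coincide into a shared multicast is one plausible reading of the method; all names are illustrative assumptions:

```python
def adaptive_push(consumer_measurements, max_producer_rate):
    """One pass of the push method of FIG. 11: determine a bit-rate per
    Consumer from its wireless capacity measurement (step 1120), capped
    at the Producer-side maximum, then push each group (steps 1130/1150),
    multicasting where several Consumers share the same rate."""
    rate_groups = {}
    for consumer, capacity in sorted(consumer_measurements.items()):
        rate = min(capacity, max_producer_rate)
        rate_groups.setdefault(rate, []).append(consumer)
    return [("multicast" if len(group) > 1 else "unicast", rate, group)
            for rate, group in rate_groups.items()]
```

For instance, two Consumers reporting 8 Mbps against a 10 Mbps Producer maximum would share one 8 Mbps multicast, while a third Consumer at 3 Mbps would receive a separate 3 Mbps unicast.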
[0096] Also, a notification message may optionally be received from an access point (1160) including information relating to network usage at the access point whereby the processor may modify transmission of the data stream based on information in the notification message from the access point. The modified data stream may then be pushed to the one or more subscribing Consumer nodes (1130).
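The modification of an ongoing transmission in response to an access-point notification might look like the following sketch. The "available_fraction" field is a hypothetical stand-in for whatever usage information the notification message carries; the disclosure does not specify its format:

```python
def adjust_rate_for_notification(current_rate, notification):
    """Scale the push rate down when the access point reports reduced
    availability; leave it unchanged when no constraint is reported.
    'available_fraction' is an assumed, illustrative field name."""
    fraction = notification.get("available_fraction", 1.0)
    return min(current_rate, current_rate * fraction)
```

The adjusted rate would then feed back into the transcoding-level selection before the modified stream is pushed to the subscribing Consumer nodes.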
[0097] The systems and methods described herein thus provide adaptive push-based multi-streaming on contextualized name streams by edge server or Producer end hosts as well as contextualized notification and update messages that carry information relating to dynamic bandwidth resource availability. Receiver-driven signaling is used with named stream prioritization at the point of access or service point, and server-to-server signaling provides dynamic bandwidth adaptation at (or through) end hosts. Moreover, data stream prioritization at points of access is provided through policies shared through the contextualized names. Hierarchical localized edge processing may also be provided for optimization of resources such as bandwidth and/or processing resources. These and other advantages of the systems and methods described herein will become apparent to those skilled in the art.
[0098] FIG. 12 is a schematic diagram of an example network device 1200 for providing adaptive push streaming as described herein in sample embodiments. For example, network device 1200 may implement an edge node, a Producer, and/or a Consumer in a network domain. Further, the network device 1200 may be configured to implement the techniques described herein, particularly the method illustrated in the scenarios of FIGS. 1-10 and the software embodiment of FIG. 11.
[0099] Accordingly, the network device 1200 may be configured to implement or support the schemes/features/methods described herein. For instance, the features/methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware. One skilled in the art will recognize that the term network device encompasses a broad range of devices of which network device 1200 is merely an example. Network device 1200 is included for purposes of clarity of discussion but is in no way meant to limit the application of the present disclosure to a particular network device embodiment or class of network device embodiments.
[00100] The network device 1200 may be a device that communicates electrical and/or optical signals through a network, e.g., a switch, router, bridge, gateway, etc. As shown in FIG. 12, the network device 1200 may comprise transceivers (Tx/Rx) 1210, which may be transmitters, receivers, or combinations thereof. A Tx/Rx 1210 may be coupled to a plurality of downstream ports 1220 (e.g., downstream interfaces) for transmitting and/or receiving frames of data from other nodes and a Tx/Rx 1210 may be coupled to a plurality of upstream ports 1250 (e.g., upstream interfaces) for transmitting and/or receiving data frames from other nodes, respectively. A processor 1230 may be coupled to the Tx/Rxs 1210 to process the data streams and/or determine which network nodes to send data signals to. The processor 1230 may comprise one or more multi-core processors and/or memory devices 1240, which may function as data stores, buffers, etc. Processor 1230 may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs).
[00101] The network device 1200 also may comprise a stream processing module 1232, which may be configured to receive and to process data streams as described herein. The stream processing module 1232 may be implemented in a general-purpose processor, a field programmable gate array (FPGA), an ASIC (fixed/programmable), a network processor unit (NPU), a DSP, a microcontroller, etc. In alternative embodiments, the stream processing module 1232 may be implemented in processor 1230 as instructions stored in memory device 1240 (e.g., as a computer program product), which may be executed by processor 1230, and/or implemented in part in the processor 1230 and in part in the memory device 1240. The downstream ports 1220 and/or upstream ports 1250 may contain wireless, electrical and/or optical transmitting and/or receiving components, depending on the embodiment.
[00102] Although the example computing device is illustrated and described as a network node (e.g., edge server), the computing device may be in different forms in different embodiments. For example, the computing device may instead be a smartphone, a tablet, smartwatch, a communications module of an autonomous vehicle, drone, etc., or other computing device including the same or similar elements as illustrated and described with regard to FIG. 12. Devices, such as smartphones, tablets, and smartwatches, are generally collectively referred to as mobile devices or user equipment. Further, although the various data storage elements are illustrated as part of the network node 1200, the storage may also or alternatively include cloud-based storage accessible via a network, such as the Internet or server-based storage.
[00103] Memory 1240 may include volatile memory and/or non-volatile memory. Network node 1200 may include - or have access to a computing environment that includes - a variety of computer-readable media, such as volatile memory and non-volatile memory, removable storage devices and non-removable storage devices. Computer storage includes random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions.
[00104] The network node 1200 may include or have access to a computing environment that includes an input interface, an output interface, and a communication interface. The output interface may include a display device, such as a touchscreen, that also may serve as an input device. The input interface may include one or more of a touchscreen, touchpad, mouse, keyboard, camera, one or more device-specific buttons, one or more sensors integrated within or coupled via wired or wireless data connections to the network node 1200, and other input devices. The network node 1200 may operate in a networked environment using a communication connection to connect to one or more remote computers, such as database servers. The remote computer may include a personal computer (PC), server, router, network PC, a peer device or other common network node, or the like. The communication connection may include a Local Area Network (LAN), a Wide Area Network (WAN), cellular, Wi-Fi, Bluetooth, or other networks.
[00105] Computer-readable instructions stored on a computer-readable medium are executable by the processor 1230 of the network node 1200, such as the stream processing module 1232. The stream processing module 1232 in some embodiments comprises software that, when executed by the processor 1230, performs network processing operations according to the techniques described herein. A hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device. The terms computer-readable medium and storage device do not include carrier waves to the extent carrier waves are deemed too transitory. Storage may also include networked storage, such as a storage area network (SAN).
[00106] Although a few embodiments have been described in detail above, other modifications are possible. For example, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Other embodiments may be within the scope of the following claims.

Claims

What is claimed is:
1. An adaptive push streaming method for use in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers, the method comprising:
a processor receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node;
the processor receiving a network quality measurement from a Consumer node that has subscribed to the at least one stream of data;
the processor determining, from the network quality measurement from the Consumer node, transmission characteristics of the at least one stream to be sent to the Consumer node via the network; and
the processor pushing the at least one data stream to the Consumer node via the network.
2. The method of claim 1, further comprising the processor receiving multiple streams of data from the Producer node and other Producer nodes in the network and prioritizing the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and with what characteristics to transmit the data stream.
3. The method of any preceding claim, further comprising the processor multicasting a data stream to multiple Consumer nodes or unicasting the data stream to the Consumer node according to network quality measurements for Consumer nodes that have subscribed to the data stream.
4. The method of any preceding claim, further comprising the processor pushing the data stream to an edge server that establishes a dynamic multicast of the data stream to multiple Consumer nodes.
5. The method of any preceding claim, further comprising the processor determining from the network quality measurement from the Consumer node whether to compress a data stream before transmitting the data stream to the Consumer node.
6. The method of any preceding claim, further comprising receiving from the Consumer node a notification message including at least an identification of the data stream using the contextualized name to which the Consumer node has subscribed and the network quality measurement.
7. The method of any preceding claim, further comprising receiving a notification message from an access point including information relating to network usage at the access point and modifying transmission of the data stream based on information in the notification message from the access point.
8. The method of any preceding claim, wherein when multiple Consumer nodes have subscribed to a data stream, determining an optimal transmission policy based on available network resources and resources requested by the multiple Consumer nodes and pushing the data stream to each of the multiple Consumer nodes as at least one of a multicast and a unicast data stream.
9. The method of any preceding claim, further comprising estimating a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream, estimating acceptable latency values for each Consumer node subscribed to the data stream, determining a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream, and pushing the data stream to each Consumer node subscribed to the data stream.
10. An adaptive push streaming system for a network comprising a plurality of Producer nodes, Consumer nodes, and edge servers, comprising:
a Producer node that creates at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node;
at least one Consumer node that subscribes to the at least one stream of data and periodically provides a network quality measurement for the at least one Consumer node to the network; and
a Producer-side edge server that receives the at least one stream from the Producer node and the network quality measurement for the at least one Consumer node, determines an appropriate bit-rate at which to send the at least one stream to the at least one Consumer node via the network, and pushes the at least one stream of data to the at least one Consumer node via the network.
11. The system of claim 10, wherein the Producer-side edge server receives multiple streams of data from the Producer node and other Producer nodes in the network and implements shared policies for the contextualized names with the Producer nodes to prioritize the multiple data streams for transmission based on the shared policies, the shared policies establishing what data stream to transmit, when to transmit the data stream, and at what rate to transmit the data stream.
12. The system of claim 10 or claim 11, wherein the Producer-side edge server multicasts or unicasts a data stream according to at least the network quality measurement for the at least one Consumer node that has subscribed to the data stream.
13. The system of any of claims 10-12, wherein the Producer-side edge server aggregates data information received from a Consumer-side edge server to establish a dynamic multicast of a stream of data to multiple Consumer nodes.
14. The system of any of claims 10-13, wherein the Producer-side edge server determines from the network quality measurement from the Consumer-side edge server whether to compress a data stream before transmitting the data stream to the Consumer node.
15. The system of any of claims 10-14, wherein the Consumer-side edge server provides a notification message including at least an identification of a data stream using the contextualized name to which the at least one Consumer node has subscribed and the network quality measurement to the Producer-side edge server.
16. The system of any of claims 10-15, wherein the Producer-side edge server receives a notification message from an access point including information relating to network usage at the access point and modifies transmission of the data stream based on information in the notification message from the access point.
17. The system of any of claims 10-16, wherein when multiple Consumer nodes have subscribed to a data stream, the Producer node determines an optimal transmission policy based on available network resources and resources requested by the multiple Consumer nodes and pushes the data stream to the Producer-side edge server for propagation to each of the multiple Consumer nodes as at least one of a multicast and a unicast data stream.
18. The system of any of claims 10-17, wherein the Producer-side edge server estimates a bit-rate supported by each of the multiple Consumer nodes that have subscribed to the data stream from the Producer node, estimates acceptable latency values for each Consumer node subscribed to the data stream from the Producer node, determines a transcoding level appropriate to meet the acceptable latency values for each Consumer node subscribed to the data stream from the Producer node, and pushes the data stream to each Consumer node subscribed to the data stream from the Producer node.
19. A computer-readable medium storing computer instructions for providing adaptive push streaming in a network comprising multiple Producer nodes, multiple Consumer nodes, and multiple edge servers, that when executed by one or more processors, cause the one or more processors to perform operations comprising:
receiving from a Producer node at least one stream of data having a contextualized name including an identification of resource constraints of the Producer node;
receiving a network quality measurement from at least one Consumer node that has subscribed to the at least one stream of data;
determining from the network quality measurement from the at least one Consumer node a bit-rate at which to send the at least one stream to the at least one Consumer node via the network; and
pushing the at least one data stream to the at least one Consumer node via the network.
20. The medium of claim 19, further comprising instructions that when executed by the one or more processors cause the one or more processors to perform operations including receiving multiple streams of data from the Producer node and other Producer nodes in the network and prioritizing the multiple data streams for transmission based on shared policies with the Producer nodes for the contextualized names to determine what data stream to transmit, when to transmit the data stream, and at what rate to transmit the data stream.
PCT/US2019/046956 2019-08-16 2019-08-16 Adaptive push streaming with user entity feedback WO2021034308A1 (en)