WO2023211442A1 - Adaptive lecture video conferencing - Google Patents

Adaptive lecture video conferencing

Info

Publication number
WO2023211442A1
Authority
WO
WIPO (PCT)
Prior art keywords
presentation
fetch
content
fetch content
live
Prior art date
Application number
PCT/US2022/026666
Other languages
French (fr)
Inventor
Hong Heather Yu
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc.
Priority to PCT/US2022/026666
Publication of WO2023211442A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/403 Arrangements for multi-party communication, e.g. for conferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/401 Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L 65/4015 Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference

Definitions

  • the disclosure generally relates to improving the quality of audio/visual presentations over communication networks.
  • BACKGROUND [0002]
  • The use of videoconferencing applications has expanded considerably in recent years. Typical video conferencing falls into two categories – group video conferences with two or more attendees who can all see and communicate with each other in real time, and online presentations where one or more hosts use audio, visual and text to present information to a large group of attendees. This latter category is referred to herein as lecture-based conferencing or lecture-based presentations. Both categories rely on fast and reliable network connections for effective conferencing and presentations and can suffer reduced quality when network bandwidth between the host and attendees fluctuates or is limited.
  • Lecture-based presentations include both a live presentation by an individual (or “presenter”) and accompanying audio/visual content controlled by the presenter during the live presentation.
  • audio/visual content may take the form of slide presentations including text, foreground and background images, and audio and video.
  • Some video conferences are recurrent conferences with the same participants participating in each recurrent conference.
  • the live presentation is streamed from a host processing device to a number of client processing devices, sometimes using a cloud processing service to improve the quality of each attendee’s experience.
  • Quality of experience (QoE) is a measure of a customer's experience with a service and is one of the most commonly used service indicators to measure video delivery performance.
  • QoE describes metrics that measure the performance of a service from the perspective of a user or viewer.
  • video content is the most bandwidth intensive portion of a lecture-based presentation.
  • Common video related QoE metrics include rebuffering, playback failures, and video startup time.
  • Video content will become even more bandwidth intensive when holographic, three-dimensional, or volumetric video conferencing services are used.
  • Some video services use prefetching and buffering to improve the quality of on demand video streaming; however, prefetching video is not suitable for real-time live video streaming or conferencing services since such streams are often not available ahead of the presentation time.
  • One general aspect includes a computer implemented method of rendering an online presentation having a live component and a stored component.
  • the computer implemented method includes determining pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices.
  • the method further includes generating sync data linking the pre-fetch content and the live component.
  • the method also includes transmitting the pre-fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation.
  • Implementations may include the computer implemented method further including receiving a pre-fetch request from the client processing device, and where the transmitting occurs in response to the request from the client processing device. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the transmitting occurs in response to the request from the network node and the second storage device is provided on the network node. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the transmitting occurs in response to the request from cloud service processing device and the second storage device is provided on the cloud service processing device.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments further including transmitting a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments further including segmenting the pre-fetch content into chunks, wherein the sync data may include markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of the client processing device.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of a network node. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of a server in a cloud service. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method further includes detecting network bandwidth available to the host processing device and transmitting the pre-fetch content based on the available network bandwidth. Implementations may include the computer implemented method of any of the aforementioned embodiments further including performing an adaptive prefetching calculation to maximize available caches at one or more of: a client processing device, a network node, and a cloud service presentation server.
  • Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
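  • As a rough, hedged illustration only, a minimal host-side sketch of this aspect follows; the names, data shapes, and the store() call are assumptions introduced for the example and are not elements of the disclosure.

```python
# Hypothetical sketch of the host-side flow: determine pre-fetch content,
# generate sync data, and transmit it before the presentation starts.
def prepare_and_distribute(presentation, destinations):
    # Determine pre-fetch content within the stored component.
    pre_fetch = [item for item in presentation["stored_items"]
                 if item.get("fetchable", True)]

    # Generate sync data linking each pre-fetch item to the live component.
    sync_data = {index: item["id"] for index, item in enumerate(pre_fetch)}

    # Transmit the pre-fetch content to a second storage device (client,
    # edge node, or cloud presentation server) prior to the presentation.
    for destination in destinations:
        destination.store(pre_fetch, sync_data)

    return sync_data
```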
  • Another aspect includes a computer implemented method of rendering an online presentation having a live component and a stored component.
  • the computer implemented method includes requesting pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices.
  • the method also includes receiving the pre-fetch content via a network prior to the start of the online presentation and receiving sync data linking the pre-fetch content and the live component, receiving the live component via a live stream broadcast.
  • the method also includes rendering the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component at the one or more client devices during the duration of the presentation.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the computer implemented method is performed by a client processing device, and where the pre-fetch content is stored prior to receiving the live component and wherein the method further includes retrieving the prefetch content from storage on the client processing device prior to rendering.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method is performed by a client processing device, and where the pre-fetch content is retrieved prior to receiving the live component, the method further includes retrieving the prefetch content from a network node prior to rendering. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method further includes detecting network bandwidth available to the client processing device and receiving the pre-fetch content based on the available network bandwidth. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method is performed by a client processing device, and where the pre-fetch content is retrieved prior to receiving the live component, the method further including retrieving the prefetch content from a cloud server via a network prior to rendering.
  • Implementations may include the computer implemented method of any of the aforementioned embodiments wherein receiving sync data includes receiving sync markers in the pre-fetch content; and receiving corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre-fetch content should be rendered with data from the live component.
  • Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • One general aspect includes a processing system in a network.
  • the processing system also includes: a processor readable storage medium; a processor device including a first non-transitory memory storage that may include instructions; and one or more processors in communication with the memory, where the one or more first processors execute the instructions to render an online presentation having a live component and a stored component.
  • the one or more processors execute instructions to: determine pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices; generate sync data linking the pre-fetch content and the live component; and transmit the pre-fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation.
  • Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to receive a pre-fetch request from one or more of a client processing device, a network processing device, and a cloud service processing device, and where the system transmit occurs in response to the request from the one or more devices. Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to transmit a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered.
  • Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to segment the pre-fetch content into chunks, and where the sync data may include markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered. Implementations may include one or more of the processing systems wherein the second storage device is one of: a storage device of the client processing device, a storage device of a network node, and a storage device of a server in a cloud service.
  • Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to: detect network bandwidth available to the host processing device and transmit the pre-fetch content based on the available network bandwidth. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • One general aspect includes a user equipment device.
  • the user equipment device also includes: a processor readable storage medium; a processor device including a first non-transitory memory storage that may include instructions; and one or more processors in communication with the memory.
  • the one or more first processors execute the instructions to: request pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices; receive the pre-fetch content via a network prior to the start of the online presentation; receive sync data linking the pre-fetch content and the live component; receive the live component via a live stream broadcast; and render the online presentation by combining the live component and at least a portion of the pre- fetch content in sync with the live component at the one or more client devices during the duration of the presentation.
  • Implementations may include the user equipment device of any of the foregoing embodiments where the one or more processors store the pre-fetch content in a local storage device prior to receiving the live component, the one or more processors further retrieve the prefetch content from storage on the client processing device prior to the render. Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors retrieve the pre-fetch content prior to receiving the live component, and/or the one or more processors retrieve the prefetch content from a network node prior to rendering.
  • Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors retrieve the pre-fetch content prior to receiving the live component, and/or the one or more processors retrieve the prefetch content from a cloud server via a network prior to rendering. Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors: receive sync markers in the pre-fetch content; and receive corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre-fetch content should be rendered with data from the live component.
  • Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors detect network bandwidth available to the client processing device and receive the pre-fetch content based on the available network bandwidth. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • FIG.1 illustrates an interface of an online conference application showing a first example of audio/visual information which may be presented in the interface.
  • FIG. 2 illustrates another interface of an online conference application showing a second example of audio/visual information which may be presented in the interface.
  • FIG. 3 illustrates an exemplary network environment for implementing a lecture-based video presentation.
  • FIG.4 illustrates a general method in accordance with the technology.
  • FIG.5A illustrates a series of steps performed by a host processing device, client processing devices and network nodes or cloud processing devices, to implement the techniques described herein.
  • FIG.5B illustrates additional details regarding the performance of certain portions of FIG.5A.
  • FIG.5C illustrates a variation on the method of FIG.5B wherein a question- and-answer session is enabled.
  • FIG.6 illustrates a series of steps performed by a host processing device, client processing devices and network nodes or cloud processing devices in an embodiment wherein the pre-fetch data comprises predominantly text data.
  • FIG.7 illustrates a series of steps performed by a client processing device and network nodes or cloud processing devices to retrieve and render attendee headshots.
  • FIG.8 illustrates data flow in accordance with the steps shown in FIG.7.
  • FIG. 9 illustrates the effect of a proactive pre-fetch algorithm relative to a host, cloud service, edge node, and the client, for various chunks of streamed data.
  • FIGs.10A and 10B illustrate two alternatives of timing diagrams illustrating both the authentication timing of the client device and when prefetch data may be sent to the client device.
  • FIG.11 is a block diagram of a network processing device that can be used to implement various embodiments.
  • FIG.12 is a block diagram of another network processing device that can be used to implement various embodiments.
  • FIG. 13 is a block diagram of a network device that can be used to implement various embodiments herein.
  • DETAILED DESCRIPTION [0031] The present disclosure and embodiments address performance improvements for lecture-based video conferencing.
  • Lecture-based presentations include a live component and a pre-created, stored component generally meant to accompany the live presentation.
  • pre-fetch data comprising portions of the presentation content (the stored component) from the lecture-based presentation are distributed by the host and cached in the client devices, cloud devices and/or network nodes.
  • an intelligent distribution/control algorithm may be used to determine caching requirements at each device.
  • FIG. 1 illustrates an interface 100 of an online conference application showing a first example of audio/visual information which may be presented in the interface.
  • Interface 100 includes a presenter window 110, a media display window 120 and attendee display windows 130 showing the connected attendees viewing the presentation.
  • the media display window is showing text and is adjacent to the presenter window 110, but the placement of the windows may differ in various embodiments. It should be understood that in embodiments, either the media display window or the presenter window may occupy the entire screen, or the portion of the interface currently occupied by both windows 110 and 120.
  • the presenter window 110 may show a live action, real time video of the presenter (the speaker or lecturer), while the information in the media display window may change during the course of the presentation to include different text slides, images, and videos which accompany the presentation being made by the presenter.
  • the content of the media display window may be stored in presentation data which is displayed in the window under the control of the presenter.
  • the presentation data may comprise a series of slides, each of which includes text, images, graphics, or video, and which is displayed when selected by the presenter at an appropriate time in the presenter’s lecture.
  • audio/visual information 135 may be a motion video with accompanying text 145 provided in the same window (as shown) or different windows in the interface.
  • Although eight attendees are illustrated in windows 130, 230, any number of users may attend the presentation. The attendees may or may not have the ability to provide feedback to the presenter, and such feedback may be through text-based messaging or chat services in the presentation application or through audio feedback.
  • FIG.3 illustrates an example of a network environment for implementing a lecture-based video presentation.
  • Environment 300 includes a host processing device 310 which is associated with the presenter of a lecture-based video conference. Also illustrated are three exemplary client devices, including a tablet processing device 312, a desktop computer processing device 314 and a mobile processing device 316. It should be understood that there may be any number of processing devices operating as client devices for attendees of the presentation, with one client device generally associated with one attendee (although multiple attendees may use a single device).
  • host processing device 310 is illustrated as a notebook or laptop computer processing device, any type of processing device may fulfill the role of a host processing device. Examples of processing devices are illustrated in FIGS.11 – 12.
  • Also shown in FIG.3 are a plurality of network nodes 320a- 320d and 330a – 330d and presentation servers 340a – 340d.
  • the presentation servers 340a – 340d may be part of a cloud service 350, which in various embodiments may provide cloud computing services which are dedicated to the online conferencing application.
  • Nodes 320a - 320d are referred to herein as “edge” nodes as such devices are generally one network hop from devices 310, 312, 314, 316.
  • Each of the network nodes may comprise a switch, router, processing device, or other network-coupled processing device which may or may not include data storage capability, allowing pre-fetched presentation data to be stored in the node for distribution to devices utilizing the presentation application.
  • additional levels of network nodes other than those illustrated in FIG 3 are utilized.
  • fewer network nodes are utilized and in some embodiments, comprise basic network switches having no available caching memory.
  • the presentation servers are not part of a cloud service but may comprise one or more presentation servers which are operated by a single enterprise, such that the network environment is owned and contained by a single entity (such as a corporation) where the host and attendees are all connected via the private network of the entity. Exemplary node devices are illustrated in FIGs.
  • a host device 310 provides presentation data and the live presentation data for the lecture-based presentation to each of the clients through one or more of the network nodes 320a- 320d, 330a – 330d and/or the cloud service 350.
  • Each of the edge nodes, client devices and presentation servers in the cloud service 350 are connected by one or more public and/or private networks.
  • FIG. 3 illustrates how the flow of presentation data may be provided to client devices 312, 314, 316.
  • the host device 310 is a device used by one or more presenters to provide the online lecture-based presentation.
  • Each lecture-based presentation includes at least a live component, where the presenter provides a lecture or live presentation, and usually includes a pre-prepared, stored component such as a slide presentation which is meant to accompany the live presentation.
  • the pre-prepared or stored component of the presentation is stored on the host device and may be forwarded as pre-fetch data 375 in accordance with embodiments herein.
  • live presentation data 365 is streamed to the client devices. As illustrated in FIG.3, live presentation data 365 and pre-fetch data are distributed from the host device 310 via the network nodes to the client devices 312, 314, 316.
  • Prefetch data can be stored for access by the client devices on the client devices themselves, on the edge nodes 320a - 320d, or by the cloud service 350 on presentation servers 340a - 340d, or on a combination of such devices.
  • presentation data 375 may be sent by host 310 through the host processing device’s network interface and directed to the client computers through, for example, a cloud service 350, including one or more presentation servers 340a - 340d.
  • FIG. 4 illustrates a general method in accordance with the technology.
  • the bandwidth available for the host and client processing devices is detected. As illustrated below, bandwidth detection may occur by the host and/or client processing devices, and/or the cloud service.
  • prefetching authorization policies are defined.
  • the policies may be defined at the host in the conferencing application, or in the cloud service.
  • Admission policies define whether and to what extent client devices are able to obtain pre-fetch data.
  • the policy may provide for a one-time only pre-fetch permission, or in cases where the lecture-based presentation is a recurrent event, may define a length of time that prefetching of data is allowed.
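  • A small illustrative sketch of how such an admission policy might be evaluated follows; the policy fields ("type", "window_end") and client_state are assumptions, not terms from the disclosure.

```python
import time

# Illustrative admission-policy check: a one-time policy permits a single
# pre-fetch per client; a recurrent-event policy permits prefetching for a
# defined length of time.
def prefetch_allowed(policy, client_state, now=None):
    now = now if now is not None else time.time()
    if policy["type"] == "one_time":
        return not client_state.get("has_prefetched", False)
    if policy["type"] == "recurrent":
        return now < policy["window_end"]
    return False
```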
  • presentation content is classified into pre-fetch content and non-fetchable content.
  • Pre-fetch content comprises any type of audio/visual or text data which is available before the presentation and which is capable of being distributed and cached in either the client devices themselves, nodes 320a - 320d and 330a – 330d, and/or the cloud service 350.
  • Non-fetchable content includes any audiovisual or text-based data that should be streamed in real-time, such as the lecture itself, or which is defined by the host or the application as being data which is not allowed to be prefetched to a cache.
  • Non-fetchable content may also include any data which could comprise pre-fetch content, but which based on the configuration of the host or application is not allowed to be fetched.
  • a copyrighted image may be prevented from being pre-fetched and only allowed to be displayed in the lecture based on the license available from the copyright owner or fair-use considerations.
  • classification of pre-fetch content and non-fetchable content can be based on rules, features, content, or other factors.
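  • A minimal rule-based sketch of this classification step follows; the item flags ("live_only", "no_prefetch", "license_restricted") are assumptions standing in for whatever rules, features, or content factors are actually applied.

```python
# Illustrative classification into pre-fetch and non-fetchable content:
# content available before the presentation and not restricted becomes
# pre-fetch content; everything else must be streamed in real time.
def classify_content(items):
    pre_fetch, non_fetchable = [], []
    for item in items:
        restricted = (item.get("live_only") or item.get("no_prefetch")
                      or item.get("license_restricted"))
        (non_fetchable if restricted else pre_fetch).append(item)
    return pre_fetch, non_fetchable
```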
  • the pre-fetch content is analyzed and prepared to be made available for prefetching.
  • pre-fetch content may be segmented into segments referred to herein as slices or chunks of data.
  • the pre-fetch content is distributed in segments and reassembled once it is received by the client processing devices.
  • the fetchable content may be compressed and/or encrypted before being distributed.
  • a slide presentation such as that prepared in a slide presentation program such as Microsoft PowerPoint may accompany a lecture.
  • Prefetch data in the form of one or more of the slides which accompany a lecture may be segmented into slices based on predefined policy and/or network conditions.
  • Slices of pre-fetch data may contain one or more video frames, image slides of a presentation, or text and graphics, and are segmented based on predefined rules or bandwidth prediction.
  • The slices may include the original presentation, the presentation at lower visual resolutions, or a modified, simplified version of the data, such as a presentation with a simplified background. Multiple versions/streams of slices may be generated. For content security, visible and/or invisible watermarks as well as content encryption may be implemented.
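  • A sketch of the slicing step follows; the chunk-size policy is an assumption, and encryption and watermarking (which would be applied per slice) are omitted.

```python
import zlib

# Illustrative preparation of pre-fetch content: split a byte payload into
# slices of a chosen chunk size and optionally compress each slice.
def make_slices(payload: bytes, chunk_size: int, compress: bool = True):
    slices = []
    for offset in range(0, len(payload), chunk_size):
        chunk = payload[offset:offset + chunk_size]
        slices.append(zlib.compress(chunk) if compress else chunk)
    return slices
```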
  • a sync marker for each content slice is generated and sent to the client device such that when the live presentation data is received, corresponding sync markers in the live data of the lecture-based presentation may be used to allow the content slices (or portions) of the pre-fetch data to be rendered at the client device in sync with the live presentation, i.e. at corresponding points in the presentation where the author designed the pre-fetch content to be rendered.
  • Content prefetching may end when presentation starts or continue during presentation based on bandwidth management rules and policy.
  • each sync marker may be provided as metadata in a network frame transmitted between devices.
  • 8 bits (1 byte) of frame metadata is reserved for sync marks, allowing for up to 256 segments (slices or chunks) of data to be synchronized during a presentation.
  • the corresponding sync mark is forwarded to the system (cloud, edge, and then to the receivers).
  • Clients/receivers shall use this sync mark to search in the local cache, then the cache in the edge, and then in the cloud to find a matching sync mark. If a matching sync mark is found, the corresponding presentation slide shall be displayed on the client/receiver device.
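  • A sketch of how the one-byte sync mark described above could be carried in frame metadata follows; the frame layout and function names are assumptions.

```python
# Illustrative one-byte sync mark: the sender packs a segment index into one
# byte of frame metadata; the receiver reads it back for cache lookup.
def pack_frame(segment_index: int, frame_payload: bytes) -> bytes:
    assert 0 <= segment_index < 256        # 8 bits -> up to 256 segments
    return bytes([segment_index]) + frame_payload

def unpack_frame(frame: bytes):
    return frame[0], frame[1:]             # (sync mark, frame payload)
```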
  • prefetching may be scheduled by the network nodes 320a - 320d, cloud service 350 and/or client processing devices 312, 314, 316. Prefetching may occur at optimal times or according to the prefetching scheduling algorithm discussed herein.
  • watermarks and encryption keys are generated. Watermarking and encryption may provide content security enforcement, as discussed above.
  • sync marks are generated. The sync marks allow the content which is prefetched to be reconstructed at the client device and displayed at the correct time during the lecture-based presentation.
  • forwarding and retrieval of the pre- fetch data occurs.
  • FIG.5A illustrates a series of steps performed by a host processing device, a client processing device and edge network nodes or cloud processing devices, respectively, over a course of time from T0 to T3.
  • each respective device is preparing to participate in the lecture-based presentation.
  • a host processing device will detect and predict bandwidth available for sending pre-fetch and live data to client processing devices of attendees of the lecture-based presentation.
  • Analysis of the presentation content may be performed by the host processing device or, in other embodiments, by the presentation servers of the cloud service.
  • the presentation data is stored on a storage device in the host processing device 310 (illustrated in FIG.12) enabling the host processing device to perform the functions described at 515.
  • step 515 is performed at the host processing device.
  • the presentation data may be submitted to the cloud service (or an edge node or another processing device) to perform step 515, and the presentation data may be stored on a storage device of the presentation server of the cloud service.
  • Step 515 comprises the aforementioned analysis of the pre-prepared presentation data to determine which portions of the data may be forwarded and cached on client devices, edge nodes and/or presentation servers for use when rendering the live presentation at the client device.
  • the host will prepare pre-fetch data in the form of data slices or chunks which may then be distributed and stored at the client, edge nodes or cloud service.
  • the data slices may be encrypted, compressed and/or watermarked at 520.
  • Encryption and watermarking may be used to ensure that the data saved on client devices is protected, while compression may be used to make data transfer more efficient and reduce storage space at the pre- fetch destination.
  • the sync markers (and/or time stamps) are generated. Sync markers are used between the host and client devices to signal when pre-fetch data should be rendered during the duration of the lecture-based presentation. As discussed below, when live presentation data is sent to client devices, each device uses the marker in the pre-fetch data and a corresponding marker sent in the live presentation data to determine when to render the pre-fetch data in the presentation. Time stamps and/or packet sequence numbers may be utilized to reconstruct audio and video content which is part of the pre-fetch content and where such audio and video is itself broken into chunks.
  • the host distributes the pre-fetch data to the clients, edge nodes or cloud service responsive to pre-fetch requests received.
  • Each client device may detect and predict the bandwidth available for participation in the presentation at 545. This information may be provided to the host device or used, as discussed below, for additional processing by the client device for head shot processing and pre-fetch scheduling.
  • the client will send a pre-fetch request to the host or the cloud service. In one embodiment, the prefetch request may be answered by the cloud service or the edge node (at 585) or the host device (at 530).
  • the host will receive and analyze pre-fetch requests.
  • the client device may receive pre- fetch content at 555 and await the beginning of the lecture-based presentation.
  • the edge nodes and cloud service may cache the prefetch data having received the data at 580 and distribute the pre-fetch content at 585.
  • the edge nodes or cloud service obtain pre-fetch data after the client device requests such data and in other embodiments, the edge nodes and cloud service may cache the pre-fetch data based on caching limitations of the client devices.
  • the edge and cloud devices may determine that the edge node and/or cloud devices do not need to cache pre-fetch data because the clients all have sufficient bandwidth and memory available to store all pre-fetch data needed for the presentation.
  • During the content pre-fetch and live stream, the content may be cached in the host, the edge, and some or all of the client devices, or only the host and the client devices.
  • the content may only be cached in the host and the client devices per a conference management policy.
  • the host broadcasts the lecture-based presentation at 535 using a streaming format. At least a portion of the lecture-based presentation includes a live presentation by a presenter during which the presenter will display portions of the prefetch data. The lecture-based presentation continues until the presentation ends at 590. Optionally, a question-and-answer session 587 may occur, during which portions of the prefetch content may need to be re-displayed. This is discussed below with respect to FIG.5C. [0048] Commensurate with the broadcast at 535, sync markers and/or timestamps are distributed by the host at 540.
  • a sync marker for each content slice uniquely identifies the content slice to allow client devices to know which slice should be presented during the duration of the lecture-based presentation.
  • Corresponding sync markers are included in the live stream in order to identify to the client device the correct time to render the pre-fetch content.
  • the time stamp or marker may be sent to the attendee client device at 565, which also receives the non-prefetch content and the broadcast at 570.
  • Content prefetching may end when the presentation starts or continue during presentation based on bandwidth management rules, caching capabilities and policies.
  • each client device decodes any pre-fetch content and renders the pre-fetch content and broadcast content of the lecture-based presentation.
  • FIG.5B illustrates additional details regarding the performance of steps 535, 540, 565 and 575 illustrated in FIG. 5A.
  • the broadcast of a lecture-based presentation comprises streaming live data to the client devices at 535a.
  • the client devices render the live (streamed) content in the conference application.
  • a presenter may change the presented material, thereby affecting the information which should be displayed on the client devices.
  • the host device will detect which prefetched content is being displayed by the presenter and send a sync marker at 540b to client processing devices which corresponds to the correct pre-fetch data to display, in order to alert the client processing devices that the prefetch data should now be displayed on the client device.
  • the client device will receive the sync marker at 565a and determine where the prefetch data is located. Initially, the client will look to its own local cache at 575b and if the content is in the local cache, the client will render the prefetch content in time with the live presentation at 575g. If the content is not in the local cache at 575b, then the client will check the edge nodes at 575c.
  • If the content is in the edge node cache, the client will request the prefetch data from the edge node cache, and the prefetch data will be returned to the client and rendered with the live presentation at 575g. If the content is not in the edge node cache at 575c, then at 575d the client will check the cloud service cache and, if the data is in the cloud service cache, retrieve the data from the cloud service and proceed with rendering at 575g. Finally, if the prefetch data is not in the cloud, then the client will send a request for prefetch data to the host at 575h, and when the data is received at 575f, proceed with rendering at 575g.
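  • A sketch of the lookup order just described for FIG. 5B follows; the caches are modeled as dictionaries and request_from_host is a hypothetical callable, both assumptions made for the example.

```python
# Illustrative receiver-side lookup cascade: local cache, then edge node
# cache, then cloud service cache, then a request to the host as last resort.
def resolve_prefetch(sync_mark, local_cache, edge_cache, cloud_cache, request_from_host):
    for cache in (local_cache, edge_cache, cloud_cache):
        if sync_mark in cache:
            return cache[sync_mark]
    data = request_from_host(sync_mark)   # nothing cached: ask the host
    local_cache[sync_mark] = data         # keep locally for possible re-display
    return data
```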
  • FIG. 5C illustrates a variation on FIG. 5B wherein a question-and-answer session 587 occurs.
  • the host processing device will enable a two-way livestream between the host and client devices, or some other form of question mechanism allowing attendees to present questions to the presenter of the lecture-based presentation.
  • attendees may present questions via a built-in chat application to the conferencing application.
  • live audiovisual content from each attendee may be presented when the attendee has a question.
  • the questioning access can be controlled using access controls on the host processing device.
  • the livestream broadcast will be distributed between the host and client devices.
  • the data will be two-way between the client and host devices.
  • the presenter display may change at 537.
  • When the host jumps to the relevant portion of the presentation to be presented, the corresponding sync mark is forwarded to the client devices.
  • Client devices then use this sync mark to search in local cache, edge node cache and the cloud service to find a matching sync mark. If a matching sync mark is found, the corresponding portion of pre-fetch presentation data is displayed on the client processing device.
  • FIG. 5A may be utilized with any type of audio/visual data accompanying a lecture-based presentation.
  • some lecture-based presentations are accompanied by presentation slides which comprise a common background with different text on each slide.
  • the pre-fetch method described herein can be adapted for this use case.
  • Predominantly text lecture presentations contain mostly text with limited images or audiovisual content other than the background image.
  • For such presentations, background and foreground separation is conducted and the background can be prefetched or sent once in real time at the beginning.
  • Foreground text content can be sent in real time, along with metadata that indicates the style of the text, as part of the non-fetchable content. Text data is substantially smaller and requires less bandwidth to transmit.
  • An embodiment of the pre-fetch method wherein the pre-fetch data comprises predominantly text data is illustrated with respect to FIG. 6.
  • reference numbers which are the same as those in FIG.5A indicate like steps.
  • bandwidth detection and prediction occur at both the host at 505 and the client at 545, presentation content is analyzed in the host at 515, pre- fetch requests are made at 550 and received at 510.
  • the result of the analysis of presentation content is a determination that the content is predominantly text with one or more background images (as present in a slide presentation).
  • the slices of the presentation are prepared by extracting and packaging the presentation background. Backgrounds can comprise a static image or an image (or video) file containing motion.
  • a determination is made as to whether the image is static or not. If so, the background can be distributed before the presentation begins at 630. If the background is not static, then markers and timestamps are generated at 525 and the markers and timestamps sent to the client at 540. In one embodiment, text content is sent along with the presentation broadcast at 535.
  • text content can be packaged (as in FIG.5A at 520) and distributed with (or before or after) the background data before the presentation starts.
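  • A sketch of the background/foreground split for text-heavy slide decks follows; the slide fields and names are assumptions made for illustration.

```python
# Illustrative background/foreground separation: the shared background goes
# into the pre-fetch part (sent once before the talk), while the much
# smaller per-slide text plus style metadata travels with the live stream.
def split_text_presentation(slides, background):
    prefetch_part = {"background": background}
    realtime_part = [
        {"slide": index, "text": slide["text"], "style": slide.get("style", {})}
        for index, slide in enumerate(slides)
    ]
    return prefetch_part, realtime_part
```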
  • Another type of presentation data which benefits from the disclosed pre-fetching techniques is attendee video processing. Representations of attendees shown in the attendee display windows 130 may comprise live video or static images.
  • Other types of meeting applications utilize virtual representations of meeting participants instead of actual participant video. In such applications, meeting participant headshot data can be processed, analyzed, and modeled prior to the lecture-based presentation. Processing and modeling may be conducted at each client device, at network nodes and/or at the cloud service, depending on computation resource availability. Headshot models may be sent to the client devices of other meeting participants along with the live video streams of the lecture-based presentation.
  • a segment of a headshot video stream is cached in the recipient device along with a headshot model.
  • the headshot video stream may be processed, analyzed, and selectively cached and utilized to provide a better user experience.
  • headshot modeling may take advantage of historical data and reinforced learning algorithms.
  • a potentially jittery video stream of an attendee headshot may be replaced with cached data such as a 2D image, a video segment, or a virtual rendering of the attendee.
  • a pre-fetched image may be a 2D image, a segment of a video stream, or an intelligently generated virtual representation of the attendee.
  • the headshot may be intelligently selected and processed for smooth playback and a better user experience. For the best end user experience, intelligent motion analysis, motion synthesis, and lip sync algorithms may be implemented. Upon network restoration, live streaming may be resumed. For content security, prefetched data and 3D models may be cached or stored in an encrypted or watermarked form. [0055] These headshot video processing techniques can also be used with presenter video to conserve bandwidth, generating a representation of the presenter while accompanying audio continues to be streamed and while pre-fetch content is synchronized to the audio stream. [0056] FIGs.7 and 8 illustrate one embodiment where headshot processing and pre-fetching can be utilized to improve user experience in a lecture-based presentation.
  • FIG. 7 illustrates a series of steps performed by a client processing device and network nodes or cloud processing devices, respectively, over the course of time from T0 to T2 (during the course of a presentation) to retrieve and render attendee headshots in a case where the user experience may be impacted by factors such as network bandwidth.
  • FIG.8 illustrates data flow in accordance with the steps shown in FIG.7.
  • the presenter or other system configuration administrator may establish a conference-configured preference for attendee display which defines whether the attendee display will be allowed at all, or whether live video, a rendered virtual character of the attendee or a two-dimensional image of the attendee will be displayed in the interface 100, 200.
  • bandwidth detection and prediction occur as in FIGs.5A and 6.
  • If sufficient bandwidth is detected at 710, the device will proceed with decoding and rendering the lecture-based presentation at 575, including, for example, any attendee headshot video. If the bandwidth availability detected is not sufficient at 710, then at 715 a determination may be made as to whether the meeting is a recurrent meeting.
  • A recurrent meeting may be, for example, a lecture series in an educational environment such as a school or a monthly family meeting. If the meeting is a recurrent meeting at 715, then the method will attempt to locate attendee headshot data at 730. Attendee headshot data may be provided by the network nodes which have performed headshot processing analysis and modeling at 760 and have rendered headshot images and/or video segments at 765. The headshot data may also have been stored in a cache at the client devices based on a previous meeting. If the presentation is not a recurrent meeting at 715, then at 720, a client device will request headshot data from the nearest node or the host device directly.
  • the headshot data may be provided by one of the network nodes or the host device.
  • the client receives headshot data at 725 and has the data available for insertion if a bandwidth issue is detected.
  • Headshot processing and modeling may occur at 760 as well.
  • Headshot data may be cached and made available at either the edge network nodes or from the cloud service at 765.
  • the client device will decode and render the lecture-based presentation stream at 575.
  • the device continues to monitor network bandwidth and if the bandwidth fluctuates below, for example, a defined threshold, then the client may take corrective action to insert the headshot data into the presentation.
  • the client determines whether headshot data is available.
  • headshot synthesis occurs using the headshot data acquired at 725. Synthesis may comprise generating a virtual character for the user or presenter, creating a two-dimensional static image, and/or creating a short video clip of the user or presenter. If headshot data is available at 745 or following headshot synthesis, the headshot data may be inserted into the presentation stream at 755 to improve the attendee experience by reducing the need for any headshot data to be provided in the live stream itself.
  • the video conference continues at 790 as the client continues decoding and rendering at 575 and checking bandwidth at 735.
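  • A sketch of the bandwidth-triggered headshot fallback described above follows; the threshold value and argument names are assumptions.

```python
# Illustrative headshot fallback: if measured bandwidth drops below a
# threshold, insert the cached or synthesized headshot (2D image, short
# clip, or virtual rendering) in place of the live stream, and return to
# live video once bandwidth recovers.
def select_headshot(live_frame, cached_headshot, bandwidth_kbps, threshold_kbps=500):
    if bandwidth_kbps >= threshold_kbps and live_frame is not None:
        return live_frame          # sufficient bandwidth: keep the live video
    if cached_headshot is not None:
        return cached_headshot     # insert pre-fetched or synthesized headshot
    return None                    # nothing cached: leave the attendee tile empty
```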
  • adaptive or proactive pre-fetch control algorithms may be utilized to control when content pre-fetching occurs, both as a pull from the client and as a push from an edge node or the cloud service.
  • a proactive prefetching algorithm responds to fluctuations in bandwidth by adapting push and pulls of pre-fetch data to bandwidth availability.
  • Figure 9 illustrates the effect of a proactive pre-fetch algorithm relative to a host, cloud service, edge node, and the client, for various chunks of streamed data.
  • the edge node level at the interface between the cloud service and the host device (such as nodes 330a – 330d in FIG. 3) is not illustrated.
  • Bandwidth available to both the client and the host is illustrated as fluctuating over time between high, low and outage levels.
  • scheduled prefetching begins from the host device at 901 prior to time T1 and data may be sent to the cloud and edge layers and eventually the client device prior to presentation rendering beginning at T2.
  • rendering combines both prefetch data 902 and streamed data 903, beginning at T2 for the lecture-based presentation.
  • At T2, as the presentation begins at 901 with data streaming out and, at the client, data streaming in, playback begins coincident with the receipt of the lecture-based presentation streaming data.
  • FIGs.10A and 10B illustrate two alternatives of timing diagrams illustrating both the authentication timing of the client device and when prefetch data may be sent to the client device.
  • FIG. 10A illustrates a use case where an authorization request is transmitted from a client to a cloud service and the cloud service handles the authorization request. In addition, in the embodiment of FIG. 10A, prefetch data is transmitted from the host through the network levels to the client, and the client caches the prefetch data. Once the presentation begins at T1, additional prefetch data may be transmitted between the host and the client.
  • FIG. 10B illustrates an embodiment where an authorization request is sent at time T0 from the client to the host device itself, and authorization is returned from the host to the client.
  • In this embodiment, prefetch data is sent to the edge nodes; once the presentation begins at T1, prefetch data is sent from the edge nodes to the client device for use in rendering the presentation. It should be understood that various combinations of prefetch caching can be utilized.
  • Adaptive pre-fetching may be used to maximize available caches at different levels of a network system. Such adaptive prefetching may be controlled by the host, cloud service or the client devices individually, or jointly. The goal of adaptive prefetching of pre-fetch data is to maximize cache utilization while minimizing content re-fetching, so as to maximize user quality of experience.
  • Client-side adaptive pre-fetching may be accomplished using a naive scheduling algorithm defined over the following quantities: the maximum cache available for prefetched content storage at the client side; the chunk size of fetchable content; the minimum and maximum chunk sizes; the time at which the i-th chunk reaches the client buffer; the deadline for the i-th chunk to be received by the client side for on-time playback; the client-side available bandwidth at time t; the client-side bandwidth upper bound; the client-side consumed bandwidth at time t; the client-side bandwidth available for prefetching at time t; the streaming (e.g., non-fetchable content) bandwidth consumption at time t; a pre-defined available bandwidth threshold for prefetching; and a further pre-defined threshold.
  • Prefetching starts or continues while these thresholds are satisfied.
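  • As a hedged illustration only, the client-side condition can be written with placeholder notation; the symbols below are assumptions introduced for readability, not the notation of the filing.

```latex
% Placeholder notation (assumed, not the filing's symbols):
%   B(t)     client-side available bandwidth at time t
%   U(t)     client-side consumed bandwidth at time t (including streaming use)
%   P(t)     client-side bandwidth available for prefetching at time t
%   C        maximum client-side cache for prefetched content
%   s_j      size of the j-th chunk
%   t_i, d_i arrival time and on-time playback deadline of the i-th chunk
%   \theta   pre-defined bandwidth threshold for prefetching
\[
  P(t) = B(t) - U(t), \qquad
  \text{prefetch chunk } i \text{ at time } t \iff
  P(t) \ge \theta \;\wedge\; \sum_{j \le i} s_j \le C \;\wedge\; t_i \le d_i .
\]
```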
  • For host-side scheduling, the cache space is assumed to be always available and, all other variables being as defined above, the algorithm is given: the host-side available bandwidth at time t; the host-side bandwidth upper bound; the host-side consumed bandwidth at time t; the host-side bandwidth available for prefetching at time t; and the streaming (e.g., non-fetchable content) bandwidth consumption at time t.
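  • A small sketch of the same naive scheduling decision in code, shown from the client side (a host-side version would substitute the host-side quantities just listed); the function and field names are assumptions.

```python
# Illustrative naive prefetch scheduling loop: prefetch the next chunk
# whenever the spare (prefetchable) bandwidth is above the threshold and the
# cache still has room; chunks are assumed ordered by playback deadline.
def naive_prefetch(chunks, cache_capacity, spare_bandwidth, threshold):
    scheduled, used = [], 0
    for chunk in chunks:
        if spare_bandwidth() >= threshold and used + chunk["size"] <= cache_capacity:
            scheduled.append(chunk["id"])
            used += chunk["size"]
    return scheduled
```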
  • prefetching can be controlled using machine learning.
  • FIG.11 is a block diagram of a network processing device that can be used to implement various embodiments. Specific network devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, the network device 1100 may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc.
  • the network device 1100 may comprise a processing unit 1101 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like.
  • the processing unit 1101 may include a central processing unit (CPU) 1110, a memory 1120, a mass storage device 1130, and an I/O interface 1160 connected to a bus 1170.
  • the bus 1170 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or the like.
  • a network interface 1150 enables the network processing device to communicate over a network 1180 with other processing devices such as those described herein.
  • the CPU 1110 may comprise any type of electronic data processor.
  • the memory 1120 may comprise any type of system memory such as static random- access memory (SRAM), dynamic random-access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like.
  • the memory 1120 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the memory 1120 is non-transitory.
  • the memory 1120 includes computer readable instructions that are executed by the CPU 1110 to implement embodiments of the disclosed technology, including the presentation application 1125A which may itself include a rendering engine 1125b, sync marker data 1125c, live stream data cache 1125d and a pre-fetch data cache 1125e.
  • the functions of the presentation application 1125a are described herein.
  • the rendering engine functions as described herein to render the lecture-based presentation data and combine the lecture-based presentation live stream data with the prefetched data.
  • the lecture-based presentation live stream data may be cached in the live stream data cache 1125d in order to improve the rendering experience.
  • the mass storage device 1130 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 1170.
  • the mass storage device 1130 may comprise, for example, one or more of a solid-state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
  • the memory 1120 includes the presentation application 1125A including the rendering engine 1125b, and sync marker data 1125c.
  • the memory may also include a presentation analysis engine 1215a and a sync marker generator 1215b. The presentation analysis engine analyzes the presentation data 1235 stored in the mass storage device 1130 to determine pre-fetch data and create pre-fetch data slices which can then be distributed by the host device.
  • FIG.13 is a block diagram illustrating exemplary details of a network device, or node, such as those shown in the network of FIG.3.
  • a node 1300 may comprise a router, switch, server, or other network device, according to an embodiment.
  • the node 1300 can correspond to one of the nodes 320a- 320d, 330a – 330d.
  • the router or other network node 1300 can be configured to implement or support embodiments of the technology disclosed herein.
  • the node 1300 may comprise a number of receiving input/output (I/O) ports 1310, a receiver 1312 for receiving packets, a number of transmitting I/O ports 1330 and a transmitter 1332 for forwarding packets.
  • I/O input/output
  • the node 1300 can also include a processor 1320 that can be formed of one or more processing circuits and a memory or storage section 1322.
  • the storage 1322 can be variously embodied based on available memory technologies and, in this embodiment, is shown to have a cache 1324, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 1326, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies.
  • Storage 1322 can be used for storing both data and instructions for implementing the data pre-fetch techniques herein.
  • instructions causing the processor 1320 to perform the functions of requesting and caching pre-fetch data for a lecture-based presentation may be included in the pre-fetch controller 1370, the data for which is stored in a pre-fetch cache 1324.
  • Other elements on node 1300 can include the programmable content forwarding plane 1328.
  • the programmable content forwarding plane 1328 can be part of the more general processing elements of the processor 1320 or a dedicated portion of the processing circuitry.
  • the processor(s) 1320, including the programmable content forwarding plane 1328 can be configured to implement embodiments of the disclosed technology described below.
  • the storage 1322 stores computer readable instructions that are executed by the processor(s) 1320 to implement embodiments of the disclosed technology.
  • a connection may be a direct connection or an indirect connection (e.g., via one or more other parts).
  • when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
  • when an element is referred to as being directly connected to another element, there are no intervening elements between the element and the other element.
  • Two devices are “in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them.
  • the technology described herein can be implemented using hardware, software, or a combination of both hardware and software.
  • the software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein.
  • the processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media.
  • computer readable media may comprise computer readable storage media and communication media.
  • Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by a computer. A computer readable medium or media does (do) not include propagated, modulated, or transitory signals.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated, or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
  • some or all of the software can be replaced by dedicated hardware logic components.
  • illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • software stored on a storage device
  • the one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A system and method of rendering an online presentation having a live component and a stored component. Pre-fetch content in the stored component is determined and distributed to client devices, network nodes, or cloud service devices. Synchronization data is included in the distributed content. Upon presentation of the live component of the online presentation, the presentation is rendered at client devices using both data from the live component and the pre-fetch content. The pre-fetch content is synchronized to the live content using the synchronization data.

Description

Adaptive Lecture Video Conferencing Inventors: Hong Heather Yu FIELD [0001] The disclosure generally relates to improving the quality of audio/visual presentations over communication networks. BACKGROUND [0002] The use of videoconferencing applications has expanded considerably in recent years. Typical video conferencing falls into two categories – group video conferences with two or more attendees who can all see and communicate with each other in real time, and online presentations where one or more hosts use audio, visual and text to present information to a large group of attendees. This latter category is referred to herein as lecture-based conferencing or lecture-based presentations. Both categories rely on fast and reliable network connections for effective conferencing and presentations and can suffer quality when network bandwidth between the host and attendees fluctuates or is limited. [0003] Lecture-based presentations include both a live presentation by an individual (or “presenter”) and accompanying audio/visual content controlled by the presenter during the live presentation. Such audio/visual content may take the form of slide presentations including text, foreground and background images, and audio and video. Some video conferences are recurrent conferences with the same participants participating in each recurrent conference. Typically, the live presentation is streamed from a host processing device to a number of client processing devices, sometimes using a cloud processing service to improve the quality of each attendee’s experience. [0004] Quality of experience (QoE) is a measure of a customer's experiences with a service and is one of the most commonly used service indicators to measure video delivery performance. QoE describes metrics that measure the performance of a service from the perspective of a user or viewer. Typically, video content is the most bandwidth intensive portion of a lecture-based presentation. Common video related QoE metrics include rebuffering, playback failures, and video startup time. Video quality will become even more bandwidth intensive when holographic, three dimensional or volumetric video conferencing services are used. [0005] Some video services use prefetching and buffering to improve the quality of on demand video streaming; however, prefetching video is not suitable for real-time live video streaming or conferencing services since such streams are often not available ahead of the presentation time. SUMMARY [0006] One general aspect includes a computer implemented method of rendering an online presentation having a live component and a stored component. The computer implemented method includes determining pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices. The method further includes generating sync data linking the pre- fetch content and the live component. 
The method also includes transmitting the pre- fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. [0007] Implementations may include the computer implemented method further including receiving a pre-fetch request from the client processing device, and where the transmitting occurs in response to the request from the client processing device. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the transmitting occurs in response to the request from the network node and the second storage device is provided on the network node. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the transmitting occurs in response to the request from cloud service processing device and the second storage device is provided on the cloud service processing device. Implementations may include the computer implemented method of any of the aforementioned embodiments further including transmitting a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the sync data may include markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of the client processing device. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of a network node. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the second storage device is a storage device of a server in a cloud service. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method further includes detecting network bandwidth available to the host processing device and transmitting the pre-fetch content based on the available network bandwidth. Implementations may include the computer implemented method of any of the aforementioned embodiments further including performing an adaptive prefetching calculation to maximize available caches at one or more of: a client processing device, a network node, and a cloud service presentation server. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. 
[0008] Another aspect includes a computer implemented method of rendering an online presentation having a live component and a stored component. The computer implemented method includes requesting pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices. The method also includes receiving the pre-fetch content via a network prior to the start of the online presentation and receiving sync data linking the pre-fetch content and the live component, receiving the live component via a live stream broadcast. The method also includes rendering the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component at the one or more client devices during the duration of the presentation. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. [0009] Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the computer implemented method is performed by a client processing device, and where the pre-fetch content is stored prior to receiving the live component and wherein the method further includes retrieving the prefetch content from storage on the client processing device prior to rendering. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein method is performed by a client processing device, and where the pre-fetch content is retrieved prior to receiving the live component, the method further includes retrieving the prefetch content from a network node prior to rendering. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein the method further includes detecting network bandwidth available to the client processing device and receiving the pre-fetch content based on the available network bandwidth. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein method is performed by a client processing device, and where the pre-fetch content is retrieved prior to receiving the live component, the method further including retrieving the prefetch content from a cloud server via a network prior to rendering. Implementations may include the computer implemented method of any of the aforementioned embodiments wherein receiving sync data includes receiving sync markers in the pre-fetch content; and receiving corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre-fetch content should be rendered with data from the live component. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. [0010] One general aspect includes a processing system in a network. 
The processing system also includes: a processor readable storage medium; a processor device including a first non-transitory memory storage may include instructions; and one or more processors in communication with the memory, where the one or more first processors execute the instructions to render an online presentation having a live component and a stored component. The one or more processors execute instructions to: determine pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices; generate sync data linking the pre-fetch content and the live component; and transmit the pre-fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. [0011] Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to receive a pre-fetch request from one or more of a client processing device, a network processing device, and a cloud service processing device, and where the system transmit occurs in response to the request from the one or more devices. Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to transmit a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered. Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to segment the pre-fetch content into chunks, and where the sync data may include markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered. Implementations may include one or more of the processing systems wherein the second storage device is one of: a storage device of the client processing device, a storage device of a network node, and a storage device of a server in a cloud service. Implementations may include one or more of the processing systems wherein the one or more processors execute the instructions to one or more processors execute the instructions to: detect network bandwidth available to the host processing device and transmit the pre-fetch content based on the available network bandwidth. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. [0012] One general aspect includes a user equipment device. 
The user equipment device also includes: a processor readable storage medium; a processor device including a first non-transitory memory storage may include instructions; and one or more processors in communication with the memory. The one or more first processors execute the instructions to: request pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices; receive the pre-fetch content via a network prior to the start of the online presentation; receive sync data linking the pre-fetch content and the live component; receive the live component via a live stream broadcast; and render the online presentation by combining the live component and at least a portion of the pre- fetch content in sync with the live component at the one or more client devices during the duration of the presentation. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. [0013] Implementations may include the user equipment device of any of the foregoing embodiments where the one or more processors store the pre-fetch content in a local storage device prior to receiving the live component, the one or more processors further retrieve the prefetch content from storage on the client processing device prior to the render. Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors retrieve the pre-fetch content prior to receiving the live component, and/or the one or more processors retrieve the prefetch content from a network node prior to rendering. Implementations may include the user equipment device of any of the foregoing embodiments wherein he one or more processors retrieve the pre-fetch content prior to receiving the live component, and/or the one or more processors retrieve the prefetch content from a cloud server via a network prior to rendering. Implementations may include the user equipment device of any of the foregoing embodiments the one or more processors: receive sync markers in the pre-fetch content; and receive corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre-fetch content should be rendered with data from the live component. Implementations may include the user equipment device of any of the foregoing embodiments wherein the one or more processors detect network bandwidth available to the client processing device and receive the pre-fetch content based on the available network bandwidth. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. [0014] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background. 
BRIEF DESCRIPTION OF THE DRAWINGS [0015] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures for which like references indicate the same or similar elements. [0016] FIG.1 illustrates an interface of an online conference application showing a first example of audio/visual information which may be presented in the interface. [0017] FIG. 2 illustrates another interface of an online conference application showing a second example of audio/visual information which may be presented in the interface. [0018] FIG. 3 illustrates an exemplary network environment for implementing a lecture-based video presentation. [0019] FIG.4 illustrates a general method in accordance with the technology. [0020] FIG.5A illustrates a series of steps performed by a host processing device, client processing devices and network nodes or cloud processing devices, to implement the techniques described herein. [0021] FIG.5B illustrates additional details regarding the performance of certain portions of FIG.5A. [0022] FIG.5C illustrates a variation on the method of FIG.5B wherein a question- and-answer session is enabled. [0023] FIG.6 illustrates a series of steps performed by a host processing device, client processing devices and network nodes or cloud processing devices in an embodiment wherein the pre-fetch data comprises predominantly text data. [0024] FIG.7 illustrates a series of steps performed by a client processing device and network nodes or cloud processing devices to retrieve and render attendee headshots. [0025] FIG.8 illustrates data flow in accordance with the steps shown in FIG.7. [0026] FIG. 9 illustrates the effect of a proactive pre-fetch algorithm relative to a host, cloud service, edge node, and the client, for various chunks of streamed data. [0027] FIGs.10A and 10B illustrate two alternatives of timing diagrams illustrating both the authentication timing of the client device and when prefetch data may be sent to the client device. [0028] FIG.11 is a block diagram of a network processing device that can be used to implement various embodiments. [0029] FIG.12 is a block diagram of another network processing device that can be used to implement various embodiments. [0030] FIG. 13 is a block diagram of a network device that can be used to implement various embodiments herein. DETAILED DESCRIPTION [0031] The present disclosure and embodiments address performance improvements for lecture-based video conferencing. Lecture-based presentations include a live component and a pre-created, stored component generally meant to accompany the live presentation. In order to provide a better lecture-based presentation experience for attendees, pre-fetch data comprising portions of the presentation content (the stored component) from the lecture-based presentation are distributed by the host and cached in the client devices, cloud devices and/or network nodes. In embodiments, an intelligent distribution/control algorithm may be used to determine caching requirements at each device. Client processing devices stream and/or download the pre-fetch data segments from the host, nodes and/or the cloud devices, and/or from other clients in a peer-to-peer distribution. The particular configuration of pre-fetch data distribution can be configured by a host, or an application administrator based on defined rules, policies, network bandwidth availability, cache availability, and computation resources. Joint resource optimization algorithms may be used. 
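For illustration only, the following sketch shows one way the distribution/control logic described above might decide where pre-fetch data is cached. The names used here (for example, choose_cache_targets and the policy fields) are hypothetical and are not part of the disclosed embodiments; an actual algorithm may weigh additional factors such as joint resource optimization or computation resources.

```python
from dataclasses import dataclass

@dataclass
class DevicePolicy:
    """Hypothetical per-tier constraints considered when placing pre-fetch data."""
    name: str
    allow_prefetch: bool      # admission policy: may this tier hold pre-fetch data?
    cache_bytes_free: int     # cache availability
    downlink_mbps: float      # predicted bandwidth toward the attendee

def choose_cache_targets(prefetch_bytes: int, tiers: list[DevicePolicy]) -> list[str]:
    """Pick the tiers (client, edge node, cloud) that should cache pre-fetch content.

    A simple greedy rule: prefer tiers closest to the attendee that are both
    permitted to cache and have room; fall back to farther tiers otherwise.
    """
    targets = []
    remaining = prefetch_bytes
    for tier in tiers:                      # tiers ordered client -> edge -> cloud
        if not tier.allow_prefetch or remaining <= 0:
            continue
        if tier.cache_bytes_free > 0 and tier.downlink_mbps > 0:
            placed = min(tier.cache_bytes_free, remaining)
            remaining -= placed
            targets.append(tier.name)
    return targets

# Example: a client with a small cache plus an edge node absorb the pre-fetch data.
tiers = [
    DevicePolicy("client-314", True, 50_000_000, 20.0),
    DevicePolicy("edge-320a", True, 500_000_000, 100.0),
    DevicePolicy("cloud-340a", True, 10_000_000_000, 1000.0),
]
print(choose_cache_targets(120_000_000, tiers))   # -> ['client-314', 'edge-320a']
```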
In other embodiments, machine learning algorithms for bandwidth prediction and prefetch management, especially for recurrent meetings, may be used. [0032] FIG. 1 illustrates an interface 100 of an online conference application showing a first example of audio/visual information which may be presented in the interface. Interface 100 includes a presenter window 110, a media display window 120 and attendee display windows 130 showing the connected attendees viewing the presentation. In this example, the media display window is showing text and is adjacent to the presenter window 110, but the placement of the windows may differ in various embodiments. It should be understood that in embodiments, either the media display window or the presenter window may occupy the entire screen, or the portion of the interface currently occupied by both windows 110 and 120. The presenter window 110 may show a live action, real time video of the presenter (the speaker or lecturer), while the information the media display window may change during the course of the presentation to include different text slides, images, and videos which accompany the presentation being made by the presenter. The content of the media display window may be stored in presentation data which is displayed in the window under the control of the presenter. For example, the presentation data may comprise a series of slides, each of which includes text, images, graphics, or video, and which is displayed when selected by the presenter at an appropriate time in the presenter’s lecture. For example, in the application interface 200 shown in FIG. 2, audio/visual information 135 may be a motion video with accompanying text 145 provided in the same window (as shown) or different windows in the interface. [0033] It should be further understood that although eight attendees are illustrated in windows 130, 230, any number of users may be attending the presentation. The attendees may or may not have the ability to provide feedback to the presenter and such feedback may be though text-based messaging or chat services in the presentation application or though audio feedback. Generally, the ability to provide such feedback may be based on permissions defined by the presenter or other administrator in the online conference application. [0034] FIG.3 illustrates an example of a network environment for implementing a lecture-based video presentation. Environment 300 includes a host processing device 310 which is associated with the presenter of a lecture-based video conference. Also illustrated are three exemplary client devices, including a tablet processing device 312, a desktop computer processing device 314 and a mobile processing device 316. It should be understood that there may be any number of processing devices operating as client devices for attendees of the presentation, with one client device generally associated with one attendee (although multiple attendees may use a single device). In addition, although host processing device 310 is illustrated as a notebook or laptop computer processing device, any type of processing device may fulfill the role of a host processing device. Examples of processing devices are illustrated in FIGS.11 – 12. [0035] Also shown in FIG.3 are a plurality of network nodes 320a- 320d and 330a – 330d and presentations servers 340a – 340d. 
The presentation servers 340a – 340d may be part of a cloud service 350, which in various embodiments may provide cloud computing services which are dedicated to the online conferencing application. Nodes 320a - 320d are referred to herein as “edge” nodes as such devices are generally one network hop from devices 310, 312, 314, 316. Each of the network nodes may comprise a switch, router, processing device, or other network-coupled processing device which may or may not include data storage capability, allowing pre-fetched presentation data to be stored in the node for distribution to devices utilizing the presentation application. In other embodiments, additional levels of network nodes other than those illustrated in FIG 3 are utilized. In other embodiments, fewer network nodes are utilized and in some embodiments, comprise basic network switches having no available caching memory. In still other embodiments, the presentation servers are not part of a cloud service but may comprise one or more presentation servers which are operated by a single enterprise, such that the network environment is owned and contained by a single entity (such as a corporation) where the host and attendees are all connected via the private network of the entity. Exemplary node devices are illustrated in FIGs. 12 – 13. As illustrated in FIG. 3, a host device 310 provides presentation data and the live presentation data for the lecture-based presentation to each of the clients though one or more of the network nodes 320a- 320d, 330a – 330d and/or the cloud service 350. Each of the edge nodes, client devices and presentation servers in the cloud service 350 are connected by one or more public and/or private networks. [0036] FIG. 3 illustrates the flow of presentation data may be provided to client devices 312, 314, 316. The host device 310 is a device used by one or more presenters to provide the online lecture-based presentation. Each lecture-based presentation includes at least a live component, where the presenter provides a lecture or live presentation, and usually includes a pre-prepared, stored component such as a slide presentation which is meant to accompany the live presentation. The pre- prepared or stored component of the presentation is stored on the host device and may be forwarded as pre-fetch data 375 in accordance with embodiments herein. Subsequent to distributing all or part of the pre-fetch data of the stored component of the presentation, live presentation data 365 is streamed to the client devices. As illustrated in FIG.3, live presentation data 365 and pre-fetch data are distributed from the host device 310 via the network nodes to the client devices 312, 314, 316. At least portions of the prefetch data which accompanies the live presentation are sent before they are needed in the lecture-based presentation. Prefetch data can be stored for access by the client devices on the client devices themselves, on the edge nodes 320a - 320d, or by the cloud service 350 on presentation servers 340a - 340d, or on a combination of such devices. [0037] In one example, presentation data 375 may be sent by host 310 through the host processing device’s network interface and directed to the client computers though, for example, a cloud service 350, including one or more presentation servers 340a - 340d. 
Within the cloud service 350 the data is distributed according to the workload of each of the presentation servers and can be sent from the presentation servers directly to a client or through one or more of the network nodes 320a - 320d and 330a – 330d. In embodiments, the network nodes 320a - 320d and 330a - 330d may include processors and memory, allowing the nodes to cache data from the presentation. In other embodiments, the network nodes do not have the ability to cache presentation data. [0038] FIG. 4 illustrates a general method in accordance with the technology. At 405, the bandwidth available for the host and client processing devices is detected. As illustrated below, bandwidth detection may occur at the host and/or client processing devices, and/or the cloud service. At 410, prefetching authorization policies are defined. The policies may be defined at the host in the conferencing application, or in the cloud service. Admission policies define whether and to what extent client devices are able to obtain pre-fetch data. The policy may provide for a one-time only pre-fetch permission, or in cases where the lecture-based presentation is a recurrent event, may define a length of time that prefetching of data is allowed. At 415, presentation content is classified into pre-fetch content and non-fetchable content. Pre-fetch content comprises any type of audio/visual or text data which is available before the presentation and which is capable of being distributed and cached in either the client devices themselves, nodes 320a - 320d and 330a – 330d, and/or the cloud service 350. Non-fetchable content includes any audiovisual or text-based data that should be streamed in real-time, such as the lecture itself, or which is defined by the host or the application as being data which is not allowed to be prefetched to a cache. Non-fetchable content may also include any data which could comprise pre-fetch content, but which based on the configuration of the host or application is not allowed to be fetched. For example, a copyrighted image may be prevented from being pre-fetched and only allowed to be displayed in the lecture based on the license available from the copyright owner or fair-use considerations. As such, classification of pre-fetch content and non-fetchable content can be based on rules, features, content, or other factors. At 420, the pre-fetch content is analyzed and prepared to be made available for prefetching. At 420, the pre-fetch content may also be segmented into segments, referred to herein as slices or chunks of data. The pre-fetch content is distributed in segments and reassembled once it is received by the client processing devices. In addition, the fetchable content may be compressed and/or encrypted before being distributed. [0039] In one example, a slide presentation such as that prepared in a slide presentation program such as Microsoft PowerPoint may accompany a lecture. Prefetch data in the form of one or more of the slides which accompany a lecture may be segmented into slices based on predefined policy and/or network conditions. A slice of pre-fetch data may contain one or more video frames, images, presentation slides, or text and graphics, and the content is segmented into data slices based on predefined rules or bandwidth prediction. The slices may include the original presentation, the presentation in lower visual resolutions, or a modified, simplified version of the data, such as a presentation with a simplified background. Multiple versions/streams of slices may be generated.
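As a simplified, non-limiting sketch of the segmentation at step 420 and the slicing just described, the code below splits a stored presentation into fixed-size chunks and records which version (original or reduced resolution) each chunk belongs to. The chunk size, the variant labels, and the helper names are illustrative assumptions, not a definition of the actual slicing rules.

```python
from dataclasses import dataclass

CHUNK_BYTES = 256 * 1024  # illustrative slice size; a real system may adapt this to bandwidth

@dataclass
class PrefetchSlice:
    slice_id: int        # sequence number used to reassemble the content at the client
    variant: str         # "original" or "low-res", per the multiple-version idea above
    payload: bytes

def slice_content(blob: bytes, variant: str) -> list[PrefetchSlice]:
    """Segment one version of the stored presentation into pre-fetch slices."""
    return [
        PrefetchSlice(i, variant, blob[off:off + CHUNK_BYTES])
        for i, off in enumerate(range(0, len(blob), CHUNK_BYTES))
    ]

def reassemble(slices: list[PrefetchSlice]) -> bytes:
    """Client-side reassembly of received slices back into the original content."""
    return b"".join(s.payload for s in sorted(slices, key=lambda s: s.slice_id))

# Example: slice a placeholder slide deck and confirm it reassembles losslessly.
deck = bytes(1_000_000)  # stand-in for an exported slide deck
slices = slice_content(deck, "original")
assert reassemble(slices) == deck
print(f"{len(slices)} slices of up to {CHUNK_BYTES} bytes each")
```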
For content security, visible and/or invisible watermarks as well as content encryption may be implemented. A sync marker for each content slice is generated and sent to the client device such that when the live presentation data is received, corresponding sync markers in the live data of the lecture-based presentation may be used to allow the content slices (or portions) of the pre-fetch data to be rendered at the client device in sync with the live presentation, i.e. at corresponding points in the presentation where the author designed the pre-fetch content to be rendered. Content prefetching may end when the presentation starts or continue during the presentation based on bandwidth management rules and policy. [0040] In one embodiment, each sync marker may be provided as metadata in a network frame transmitted between devices. In one embodiment, 8 bits (1 byte) of frame metadata is reserved for sync marks, allowing for up to 256 segments (slices or chunks) of data to be synchronized during a presentation. During a presentation, once the host reaches the relevant slice or page, the corresponding sync mark is forwarded to the system (cloud, edge, and then to the receivers). Clients/receivers shall use this sync mark to search in the local cache, then in the edge cache, and then in the cloud to find a matching sync mark. If a matching sync mark is found, the corresponding presentation slide shall be displayed on the client/receiver device. [0041] At 425, prefetching may be scheduled by the network nodes 320a - 320d, cloud service 350 and/or client processing devices 312, 314, 316. Prefetching may occur at optimal times or according to the prefetching scheduling algorithm discussed herein. At 435, optionally, watermarks and encryption keys are generated. Watermarking and encryption may provide content security enforcement, as discussed above. At 440, sync marks are generated. The sync marks allow the content which is prefetched to be reconstructed at the client device and displayed at the correct time during the lecture-based presentation. At 445, forwarding and retrieval of the pre-fetch data occurs. In some cases, all of the fetchable content is downloaded to a client processing device, while in other cases some of it is streamed from one or more of the network nodes, and in particular the edge nodes 320b, 320c, and 320d. At 450, the lecture presentation (which is itself non-fetchable content) is sent from the host device and the presentation is rendered on client devices using the live, lecture-based presentation data and the pre-fetch content stored at the client and/or streamed from the edge nodes or cloud service. [0042] FIG.5A illustrates a series of steps performed by a host processing device, a client processing device and edge network nodes or cloud processing devices, respectively, over a course of time from T0 to T3. [0043] Between time T0 and T1, each respective device is preparing to participate in the lecture-based presentation. At 505, a host processing device will detect and predict bandwidth available for sending pre-fetch and live data to client processing devices of attendees of the lecture-based presentation. Before, after, or commensurate with step 505, at 515 the host processing device (or in other embodiments the presentation servers of the cloud service) will analyze, process, and segment presentation data available from the presenter and which is to accompany the lecture-based presentation.
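Paragraph [0040] above reserves one byte of frame metadata for the sync mark. The following sketch shows one hypothetical way such a mark could be packed into, and recovered from, a frame header; the header layout, magic value, and function names are assumptions for illustration and do not describe the actual wire format used by any particular conferencing application.

```python
import struct

# Hypothetical frame header: 4-byte magic, 1-byte sync mark, 4-byte payload length.
HEADER_FMT = "!4sBI"
MAGIC = b"LECT"

def add_sync_mark(payload: bytes, sync_mark: int) -> bytes:
    """Prepend a header carrying the one-byte sync mark (0-255, per 8 bits of metadata)."""
    if not 0 <= sync_mark <= 255:
        raise ValueError("sync mark must fit in one byte")
    return struct.pack(HEADER_FMT, MAGIC, sync_mark, len(payload)) + payload

def read_sync_mark(frame: bytes) -> tuple[int, bytes]:
    """Recover the sync mark and payload from a received live-stream frame."""
    magic, sync_mark, length = struct.unpack_from(HEADER_FMT, frame)
    if magic != MAGIC:
        raise ValueError("not a presentation frame")
    offset = struct.calcsize(HEADER_FMT)
    return sync_mark, frame[offset:offset + length]

# Example: the host tags a live frame with sync mark 7; the client reads it back
# and uses it to look up slice 7 in its local, edge, or cloud cache.
frame = add_sync_mark(b"live audio/video data", 7)
mark, payload = read_sync_mark(frame)
print(mark, payload)   # -> 7 b'live audio/video data'
```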
In this embodiment, the presentation data is stored on a storage device in the host processing device 310 (illustrated in FIG.12) enabling the host processing device to perform the functions described at 515. In the embodiment illustrated in FIG. 5, step 515 is performed at the host processing device. In other embodiments, the presentation data may be submitted to the cloud service (or an edge node or another processing device) to perform step 515, and the presentation data may be stored on a storage device of the presentation server of the cloud service. Step 515 comprises the aforementioned analysis of the pre-prepared presentation data to determine which portions of the data may be forwarded and cached on client devices, edge nodes and/or presentation servers for use when rendering the live presentation at the client device. At 520, the host will prepare pre-fetch data in the form of data slices or chunks which may be then distributed and stored at the client, edge nodes or cloud service. Optionally, the data slices may be encrypted, compressed and/or watermarked at 520. Encryption and watermarking may be used to ensure that the data saved on client devices is protected, while compression may be used to make data transfer more efficient and reduce storage space at the pre- fetch destination. At 525, the sync markers (and/or time stamps) are generated. Sync markers are used between the host and client devices to signal when pre-fetch data should be rendered in during the duration of the lecture-based presentation. As discussed below, when live presentation data is sent to client devices, each device uses the marker in the pre-fetch data and a corresponding marker sent in the live presentation data to determine when to render the pre-fetch data in the presentation. Time stamps and/or packet sequence numbers may be utilized to reconstruct audio and video content which is part of the pre-fetch content and where such audio and video is itself broken into chunks. At 530, the host distributes the pre-fetch data responsive to pre-fetch requests received to the clients, edge nodes or cloud service. [0044] Each client device may detect and predict the bandwidth available for participation in the presentation at 545. This information may be provided to the host device or used, as discussed below, for additional processing by the client device for head shot processing and pre-fetch scheduling. At 550, the client will send a pre-fetch request to the host or the cloud service. In one embodiment, the prefetch request may be answered by the cloud service or the edge node (at 585) or the host device (at 530). [0045] At 510, the host will receive and analyze pre-fetch requests. It should be understood that there may be as many pre-fetch requests as there are attendees of the lecture-based presentation and there may be multiple pre-fetch requests to store data at different levels of the network such as the edge nodes and the cloud service. [0046] Following step 550 in the client device, the client device may receive pre- fetch content at 555 and await the beginning of the lecture-based presentation. Similarly, the edge nodes and cloud service may cache the prefetch data having received the data at 580 and distribute the pre-fetch content at 585. In one embodiment, the edge nodes or cloud service obtain pre-fetch data after the client device requests such data and in other embodiments, the edge nodes and cloud service may cache the pre-fetch data based on caching limitations of the client devices. 
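To make the request flow of steps 550, 510, 530, 555, 580, and 585 concrete, here is a minimal sketch in which a client asks for pre-fetch slices and a tier (edge node, cloud presentation server, or the host itself) answers from its cache or defers to the next tier, caching what it fetches for later attendees. The message shapes and class names are invented for illustration; an actual deployment would run this exchange over the conferencing application's own protocol.

```python
class PrefetchTier:
    """One tier (host, cloud presentation server, or edge node) that answers pre-fetch requests."""
    def __init__(self, name, slices=None, upstream=None):
        self.name = name
        self.cache = dict(slices or {})   # slice_id -> payload
        self.upstream = upstream          # next tier to ask on a cache miss

    def request(self, slice_id):
        """Return (payload, serving_tier); cache anything fetched from upstream (step 580)."""
        if slice_id in self.cache:
            return self.cache[slice_id], self.name
        if self.upstream is None:
            raise KeyError(f"slice {slice_id} unavailable")
        payload, origin = self.upstream.request(slice_id)
        self.cache[slice_id] = payload    # opportunistic caching for later requests
        return payload, origin

# Example: the host holds all slices; the edge node starts empty and fills on demand.
host = PrefetchTier("host-310", slices={1: b"slide-1", 2: b"slide-2"})
edge = PrefetchTier("edge-320a", upstream=host)
payload, origin = edge.request(2)         # first request travels up to the host
print(origin, payload)                    # -> host-310 b'slide-2'
payload, origin = edge.request(2)         # second request is served from the edge cache
print(origin, payload)                    # -> edge-320a b'slide-2'
```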
In embodiments, communication between hosts, edge nodes and presentation servers occurs to optimize caching requirements. In one embodiment, the edge and cloud devices may determine that the edge node and/or cloud devices do not need to cache pre-fetch data because the clients all have sufficient bandwidth and memory available to store all pre-fetch data needed for the presentation. In other examples, if all the conference attendees are on a proprietary or local network (for example, within the same corporate/company network), the content (pre-fetch and live stream) may be cached in the host, the edge, and some or all of the client devices, or only the host and the client devices. In other embodiments, the content may only be cached in the host and the client devices per a conference management policy. [0047] Between time T1 and T2, the host broadcasts the lecture-based presentation at 535 using a streaming format. At least a portion of the lecture-based presentation includes a live presentation by a presenter during which the presenter will display portions of the prefetch data. The lecture-based presentation continues until the presentation ends at 590. Optionally, a question-and-answer session 587 may occur, during which portions of the prefetch content may need to be re-displayed. This is discussed below with respect to FIG.5C. [0048] Commensurate with the broadcast at 535, sync markers and/or timestamps are distributed by the host at 540. As noted above, a sync marker for each content slice uniquely identifies the content slice to allow client devices to know which slice should be presented during the duration of the lecture-based presentation. Corresponding sync markers are included in the live stream in order to identify to the client device the correct time to render the pre-fetch content. The time stamp or marker may be sent to the attendee client device at 565, which also receives the non-prefetch content and the broadcast at 570. Content prefetching may end when the presentation starts or continue during the presentation based on bandwidth management rules, caching capabilities and policies. At 575, each client device decodes any pre-fetch content and renders the pre-fetch content and broadcast content of the lecture-based presentation. [0049] Between T2 and T3 the presentation continues until finished at T3, at which point the presentation ends at 590. On the client devices and network nodes, any cached data is cleared at 592, 595, respectively. [0050] FIG.5B illustrates additional details regarding the performance of steps 535, 540, 565 and 575 illustrated in FIG. 5A. In one implementation, the broadcast of a lecture-based presentation comprises streaming live data to the client devices at 535a. At 575a, the client devices render the live (streamed) content in the conference application. During the course of a lecture-based presentation, at 537, a presenter may change the presented material, thereby affecting the information which should be displayed on the client devices. At 548, the host device will detect which prefetched content is being displayed by the presenter and send a sync marker at 540b to client processing devices which corresponds to the correct pre-fetch data to display, in order to alert the client processing devices that the prefetch data should now be displayed on the client device. The client device will receive the sync marker at 565a and determine where the prefetch data is located.
Initially, the client will look to its own local cache at 575b and if the content is in the local cache, the client will render the prefetch content in time with the live presentation at 575g. If the content is not in the local cache at 575b, then the client will check the edge nodes at 575c. If the required pre-fetch content is in the edge node cache, the client will request the prefetch data from the edge node cache and the prefetch data will be returned to the client and rendered with the live presentation at 575g. If the content is not in the edge node cache at 575c, then at 575d, the client will check the cloud service cache and if the data is in the cloud service cache, retrieve the data from the cloud service and proceed with rendering at 575g. Finally, if the prefetch data is not in the cloud, then the client will send a request for prefetch data to the host at 575h, and when the data is received at 525f, proceed with rendering at 575g. The client then continues rendering live content until the presentation ends at 590, as in FIG.5A. [0051] FIG. 5C illustrates a variation on FIG. 5B wherein a question-and-answer session 587 occurs. With reference to FIG.5C, at 538, the host processing device will enable a two-way livestream between the host and client devices, or some other form of question mechanism allowing attendees to present questions to the presenter of the lecture-based presentation. In one embodiment, attendees may present questions via a chat application built into the conferencing application. In other embodiments, live audiovisual content from each attendee may be presented when the attendee has a question. As known in the art, the questioning access can be controlled using access controls on the host processing device. At 539, in a manner similar to step 535a in FIG. 5B, the livestream broadcast will be distributed between the host and client devices. In step 539, the data will be two-way between the client and host devices. When a new question occurs at 541, the presenter display may change at 537. During Q&A, once the host jumps to the relevant portion of the presentation to be presented, the corresponding sync mark is forwarded to the client devices. Client devices then use this sync mark to search in the local cache, the edge node cache and the cloud service to find a matching sync mark. If a matching sync mark is found, the corresponding portion of pre-fetch presentation data is displayed on the client processing device. With each change of a presenter display at 537, the method continues, looking for the correct pre-fetched content through steps 540a, 540b, 565a, and 575b through 575g. This occurs until no further questioning is allowed or occurs at 541 and the presentation ends at 590. [0052] The embodiment of FIG. 5A may be utilized with any type of audio/visual data accompanying a lecture-based presentation. As should be generally understood, some lecture-based presentations are accompanied by presentation slides which comprise a common background with different text on each slide. In embodiments, the pre-fetch method described herein can be adapted for this use case. A predominantly text lecture presentation contains mostly text with limited images or audiovisual content other than the background image. In this embodiment, background and foreground separation is conducted and the background can be prefetched or sent once in real time in the beginning.
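Before turning to the text-only variant continued below, the following sketch revisits the cache lookup order of FIG. 5B (steps 575b through 575h) described above: the client resolves an incoming sync marker against its local cache, then the edge node, then the cloud service, and finally falls back to the host. The callable caches and names here are stand-ins for whatever interfaces a real client would expose, not a prescribed API.

```python
def resolve_prefetch(sync_mark, local_cache, edge_lookup, cloud_lookup, host_fetch):
    """Find the pre-fetch content matching a sync marker, nearest cache first (FIG. 5B order).

    local_cache: dict mapping sync marks to content (step 575b)
    edge_lookup / cloud_lookup: callables returning content or None (steps 575c, 575d)
    host_fetch: callable that retrieves content from the host (step 575h)
    """
    content = local_cache.get(sync_mark)          # 575b: local cache
    if content is None:
        content = edge_lookup(sync_mark)          # 575c: edge node cache
    if content is None:
        content = cloud_lookup(sync_mark)         # 575d: cloud service cache
    if content is None:
        content = host_fetch(sync_mark)           # 575h: request from the host
    return content                                # caller renders it with the live stream (575g)

# Example: mark 3 is only cached at the edge, so the client never reaches the cloud or host.
local = {1: "slide-1"}
edge = {3: "slide-3"}.get
cloud = {5: "slide-5"}.get
host = lambda mark: f"slide-{mark} (from host)"
print(resolve_prefetch(3, local, edge, cloud, host))   # -> slide-3
print(resolve_prefetch(9, local, edge, cloud, host))   # -> slide-9 (from host)
```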
Foreground, text content, can be sent in real time along with metadata that indicates the style of text as part of the non- fetchable content. Text data is substantially smaller and requires less bandwidth to transmit. [0053] An embodiment of the pre-fetch method wherein the pre-fetch data comprises predominantly text data is illustrated with respect to FIG. 6. In FIG. 6, reference numbers which are the same as those in FIG.5A indicate like steps. As in the embodiment of FIG.5A, bandwidth detection and prediction occur at both the host at 505 and the client at 545, presentation content is analyzed in the host at 515, pre- fetch requests are made at 550 and received at 510. The result of the analysis of presentation content is a determination that the content is predominantly text with one or more background images (as present in a slide presentation). At 620, the slices of the presentation are prepared by extracting and packaging the presentation background. Backgrounds can comprise a static image or an image (or video) file containing motion. At 625 a determination is made as to whether the image is static or not. If so, the background can be distributed before the presentation begins at 630. If the background is not static, then markers and timestamps are generated at 525 and the markers and timestamps sent to the client at 540. In one embodiment, text content is sent along with the presentation broadcast at 535. In other embodiments, text content can be packaged (as in FIG.5A at 520) and distributed with (or before or after) the background data before the presentation starts. [0054] Another type of presentation data which benefits from the disclosed pre- fetching techniques is attendee video processing. Representations of attendees shown in the attendee display windows 130 may comprise live video or static images. Other types of meeting applications utilize virtual representations of meeting participants instead of actual participant video. In such applications, meeting participant headshot data can be processed, analyzed, and modeled prior to the lecture-based presentation. Processing and modeling may be conducted at each client device, at network nodes and/or at the cloud service, depending on computation resource availability. Headshot models may be sent to the client devices of other meeting participants along with the live video streams of the lecture-based presentation. In one implementation, a segment of a headshot video stream is cached in the recipient device along with a headshot model. The headshot video stream may be processed, analyzed, and selectively cached and utilized to provide a better user experience. For recurrent meetings, headshot modeling may take advantage of historical data and reinforced learning algorithms. During the video conference, especially between T1 and T3 when bandwidth fluctuation is predicted or detected, a potentially jittery video stream of an attendee headshot may be replaced with cached data such as a 2D image, a video segment, or a virtual rendering of the attendee. In embodiments, a pre-fetched image may be a 2D image, a segment of a video stream, or an intelligently generated virtual representation of the attendee. The headshot may be intelligently selected and processed for smooth playback and better user experience. For the best end user experience, intelligent motion analysis, motion synthesis, lip sync algorithms may be implemented. Upon network restoration, live streaming may be resumed. 
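As one hedged illustration of the attendee-headshot substitution described above (and of the bandwidth check at steps 735 through 755 of FIG. 7), the snippet below swaps a live headshot stream for a cached representation whenever measured bandwidth drops below a threshold, and switches back once bandwidth recovers. The threshold value, class names, and the use of simple string placeholders for video frames are assumptions for readability, not part of the disclosed embodiments.

```python
from dataclasses import dataclass, field

@dataclass
class HeadshotFallback:
    """Chooses between live attendee video and a cached headshot representation."""
    threshold_mbps: float = 2.0                       # illustrative cut-over point
    cached: dict = field(default_factory=dict)        # attendee -> image / clip / virtual model

    def frame_for(self, attendee: str, live_frame, bandwidth_mbps: float):
        """Return the live frame when bandwidth allows; otherwise the cached stand-in."""
        if bandwidth_mbps >= self.threshold_mbps or attendee not in self.cached:
            return live_frame                          # steps 735/740: render live video
        return self.cached[attendee]                   # step 755: insert cached headshot

# Example: high bandwidth keeps live video; a dip below threshold inserts the cached headshot.
fallback = HeadshotFallback(cached={"alice": "<cached 2D headshot of alice>"})
print(fallback.frame_for("alice", "<live frame>", bandwidth_mbps=8.0))  # live video
print(fallback.frame_for("alice", "<live frame>", bandwidth_mbps=0.5))  # cached headshot
```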
For content security, prefetched data and 3D models may be cached or stored in an encrypted or watermarked form. [0055] These headshot video processing techniques can also be used with presenter video to conserve bandwidth, generating a representation of the presenter while accompanying audio continues to be streamed and while pre-fetch content is synchronized to the audio stream. [0056] FIGs.7 and 8 illustrate one embodiment where headshot processing and pre-fetching can be utilized to improve user experience in a lecture-based presentation. FIG. 7 illustrates a series of steps performed by a client processing device and network nodes or cloud processing devices, respectively, over the course of time from T0 to T2 (during the course of a presentation) to retrieve and render attendee headshots in a case where the user experience may be impacted by factors such as network bandwidth. FIG.8 illustrates data flow in accordance with the steps shown in FIG.7. [0057] It should be understood that the presenter (or other system configuration administrator) may establish a conference-configured preference for attendee display which defines whether the attendee display will be allowed at all, or whether live video, a rendered virtual character of the attendee or a two-dimensional image of the attendee will be displayed in the interface 100, 200. It should be further understood that attendees generally have the option in most conferencing applications to prevent live video from their client device from being shared with other attendees. The embodiment in FIGs.7 and 8 will be described with respect to an embodiment where attendee video is displayed. [0058] In the client device, bandwidth detection and prediction occur as in FIGs.5A and 6. At 710, if the bandwidth is sufficient to render video for each attendee, then the device will proceed with decoding and rendering the lecture-based presentation at 575, including, for example, any attendee headshot video. If the bandwidth availability detection is not sufficient at 710, then at 715 a determination may be made as to whether the meeting is a recurrent meeting. A recurrent meeting may be, for example, a lecture series in an educational environment such as a school, or a monthly family meeting. If the meeting is a recurrent meeting at 715, then the method will attempt to locate attendee headshot data at 730. Attendee headshot data may be provided by the network nodes which have performed headshot processing, analysis and modeling at 760 and have rendered headshot images and/or video segments at 765. The headshot data may also have been stored in a cache at the client devices based on a previous meeting. If the presentation is not a recurrent meeting at 715, then at 720, a client device will request headshot data from the nearest node or the host device directly. As discussed herein, the headshot data may be provided by one of the network nodes or the host device. The client receives headshot data at 725 and has the data available for insertion if a bandwidth issue is detected. [0059] Headshot processing and modeling may occur at 760 as well. Headshot data may be cached and made available at either the edge network nodes or the cloud service at 765. [0060] Between times T1 and T2, as noted above, the client device will decode and render the lecture-based presentation stream at 575.
[0060] Between times T1 and T2, as noted above, the client device will decode and render the lecture-based presentation stream at 575. At 735, the device continues to monitor network bandwidth, and if the bandwidth fluctuates below, for example, a defined threshold, the client may take corrective action to insert the headshot data into the presentation. At 745, the client determines whether headshot data is available. If not, headshot synthesis occurs using the headshot data acquired at 725. Synthesis may comprise generating a virtual character for the user or presenter, creating a two-dimensional static image, and/or creating a short video clip of the user or presenter. If headshot data is available at 745, or following headshot synthesis, the headshot data may be inserted into the presentation stream at 755 to improve the attendee experience by reducing the need for any headshot data to be provided in the live stream itself. The video conference continues at 790 as the client continues decoding and rendering at 575 and checking bandwidth at 735. If the bandwidth is sufficient at 735, the client may return to rendering live presenter and attendee video at 740. [0061] In embodiments, adaptive or proactive pre-fetch control algorithms may be utilized to control when content pre-fetching occurs, both as a pull from the client and as a push from an edge node or the cloud service. A proactive pre-fetching algorithm responds to fluctuations in bandwidth by adapting pushes and pulls of pre-fetch data to bandwidth availability. [0062] FIG. 9 illustrates the effect of a proactive pre-fetch algorithm relative to a host, cloud service, edge node, and the client, for various chunks of streamed data. For simplicity, the edge node level at the interface between the cloud service and the host device (such as nodes 330a – 330d in FIG. 3) is not illustrated. Bandwidth available to both the client and the host is illustrated as fluctuating over time between high, low and outage levels. As illustrated therein, scheduled pre-fetching begins from the host device at 901 prior to time T1, and data may be sent to the cloud and edge layers and eventually to the client device before presentation rendering begins at T2. As illustrated, rendering combines both pre-fetch data 902 and streamed data 903, beginning at T2 for the lecture-based presentation. At T2, the presentation begins at 901 with data streaming out from the host and, at the client, data streaming in; playback begins coincident with the receipt of the lecture-based presentation streaming data. At 904, a low bandwidth fluctuation occurs at the host side, and a break in the scheduled fetch occurs at 905. Similarly, because an outage occurs at the client at 906, no pre-fetching occurs during this outage period. When bandwidth is restored at the host and the client at 910 and 912, respectively, pre-fetching continues until an outage occurs in the host bandwidth at 914, during which period pre-fetching is suspended until bandwidth is restored. By adapting pre-fetching to bandwidth conditions at both the client and the host, the present technology improves the quality of user experience for each of the attendees of the lecture-based presentation.
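The proactive pre-fetch control of paragraphs [0061] and [0062] can be sketched as a simple gate that suspends scheduled pre-fetching whenever either endpoint drops below a high-bandwidth level and resumes it when bandwidth is restored. The bandwidth classification, threshold value, and function names below are assumptions chosen for illustration only.

```python
# Illustrative sketch: suspend or resume scheduled pre-fetching based on the
# bandwidth observed at either endpoint (host or client). The threshold value
# and the BandwidthLevel classification are assumptions, not part of the disclosure.
from enum import Enum

class BandwidthLevel(Enum):
    HIGH = "high"
    LOW = "low"
    OUTAGE = "outage"

def classify(measured_kbps: float, low_kbps: float = 500.0) -> BandwidthLevel:
    if measured_kbps <= 0:
        return BandwidthLevel.OUTAGE
    if measured_kbps < low_kbps:
        return BandwidthLevel.LOW
    return BandwidthLevel.HIGH

def schedule_prefetch(host_kbps: float, client_kbps: float, pending_chunks: list) -> list:
    """Return the chunks to transfer now; an empty list means pre-fetching is
    suspended until bandwidth is restored (as at 905, 906 and 914 in FIG. 9)."""
    if classify(host_kbps) is not BandwidthLevel.HIGH:
        return []          # break in the scheduled fetch on the host side
    if classify(client_kbps) is not BandwidthLevel.HIGH:
        return []          # no pre-fetching while the client is in a low/outage period
    return pending_chunks  # bandwidth restored: pre-fetching continues
```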
[0063] FIGs. 10A and 10B are timing diagrams illustrating two alternatives for both the authorization timing of the client device and the times at which pre-fetch data may be sent to the client device. FIG. 10A illustrates a use case where the authorization request is transmitted from a client to a cloud service, and the cloud service handles the authorization request. In addition, in the embodiment of FIG. 10A, pre-fetch data is transmitted from the host through the network levels to the client, and the client caches the pre-fetch data. Once the presentation begins at T1, additional pre-fetch data may be transmitted between the host and the client. FIG. 10B illustrates an embodiment where the authorization request is sent at time T0 from the client to the host device itself, and authorization is returned from the host to the client. In this embodiment, pre-fetch data is sent to the edge nodes; once the presentation begins at T1, pre-fetch data is sent from the edge nodes to the client device for use in rendering the presentation. It should be understood that various combinations of pre-fetch caching can be utilized. For example, some of the pre-fetch data may be cached at the edge nodes, while other pre-fetch data is cached in the client device, subject to the limitations of the cache available at the client device and at the edge nodes. Generally, it can be assumed that a cloud service will have abundant resources for caching pre-fetch data, but this may not be true of client devices. [0064] Adaptive pre-fetching may be used to maximize available caches at different levels of a network system. Such adaptive pre-fetching may be controlled by the host, the cloud service, or the client devices, individually or jointly. The goal of adaptive pre-fetching is to maximize cache utilization while minimizing content re-fetching, so as to maximize user quality of experience. Client-side adaptive pre-fetching may be accomplished using a Naive scheduling algorithm where:
- $C_{\max}$ is the maximum cache available for pre-fetched content storage at the client side;
- $s_i$ is the chunk size of the $i$th chunk of fetchable content;
- $s_{\min}$ and $s_{\max}$ are the minimum and maximum chunk sizes, respectively;
- $t_i$ is the time at which the $i$th chunk reaches the client buffer;
- $d_i$ is the deadline for the $i$th chunk to be received by the client side for on-time playback;
- $B_C(t)$ is the client-side available bandwidth at time $t$;
- $B_C^{\max}$ is the client-side bandwidth upper bound;
- $U_C(t)$ is the client-side consumed bandwidth at time $t$;
- $P_C(t)$ is the client-side bandwidth available for pre-fetching at time $t$;
- $S_C(t)$ is the streaming (e.g., non-fetchable content) bandwidth consumption at time $t$;
- $\beta$ is a pre-defined available bandwidth threshold for pre-fetching (for simplicity, $\beta$ may be assumed constant);
- $\delta$ is a pre-defined threshold preserved for traffic burst; and
- $\tau$ is a pre-defined threshold preserved for pre-fetching.

[0065] To maximize cache utilization while minimizing content re-fetching, the client pre-fetches as much fetchable content as its cache allows:

$$\max \sum_{i=n}^{n+m} s_i \quad \text{subject to} \quad \sum_{i=n}^{n+m} s_i \le C_{\max},$$

[0066] where $\sum_{i=n}^{n+m} s_i$ is the total pre-fetched content size with the $n$th to the $(n+m)$th pre-fetched content chunks cached in the client device content buffer. For a chunk $j$, if

$$\sum_{i=n}^{n+m} s_i + s_j \le C_{\max}, \qquad P_C(t) \ge \beta, \qquad \text{and} \qquad t_j \le d_j,$$

pre-fetching starts or continues. [0067] For host and cloud services, one may assume that cache space is always available and, all other variables being as defined above, given:

- $B_H(t)$ is the host-side available bandwidth at time $t$;
- $B_H^{\max}$ is the host-side bandwidth upper bound;
- $U_H(t)$ is the host-side consumed bandwidth at time $t$;
- $P_H(t)$ is the host-side bandwidth available for pre-fetching at time $t$; and
- $S_H(t)$ is the streaming (e.g., non-fetchable content) bandwidth consumption at time $t$.

To maximize cache utilization while minimizing re-fetching, pre-fetch traffic is limited to the bandwidth not needed for the live stream and the burst reserve:

$$P_H(t) = B_H(t) - S_H(t) - \delta.$$

Then, for a chunk $j$, if $P_H(t) \ge \beta$, pre-fetching starts or continues.
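Read as code, the client-side rule of paragraphs [0064] through [0066] reduces to a per-chunk admission test: pre-fetch the next chunk only while the cache can hold it, the bandwidth left over after live streaming (and the burst reserve) clears the pre-fetch threshold, and the chunk can still arrive before its playback deadline. The sketch below is a minimal interpretation using hypothetical variable names that mirror the symbols above; it is not the literal algorithm of the disclosure.

```python
# Minimal sketch of the client-side Naive pre-fetch scheduling rule of
# paragraphs [0064]-[0066]. Variable names mirror the illustrative symbols
# used above and are assumptions, not the literal notation of the disclosure.

def prefetch_bandwidth(available_kbps: float, streaming_kbps: float,
                       burst_reserve_kbps: float) -> float:
    """Bandwidth left over for pre-fetching after live streaming and a burst reserve."""
    return max(0.0, available_kbps - streaming_kbps - burst_reserve_kbps)

def should_prefetch_chunk(cached_bytes: int, chunk_bytes: int, cache_max_bytes: int,
                          available_kbps: float, streaming_kbps: float,
                          burst_reserve_kbps: float, beta_kbps: float,
                          now_s: float, deadline_s: float) -> bool:
    """Admission test for chunk j: the cache has room, the leftover bandwidth clears
    the pre-fetch threshold beta, and the chunk can still arrive before its deadline."""
    fits_in_cache = cached_bytes + chunk_bytes <= cache_max_bytes
    p_c = prefetch_bandwidth(available_kbps, streaming_kbps, burst_reserve_kbps)
    bandwidth_ok = p_c >= beta_kbps
    # Rough delivery-time estimate (bytes -> kilobits) against the chunk deadline.
    arrives_in_time = p_c > 0 and now_s + (chunk_bytes * 8 / 1000) / p_c <= deadline_s
    return fits_in_cache and bandwidth_ok and arrives_in_time
```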
[0068] In other embodiments, pre-fetching can be controlled using machine learning. For recurrent meetings, historical data available from previous meetings can be used to train machine learning algorithms that predict cache availability and the bandwidth available for regular participants. [0069] In other embodiments, the relative amount of data cached at each of the cloud, edge and client devices can be distributed differently, according to one or more caching distribution algorithms and/or user preferences. [0070] FIG. 11 is a block diagram of a network processing device that can be used to implement various embodiments. Specific network devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, the network device 1100 may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. The network device 1100 may comprise a processing unit 1101 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like. The processing unit 1101 may include a central processing unit (CPU) 1110, a memory 1120, a mass storage device 1130, and an I/O interface 1160 connected to a bus 1170. The bus 1170 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or the like. A network interface 1150 enables the network processing device to communicate over a network 1180 with other processing devices such as those described herein. [0071] The CPU 1110 may comprise any type of electronic data processor. The memory 1120 may comprise any type of system memory such as static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory 1120 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs. In embodiments, the memory 1120 is non-transitory. In one embodiment, the memory 1120 includes computer readable instructions that are executed by the CPU 1110 to implement embodiments of the disclosed technology, including the presentation application 1125a, which may itself include a rendering engine 1125b, sync marker data 1125c, a live stream data cache 1125d, and a pre-fetch data cache 1125e. The functions of the presentation application 1125a are described herein. The rendering engine functions as described herein to render the lecture-based presentation data and to combine the lecture-based presentation live stream data with the pre-fetched data. In embodiments, the lecture-based presentation live stream data may be cached in the live stream data cache 1125d in order to improve the rendering experience. [0072] The mass storage device 1130 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 1170. The mass storage device 1130 may comprise, for example, one or more of a solid-state drive, a hard disk drive, a magnetic disk drive, an optical disk drive, or the like. [0073] FIG. 12 is a block diagram of a network device that can be used to implement various embodiments of a presentation server or network node.
Specific network devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. In FIG. 12, like numbers represent like parts with respect to those of FIG. 11. In one embodiment, the memory 1120 includes the presentation application 1125a, including the rendering engine 1125b and sync marker data 1125c. The memory may also include a presentation analysis engine 1215a and a sync marker generator 1215b. The presentation analysis engine analyzes the presentation data 1235 stored in the mass storage device 1130 to determine pre-fetch data and to create pre-fetch data slices which can then be distributed by the host device. [0074] FIG. 13 is a block diagram illustrating exemplary details of a network device, or node, such as those shown in the network of FIG. 3. A node 1300 may comprise a router, switch, server, or other network device, according to an embodiment. The node 1300 can correspond to one of the nodes 320a – 320d, 330a – 330d. The router or other network node 1300 can be configured to implement or support embodiments of the technology disclosed herein. The node 1300 may comprise a number of receiving input/output (I/O) ports 1310, a receiver 1312 for receiving packets, a number of transmitting I/O ports 1330, and a transmitter 1332 for forwarding packets. Although shown separated into an input section and an output section in FIG. 13, often these will be I/O ports 1310 and 1330 that are used for both down-stream and up-stream transfers, and the receiver 1312 and transmitter 1332 will be transceivers. Together, I/O ports 1310, receiver 1312, I/O ports 1330, and transmitter 1332 can be collectively referred to as a network interface that is configured to receive and transmit packets over a network. [0075] The node 1300 can also include a processor 1320 that can be formed of one or more processing circuits and a memory or storage section 1322. The storage 1322 can be variously embodied based on available memory technologies and, in this embodiment, is shown to have a cache 1324, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 1326, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies. [0076] Storage 1322 can be used for storing both data and instructions for implementing the data pre-fetch techniques herein. In particular, instructions causing the processor 1320 to perform the functions of requesting and caching pre-fetch data for a lecture-based presentation may be included in the pre-fetch controller 1370, the data for which is stored in the pre-fetch cache 1324. [0077] Other elements on node 1300 can include the programmable content forwarding plane 1328. Depending on the embodiment, the programmable content forwarding plane 1328 can be part of the more general processing elements of the processor 1320 or a dedicated portion of the processing circuitry. [0078] More specifically, the processor(s) 1320, including the programmable content forwarding plane 1328, can be configured to implement embodiments of the disclosed technology described herein. In accordance with certain embodiments, the storage 1322 stores computer readable instructions that are executed by the processor(s) 1320 to implement embodiments of the disclosed technology.
It would also be possible for embodiments of the disclosed technology described herein to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. [0079] For purposes of this document, it should be noted that the dimensions of the various features depicted in the figures may not necessarily be drawn to scale. [0080] For purposes of this document, reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “another embodiment” may be used to describe different embodiments or the same embodiment. [0081] For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via one or more other parts). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. Two devices are “in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them. [0082] Although the present disclosure has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the scope of the disclosure. The specification and drawings are, accordingly, to be regarded simply as an illustration of the disclosure as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present disclosure. [0083] The technology described herein can be implemented using hardware, software, or a combination of both hardware and software. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by a computer. A computer readable medium or media does (do) not include propagated, modulated, or transitory signals.
[0084] Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated, or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. [0085] In alternative embodiments, some or all of the software can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application- specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces. [0086] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications, and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details. [0087] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. [0088] The description of the present disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limited to the disclosure in the form disclosed. 
Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated. [0089] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device. [0090] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

What is claimed is:

1. A computer implemented method of rendering an online presentation having a live component and a stored component, the online presentation having a start and a duration, comprising:
determining pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices;
generating sync data linking the pre-fetch content and the live component; and
transmitting the pre-fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation.
2. The computer implemented method of claim 1 further including receiving a pre- fetch request from the client processing device, and wherein the transmitting occurs in response to the request from the client processing device.
3. The computer implemented method of claim 1 further including receiving a pre- fetch request from a network node device, and wherein the transmitting occurs in response to the request from the network node and the second storage device is provided on the network node.
4. The computer implemented method of claim 1 further including receiving a pre- fetch request from a cloud service processing device, and wherein the transmitting occurs in response to the request from cloud service processing device and the second storage device is provided on the cloud service processing device.
5. The computer implemented method of claim 1 further including transmitting a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered.
6. The computer implemented method of claim 1 further including segmenting the pre-fetch content into chunks, and wherein the sync data comprises markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered.
7. The computer implemented method of claim 1 wherein the second storage device is a storage device of the client processing device.
8. The computer implemented method of claim 1 wherein the second storage device is a storage device of a network node.
9. The computer implemented method of claim 1 wherein the second storage device is a storage device of a server in a cloud service.
10. The computer implemented method of claim 1 wherein the method further includes: detecting network bandwidth available to the host processing device; and transmitting the pre-fetch content based on the available network bandwidth.
11. The computer implemented method of claim 1 further including performing an adaptive prefetching calculation to maximize available caches at one or more of: a client processing device, a network node, and a cloud service presentation server.
12. A computer implemented method of rendering an online presentation having a live component and a stored component, the online presentation having a start and a duration, comprising:
requesting pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices;
receiving the pre-fetch content via a network prior to the start of the online presentation;
receiving sync data linking the pre-fetch content and the live component;
receiving the live component via a live stream broadcast; and
rendering the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component at the one or more client devices during the duration of the presentation.
13. The computer implemented method of claim 12 wherein the method is performed by a client processing device, and wherein the pre-fetch content is stored prior to receiving the live component, the method further including retrieving the prefetch content from storage on the client processing device prior to rendering.
14. The computer implemented method of claim 12 wherein the method is performed by a client processing device, and wherein the pre-fetch content is retrieved prior to receiving the live component, the method further including retrieving the prefetch content from a network node prior to rendering.
15. The computer implemented method of claim 12 wherein the method is performed by a client processing device, and wherein the pre-fetch content is retrieved prior to receiving the live component, the method further including retrieving the prefetch content from a cloud server via a network prior to rendering.
16. The computer implemented method of claim 12 wherein receiving sync data includes: receiving sync markers in the pre-fetch content; and receiving corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre- fetch content should be rendered with data from the live component.
17. The computer implemented method of claim 14 wherein the method further includes detecting network bandwidth available to the client processing device; and receiving the pre-fetch content based on the available network bandwidth.
18. A processing system in a network, comprising:
a processor readable storage medium;
a processor device including a first non-transitory memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more first processors execute the instructions to render an online presentation having a live component and a stored component:
determine pre-fetch content in the stored component, the pre-fetch content including audiovisual information stored on a storage device of a host processing device, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation at one or more client processing devices;
generate sync data linking the pre-fetch content and the live component; and
transmit the pre-fetch content via a network to a second storage device prior to the start of the online presentation, the second storage device accessible by the one or more client processing devices for the duration of the presentation to allow the client processing device to render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component of the online presentation.
19. The processing system of claim 18 wherein the one or more processors execute the instructions to receive a pre-fetch request from one or more of a client processing device, a network processing device, and a cloud service processing device, and wherein the transmission occurs in response to the request from the one or more devices.
20. The processing system of claim 18 wherein the one or more processors execute the instructions to transmit a live stream of the online presentation with accompanying sync markers in the online presentation, the sync markers associated with the sync data and defining when during the duration of the online presentation the pre-fetch content should be rendered.
21. The processing system of claim 18 wherein the one or more processors execute the instructions to segment the pre-fetch content into chunks, and wherein the sync data comprises markers for each chunk, the markers associated with sync markers transmitted during the live component of the online presentation and indicating when in the online presentation the pre-fetch content should be rendered.
22. The processing system of claim 18 wherein the second storage device is one of: a storage device of the client processing device, a storage device of a network node, and a storage device of a server in a cloud service.
23. The processing system of claim 18 wherein the one or more processors execute the instructions to: detect network bandwidth available to the host processing device; and transmit the pre-fetch content based on the available network bandwidth.
24. A user equipment device, comprising:
a processor readable storage medium;
a processor device including a first non-transitory memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more first processors execute the instructions to:
request pre-fetch content from a host device having at least the stored component, the pre-fetch content including audiovisual information of the online presentation, the pre-fetch content to be rendered in conjunction with the live component during the duration of the presentation by one or more client processing devices;
receive the pre-fetch content via a network prior to the start of the online presentation;
receive sync data linking the pre-fetch content and the live component;
receive the live component via a live stream broadcast; and
render the online presentation by combining the live component and at least a portion of the pre-fetch content in sync with the live component at the one or more client devices during the duration of the presentation.
25. The user equipment device of claim 24 wherein the one or more processors store the pre-fetch content in a local storage device prior to receiving the live component, the one or more processors further retrieve the prefetch content from storage on the client processing device prior to the render.
26. The user equipment device of claim 24 wherein the one or more processors retrieve the pre-fetch content prior to receiving the live component, the one or more processors retrieve the prefetch content from a network node prior to rendering.
27. The user equipment device of claim 24 wherein the one or more processors retrieve the pre-fetch content prior to receiving the live component, the one or more processors retrieve the prefetch content from a cloud server via a network prior to rendering.
28. The user equipment device of claim 24 wherein the one or more processors: receive sync markers in the pre-fetch content; and receive corresponding sync markers in the live component, the sync markers in the live component indicating that pre-fetch content at the sync markers in the pre- fetch content should be rendered with data from the live component.
29. The user equipment device of claim 24 wherein the one or more processors detect network bandwidth available to the client processing device, and receive the pre-fetch content based on the available network bandwidth.