US20220124171A1 - Execution of user interface (UI) tasks having foreground (FG) and background (BG) priorities - Google Patents

Execution of user interface (UI) tasks having foreground (FG) and background (BG) priorities

Info

Publication number
US20220124171A1
Authority
US
United States
Prior art keywords: tasks, priority, app, display, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/567,187
Inventor
Roee Peled
Amit Wix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tensera Networks Ltd
Original Assignee
Tensera Networks Ltd
Application filed by Tensera Networks Ltd
Priority to US 17/567,187
Assigned to Tensera Networks Ltd. Assignors: Roee Peled; Amit Wix (assignment of assignors' interest; see document for details)
Publication of US20220124171A1
Current legal status: Abandoned

Classifications

    • H04L 67/32
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G06F 12/0862: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with prefetch
    • G06F 16/9574: Browsing optimisation, e.g. caching or content distillation, of access to content, e.g. by caching
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 9/543: User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
    • H04L 67/34: Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters

Definitions

  • the present invention relates generally to communication systems, and particularly to pre-rendering of content in user devices.
  • In applications (“apps”) that run on user devices such as smartphones, one of the major factors affecting user experience is the latency of the User Interface (UI).
  • Various techniques have been proposed for reducing latency and providing a more responsive UI. Some techniques involve prefetching of content. Other techniques involve background preloading of apps. Yet other techniques involve pre-rendering of an app's UI. Techniques of this sort are described, for example, in PCT International Publication WO 2018/055506, entitled “An Optimized CDN for the Wireless Last Mile,” which is incorporated herein by reference.
  • An embodiment of the present invention that is described herein provides a method including, in a user device that is configured to communicate over a network, preloading an application in a background mode in which content presented by the application is hidden from a user of the user device. At least part of the content presented by the application is pre-rendered in an off-line pre-render mode in which fetching of content over the network to the user device is not permitted. In response to an action by the user that requests to access the application, a switch is made to presenting at least the pre-rendered content to the user in a foreground mode.
  • pre-rendering in the off-line pre-render mode includes declining network-related requests from the application. Declining the network-related requests may include responding to the network-related requests from the application with a response indicating the network is unavailable.
  • pre-rendering in the off-line pre-render mode includes rendering at least part of the content from a local cache in the user device.
  • pre-rendering in the off-line pre-render mode includes notifying the application that pre-rendering is performed in accordance with the off-line pre-render mode.
  • pre-rendering in the off-line pre-render mode includes pre-rendering a placeholder item in place of an actual content item that requires fetching over the network.
  • pre-rendering in the off-line pre-render mode includes penalizing the application for requesting to fetch a content item over the network.
  • pre-rendering in the off-line pre-render mode includes receiving in the application, via an Application Programming Interface (API), an indication that pre-rendering is performed in accordance with the off-line pre-render mode, and in response running program code that pre-renders the content in accordance with the off-line pre-render mode.
  • switching to the foreground mode includes refreshing at least some of the content over the network.
  • pre-rendering in the off-line pre-render mode is performed in response to an acknowledgement from the application, indicating that the application supports the off-line pre-render mode.
  • preloading the application includes choosing, based on a criterion, whether to pre-render the content in accordance with the off-line pre-render mode, or in accordance with an on-line pre-render mode in which it is permitted to fetch content over the network to the user device.
  • pre-rendering in the on-line pre-render mode includes receiving in the application, via an Application Programming Interface (API), an indication that pre-rendering is performed in accordance with the on-line pre-render mode, and in response running program code that pre-renders the content in accordance with the on-line pre-render mode.
  • choosing the on-line pre-render mode is performed in response to an acknowledgement from the application, indicating that the application supports the on-line pre-render mode.
  • the criterion depends on at least one factor selected from (i) a usage pattern of the application, (ii) a condition of the user device, and (iii) a condition of the network.
  • Another embodiment of the present invention provides a method including issuing, by an application running in a user device, a request to fetch over the network content that includes multiple content items.
  • the request is received by a software agent running in the user device and, in response to the request, a chain of fetch operations is executed for fetching the requested content.
  • Each of the fetch operations in the chain includes (i) receiving from the application an identification of one or more additional content items identified by the application within a content item fetched in a preceding fetch operation in the chain, (ii) evaluating a criterion, and (iii) deciding, depending on the criterion, whether or not to fetch the one or more additional content items.
  • the method includes pre-rendering one or more of the content items in a background mode. In some embodiments, issuing the request includes prefetching the content, not in response to a user accessing the content.
  • Another embodiment of the present invention provides a method including, in a user device, which is configured to execute User Interface (UI) tasks that process one or more UI displays presented to a user, assigning to each UI task among the UI tasks (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task.
  • the UI tasks are scheduled for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display.
  • the UI tasks are executed in accordance with the schedule.
  • one or more of the UI tasks having the BG priority include pre-rendering tasks.
  • the UI tasks include both (i) one or more UI tasks having the BG priority, and (ii) one or more UI tasks having the FG priority that relate to user actions.
  • executing the UI tasks is performed by a single UI thread per user application.
  • assigning the priority includes, in response to addition of a new UI task having the FG priority, identifying one or more UI tasks that (i) are associated with a same UI display as the new UI task and (ii) have the BG priority, and promoting the identified UI tasks to the FG priority.
  • scheduling the UI tasks includes scheduling the promoted UI tasks to be executed before the new UI task.
  • scheduling the UI tasks includes retaining an original order of execution among the promoted UI tasks.
  • scheduling the UI tasks includes permitting out-of-order execution of UI tasks associated with different UI displays.
  • Another embodiment of the present invention provides a user device including an interface for communicating over a network, and a processor.
  • the processor is configured to preload an application in a background mode in which content presented by the application is hidden from a user of the user device, to pre-render at least part of the content presented by the application in an off-line pre-render mode in which fetching of content over the network to the user device is not permitted, and, in response to an action by the user that requests to access the application, to switch to presenting at least the pre-rendered content to the user in a foreground mode.
  • Another embodiment of the present invention provides a user device including an interface for communicating over a network, and a processor.
  • the processor is configured to issue, by an application running on the processor, a request to fetch over the network content that includes multiple content items, to receive the request by a software agent running on the processor and, in response to the request, execute a chain of fetch operations for fetching the requested content, wherein each of the fetch operations in the chain comprises (i) receiving from the application an identification of one or more additional content items identified by the application within a content item fetched in a preceding fetch operation in the chain, (ii) evaluating a criterion, and (iii) deciding, depending on the criterion, whether or not to fetch the one or more additional content items.
  • Another embodiment of the present invention provides a user device including an interface for communicating over a network, and a processor.
  • the processor is configured to assign, to each User Interface (UI) task from among multiple UI tasks that process one or more UI displays presented to a user, (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task, to schedule the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display, and to execute the UI tasks in accordance with the schedule.
  • FIG. 1 is a block diagram that schematically illustrates a communication system that employs Preloading, Pre-rendering and Prefetching (PPP), in accordance with an embodiment of the present invention
  • FIG. 2 is a flow chart that schematically illustrates a method for content pre-rendering using an off-line pre-render mode, in accordance with an embodiment of the present invention
  • FIG. 3 is a flow chart that schematically illustrates a method for content prefetching using a prefetch parse chain, in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow chart that schematically illustrates a method for handling background and foreground User Interface (UI) tasks using a single UI thread, in accordance with an embodiment of the present invention.
  • Embodiments of the present invention that are described herein provide improved methods and systems for Preloading, Pre-rendering and Prefetching (PPP) in user devices.
  • In the present context, the term “preloading” refers to the process of loading, launching and at least partially running an app in a background mode, not in response to (and typically before) invocation of the app by the user.
  • The term “pre-rendering” refers to the process of constructing a UI display of an app in the background mode.
  • The term “UI display” in this context refers to a logical UI element, i.e., a view or a window. In the Android OS, for example, UI displays are referred to as Views or Activities.
  • pre-rendering may involve UI tasks that modify the UI display, and/or UI tasks that do not directly modify the UI display but are prerequisite to such modification or are synced to modification of the UI display in the background.
  • initialization or preparatory tasks that are performed when preloading an app and preparing to initialize or modify a UI display of the app are also regarded herein as pre-rendering tasks.
  • The term “prefetching” refers to the process of fetching content over a network, from a content source to a local cache memory of the user device, not in response to an explicit request or access by the user.
  • the user device pre-renders a UI display of the app in the background.
  • the UI display being pre-rendered may comprise various content items, e.g., text, images, graphics, videos and the like.
  • the user device pre-renders at least some of the content of the UI display using an “off-line pre-render mode” in which fetching of content over the network to the user device is not permitted.
  • the off-line pre-render mode can be implemented in various ways. For example, instead of fetching content items over the network, the user device may pre-render locally-cached versions of the content items regardless of whether they are fresh or stale or up to a predefined extent of staleness. As another example, the user device may pre-render empty “placeholder” content items having similar dimensions as the actual content items. When the user requests to access the app in question, the user device moves the pre-rendered UI display to the foreground, so as to present the pre-rendered content to the user. At this stage, the user device may refresh missing or stale content items over the network.
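  • As an illustration of this decision, the following Kotlin sketch resolves a single content item in the off-line pre-render mode: it renders a locally-cached copy when one is acceptable, and otherwise an empty placeholder with similar dimensions. The CachedItem/PreRenderSource types and the staleness parameter are assumptions introduced for the example, not part of any real framework.
```kotlin
// Hypothetical types for the sketch; not part of any real SDK.
data class CachedItem(val bytes: ByteArray, val ageMillis: Long, val widthPx: Int, val heightPx: Int)

sealed class PreRenderSource {
    data class FromCache(val item: CachedItem) : PreRenderSource()
    data class Placeholder(val widthPx: Int, val heightPx: Int) : PreRenderSource()
}

fun resolveOffline(
    cached: CachedItem?,
    maxStalenessMillis: Long?,     // null = accept cached content regardless of staleness
    expectedWidthPx: Int,
    expectedHeightPx: Int
): PreRenderSource =
    if (cached != null && (maxStalenessMillis == null || cached.ageMillis <= maxStalenessMillis)) {
        // Pre-render the locally-cached version, whether fresh or stale.
        PreRenderSource.FromCache(cached)
    } else {
        // No usable cached copy: pre-render an empty placeholder with similar
        // dimensions, to be replaced in-place once the item is fetched over the network.
        PreRenderSource.Placeholder(expectedWidthPx, expectedHeightPx)
    }
```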
  • Some disclosed techniques assume that the app supports the off-line pre-rendering mode. Other disclosed techniques do not make this assumption. For apps that do not support the off-line pre-rendering mode, various ways of enforcing off-line pre-rendering, i.e., preventing apps' network requests from reaching the network, are described.
  • the user device may support an on-line pre-render mode in which fetching of content over the network is permitted.
  • In comparison with on-line pre-rendering, the off-line pre-rendering mode balances user experience with cost. On-line pre-rendering minimizes the latency of presenting the user a fully operational and relatively fresh UI display, but on the other hand incurs data costs, along with related power/battery drain costs.
  • the off-line pre-rendering mode reduces the data cost for the user device and/or the app server, but may initially present to the user an incomplete or relatively stale UI display for a short period of time. As such, the off-line pre-rendering mode enables a flexible trade-off between latency and cost. Embodiments that combine (e.g., choose between) off-line and on-line pre-rendering are also described.
  • a content item often links to one or more additional (“nested”) content items, each of which may link to one or more yet additional content items, and so on.
  • An app that receives such a content item typically parses it, discovers one or more additional content items within the parsed content item, requests to fetch the discovered content items, and so on.
  • the user device runs an agent that handles parse chains.
  • the agent decides, for each additional content item being identified as part of a parse chain, whether to fetch the content item over the network or not.
  • the agent may use various criteria for deciding whether or not to fetch an additional content item, and in particular to decide whether to terminate the parse chain entirely.
  • the disclosed parse-chain handling techniques are useful in various use-cases. Examples relating to pre-rendering and to general prefetching are described.
  • In some embodiments, during pre-rendering, the selective fetching of content items in a parse chain is used as a “hybrid pre-rendering” mode. Such a mode is useful, for example, for reducing costs such as integration effort, data usage and battery drain.
  • Yet other disclosed embodiments relate to correct prioritization and scheduling of foreground and background UI tasks that are associated with the same UI display.
  • UI tasks are typically handled by a single UI thread per app in the user device.
  • disclosed techniques address the case of a user performing some action in the foreground with respect to a UI display of an app, while another UI display of the same app is being pre-rendered in the background.
  • pre-rendering UI tasks might cause the app to appear non-responsive to the user's actions.
  • pre-rendering UI tasks may comprise tasks that modify the UI display directly, and/or initialization or prerequisite tasks that do not directly modify the UI display.
  • an agent running in the user device overcomes such challenges by proper prioritization and scheduling of the UI tasks.
  • the agent assigns each UI task a priority selected from at least a Foreground (FG) priority and a Background (BG) priority.
  • the agent associates each UI task with the UI display (also referred to as “scenario”) processed by this UI task.
  • the agent schedules the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display.
  • the UI tasks are then executed in accordance with the schedule.
  • In order to retain in-order execution of the UI tasks of a given UI display, the agent applies a “promotion” mechanism that promotes selected UI tasks from the BG priority to the FG priority.
  • In response to creation of a new UI task having the FG priority (e.g., a UI task derived from a user action), the agent identifies all UI tasks that are both (i) associated with the same UI display as the new UI task and (ii) assigned the BG priority, and promotes the identified UI tasks to the FG priority.
  • The agent schedules the promoted UI tasks to be executed before the new UI task, and also retains the original order of execution among the promoted UI tasks.
  • FIG. 1 is a block diagram that schematically illustrates a communication system 20 that employs Preloading, Pre-rendering and Prefetching (PPP), in accordance with an embodiment of the present invention.
  • System 20 comprises a user device 24 , which runs one or more user applications (“apps”) 26 .
  • Device 24 may comprise any suitable wireless or wireline device, such as, for example, a cellular phone or smartphone, a wireless-enabled laptop or tablet computer, a desktop personal computer, a video gaming console, a smart TV, a wearable device, an automotive user device, or any other suitable type of user device that is capable of communicating over a network and presenting content to a user.
  • the figure shows a single user device 24 for the sake of clarity. Real-life systems typically comprise a large number of user devices of various kinds.
  • Some apps 26 may be dedicated, special-purpose applications such as game apps.
  • Other apps 26 may be general-purpose applications such as Web browsers.
  • apps 26 are provided by and/or communicate with one or more network-side servers, e.g., portals 28 , over a network 32 .
  • Network 32 may comprise, for example, a Wide Area Network (WAN) such as the Internet, a Local Area Network (LAN), a wireless network such as a cellular network or Wireless LAN (WLAN), or any other suitable network or combination of networks.
  • user device 24 comprises a processor 44 that carries out the various processing tasks of the user device.
  • processor 44 runs apps 26 , and also runs a software component referred to as a Preload/Pre-render/Prefetch (PPP) agent 48 , which handles preloading of apps, content pre-rendering and/or content prefetching.
  • Apps 26 and PPP agent 48 are drawn schematically inside processor 44 , to indicate that they comprise software running on the processor.
  • user device 24 comprises a Non-Volatile Memory (NVM) 54 , e.g., a Flash memory.
  • NVM 54 may serve, inter alia, for storing a cache memory 52 for caching content associated with apps.
  • In some embodiments, the user device uses a single cache 52 .
  • In other embodiments, a separate cache memory 52 may be defined per app.
  • Hybrid implementations, in which part of cache 52 is centralized and some is app-specific, are also possible. For clarity, the description that follows will refer simply to “cache 52 ”, meaning any suitable cache configuration.
  • User device 24 further comprises a display screen 56 for presenting visual content to the user, and a suitable network interface (not shown in the figure) for connecting to network 32 .
  • This network interface may be wired (e.g., an Ethernet Network Interface Controller—NIC) or wireless (e.g., a cellular modem or a Wi-Fi modem).
  • user device 24 further comprises some internal memory, e.g., Random Access Memory (RAM)—not shown in the figure—that is used for storing relevant code and/or data.
  • system 20 further comprises a PPP subsystem 60 that performs preloading, pre-rendering and/or prefetching tasks on the network side.
  • Subsystem 60 comprises a network interface 64 for communicating over network 32 , and a processor 68 that carries out the various processing tasks of the PPP subsystem.
  • processor 68 runs a PPP control unit 72 that carries out network-side PPP tasks.
  • Network-side PPP tasks may comprise, for example, deciding which apps to preload and when, choosing whether and which in-app content to preload, or deciding how much of an app component to preload (e.g., only executing some initial executable code, or pre-rendering of the app's user interface), to name just a few examples.
  • PPP subsystem 60 may be implemented as a cloud-based application.
  • The PPP tasks are described as being carried out by processor 44 of user device 24 . Generally, however, PPP tasks may be carried out by processor 44 of device 24 , by processor 68 of subsystem 60 , or both. Thus, any reference to “processor” below may refer, in various embodiments, to processor 44 , processor 68 , or both.
  • Preloading an app 26 may involve preloading any app element such as executable code associated with the app, e.g., launch code, app feed, app landing page, various UI elements associated with the app, content associated with the app, app data associated with the app, and/or code or content that is reachable using the app by user actions such as clicks (“in-app content”).
  • Pre-rendering of content is typically performed in an app that has been preloaded and is currently running in the background.
  • Pre-rendering may involve background processing of any suitable kind of UI display, or a portion thereof.
  • pre-rendering may comprise background processing of one or more Android Activities.
  • In the background mode, UI elements associated with the app are not presented to the user on display screen 56 , i.e., are hidden from the user.
  • When the user accesses the app, the user device switches to run the app in a foreground mode that is visible to the user.
  • The terms “background mode” and “foreground mode” are referred to herein simply as “background” and “foreground,” for brevity.
  • system 20 and its various elements shown in FIG. 1 are example configurations, which are chosen purely for the sake of conceptual clarity. In alternative embodiments, any other suitable configurations can be used.
  • the PPP tasks may be implemented entirely in processor 44 of user device 24 , in which case subsystem 60 may be eliminated altogether.
  • PPP agent 48 may be implemented in a software module running on processor 44 , in an application running on processor 44 , in a Software Development Kit (SDK) embedded in an application running on processor 44 , as part of the Operating System (OS) running on processor 44 (possibly added to the OS by the user-device vendor or other party), in a proxy server running on processor 44 , using a combination of two or more of the above, or in any other suitable manner.
  • PPP agent 48 is assumed to be part of the OS of user device 24 .
  • Machine users may comprise, for example, various host systems that use wireless communication, such as in various Internet-of-Things (IoT) applications.
  • the different elements of system 20 may be implemented using suitable software, using suitable hardware, e.g., using one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs), or using a combination of hardware and software elements.
  • Cache 52 may be implemented using one or more memory or storage devices of any suitable type.
  • PPP agent 48 and/or subsystem 60 may be implemented using one or more general-purpose processors, which are programmed in software to carry out the functions described herein.
  • the software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
  • a UI display of an app 26 may comprise various content items, such as text, images, videos, graphics and/or other visual elements, which are laid-out visually in accordance with a specified layout.
  • Rendering of a UI display typically calls for fetching the content items over the network.
  • When pre-rendering a UI display in the background, it is possible, and sometimes beneficial, to limit the extent of network access, e.g., in order to reduce power consumption, cost and/or cellular data usage.
  • To this end, in some embodiments, user device 24 supports an “off-line pre-render” mode that performs pre-rendering without permitting fetching of content over network 32 to the user device.
  • the off-line pre-render mode is also referred to herein as “off-line mode” for brevity.
  • In some embodiments, the app whose content is being pre-rendered, and/or the OS of the user device, is required to support the off-line pre-render mode. In other embodiments, no such support is assumed.
  • the techniques described herein may be performed by PPP agent 48 , by the OS of user device 24 that runs on processor 44 , and/or by apps 26 . Any suitable partitioning (“division of labor”) between the PPP agent, the OS and the apps can be used. In some embodiments, the actual pre-rendering and rendering of content is performed by the apps.
  • the PPP agent is configured to trigger the apps, e.g., to notify an app that off-line pre-rendering is being used.
  • the PPP agent is implemented as a component of the OS, and both the PPP agent and the apps are orchestrated by the OS. This partitioning, however, is chosen purely by way of example. For clarity, some of the description that follows refers simply to processor 44 as carrying out the disclosed techniques.
  • the off-line pre-render mode can be applied to any app in user device 24 , to all apps, to a selected subset of one or more apps, or otherwise.
  • For apps that already support an off-line user experience, off-line pre-rendering allows leveraging these existing features to integrate pre-rendering functions more easily. In such apps, much of the value of on-line pre-rendering (having no restrictions on network access) can be retained at a considerably lower cost. For apps that do not already have an off-line user experience, the end result of having a good off-line user experience provides an added incentive for integration of pre-rendering.
  • Before the off-line pre-render mode is enabled for a given app 26 , the app is required to declare to agent 48 , e.g., via a suitable Application Programming Interface (API), that it supports the off-line pre-render mode.
  • Supporting off-line pre-rendering is beneficial to apps, for example since it allows preloading to be scheduled more often (in comparison with on-line pre-render in which network requests are allowed) and regardless of network conditions.
  • agent 48 intercepts requests from the app to fetch content over the network, and declines the requests if the app is being pre-rendered in the off-line mode. For example, agent 48 may respond to such requests with a response indicating that the network is unavailable, e.g., a “not connected” response. Alternatively, agent 48 may use any other suitable technique for blocking content requests from accessing the network.
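  • A minimal sketch of one possible blocking point, assuming the app's HTTP traffic goes through OkHttp: an interceptor consults a hook into the agent (isOfflinePreRendering, a hypothetical callback) and answers requests with a synthetic “network unavailable” response instead of letting them reach the network.
```kotlin
import okhttp3.Interceptor
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Protocol
import okhttp3.Response
import okhttp3.ResponseBody.Companion.toResponseBody

// Sketch only: declines requests while the display is being pre-rendered off-line.
class OfflinePreRenderInterceptor(
    private val isOfflinePreRendering: () -> Boolean   // assumed hook into the PPP agent
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        if (!isOfflinePreRendering()) {
            return chain.proceed(request)               // normal (on-line) behavior
        }
        // Decline the request with a response indicating the network is unavailable.
        return Response.Builder()
            .request(request)
            .protocol(Protocol.HTTP_1_1)
            .code(503)
            .message("network unavailable (off-line pre-render)")
            .body("".toResponseBody("text/plain".toMediaType()))
            .build()
    }
}
```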
  • In the off-line mode, the app may pre-render content items that are cached locally in cache 52 instead of fetching them over the network, provided that cache 52 indeed contains copies of the content items.
  • In some embodiments, in the off-line mode the app pre-renders cached content items even if they are known to be stale, i.e., not fresh. This is in contrast to on-line pre-rendering, in which the app will typically fetch requested content over the network if the cached content is not fresh.
  • In other embodiments, in the off-line mode the app will not pre-render a cached content item if it is staler than a predefined extent, e.g., older than a predefined age.
  • In the off-line mode, the app will pre-render a “placeholder” content item instead of the actual content item that is specified for display in the UI display.
  • This technique may be used, for example, if the app requests a content item that is not available locally (or is staler than a predefined extent) in cache 52 .
  • the placeholder content item is typically an empty content item that has a similar layout (e.g., similar dimensions) to the actual content item.
  • At a later stage, the app may fetch and insert the actual content item seamlessly in-place into the pre-rendered UI display.
  • In some embodiments, agent 48 notifies the app that its UI display is being pre-rendered in accordance with the off-line mode, and therefore network requests are expected to fail or are not expected.
  • the app may query agent 48 , e.g., via a suitable API, whether a UI display of the app is being pre-rendered and using which mode.
  • When using such an API, the app performs both off-line pre-rendering and on-line pre-rendering (and possibly also normal rendering in the foreground) using the same launch code, but the execution of this launch code differs between the different modes.
  • In the off-line mode, for example, the app's launch code loads and pre-renders only the parts of the UI display that do not require any network resources.
  • the app starts running in the background but stops its progress at a point before it requires network access.
  • the app may not be fully operational in this state, and may require more processing to become usable. Processing is resumed once the user accesses the app. This technique considerably reduces the latency from the time the user accesses the app to the point the UI display is fully presented and operational, and at the same time does not require network access while in the background.
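  • A sketch of such launch code is shown below. PppAgent.queryPreRenderMode() and the PreRenderMode values are illustrative stand-ins for the query API described above, not a real SDK; loadCachedParts() and loadNetworkParts() represent the app's own loading routines.
```kotlin
// Illustrative stand-in for the agent's query API; not a real SDK.
enum class PreRenderMode { NONE, OFFLINE, ONLINE }

object PppAgent {
    fun queryPreRenderMode(): PreRenderMode = PreRenderMode.NONE   // stubbed for the sketch
}

// Single launch path whose execution differs between the modes.
fun launchUiDisplay(loadCachedParts: () -> Unit, loadNetworkParts: () -> Unit) {
    loadCachedParts()                                   // always possible without the network
    when (PppAgent.queryPreRenderMode()) {
        PreRenderMode.OFFLINE -> {
            // Off-line pre-render: stop before any part that needs network access.
        }
        PreRenderMode.ONLINE, PreRenderMode.NONE -> loadNetworkParts()
    }
}
```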
  • PPP agent 48 penalizes (“punishes”) an app 26 that initiates network requests while in the off-line pre-render mode.
  • Penalizing an app may comprise, for example, disallowing or reducing the rate of subsequent off-line pre-rendering, since the app's network requests indicate that the app behaves improperly in the off-line mode.
  • FIG. 2 is a flow chart that schematically illustrates a method for content pre-rendering using the off-line pre-render mode, in accordance with an embodiment of the present invention. It is noted that the flow of FIG. 2 is depicted purely by way of example, in order to demonstrate certain features relating to off-line pre-rendering. In alternative embodiments, the off-line pre-render mode can be implemented and utilized in any other suitable way.
  • the method of FIG. 2 begins with PPP agent 48 notifying an app 26 that its UI display is to be pre-rendered using the off-line pre-rendering mode, at a notification step 80 .
  • Next, the OS loads the app launch code and runs the app in a background mode. Since the app was notified of the off-line pre-rendering mode, the app pre-renders its UI display using locally-cached content only.
  • At some point, the user may access the app.
  • In response, the PPP agent switches to run the app in the foreground, so that the UI display is presented to the user on display screen 56 , at a foreground switching step 88 .
  • At this stage, the app's UI display is typically incomplete, since it comprises only elements that were obtainable off-line.
  • the app then refreshes any missing or stale content in the UI display over network 32 , at a refreshing step 92 .
  • the refreshing operation can be performed entirely by the app or it can be assisted by agent 48 .
  • Typically, content items that are fetched after off-line pre-rendering is completed, or even after the pre-rendered UI display is moved to the foreground, can be inserted by the app into the UI display with minimal visual impact.
  • For example, content items may be inserted without complete refreshing or re-rendering of the entire UI display, i.e., requiring refreshing of only the content items being replaced.
  • In some embodiments, all pre-rendering in user device 24 is performed using the off-line mode.
  • In other embodiments, additionally or alternatively to the off-line mode, the user device supports an “on-line pre-render” mode (also referred to as “on-line mode” for brevity) in which fetching of content over network 32 for a preloaded app 26 is allowed.
  • Before the on-line pre-render mode is enabled for a given app 26 , the app is required to declare to PPP agent 48 , e.g., via a suitable API, that it supports the on-line pre-render mode.
  • An app may use this API, for example, to declare that it can cope with the server load associated with on-line pre-rendering.
  • agent 48 may learn the data usage pattern of the app, and use this information to decide whether and when to use the on-line pre-rendering mode.
  • agent 48 may monitor the data usage of the app in real-time, and revert to off-line pre-rendering if the amount of data fetched over the network is too large, e.g., larger than a threshold. In such a case, agent 48 may also avoid further pre-rendering, and/or notify the app developer that the app is not behaving properly when being pre-rendered.
  • PPP agent 48 chooses between the two modes considering the lower cost of off-line pre-rendering and the potential for better user experience offered by on-line pre-rendering.
  • The choice may be based, for example, on current and/or expected network conditions, learned usage patterns of the user, hints from the app regarding expected usage patterns provided to agent 48 via an API, user device limitations, a maximal permitted rate of preloading/pre-rendering (as detailed below), maximal permitted content staleness (as detailed below), and the like.
  • User-device limitations may comprise, for example, battery state, whether the device is connected to an external power supply, data saving modes, RAM size, Flash size and/or data budget.
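  • The following sketch shows one way such a selection criterion could be evaluated; every field and threshold in it is an assumption made for the example rather than a value taken from the disclosure.
```kotlin
// Hypothetical snapshot of the factors listed above.
data class PreRenderContext(
    val onUnmeteredNetwork: Boolean,
    val charging: Boolean,
    val batteryPercent: Int,
    val dataSaverEnabled: Boolean,
    val onlinePreRendersUsedToday: Int,
    val maxOnlinePreRendersPerDay: Int
)

/** Returns true to pre-render on-line; false to fall back to the off-line mode. */
fun useOnlinePreRender(ctx: PreRenderContext): Boolean =
    ctx.onlinePreRendersUsedToday < ctx.maxOnlinePreRendersPerDay &&
        !ctx.dataSaverEnabled &&
        ctx.onUnmeteredNetwork &&
        (ctx.charging || ctx.batteryPercent > 30)
```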
  • the app may specify a maximum allowed rate of pre-rendering for the off-line mode, for the on-line mode, or for both.
  • One motivation for this feature is to ensure that pre-rendering operations do not overload the app's servers (e.g., portals 28 ).
  • For example, the app may specify a limit of two on-line pre-rendering operations per day, or one off-line pre-rendering operation and one on-line pre-rendering operation for every activation of the app by the user.
  • the app may specify a maximal permitted content staleness for the off-line mode, for the on-line mode, or for both.
  • One motivation for this feature is to ensure that the pre-rendered content that the user can see is not older than a given maximal staleness threshold.
  • the OS enforces the maximal permitted content staleness by either destroying the preloaded content or by triggering a refresh of this content (e.g., pre-rendering again) upon reaching the specified maximal staleness.
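  • A minimal sketch of that enforcement, assuming the OS records when the display was pre-rendered and exposes two hypothetical callbacks (triggerRefresh, destroyPreloadedDisplay):
```kotlin
import java.util.concurrent.TimeUnit

// Sketch only; callback names are hypothetical.
fun enforceMaxStaleness(
    renderedAtMillis: Long,
    maxStalenessMinutes: Long,
    preferRefresh: Boolean,
    triggerRefresh: () -> Unit,          // e.g., pre-render the display again
    destroyPreloadedDisplay: () -> Unit  // alternatively, discard the stale pre-render
) {
    val ageMillis = System.currentTimeMillis() - renderedAtMillis
    if (ageMillis >= TimeUnit.MINUTES.toMillis(maxStalenessMinutes)) {
        if (preferRefresh) triggerRefresh() else destroyPreloadedDisplay()
    }
}
```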
  • A content item, e.g., a Web page or app feed, often links to one or more additional (“nested”) content items, each of which may link to one or more yet additional content items, and so on.
  • Fetching of such nested content items for an app 26 can be viewed as a “parse chain”: Typically, the app receives the content item, parses it, discovers one or more additional content items within the parsed content item, requests to fetch the discovered content items, and so on. The app may continue this process until no more nested content items are found.
  • When executing a parse chain, PPP agent 48 controls the fetching of content items for the requesting app 26 .
  • the PPP agent may fetch the content items itself, or it may decide and instruct the app whether or not to fetch certain content.
  • agent 48 decides, for each additional content item being identified as part of the parse chain, whether to fetch the content item over the network or not. If the decision is not to fetch a content item, the app's network request is ignored. In such an event, the PPP agent would typically notify the app that the network request has failed, or that the network request will not be served.
  • Agent 48 may use various criteria for deciding whether an additional content item should be fetched or not, and in particular to decide whether to terminate the parse chain entirely.
  • agent 48 may use the selective fetching of content items as a “hybrid pre-rendering” mode. In this mode, when agent 48 permits app 26 to fetch an additional content item that was identified by the app as part of the parse chain, the app also pre-renders the additional content item in the background. When agent 48 decides not to fetch an additional content item, this content item will not be pre-rendered (and will typically be fetched and rendered only once the user accesses the app and the UI display is moved to the foreground). Executing such a prefetch parse chain during pre-rendering is useful, for example, for reducing costs such as integration effort, data usage and battery drain.
  • the app is required to declare that it supports the hybrid pre-rendering mode before the mode becomes available. Once declared, agent 48 uses the hybrid pre-rendering mode instead of non-selective on-line pre-rendering.
  • Another use-case relates to prefetching, but not in the context of pre-rendering.
  • Consider, for example, an app that runs in the background, for which agent 48 prefetches content but does not perform pre-rendering.
  • This use-case is useful, for example, if pre-rendering is considered too costly or is otherwise not desired or not feasible. Pre-rendering may be unavailable, for example, when agent 48 is implemented as part of an app on an OS that does not allow pre-rendering.
  • agent 48 may handle a parse chain by intercepting some or all of the network calls that occur during pre-rendering or prefetching. Such network calls are treated as optional requests to be prefetched only under certain criteria, e.g., criteria relating to efficiency or safety. Network calls that are not intercepted by agent 48 , if any, may be allowed to reach the network or may be blocked, as appropriate.
  • PPP agent 48 implements a parse chain by providing a suitable API to the app.
  • the app uses this API for sending requests to agent 48 for prefetching content items (instead of the app issuing network calls directly).
  • Benefits of the disclosed parse-chain handling scheme include, for example:
  • When carrying out the disclosed parse-chain technique, some of the app's network requests may be ignored by agent 48 . Therefore, in some embodiments the app is expected to handle ignored network requests correctly, as if the app were run in an off-line pre-render mode, for example by obtaining the requested content from cache 52 .
  • For example, the app may choose to continue the parse chain by parsing a content item cached in cache 52 , such as images that are linked through a feed JSON.
  • In some embodiments, the app provides agent 48 with information relating to a content item, and agent 48 takes this information into account in deciding whether or not to download the content item.
  • The information provided by the app may indicate, by way of example:
  • FIG. 3 is a flow chart that schematically illustrates a method for content prefetching using a prefetch parse chain, in accordance with an embodiment of the present invention.
  • the method flow is described in the context of prefetching (without pre-rendering), by way of example.
  • pre-rendering can be implemented in a similar manner as explained above.
  • the method flow describes a process that terminates the entire parse chain after prefetching some of the content items. This flow, however, is not mandatory.
  • Alternatively, agent 48 may decide not to prefetch certain content items, but nevertheless proceed with execution of the parse chain, and may subsequently decide to prefetch other content items that are discovered later.
  • the method of FIG. 3 begins with an app 26 , e.g., an app that was preloaded and is currently running in a background mode in user device 24 , beginning to prefetch content, at a prefetch initiation step 120 .
  • PPP agent 48 receives from the app a request to prefetch one or more content items.
  • agent 48 initializes a prefetch parse chain for the app.
  • At a downloading step 132 , agent 48 downloads the requested content item(s) over network 32 and delivers the content item(s) to app 26 (or alternatively permits the app to download the requested content item(s)).
  • the app parses the content item(s), at a parsing step 136 .
  • At a checking step 140 , the app checks whether the content item(s) delivered at step 132 link to additional content item(s) to be downloaded. If so, the app requests agent 48 to prefetch the additional content item(s), at an additional requesting step 144 .
  • Agent 48 evaluates a criterion that decides whether to approve or decline the app's request for additional content item(s), at a prefetching evaluation step 148 . (In some embodiments the criterion can be evaluated for the initial content item(s) requested by the app, as well.) If the criterion indicates that the additional content item(s) are approved for prefetching, the method loops back to step 132 above, in which agent 48 and/or app 26 downloads the additional content item(s), i.e., executes the next prefetch stage of the parse chain. If the criterion indicates that the additional content item(s) are not approved for prefetching, agent 48 terminates the parse chain, at a termination step 152 . The method also jumps to step 152 upon finding, at step 140 , that no additional content items are to be prefetched in the present parse chain.
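  • The sketch below mirrors this loop. The ItemParser callback (the app's parsing of fetched items) and the PrefetchCriterion callback (the approval decision at step 148) are assumptions introduced for the example; the step numbers in the comments refer to FIG. 3.
```kotlin
// Hypothetical callbacks standing in for the app and for agent 48's criterion.
fun interface ItemParser { fun findLinkedItems(item: ByteArray): List<String> }   // steps 136-140
fun interface PrefetchCriterion { fun approve(urls: List<String>): Boolean }      // step 148

fun runParseChain(
    initialUrls: List<String>,            // initial prefetch request from the app
    download: (String) -> ByteArray,      // step 132: download and deliver to the app
    parser: ItemParser,
    criterion: PrefetchCriterion
) {
    var pending = initialUrls
    while (pending.isNotEmpty()) {
        val fetched = pending.map(download)                        // next prefetch stage
        val discovered = fetched.flatMap(parser::findLinkedItems)  // nested items, if any
        if (discovered.isEmpty() || !criterion.approve(discovered)) {
            return                                                 // step 152: terminate the chain
        }
        pending = discovered
    }
}
```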
  • agent 48 and/or the app refreshes any missing or stale content over the network.
  • the OS of user device 24 executes multiple UI tasks for the various apps 26 .
  • Each UI task specifies an action that processes a certain UI display of an app. Some UI tasks may modify the UI display directly, whereas other UI tasks do not directly modify the UI display but are prerequisite to such modification or are synced to modification of the UI display.
  • In the Android OS, for example, UI displays are referred to as Views or Activities.
  • a UI display is also referred to herein as a “scenario”. Some UI tasks may originate from user actions, whereas other UI tasks may relate to background pre-rendering.
  • the OS runs a single UI thread per app 26 .
  • Consider, for example, a scenario in which a user performs actions that affect a UI display of a certain app, while another UI display of the same app is being pre-rendered in the background.
  • the UI tasks derived from the user's actions and the UI tasks relating to pre-rendering all compete for the resources of the same single UI thread.
  • the pre-rendering UI tasks might cause the app to appear non-responsive to the user's actions.
  • Another challenge encountered in the above situation is the need to retain in-order execution of UI tasks associated with a given UI display.
  • Consider, for example, a situation in which one or more pre-rendering UI tasks for a given UI display are pending for execution, and then the user performs an action that modifies the same UI display.
  • Although the user's action is in the foreground, and foreground tasks are more latency-sensitive than the background pre-rendering tasks, it would be incorrect to execute the user's UI tasks before the pending pre-rendering tasks.
  • the in-order requirement holds for UI tasks associated with the same UI display, but UI tasks of different UI displays are allowed to be handled out-of-order.
  • PPP agent 48 in user device 24 overcomes such challenges by correctly prioritizing and scheduling the various UI tasks.
  • PPP agent 48 assigns to each UI task a priority selected from at least a Foreground (FG) priority and a Background (BG) priority.
  • PPP agent 48 associates each UI task with the UI display (“scenario”) being processed (e.g., modified or prepared) by this UI task.
  • PPP agent 48 then schedules the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display.
  • the PPP agent does not enforce any constraints as to the order of execution of UI tasks associated with different UI displays (other than, of course, giving precedence to FG tasks over BG tasks).
  • the UI tasks are then executed in accordance with the schedule.
  • In order to retain in-order execution of the UI tasks of a given UI display, PPP agent 48 applies a “promotion” mechanism that promotes selected UI tasks from the BG priority to the FG priority.
  • agent 48 In response to creation of a new UI task having the FG priority (e.g., derived from a user action), agent 48 identifies all UI tasks that (i) are associated with the same UI display as the new UI task and (ii) have the BG priority, and promotes the identified UI tasks to the FG priority. Agent 48 then schedules the promoted UI tasks to be executed before the new UI task. Agent 48 also retains the original order of execution among the promoted UI tasks.
  • a UI task may be associated with multiple UI displays, in which case agent 48 promotes the UI task if a new FG-priority UI task is added in any of these multiple UI displays.
  • PPP agent 48 represents the various UI tasks as messages.
  • The terms “UI task” and “message” are used interchangeably.
  • a single user action, or a single UI change by the app or the user device in general, may be translated into several smaller UI tasks, e.g., drawing a portion of the screen, invoking an app callback or creating a new view or screen.
  • Each such UI task is represented by a message.
  • Agent 48 queues the messages in a suitable queue structure, and schedules the queued messages for execution by the UI thread.
  • Any suitable queue structure can be used for queuing the messages, e.g., a priority queue, a set of queues with each queue holding the messages of a respective priority, a multi-threaded UI environment with each thread assigned to handle messages of a respective priority, or even a multi-process structure in which different processes handle different priorities.
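  • The following Kotlin sketch illustrates one such queue structure under the two-priority scheme: a single pending list, FG-before-BG selection, and the promotion step applied when a new FG message arrives. UiMessage and UiMessageQueue are names invented for the example; they are not an OS or Android API.
```kotlin
// Sketch only; types and names are illustrative, not a real OS API.
enum class Priority { BG, FG }

class UiMessage(
    val display: String,        // the UI display ("scenario") this task processes
    var priority: Priority,
    val run: () -> Unit
)

class UiMessageQueue {
    private val pending = ArrayDeque<UiMessage>()   // preserves submission order

    @Synchronized
    fun post(msg: UiMessage) {
        if (msg.priority == Priority.FG) {
            // Promotion: pending BG messages of the same display must still execute
            // before the new FG message, in their original order.
            pending.filter { it.display == msg.display && it.priority == Priority.BG }
                .forEach { it.priority = Priority.FG }
        }
        pending.addLast(msg)
    }

    /** Next message to run: earliest FG message if any exists, else earliest BG message. */
    @Synchronized
    fun next(): UiMessage? {
        val candidate = pending.firstOrNull { it.priority == Priority.FG } ?: pending.firstOrNull()
        if (candidate != null) pending.remove(candidate)
        return candidate
    }
}
```
  • A loop on the single UI thread would repeatedly call next() and invoke run(); because a newly posted FG message first promotes the pending BG messages of its own display, per-display ordering is preserved while unrelated background work can still be overtaken by foreground work.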
  • agent 48 starts a predefined time-out (denoted T1) following the execution of a foreground message, and ensures that background messages are only handled if no foreground messages have been handled for T1 seconds.
  • agent 48 may schedule messages to be handled at a specified time in the future, and halt the handling of background messages if a foreground message is scheduled to be handled in the near future.
  • The time interval considered “near future” in this context may be constant (denoted T2), or may be based on an estimation of the expected running time of the background message in question. For example, if a foreground message is scheduled to start being handled in three seconds, then a background message whose handling is expected to take two seconds will be allowed to proceed, while a background message whose handling is expected to take four seconds will not be allowed to proceed at this time.
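  • A sketch of these two timing rules, assuming millisecond values for T1 and T2 and an optional per-message running-time estimate (all names here are illustrative):
```kotlin
// Sketch only; values and callbacks are assumptions for the example.
class BgThrottle(
    private val t1Millis: Long,                          // cool-down after handling a FG message
    private val t2Millis: Long,                          // default "near future" window
    private val now: () -> Long = System::currentTimeMillis
) {
    private var lastFgHandledAt = Long.MIN_VALUE / 2     // "long ago"

    fun onFgMessageHandled() { lastFgHandledAt = now() }

    /** A BG message may start only if no FG message was handled in the last T1 ms ... */
    fun mayStartBgMessage(nextFgScheduledAt: Long?, estimatedBgRunMillis: Long?): Boolean {
        if (now() - lastFgHandledAt < t1Millis) return false
        // ... and no FG message is scheduled to start in the "near future". The window is
        // either the constant T2 or the estimated running time of this BG message.
        if (nextFgScheduledAt == null) return true
        val window = estimatedBgRunMillis ?: t2Millis
        return nextFgScheduledAt - now() > window
    }
}
```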
  • agent 48 may delete a background message if its associated UI display becomes no longer relevant. For example, in the Android OS, when an Activity is destroyed, if a UI display based on that Activity exists, agent 48 deletes all messages associated with this UI display.
  • UI tasks may be assigned priorities in various ways. For example, the app developer may specify the priority of each UI task or message to be queued. As another example, the app developer may specify the priority of a certain UI display that needs to be processed. In this case, messages associated with this display will receive the specified priority. In some embodiments the priority may be assigned automatically by agent 48 . For example, tasks that are independent of immediate needs of the user may be assigned BG priority.
  • In some embodiments, specific UI components may be modified to take advantage of the priority system, with some or all of their tasks assigned the BG priority.
  • Consider, for example, a UI component that holds multiple tabs that the user may browse, such as the Android ViewPager.
  • Such a component may load multiple tabs together, assign FG priority only to the visible tab, and BG priority to adjacent tabs. This assignment helps provide the fastest response to the user while still loading adjacent tabs ahead of time.
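  • A sketch of that assignment, reusing the Priority values from the queue sketch above; assignPriority is a hypothetical hook into the agent's priority system, not a real ViewPager API.
```kotlin
// Sketch only; assignPriority is a hypothetical hook, not a ViewPager API.
fun prioritizePagerTabs(
    tabCount: Int,
    visibleTab: Int,
    assignPriority: (tab: Int, priority: Priority) -> Unit
) {
    for (tab in 0 until tabCount) {
        when (tab) {
            visibleTab -> assignPriority(tab, Priority.FG)                      // fastest response for the visible tab
            visibleTab - 1, visibleTab + 1 -> assignPriority(tab, Priority.BG)  // load adjacent tabs ahead of time
            // other tabs are not loaded as part of this pass
        }
    }
}
```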
  • agent 48 may automatically assign BG priority to views that are drawn but not currently visible, for example views that are “below-the-fold” and require scrolling to become visible.
  • agent 48 creates UI displays (e.g., views or Android Activities) predictively before the user requires them, and sets the priorities of their UI tasks to BG priority.
  • agent 48 assigns more than two priorities. For example, non-visible views (i.e., views that are “below-the-fold”) of a foreground activity may have higher priority than views of other pre-rendered activities, but lower priority than the view which is visible.
  • The promotion among multiple priorities may be defined such that when a message with a specific priority P and UI display S is sent, all messages of UI display S with priorities that are lower than P are promoted to priority P.
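  • Under this generalization, the promotion step from the earlier sketch becomes a comparison of numeric priority levels; RankedMessage is again a hypothetical type introduced for the example.
```kotlin
// Sketch only; higher number = higher priority.
data class RankedMessage(val display: String, var priority: Int, val run: () -> Unit)

/** When a message with priority p arrives for display s, raise every queued
 *  message of s whose priority is lower than p up to p. */
fun promoteOnArrival(queued: List<RankedMessage>, s: String, p: Int) {
    queued.filter { it.display == s && it.priority < p }
        .forEach { it.priority = p }
}
```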
  • UI tasks may be associated with UI displays (“scenarios”) in various ways.
  • PPP agent 48 associates UI tasks with UI displays automatically.
  • example associations may comprise:
  • In some embodiments, the app developer may choose which UI display is associated with each message. In such embodiments, if agent 48 also automatically associates messages with UI displays, the developer's choice may override the choice of agent 48 .
  • agent 48 may modify pre-rendering related UI tasks to take into account that the results are not immediately visible to the user. For example, agent 48 may reduce the frame-rate for background UI tasks, or may avoid running animations, jumping directly to the end result of the animation. Such manipulation is done to reduce the load introduced by background tasks, while also completing the background tasks faster. In some embodiments, agent 48 may split UI tasks into smaller portions, so that handling each message may be quick, allowing greater control over message scheduling.
  • In some embodiments, if a FG message is created while a BG message is being handled, agent 48 allows the running BG message to complete before handling the FG message. In other embodiments, agent 48 pauses handling of the BG message, saves the state of the paused message, then handles the FG message, and then resumes handling of the paused BG message from the saved state. In yet other embodiments, agent 48 stops handling the BG message and reverses its effects, then handles the FG message, and then retries handling the BG message from the beginning. To this end, agent 48 may use a transaction system for messages, so that the effects of a message will not persist unless the message is handled completely and committed.
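  • The last option can be sketched as a simple transaction wrapper: the BG message records its UI effects instead of applying them immediately, and the effects are committed only if the message ran to completion without a FG message becoming pending. MessageTransaction and the fgMessagePending hook are assumptions for the example; a real implementation would also check for preemption during handling, not only at the end.
```kotlin
// Sketch only; types and hooks are hypothetical.
class MessageTransaction {
    private val effects = mutableListOf<() -> Unit>()   // deferred UI mutations
    fun record(effect: () -> Unit) { effects += effect }
    fun commit() { effects.forEach { it() }; effects.clear() }
    fun abort() { effects.clear() }                      // discard; the message can be retried later
}

/** Returns true if the BG message was committed, false if it must be retried. */
fun handleBgMessageTransactionally(
    body: (MessageTransaction) -> Unit,
    fgMessagePending: () -> Boolean
): Boolean {
    val tx = MessageTransaction()
    body(tx)                        // run the BG message, recording its effects into tx
    return if (fgMessagePending()) {
        tx.abort()                  // effects never persisted; retry from the beginning later
        false
    } else {
        tx.commit()                 // effects persist only on complete, committed handling
        true
    }
}
```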
  • FIG. 4 is a flow chart that schematically illustrates a method for handling background and foreground UI tasks using a single UI thread, in accordance with an embodiment of the present invention.
  • the method begins with PPP agent 48 receiving a new UI task, at a task input step 160 .
  • Agent 48 represents the new UI task as a message, at a message representation step 164 , and associates the message with a UI display (a “scenario”, e.g., an Android Activity), at a scenario assignment step 168 .
  • agent 48 checks whether the message relates to a user action or to a pre-rendering operation. If the message relates to pre-rendering, agent 48 assigns the message a BG priority and adds the message to the queue structure, at a BG prioritization step 176 . Agent 48 schedules the message for execution, at a scheduling step 180 .
  • Otherwise, i.e., if the message relates to a user action, agent 48 assigns the message a FG priority and adds the message to the queue structure, at a FG prioritization step 184 . Agent 48 then checks whether any of the queued messages are both (i) assigned the BG priority and (ii) associated with the same UI display (“scenario”) as the new FG message. If so, agent 48 promotes these messages to the FG priority, at a promotion step 192 . With or without promotion, agent 48 proceeds to schedule the message at scheduling step 180 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method includes, in a user device configured to execute User Interface (UI) tasks that process one or more UI displays presented to a user, assigning to each UI task among the UI tasks (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task. The UI tasks are scheduled for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display. The UI tasks are executed in accordance with the schedule.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of PCT application PCT/IB2020/057046, filed Jul. 26, 2020, which claims the benefit of U.S. Provisional Patent Application 62/880,092, filed Jul. 30, 2019, and U.S. Provisional Patent Application 62/880,674, filed Jul. 31, 2019. The disclosures of these related applications are incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates generally to communication systems, and particularly to pre-rendering of content in user devices.
  • BACKGROUND OF THE INVENTION
  • In applications (“apps”) that run on user devices such as smartphones, one of the major factors affecting user experience is the latency of the User Interface (UI). Various techniques have been proposed for reducing latency and providing a more responsive UI. Some techniques involve prefetching of content. Other techniques involve background preloading of apps. Yet other techniques involve pre-rendering of an app's UI. Techniques of this sort are described, for example, in PCT International Publication WO 2018/055506, entitled “An Optimized CDN for the Wireless Last Mile,” which is incorporated herein by reference.
  • SUMMARY OF THE INVENTION
  • An embodiment of the present invention that is described herein provides a method including, in a user device that is configured to communicate over a network, preloading an application in a background mode in which content presented by the application is hidden from a user of the user device. At least part of the content presented by the application is pre-rendered in an off-line pre-render mode in which fetching of content over the network to the user device is not permitted. In response to an action by the user that requests to access the application, a switch is made to presenting at least the pre-rendered content to the user in a foreground mode.
  • In some embodiments, pre-rendering in the off-line pre-render mode includes declining network-related requests from the application. Declining the network-related requests may include responding to the network-related requests from the application with a response indicating the network is unavailable.
  • In an embodiment, pre-rendering in the off-line pre-render mode includes rendering at least part of the content from a local cache in the user device. In another embodiment, pre-rendering in the off-line pre-render mode includes notifying the application that pre-rendering is performed in accordance with the off-line pre-render mode. In yet another embodiment, pre-rendering in the off-line pre-render mode includes pre-rendering a placeholder item in place of an actual content item that requires fetching over the network.
  • In still another embodiment, pre-rendering in the off-line pre-render mode includes penalizing the application for requesting to fetch a content item over the network. In an example embodiment, pre-rendering in the off-line pre-render mode includes receiving in the application, via an Application Programming Interface (API), an indication that pre-rendering is performed in accordance with the off-line pre-render mode, and in response running program code that pre-renders the content in accordance with the off-line pre-render mode.
  • In a disclosed embodiment, switching to the foreground mode includes refreshing at least some of the content over the network. In another embodiment, pre-rendering in the off-line pre-render mode is performed in response to an acknowledgement from the application, indicating that the application supports the off-line pre-render mode.
  • In some embodiments, preloading the application includes choosing, based on a criterion, whether to pre-render the content in accordance with the off-line pre-render mode, or in accordance with an on-line pre-render mode in which it is permitted to fetch content over the network to the user device. In an embodiment, pre-rendering in the on-line pre-render mode includes receiving in the application, via an Application Programming Interface (API), an indication that pre-rendering is performed in accordance with the on-line pre-render mode, and in response running program code that pre-renders the content in accordance with the on-line pre-render mode. In another embodiment, choosing the on-line pre-render mode is performed in response to an acknowledgement from the application, indicating that the application supports the on-line pre-render mode. In a disclosed embodiment, the criterion depends on at least one factor selected from (i) a usage pattern of the application, (ii) a condition of the user device, and (iii) a condition of the network.
  • There is additionally provided, in accordance with an embodiment of the present invention, a method including issuing, by an application running in a user device, a request to fetch over the network content that includes multiple content items. The request is received by a software agent running in the user device and, in response to the request, a chain of fetch operations is executed for fetching the requested content. Each of the fetch operations in the chain includes (i) receiving from the application an identification of one or more additional content items identified by the application within a content item fetched in a preceding fetch operation in the chain, (ii) evaluating a criterion, and (iii) deciding, depending on the criterion, whether or not to fetch the one or more additional content items.
  • In some embodiments, the method includes pre-rendering one or more of the content items in a background mode. In some embodiments, issuing the request includes prefetching the content, not in response to a user accessing the content.
  • There is also provided, in accordance with an embodiment of the present invention, a method including, in a user device, which is configured to execute User Interface (UI) tasks that process one or more UI displays presented to a user, assigning to each UI task among the UI tasks (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task. The UI tasks are scheduled for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display. The UI tasks are executed in accordance with the schedule.
  • In some embodiments, one or more of the UI tasks having the BG priority include pre-rendering tasks. In an embodiment, at a given time, the UI tasks include both (i) one or more UI tasks having the BG priority, and (ii) one or more UI tasks having the FG priority that relate to user actions. In a disclosed embodiment, executing the UI tasks is performed by a single UI thread per user application.
  • In some embodiments, assigning the priority includes, in response to addition of a new UI task having the FG priority, identifying one or more UI tasks that (i) are associated with a same UI display as the new UI task and (ii) have the BG priority, and promoting the identified UI tasks to the FG priority. In an example embodiment, scheduling the UI tasks includes scheduling the promoted UI tasks to be executed before the new UI task. In another embodiment, scheduling the UI tasks includes retaining an original order of execution among the promoted UI tasks. In yet another embodiment, scheduling the UI tasks includes permitting out-of-order execution of UI tasks associated with different UI displays.
  • There is further provided, in accordance with an embodiment of the present invention, a user device including an interface for communicating over a network, and a processor. The processor is configured to preload an application in a background mode in which content presented by the application is hidden from a user of the user device, to pre-render at least part of the content presented by the application in an off-line pre-render mode in which fetching of content over the network to the user device is not permitted, and, in response to an action by the user that requests to access the application, to switch to presenting at least the pre-rendered content to the user in a foreground mode.
  • There is also provided, in accordance with an embodiment of the present invention, a user device including an interface for communicating over a network, and a processor. The processor is configured to issue, by an application running on the processor, a request to fetch over the network content that includes multiple content items, to receive the request by a software agent running on the processor and, in response to the request, execute a chain of fetch operations for fetching the requested content, wherein each of the fetch operations in the chain comprises (i) receiving from the application an identification of one or more additional content items identified by the application within a content item fetched in a preceding fetch operation in the chain, (ii) evaluating a criterion, and (iii) deciding, depending on the criterion, whether or not to fetch the one or more additional content items.
  • There is additionally provided, in accordance with an embodiment of the present invention, a user device including an interface for communicating over a network, and a processor. The processor is configured to assign, to each User Interface (UI) task from among multiple UI tasks that process one or more UI displays presented to a user, (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task, to schedule the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display, and to execute the UI tasks in accordance with the schedule.
  • The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram that schematically illustrates a communication system that employs Preloading, Pre-rendering and Prefetching (PPP), in accordance with an embodiment of the present invention;
  • FIG. 2 is a flow chart that schematically illustrates a method for content pre-rendering using an off-line pre-render mode, in accordance with an embodiment of the present invention;
  • FIG. 3 is a flow chart that schematically illustrates a method for content prefetching using a prefetch parse chain, in accordance with an embodiment of the present invention; and
  • FIG. 4 is a flow chart that schematically illustrates a method for handling background and foreground User Interface (UI) tasks using a single UI thread, in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS Overview
  • Embodiments of the present invention that are described herein provide improved methods and systems for Preloading, Pre-rendering and Prefetching (PPP) in user devices. In the present context, the term “preloading” refers to the process of loading, launching and at least partially running an app in a background mode, not in response to (and typically before) invocation of the app by the user. The term “pre-rendering” refers to the process of constructing a UI display of an app in the background mode. The term “UI display” in this context refers to a logical UI element, such as a view or a window. In the Android OS, for example, UI displays are referred to as Views or Activities.
  • Note that pre-rendering may involve UI tasks that modify the UI display, and/or UI tasks that do not directly modify the UI display but are prerequisite to such modification or are synced to modification of the UI display in the background. Thus, for example, initialization or preparatory tasks that are performed when preloading an app and preparing to initialize or modify a UI display of the app are also regarded herein as pre-rendering tasks.
  • The term “prefetching” refers to the process of fetching content over a network, from a content source to a local cache memory of the user device, not in response to an explicit request or access by the user.
  • In some embodiments, as part of preloading an app, the user device pre-renders a UI display of the app in the background. The UI display being pre-rendered may comprise various content items, e.g., text, images, graphics, videos and the like. In some embodiments, the user device pre-renders at least some of the content of the UI display using an “off-line pre-render mode” in which fetching of content over the network to the user device is not permitted.
  • The off-line pre-render mode can be implemented in various ways. For example, instead of fetching content items over the network, the user device may pre-render locally-cached versions of the content items regardless of whether they are fresh or stale, or alternatively only up to a predefined extent of staleness. As another example, the user device may pre-render empty “placeholder” content items having similar dimensions as the actual content items. When the user requests to access the app in question, the user device moves the pre-rendered UI display to the foreground, so as to present the pre-rendered content to the user. At this stage, the user device may refresh missing or stale content items over the network.
  • Some disclosed techniques assume that the app supports the off-line pre-rendering mode. Other disclosed techniques do not make this assumption. For apps that do not support the off-line pre-rendering mode, various ways of enforcing off-line pre-rendering, i.e., preventing apps' network requests from reaching the network, are described.
  • Additionally or alternatively to the off-line pre-render mode, the user device may support an on-line pre-render mode in which fetching of content over the network is permitted. As opposed to on-line pre-rendering, which does not restrict network access, the off-line pre-rendering mode balances user experience with cost. In other words, on-line pre-rendering minimizes the latency of presenting the user a fully operational and relatively fresh UI display, but on the other hand incurs data costs, along with related power/battery drain costs. The off-line pre-rendering mode reduces the data cost for the user device and/or the app server, but may initially present to the user an incomplete or relatively stale UI display for a short period of time. As such, the off-line pre-rendering mode enables a flexible trade-off between latency and cost. Embodiments that combine (e.g., choose between) off-line and on-line pre-rendering are also described.
  • Other disclosed embodiments relate to handling of “parse chains.” In practice, a content item often links to one or more additional (“nested”) content items, each of which may link to one or more yet additional content items, and so on. An app that receives such a content item typically parses it, discovers one or more additional content items within the parsed content item, requests to fetch the discovered content items, and so on. In some embodiments, the user device runs an agent that handles parse chains.
  • In particular, the agent decides, for each additional content item being identified as part of a parse chain, whether to fetch the content item over the network or not. The agent may use various criteria for deciding whether or not to fetch an additional content item, and in particular to decide whether to terminate the parse chain entirely. The disclosed parse-chain handling techniques are useful in various use-cases. Examples relating to pre-rendering and to general prefetching are described. In pre-rendering, the selective fetching of content items in a parse chain is used as a “hybrid pre-rendering” mode. Such a mode is useful, for example, for reducing costs such as integration effort, data usage and battery drain.
  • Yet other disclosed embodiments relate to correct prioritization and scheduling of foreground and background UI tasks that are associated with the same UI display. In some operating systems, for example iOS and Android, such UI tasks are typically handled by a single UI thread per app in the user device. In particular, disclosed techniques address the case of a user performing some action in the foreground with respect to a UI display of an app, while another UI display of the same app is being pre-rendered in the background. Such cases are challenging for several reasons. For example, unless handled properly, pre-rendering UI tasks might cause the app to appear non-responsive to the user's actions. As another example, since UI tasks of a given UI display should be handled in-order, it would be incorrect to execute the user-action-related UI tasks before any pending pre-rendering-related UI tasks. As noted above, pre-rendering UI tasks may comprise tasks that modify the UI display directly, and/or initialization or prerequisite tasks that do not directly modify the UI display.
  • In some embodiments, an agent running in the user device overcomes such challenges by proper prioritization and scheduling of the UI tasks. As explained in detail herein, the agent assigns each UI task a priority selected from at least a Foreground (FG) priority and a Background (BG) priority. In addition, the agent associates each UI task with the UI display (also referred to as “scenario”) processed by this UI task. The agent schedules the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display. The UI tasks are then executed in accordance with the schedule.
  • In some embodiments, in order to retain in-order execution of the UI tasks of a given UI display, the agent applies a “promotion” mechanism that promotes selected UI tasks from the BG priority to the FG priority. In response to creation of a new UI task having the FG priority (e.g., a UI task derived from a user action), the agent identifies all UI tasks that are both (i) associated with the same UI display as the new UI task and (ii) assigned the BG priority, and promotes the identified UI tasks to the FG priority. The agent schedules the promoted UI tasks to be executed before the new UI task, and also retains the original order of execution among the promoted UI tasks.
  • System Description
  • FIG. 1 is a block diagram that schematically illustrates a communication system 20 that employs Preloading, Pre-rendering and Prefetching (PPP), in accordance with an embodiment of the present invention.
  • System 20 comprises a user device 24, which runs one or more user applications (“apps”) 26. Device 24 may comprise any suitable wireless or wireline device, such as, for example, a cellular phone or smartphone, a wireless-enabled laptop or tablet computer, a desktop personal computer, a video gaming console, a smart TV, a wearable device, an automotive user device, or any other suitable type of user device that is capable of communicating over a network and presenting content to a user. The figure shows a single user device 24 for the sake of clarity. Real-life systems typically comprise a large number of user devices of various kinds.
  • In the present context, the terms “user application,” “application” and “app” are used interchangeably, and refer to any suitable computer program that runs on the user device and may be invoked (activated) by the user. Some apps 26 may be dedicated, special-purpose applications such as game apps. Other apps 26 may be general-purpose applications such as Web browsers.
  • In some embodiments, although not necessarily, apps 26 are provided by and/or communicate with one or more network-side servers, e.g., portals 28, over a network 32. Network 32 may comprise, for example a Wide Area Network (WAN) such as the Internet, a Local Area Network (LAN), a wireless network such as a cellular network or Wireless LAN (WLAN), or any other suitable network or combination of networks.
  • In the present example, user device 24 comprises a processor 44 that carries out the various processing tasks of the user device. Among other tasks, processor 44 runs apps 26, and also runs a software component referred to as a Preload/Pre-render/Prefetch (PPP) agent 48, which handles preloading of apps, content pre-rendering and/or content prefetching. Apps 26 and PPP agent 48 are drawn schematically inside processor 44, to indicate that they comprise software running on the processor.
  • In addition, user device 24 comprises a Non-Volatile Memory (NVM) 54, e.g., a Flash memory. NVM 54 may serve, inter alia, for storing a cache memory 52 for caching content associated with apps. In some embodiments the user device uses a single cache 52. In other embodiments, also depicted schematically in the figure, a separate cache memory 52 may be defined per app. Hybrid implementations, in which part of cache 52 is centralized and some is app-specific, are also possible. For clarity, the description that follows will refer simply to “cache 52”, meaning any suitable cache configuration.
  • User device 24 further comprises a display screen 56 for presenting visual content to the user, and a suitable network interface (not shown in the figure) for connecting to network 32. This network interface may be wired (e.g., an Ethernet Network Interface Controller—NIC) or wireless (e.g., a cellular modem or a Wi-Fi modem). Typically, user device 24 further comprises some internal memory, e.g., Random Access Memory (RAM)—not shown in the figure—that is used for storing relevant code and/or data.
  • In the example embodiment of FIG. 1, although not necessarily, system 20 further comprises a PPP subsystem 60 that performs preloading, pre-rendering and/or prefetching tasks on the network side. Subsystem 60 comprises a network interface 64 for communicating over network 32, and a processor 68 that carries out the various processing tasks of the PPP subsystem. In the present example, processor 68 runs a PPP control unit 72 that carries out network-side PPP tasks. Network-side PPP tasks may comprise, for example, deciding which apps to preload and when, or choosing whether and which in-app content to preload, deciding how much of an app component to preload (e.g., only executing some initial executable code, or pre-rendering of the app's user interface), to name just a few examples. In an embodiment, PPP subsystem 60 may be implemented as a cloud-based application.
  • In the embodiments described herein, for the sake of clarity, the PPP tasks are described as being carried out by processor 44 of user device 24. Generally, however, PPP tasks may be carried out by processor 44 of device 24, by processor 68 of subsystem 60, or both. Thus, any reference to “processor” below may refer, in various embodiments, to processor 44, processor 68, or both.
  • Preloading an app 26 may involve preloading any app element such as executable code associated with the app, e.g., launch code, app feed, app landing page, various UI elements associated with the app, content associated with the app, app data associated with the app, and/or code or content that is reachable using the app by user actions such as clicks (“in-app content”). Pre-rendering of content is typically performed in an app that has been preloaded and is currently running in the background. Pre-rendering may involve background processing of any suitable kind of UI display, or a portion thereof. In Android terminology, for example, pre-rendering may comprise background processing of one or more Android Activities. In the background mode, UI elements associated with the app are not presented to the user on display screen 56, i.e., are hidden from the user. When the user invokes a previously-preloaded app, the user device switches to run the app in a foreground mode that is visible to the user. (The terms “background mode” and “foreground mode” are referred to herein simply as “background” and “foreground,” for brevity.)
  • The configurations of system 20 and its various elements shown in FIG. 1 are example configurations, which are chosen purely for the sake of conceptual clarity. In alternative embodiments, any other suitable configurations can be used. For example, in some embodiments the PPP tasks may be implemented entirely in processor 44 of user device 24, in which case subsystem 60 may be eliminated altogether.
  • PPP agent 48 may be implemented in a software module running on processor 44, in an application running on processor 44, in a Software Development Kit (SDK) embedded in an application running on processor 44, as part of the Operating System (OS) running on processor 44 (possibly added to the OS by the user-device vendor or other party), in a proxy server running on processor 44, using a combination of two or more of the above, or in any other suitable manner. In most of the description that follows, PPP agent 48 is assumed to be part of the OS of user device 24.
  • Although the embodiments described herein refer mainly to human users, the term “user” refers to machine users, as well. Machine users may comprise, for example, various host systems that use wireless communication, such as in various Internet-of-Things (IoT) applications.
  • The different elements of system 20 may be implemented using suitable software, using suitable hardware, e.g., using one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs), or using a combination of hardware and software elements. Cache 52 may be implemented using one or more memory or storage devices of any suitable type. In some embodiments, PPP agent 48 and/or subsystem 60 may be implemented using one or more general-purpose processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
  • Content Pre-Rendering Using Off-Line Pre-Render Mode
  • A UI display of an app 26 may comprise various content items, such as text, images, videos, graphics and/or other visual elements, which are laid-out visually in accordance with a specified layout. Rendering of a UI display typically calls for fetching the content items over the network. When pre-rendering a UI display in the background, however, it is possible, and sometimes beneficial, to limit the extent of network access, e.g., in order to reduce power consumption, cost and/or cellular data usage.
  • In some embodiments, user device 24 supports an “off-line pre-render” mode that performs pre-rendering but without permitting fetching of content over network 32 to the user device. The off-line pre-render mode is also referred to herein as “off-line mode” for brevity. In some embodiments, the app whose content is being pre-rendered, and/or the OS of the user device, is required to support the off-line pre-render mode. In other embodiments, no such support is assumed.
  • In various embodiments, the techniques described herein may be performed by PPP agent 48, by the OS of user device 24 that runs on processor 44, and/or by apps 26. Any suitable partitioning (“division of labor”) between the PPP agent, the OS and the apps can be used. In some embodiments, the actual pre-rendering and rendering of content is performed by the apps, and the PPP agent is configured to trigger the apps, e.g., to notify an app that off-line pre-rendering is being used. In this example partitioning, the PPP agent is implemented as a component of the OS, and both the PPP agent and the apps are orchestrated by the OS. This partitioning, however, is chosen purely by way of example. For clarity, some of the description that follows refers simply to processor 44 as carrying out the disclosed techniques.
  • The description that follows refers to a single app 26 that has been preloaded and whose content is being pre-rendered. This choice is made purely for the sake of clarity. Generally, the off-line pre-render mode can be applied to any app in user device 24, to all apps, to a selected subset of one or more apps, or otherwise.
  • For apps that already have some existing off-line features, such as the ability to launch and render content regardless of network availability, the use of off-line pre-rendering allows leveraging these existing features to integrate pre-rendering functions more easily. In such apps, much of the value of on-line pre-rendering (having no restrictions on network access) can be retained at a considerably lower cost. For apps that do not already have off-line user experience, the end result of having a good off-line user experience provides an added incentive for integration of pre-rendering.
  • In some embodiments, before the off-line pre-render mode is enabled for a given app 26, the app is required to declare to agent 48, e.g., via a suitable Application Programming Interface (API), that it supports the off-line pre-render mode. Supporting off-line pre-rendering is beneficial to apps, for example since it allows preloading to be scheduled more often (in comparison with on-line pre-render in which network requests are allowed) and regardless of network conditions.
  • In some embodiments, agent 48 intercepts requests from the app to fetch content over the network, and declines the requests if the app is being pre-rendered in the off-line mode. For example, agent 48 may respond to such requests with a response indicating that the network is unavailable, e.g., a “not connected” response. Alternatively, agent 48 may use any other suitable technique for blocking content requests from accessing the network.
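  • By way of illustration only, the following Kotlin sketch models how such interception might decline requests while an app is pre-rendered off-line. The types NetworkRequest, NetworkResponse, PreRenderMode and RequestInterceptor are assumptions made for the sketch, not part of any actual OS networking API.

```kotlin
// Hypothetical request interception during off-line pre-rendering (sketch only).
enum class PreRenderMode { NONE, OFFLINE, ONLINE }

data class NetworkRequest(val url: String)

sealed class NetworkResponse {
    data class Success(val body: String) : NetworkResponse()
    object NotConnected : NetworkResponse() {
        override fun toString() = "NotConnected"
    }
}

class RequestInterceptor(private val fetchOverNetwork: (NetworkRequest) -> NetworkResponse) {
    fun handle(request: NetworkRequest, mode: PreRenderMode): NetworkResponse =
        if (mode == PreRenderMode.OFFLINE) {
            // The app sees an ordinary "network unavailable" failure and can
            // fall back to its existing off-line behavior (e.g., use the cache).
            NetworkResponse.NotConnected
        } else {
            fetchOverNetwork(request)
        }
}

fun main() {
    val interceptor = RequestInterceptor { NetworkResponse.Success("payload") }
    println(interceptor.handle(NetworkRequest("https://example.com/feed"), PreRenderMode.OFFLINE))
}
```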
  • In some embodiments, in the off-line mode, the app may pre-render content items that are cached locally in cache 52 instead of fetching them over the network, provided that cache 52 indeed contains copies of the content items. In some embodiments, in the off-line mode the app pre-renders cached content items even if they are known to be stale, i.e., not fresh. This is in contrast to on-line pre-rendering, in which the app will typically fetch requested content over the network if the cached content is not fresh. In an embodiment, in the off-line mode the app will not pre-render a cached content item if it is staler than a predefined extent, e.g., older than a predefined age.
  • In some embodiments, in the off-line mode, the app will pre-render a “placeholder” content item instead of the actual content item that is specified for display in the UI display. This technique may be used, for example, if the app requests a content item that is not available locally (or is staler than a predefined extent) in cache 52. The placeholder content item is typically an empty content item that has a similar layout (e.g., similar dimensions) to the actual content item. Subsequently, e.g., when the user accesses the app, the app may fetch and insert the actual content item seamlessly in-place into the pre-rendered UI display.
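  • A minimal sketch of this fallback logic, assuming a simple in-memory cache keyed by URL; the ContentItem and OfflineResolver types are illustrative and not an actual API.

```kotlin
// Sketch: resolve a content item during off-line pre-rendering.
// Use a cached copy if it is not staler than a predefined limit,
// otherwise pre-render an empty placeholder with the same dimensions.
data class ContentItem(
    val url: String,
    val widthPx: Int,
    val heightPx: Int,
    val payload: String?,      // null marks an empty placeholder
    val ageSeconds: Long
)

class OfflineResolver(
    private val cache: Map<String, ContentItem>,
    private val maxStalenessSeconds: Long
) {
    fun resolve(url: String, widthPx: Int, heightPx: Int): ContentItem {
        val cached = cache[url]
        return if (cached != null && cached.ageSeconds <= maxStalenessSeconds) {
            cached                          // pre-render the (possibly stale) cached copy
        } else {
            // Placeholder with matching layout; the real item is fetched and
            // inserted in-place once the app is moved to the foreground.
            ContentItem(url, widthPx, heightPx, payload = null, ageSeconds = 0)
        }
    }
}

fun main() {
    val cache = mapOf("a.png" to ContentItem("a.png", 100, 80, "bytes", ageSeconds = 60))
    val resolver = OfflineResolver(cache, maxStalenessSeconds = 300)
    println(resolver.resolve("a.png", 100, 80))
    println(resolver.resolve("b.png", 200, 120))   // not cached, so a placeholder is returned
}
```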
  • In some embodiments, agent 48 notifies the app that its UI display is being pre-rendered in accordance with the off-line mode, and therefore network requests are expected to fail or are not expected. Alternatively, the app may query agent 48, e.g., via a suitable API, whether a UI display of the app is being pre-rendered and using which mode.
  • In some embodiments, when using such an API, the app performs both off-line pre-rendering and on-line pre-rendering (and possibly also normal rendering in the foreground) using the same launch code, but the execution of this launch code differs between the different modes.
  • In an example embodiment, when operating in the off-line pre-rendering mode the app's launch code loads and pre-renders only the parts of the UI display that do not require any network resources. As a result, during pre-rendering, the app starts running in the background but stops its progress at a point before it requires network access. The app may not be fully operational in this state, and may require more processing to become usable. Processing is resumed once the user accesses the app. This technique considerably reduces the latency from the time the user accesses the app to the point the UI display is fully presented and operational, and at the same time does not require network access while in the background.
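  • The following Kotlin sketch illustrates how an app's launch code might branch in this way. The LaunchMode values and the step names are assumptions introduced for the example; a real app would obtain the mode indication via whatever API agent 48 actually exposes.

```kotlin
// Sketch: one launch path whose execution differs between off-line
// pre-rendering, on-line pre-rendering and a normal foreground launch.
enum class LaunchMode { FOREGROUND, PRERENDER_OFFLINE, PRERENDER_ONLINE }

class AppLaunch(private val mode: LaunchMode) {
    fun run() {
        inflateLayout()                 // no network needed
        bindCachedContent()             // no network needed
        if (mode == LaunchMode.PRERENDER_OFFLINE) {
            // Stop before any step that requires network access; the remaining
            // work is resumed once the user actually opens the app.
            return
        }
        fetchFreshContent()             // allowed on-line and in the foreground
        startAnimations()
    }

    private fun inflateLayout() = println("layout inflated")
    private fun bindCachedContent() = println("cached content bound")
    private fun fetchFreshContent() = println("fresh content fetched over the network")
    private fun startAnimations() = println("animations started")
}

fun main() {
    AppLaunch(LaunchMode.PRERENDER_OFFLINE).run()
}
```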
  • In some embodiments, PPP agent 48 penalizes (“punishes”) an app 26 that initiates network requests while in the off-line pre-render mode. Penalizing an app may comprise, for example, disallowing or reducing the rate of subsequent off-line pre-rendering, since the app's network requests indicate that the app behaves improperly in the off-line mode.
  • FIG. 2 is a flow chart that schematically illustrates a method for content pre-rendering using the off-line pre-render mode, in accordance with an embodiment of the present invention. It is noted that the flow of FIG. 2 is depicted purely by way of example, in order to demonstrate certain features relating to off-line pre-rendering. In alternative embodiments, the off-line pre-render mode can be implemented and utilized in any other suitable way.
  • The method of FIG. 2 begins with PPP agent 48 notifying an app 26 that its UI display is to be pre-rendered using the off-line pre-rendering mode, at a notification step 80. At a preloading/pre-rendering step 84, the OS loads the app launch code and runs the app in a background mode. Since the app was notified of the off-line pre-rendering mode, the app pre-renders its UI display using locally-cached content only.
  • At any stage of the preloading and pre-rendering process, the user may access the app. In such a case, PPP agent 48 switches to run the app in the foreground so that the UI display is presented to the user on display screen 56, at a foreground switching step 88. At this stage, the app's UI display is typically incomplete, since it comprises only elements that were obtainable off-line. The app then refreshes any missing or stale content in the UI display over network 32, at a refreshing step 92. The refreshing operation can be performed entirely by the app or it can be assisted by agent 48.
  • In some embodiments, content items that are fetched after off-line pre-rendering is completed, or even after the pre-rendered UI display is moved to the foreground, can be inserted by the app into the UI display with minimal visual impact. In other words, content items may be inserted without complete refreshing or re-rendering of the entire UI display, requiring refreshing of only the content items being replaced.
  • On-Line Pre-Rendering Mode
  • In some embodiments, all pre-rendering in user device 24 is performed using the off-line mode. In other embodiments, additionally or alternatively to the off-line mode, the user device supports an “on-line pre-render” mode (also referred to as “on-line mode” for brevity) in which fetching of content over network 32 for a preloaded app 26 is allowed.
  • In some embodiments, before the on-line pre-render mode is enabled for a given app 26, the app is required to declare to PPP agent 48, e.g., via a suitable API, that it supports the on-line pre-render mode. An app may use this API, for example, to declare that it can cope with the server load associated with on-line pre-rendering.
  • Any suitable criterion can be used for choosing whether to pre-render a given UI display of a given preloaded app using the on-line mode or using the off-line mode. For example, agent 48 may learn the data usage pattern of the app, and use this information to decide whether and when to use the on-line pre-rendering mode. As another example, agent 48 may monitor the data usage of the app in real-time, and revert to off-line pre-rendering if the amount of data fetched over the network is too large, e.g., larger than a threshold. In such a case, agent 48 may also avoid further pre-rendering, and/or notify the app developer that the app is not behaving properly when being pre-rendered.
  • In some embodiments, when the app declares support for both the off-line pre-render mode and the on-line pre-render mode, PPP agent 48 chooses between the two modes considering the lower cost of off-line pre-rendering and the potential for better user experience offered by on-line pre-rendering. The choice may be based, for example, on current and/or expected network conditions, learned usage patterns of the user, hints from the app regarding expected usage patterns provided to agent 48 via an API, user device limitations, a maximal permitted rate of preloading/pre-rendering (as detailed below), maximal permitted content staleness (as detailed below), and the like. User-device limitations may comprise, for example, battery state, whether the device is connected to an external power supply, data saving modes, RAM size, Flash size and/or data budget.
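  • As an illustration, a criterion of this sort could be expressed as a simple decision function over device and network state. The factors, thresholds and type names below are assumptions chosen for the sketch, not values taken from the description.

```kotlin
// Sketch: choosing between on-line and off-line pre-rendering for a preloaded app.
data class DeviceState(
    val onExternalPower: Boolean,
    val batteryPercent: Int,
    val unmeteredNetwork: Boolean,
    val remainingDataBudgetMb: Int
)

enum class PreRenderChoice { ONLINE, OFFLINE, SKIP }

fun choosePreRenderMode(
    state: DeviceState,
    appSupportsOnline: Boolean,
    appSupportsOffline: Boolean,
    expectedFetchMb: Int
): PreRenderChoice = when {
    appSupportsOnline && state.unmeteredNetwork &&
        (state.onExternalPower || state.batteryPercent > 50) &&
        expectedFetchMb <= state.remainingDataBudgetMb -> PreRenderChoice.ONLINE
    appSupportsOffline -> PreRenderChoice.OFFLINE   // cheaper fallback
    else -> PreRenderChoice.SKIP                    // no supported mode under current conditions
}

fun main() {
    val state = DeviceState(onExternalPower = false, batteryPercent = 35,
                            unmeteredNetwork = false, remainingDataBudgetMb = 20)
    println(choosePreRenderMode(state, appSupportsOnline = true,
                                appSupportsOffline = true, expectedFetchMb = 5))
}
```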
  • In some embodiments, the app may specify a maximum allowed rate of pre-rendering for the off-line mode, for the on-line mode, or for both. One motivation for this feature is to ensure that pre-rendering operations do not overload the app's servers (e.g., portals 28). For example, the app may specify a limit of two on-line pre-rendering operations per day, or one off-line pre-rendering operation and one on-line pre-rendering operation for every activation of the app by the user.
  • In some embodiments, the app may specify a maximal permitted content staleness for the off-line mode, for the on-line mode, or for both. One motivation for this feature is to ensure that the pre-rendered content that the user can see is not older than a given maximal staleness threshold. In an example embodiment, the OS enforces the maximal permitted content staleness by either destroying the preloaded content or by triggering a refresh of this content (e.g., pre-rendering again) upon reaching the specified maximal staleness.
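  • A minimal sketch of enforcing such app-specified limits, assuming the app declares them through a hypothetical policy object; the field names and the enforcement policy below are illustrative only.

```kotlin
// Sketch: enforcing an app-declared pre-rendering rate limit and maximal staleness.
data class PreRenderPolicy(
    val maxOnlinePerDay: Int,
    val maxOfflinePerDay: Int,
    val maxStalenessSeconds: Long
)

class PreRenderLimiter(private val policy: PreRenderPolicy) {
    private var onlineToday = 0
    private var offlineToday = 0

    fun mayPreRenderOnline(): Boolean = onlineToday < policy.maxOnlinePerDay
    fun mayPreRenderOffline(): Boolean = offlineToday < policy.maxOfflinePerDay

    fun recordOnline() { onlineToday++ }
    fun recordOffline() { offlineToday++ }

    // Returns true if pre-rendered content of the given age must be refreshed
    // (or destroyed) because it exceeds the declared maximal staleness.
    fun isTooStale(contentAgeSeconds: Long): Boolean =
        contentAgeSeconds > policy.maxStalenessSeconds
}

fun main() {
    val limiter = PreRenderLimiter(PreRenderPolicy(maxOnlinePerDay = 2,
                                                   maxOfflinePerDay = 4,
                                                   maxStalenessSeconds = 3600))
    println(limiter.mayPreRenderOnline())   // true
    limiter.recordOnline(); limiter.recordOnline()
    println(limiter.mayPreRenderOnline())   // false
    println(limiter.isTooStale(7200))       // true
}
```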
  • Content Prefetching and Pre-Rendering Using Parse Chains
  • In many practical cases, a content item, e.g., a Web page or app feed, comprises one or more additional content items, each of which may link to one or more yet additional content items, and so on. Fetching of such a nested content item for an app 26 can be viewed as a “parse chain”: Typically, the app receives the content item, parses it, discovers one or more additional content items within the parsed content item, requests to fetch the discovered content items, and so on. The app may continue this process until no more nested content items are found.
  • In some embodiments, when executing a parse chain, PPP agent 48 controls the fetching of content items for the requesting app 26. The PPP agent may fetch the content items itself, or it may decide and instruct the app whether or not to fetch certain content. In particular, in some embodiments agent 48 decides, for each additional content item being identified as part of the parse chain, whether to fetch the content item over the network or not. If the decision is not to fetch a content item, the app's network request is ignored. In such an event, the PPP agent would typically notify the app that the network request has failed, or that the network request will not be served. Agent 48 may use various criteria for deciding whether an additional content item should be fetched or not, and in particular to decide whether to terminate the parse chain entirely.
  • The control of agent 48 over parse chains is advantageous in various use-cases. One example use-case is a preloaded app that runs in the background, and whose UI display is being pre-rendered in the background. In such embodiments, agent 48 may use the selective fetching of content items as a “hybrid pre-rendering” mode. In this mode, when agent 48 permits app 26 to fetch an additional content item that was identified by the app as part of the parse chain, the app also pre-renders the additional content item in the background. When agent 48 decides not to fetch an additional content item, this content item will not be pre-rendered (and will typically be fetched and rendered only once the user accesses the app and the UI display is moved to the foreground). Executing such a prefetch parse chain during pre-rendering is useful, for example, for reducing costs such as integration effort, data usage and battery drain.
  • In some embodiments, the app is required to declare that it supports the hybrid pre-rendering mode before the mode becomes available. Once declared, agent 48 uses the hybrid pre-rendering mode instead of non-selective on-line pre-rendering.
  • Another use-case relates to prefetching, but not in the context of pre-rendering. Consider, for example, an app that runs in the background, for which agent 48 prefetches content but does not perform pre-rendering. This use-case is useful, for example, if pre-rendering is considered too costly or is otherwise not desired or not feasible. Pre-rendering may be unavailable, for example, when agent 48 is implemented as part of an app on an OS that does not allow pre-rendering.
  • In some embodiments, agent 48 may handle a parse chain by intercepting some or all of the network calls that occur during pre-rendering or prefetching. Such network calls are treated as optional requests to be prefetched only under certain criteria, e.g., criteria relating to efficiency or safety. Network calls that are not intercepted by agent 48, if any, may be allowed to reach the network or may be blocked, as appropriate.
  • In some embodiments, PPP agent 48 implements a parse chain by providing a suitable API to the app. The app uses this API for sending requests to agent 48 for prefetching content items (instead of the app issuing network calls directly).
  • Benefits of the disclosed parse-chain handling scheme include, for example:
      • Simplicity of integration: The content items to be prefetched are discovered through the existing app loading procedure.
      • Ability to prioritize content items to prefetch, e.g., avoiding downloading of content items that are not deemed worthy of the associated costs.
      • Ability to use efficient and safe prefetch techniques instead of the app's default network requests. Such techniques may involve monitoring the progress of the prefetch and terminating the downloads if too slow, blacklisting content items that repeatedly fail to download, redirecting to download content from separate servers, limiting prefetch resources depending on recent usage of the app, and/or any other suitable technique.
  • As can be appreciated, when carrying out the disclosed parse chain technique, some of the app's network requests may be ignored by agent 48. Therefore, in some embodiments the app is expected to handle ignored network requests correctly, as if the app was run in an off-line pre-render mode, for example by obtaining the requested content from cache 52.
  • In some embodiments, if a parse chain was terminated by agent 48, the app may choose to continue the parse chain by parsing a content item cached in cache 52, such as images that are linked through a feed JSON.
  • In some embodiments, the app provides agent 48 information relating to a content item, and agent 48 takes this information into account in deciding whether or not to download the content item. Such information may indicate, by way of example:
      • Whether the content item was discovered from cache and therefore may be stale.
      • A priority assigned to the content item. For example, content items discovered from cache may be given low priority.
      • A timestamp indicating the time the content item was last known to be needed.
      • The expected media type (e.g., text file, image, video).
      • The expected size of the file.
  • FIG. 3 is a flow chart that schematically illustrates a method for content prefetching using a prefetch parse chain, in accordance with an embodiment of the present invention. The method flow is described in the context of prefetching (without pre-rendering), by way of example. In alternative embodiments, pre-rendering can be implemented in a similar manner as explained above. Also for simplicity, the method flow describes a process that terminates the entire parse chain after prefetching some of the content items. This flow, however, is not mandatory. In alternative embodiments, agent 48 may decide not to prefetch certain content items, but nevertheless proceed with execution of the parse chain and subsequently decide to prefetch other content items that are discovered later.
  • The method of FIG. 3 begins with an app 26, e.g., an app that was preloaded and is currently running in a background mode in user device 24, beginning to prefetch content, at a prefetch initiation step 120. At a request input step 124, PPP agent 48 receives from the app a request to prefetch one or more content items. At a chain initialization step 128, agent 48 initializes a prefetch parse chain for the app.
  • At a downloading step 132, agent 48 downloads the requested content item(s) over network 32 and delivers the content item(s) to app 26 (or alternatively permits the app to download the requested content item(s)). The app parses the content item(s), at a parsing step 136. At a nesting checking step 140, the app checks whether the content item(s) delivered at step 132 link to additional content item(s) to be downloaded. If so, the app requests agent 48 to prefetch the additional content item(s), at an additional requesting step 144.
  • Agent 48 evaluates a criterion that decides whether to approve or decline the app's request for additional content item(s), at a prefetching evaluation step 148. (In some embodiments the criterion can be evaluated for the initial content item(s) requested by the app, as well.) If the criterion indicates that the additional content item(s) are approved for prefetching, the method loops back to step 132 above, in which agent 48 and/or app 26 downloads the additional content item(s), i.e., executes the next prefetch stage of the parse chain. If the criterion indicates that the additional content item(s) are not approved for prefetching, agent 48 terminates the parse chain, at a termination step 152. The method also jumps to step 152 upon finding, at step 140, that no additional content items are to be prefetched in the present parse chain.
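  • The flow of FIG. 3 can be illustrated by the following Kotlin sketch of a prefetch parse chain. The downloader, the per-item criterion (a simple data budget) and the item metadata are assumptions introduced for the example, not the specific criterion used by agent 48.

```kotlin
// Sketch: executing a prefetch parse chain with a per-item fetch criterion.
data class ItemInfo(val url: String, val expectedSizeKb: Int, val priority: Int)

class ParseChainAgent(
    private val download: (String) -> String,                 // returns the item's body
    private val parseForNestedItems: (String) -> List<ItemInfo>,
    private var dataBudgetKb: Int
) {
    fun run(initialRequests: List<ItemInfo>) {
        var pending = initialRequests
        while (pending.isNotEmpty()) {
            val approved = pending.filter { shouldFetch(it) }
            if (approved.isEmpty()) {
                println("parse chain terminated")              // criterion declined all items
                return
            }
            // Fetch the approved items and let the app discover further nested items.
            pending = approved.flatMap { item ->
                dataBudgetKb -= item.expectedSizeKb
                parseForNestedItems(download(item.url))
            }
        }
    }

    // Example criterion: fetch only if the expected size fits the remaining budget.
    private fun shouldFetch(item: ItemInfo): Boolean = item.expectedSizeKb <= dataBudgetKb
}

fun main() {
    val bodies = mapOf("feed.json" to "links: img1.png", "img1.png" to "")
    val agent = ParseChainAgent(
        download = { url -> println("downloaded $url"); bodies[url].orEmpty() },
        parseForNestedItems = { body ->
            if (body.contains("img1.png")) listOf(ItemInfo("img1.png", 300, 1)) else emptyList()
        },
        dataBudgetKb = 500
    )
    agent.run(listOf(ItemInfo("feed.json", 20, 0)))
}
```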
  • At any stage of the above flow, the user may access the app. In such a case, agent 48 and/or the app refreshes any missing or stale content over the network.
  • Prioritization of Background and Foreground UI Tasks
  • In some embodiments, the OS of user device 24 executes multiple UI tasks for the various apps 26. Each UI task specifies an action that processes a certain UI display of an app. Some UI tasks may modify the UI display directly, whereas other UI tasks do not directly modify the UI display but are prerequisite to such modification or are synced to modification of the UI display. In the Android OS, for example, UI displays are referred to as Views or Activities. A UI display is also referred to herein as a “scenario”. Some UI tasks may originate from user actions, whereas other UI tasks may relate to background pre-rendering.
  • In some cases, the OS runs a single UI thread per app 26. Consider a case in which a user performs actions that affect a UI display of a certain app, while another UI display of the same app is being pre-rendered in the background. In such a case, the UI tasks derived from the user's actions and the UI tasks relating to pre-rendering all compete for the resources of the same single UI thread. Unless handled properly, the pre-rendering UI tasks might cause the app to appear non-responsive to the user's actions.
  • Another challenge encountered in the above situation is the need to retain in-order execution of UI tasks associated with a given UI display. Consider, for example, a situation in which one or more pre-rendering UI tasks for a given UI display are pending for execution, and then a user performs an action that modifies the same UI display. In such a case, even though the user's action is in the foreground and is more latency-sensitive than the background pre-rendering tasks, it would be incorrect to execute the user's UI tasks before the pending pre-rendering tasks.
  • Typically, the in-order requirement holds for UI tasks associated with the same UI display, but UI tasks of different UI displays are allowed to be handled out-of-order.
  • In some disclosed embodiments, PPP agent 48 in user device 24 overcomes such challenges by correctly prioritizing and scheduling the various UI tasks. In some embodiments, PPP agent 48 assigns to each UI task a priority selected from at least a Foreground (FG) priority and a Background (BG) priority. In addition, PPP agent 48 associates each UI task with the UI display (“scenario”) being processed (e.g., modified or prepared) by this UI task. PPP agent 48 then schedules the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display. Typically, in specifying the schedule, the PPP agent does not enforce any constraints as to the order of execution of UI tasks associated with different UI displays (other than, of course, giving precedence to FG tasks over BG tasks). The UI tasks are then executed in accordance with the schedule.
  • In some embodiments, in order to retain in-order execution of the UI tasks of a given UI display, PPP agent 48 applies a “promotion” mechanism that promotes selected UI tasks from the BG priority to the FG priority. In response to creation of a new UI task having the FG priority (e.g., derived from a user action), agent 48 identifies all UI tasks that (i) are associated with the same UI display as the new UI task and (ii) have the BG priority, and promotes the identified UI tasks to the FG priority. Agent 48 then schedules the promoted UI tasks to be executed before the new UI task. Agent 48 also retains the original order of execution among the promoted UI tasks.
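  • The following Kotlin sketch models this promotion mechanism over a simple message queue: when an FG message arrives, pending BG messages of the same UI display (“scenario”) are promoted and kept ahead of it in their original order. The data structures are illustrative assumptions; a real implementation would hook the platform's actual UI message loop.

```kotlin
// Sketch: FG/BG prioritization with promotion, preserving per-display order.
enum class Priority { FG, BG }

data class UiMessage(val id: Int, val display: String, var priority: Priority)

class UiScheduler {
    private val queue = mutableListOf<UiMessage>()   // insertion order = creation order

    fun submit(message: UiMessage) {
        if (message.priority == Priority.FG) {
            // Promote all pending BG messages of the same UI display, so that
            // they still run before the new FG message, in their original order.
            queue.filter { it.display == message.display && it.priority == Priority.BG }
                 .forEach { it.priority = Priority.FG }
        }
        queue.add(message)
    }

    // FG messages run before BG messages; within each priority the original
    // creation order is kept, which preserves in-order execution per display.
    fun nextToExecute(): UiMessage? =
        queue.firstOrNull { it.priority == Priority.FG } ?: queue.firstOrNull()

    fun markExecuted(message: UiMessage) { queue.remove(message) }
}

fun main() {
    val scheduler = UiScheduler()
    scheduler.submit(UiMessage(1, "ActivityA", Priority.BG))   // pre-rendering task
    scheduler.submit(UiMessage(2, "ActivityB", Priority.BG))   // pre-rendering task
    scheduler.submit(UiMessage(3, "ActivityA", Priority.FG))   // user action on ActivityA
    // Expected execution order: 1 (promoted), 3, 2
    while (true) {
        val next = scheduler.nextToExecute() ?: break
        println("executing message ${next.id} (${next.display}, ${next.priority})")
        scheduler.markExecuted(next)
    }
}
```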
  • In some embodiments, a UI task may be associated with multiple UI displays, in which case agent 48 promotes the UI task if a new FG-priority UI task is added in any of these multiple UI displays.
  • In some embodiments, PPP agent 48 represents the various UI tasks as messages. In the description that follows, the terms “UI task” and “message” are used interchangeably. A single user action, or a single UI change by the app or the user device in general, may be translated into several smaller UI tasks, e.g., drawing a portion of the screen, invoking an app callback or creating a new view or screen. Each such UI task is represented by a message. Agent 48 queues the messages in a suitable queue structure, and schedules the queued messages for execution by the UI thread.
  • Any suitable queue structure can be used for queuing the messages, e.g., a priority queue, a set of queues with each queue holding the messages of a respective priority, a multi-threaded UI environment with each thread assigned to handle messages of a respective priority, or even a multi-process structure in which different processes handle different priorities.
  • In some embodiments, agent 48 starts a predefined time-out (denoted T1) following the execution of a foreground message, and ensures that background messages are only handled if no foreground messages have been handled for T1 seconds.
  • In some embodiments, agent 48 may schedule messages to be handled at a specified time in the future, and halt the handling of background messages if a foreground message is scheduled to be handled in the near future. The time interval considered “near future” in this context may be constant (denoted T2), or may be based on an estimation of the expected running time of the background message in question. For example, if a foreground message is scheduled to start being handled in three seconds, then a background message whose handling is expected to take two seconds will be allowed to proceed, while a background message whose handling is expected to take four seconds will not be allowed to proceed at this time.
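  • A sketch of these two timing rules follows, with T1 and T2 as illustrative constants and the expected running time supplied by the caller; combining the constant window and the estimate via a maximum is a choice made for the sketch, not mandated by the description.

```kotlin
// Sketch: deciding whether a background message may be handled now.
const val T1_MS = 500L    // quiet period required after the last FG message
const val T2_MS = 1000L   // default "near future" window for scheduled FG messages

fun mayHandleBackgroundMessage(
    nowMs: Long,
    lastFgHandledMs: Long,            // when the last FG message finished
    nextScheduledFgMs: Long?,         // when the next FG message is due, if any
    expectedBgDurationMs: Long
): Boolean {
    // Rule 1: only handle BG work if no FG message was handled for T1.
    if (nowMs - lastFgHandledMs < T1_MS) return false
    // Rule 2: do not start BG work that would still be running when an FG
    // message is scheduled to start (using the estimate if available, else T2).
    if (nextScheduledFgMs != null) {
        val window = maxOf(expectedBgDurationMs, T2_MS)
        if (nextScheduledFgMs - nowMs <= window) return false
    }
    return true
}

fun main() {
    // FG message due in 3 s: a 2 s BG task may run, a 4 s BG task may not.
    println(mayHandleBackgroundMessage(nowMs = 10_000, lastFgHandledMs = 8_000,
                                       nextScheduledFgMs = 13_000, expectedBgDurationMs = 2_000))
    println(mayHandleBackgroundMessage(nowMs = 10_000, lastFgHandledMs = 8_000,
                                       nextScheduledFgMs = 13_000, expectedBgDurationMs = 4_000))
}
```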
  • In some embodiments, agent 48 may delete a background message if its associated UI display becomes no longer relevant. For example, in the Android OS, when an Activity is destroyed, if a UI display based on that Activity exists, agent 48 deletes all messages associated with this UI display.
  • In various embodiments, UI tasks may be assigned priorities in various ways. For example, the app developer may specify the priority of each UI task or message to be queued. As another example, the app developer may specify the priority of a certain UI display that needs to be processed. In this case, messages associated with this display will receive the specified priority. In some embodiments the priority may be assigned automatically by agent 48. For example, tasks that are independent of immediate needs of the user may be assigned BG priority.
  • In some embodiments, specific UI components may be modified to take advantage of the priority system, and some or all of their tasks are assigned to BG priority. Consider, for example, a UI component that holds multiple tabs that the user may browse, such as Android ViewPager. Such a component may load multiple tabs together, assign FG priority only to the visible tab, and BG priority to adjacent tabs. This assignment helps provide the fastest response to the user while still loading adjacent tabs ahead of time.
  • In some embodiments, agent 48 may automatically assign BG priority to views that are drawn but not currently visible, for example views that are “below-the-fold” and require scrolling to become visible.
  • In some embodiments, agent 48 creates UI displays (e.g., views or Android Activities) predictively before the user requires them, and sets the priorities of their UI tasks to BG priority.
  • In some embodiments, agent 48 assigns more than two priorities. For example, non-visible views (i.e., views that are “below-the-fold”) of a foreground activity may have higher priority than views of other pre-rendered activities, but lower priority than the view which is visible. The promotion among multiple priorities may be defined such that when a message with a specific priority P and UI display S is sent, all messages of UI display S with priorities that are lower than P are promoted to priority P.
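  • With more than two priority levels, the promotion rule generalizes naturally. A brief sketch, with priorities modeled as integers (higher meaning more urgent) purely for illustration:

```kotlin
// Sketch: multi-level promotion. When a message with priority p arrives for
// display s, every queued message of display s with a lower priority is raised to p.
data class Msg(val id: Int, val display: String, var priority: Int)

fun promoteOnArrival(queue: List<Msg>, newMessage: Msg) {
    queue.filter { it.display == newMessage.display && it.priority < newMessage.priority }
         .forEach { it.priority = newMessage.priority }
}

fun main() {
    // 0 = pre-rendered activity, 1 = below-the-fold view of the FG activity, 2 = visible view
    val queue = listOf(Msg(1, "Feed", 0), Msg(2, "Feed", 1), Msg(3, "Details", 0))
    promoteOnArrival(queue, Msg(4, "Feed", 2))
    queue.forEach { println(it) }   // messages 1 and 2 are promoted to 2; message 3 is untouched
}
```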
  • In various embodiments, UI tasks (messages) may be associated with UI displays (“scenarios”) in various ways. In some embodiments, PPP agent 48 associates UI tasks with UI displays automatically. For example, in the Android OS, such associations may comprise:
      • For an Activity's main Handler, the Handler is marked with the Activity as the associated UI display, which the Handler then assigns to all messages passing through it.
      • For views that are attached to an Activity, the Activity may be set as the UI display by default for messages related to those views.
      • For messages that receive the Activity as a parameter, the message's UI display is set to the Activity. Note that the Android OS regularly passes the Activity as a parameter to some of the messages related to the Activity.
  • In some embodiments, the app developer may choose which UI display is associated with each message. In such embodiments, and if agent 48 also automatically associates messages with UI displays, the developer's choice may override the choice of agent 48.
  • In some embodiments, agent 48 may modify pre-rendering related UI tasks to take into account that the results are not immediately visible to the user. For example, agent 48 may reduce the frame-rate for background UI tasks, or may avoid running animations, jumping directly to the end result of the animation. Such manipulation is done to reduce the load introduced by background tasks, while also completing the background tasks faster. In some embodiments, agent 48 may split UI tasks into smaller portions, so that handling each message may be quick, allowing greater control over message scheduling.
  • In some embodiments, if a FG message is created while a BG message is being handled, agent 48 allows the running BG message to complete before handling the FG message. In other embodiments, agent 48 pauses handling of the BG message, saves the state of the paused message, then handles the FG message, and then resumes handling of the paused BG message from the saved state. In yet other embodiments, agent 48 stops handling the BG message and reverses its effects, then handles the FG message, and then retries handling the BG message from the beginning. To this end, agent 48 may use a transaction system for messages, so that the effects of a message will not persist unless the message is handled completely and committed.
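The following fragment is a minimal, self-contained Kotlin sketch of the background-gating rules described in the list above, namely the T1 cool-down after a foreground message and the "near future" check against the next scheduled foreground message. The names (BackgroundGate, mayRunBackground), the example T1 value, and the injected clock are purely illustrative and do not appear in the specification; an actual agent may implement these checks differently.

```kotlin
// Hypothetical sketch of two gating rules: (i) a T1 cool-down after the last
// foreground message, and (ii) a "near future" check against the next
// scheduled foreground message. Names are illustrative only.

import java.time.Duration
import java.time.Instant

class BackgroundGate(
    private val t1: Duration = Duration.ofSeconds(2),       // example cool-down value
    private val clock: () -> Instant = { Instant.now() }    // injectable for testing
) {
    private var lastFgHandledAt: Instant = Instant.EPOCH
    private var nextFgScheduledAt: Instant? = null

    fun onForegroundHandled() { lastFgHandledAt = clock() }
    fun onForegroundScheduled(at: Instant) { nextFgScheduledAt = at }

    /** May a background message with the given expected running time start now? */
    fun mayRunBackground(expectedRunTime: Duration): Boolean {
        val now = clock()
        // Rule (i): no background handling within T1 of the last FG message.
        if (Duration.between(lastFgHandledAt, now) < t1) return false
        // Rule (ii): do not start a BG message that would still be running
        // when the next scheduled FG message is due.
        val nextFg = nextFgScheduledAt ?: return true
        return now.plus(expectedRunTime) <= nextFg
    }
}

fun main() {
    val start = Instant.parse("2024-01-01T00:00:10Z")
    val gate = BackgroundGate(clock = { start })
    gate.onForegroundScheduled(start.plusSeconds(3))       // FG message due in 3 s
    println(gate.mayRunBackground(Duration.ofSeconds(2)))  // true  - finishes before the FG message
    println(gate.mayRunBackground(Duration.ofSeconds(4)))  // false - would still be running
}
```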
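Similarly, the following Kotlin sketch illustrates the tab-prioritization scheme described above for a multi-tab UI component: the visible tab receives FG priority while its immediate neighbors receive BG priority. The function name, the TabPriority enumeration, and the treatment of non-adjacent tabs are hypothetical; a real component (e.g., one based on Android ViewPager) would integrate with the framework's own APIs.

```kotlin
// Hypothetical illustration: FG priority for the visible tab, BG priority for
// its neighbors, so adjacent tabs pre-render while the visible tab stays
// most responsive.

enum class TabPriority { FG, BG, NONE }

fun assignTabPriorities(tabCount: Int, visibleIndex: Int): List<TabPriority> =
    List(tabCount) { index ->
        when {
            index == visibleIndex -> TabPriority.FG                        // what the user sees now
            kotlin.math.abs(index - visibleIndex) == 1 -> TabPriority.BG   // pre-render neighbors
            else -> TabPriority.NONE                                        // not loaded ahead of time
        }
    }

fun main() {
    // Five tabs with tab 2 visible: [NONE, BG, FG, BG, NONE]
    println(assignTabPriorities(tabCount = 5, visibleIndex = 2))
}
```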
FIG. 4 is a flow chart that schematically illustrates a method for handling background and foreground UI tasks using a single UI thread, in accordance with an embodiment of the present invention. The method begins with PPP agent 48 receiving a new UI task, at a task input step 160. Agent 48 represents the new UI task as a message, at a message representation step 164, and associates the message with a UI display (a "scenario", e.g., an Android Activity), at a scenario assignment step 168.

At a FG/BG checking step 172, agent 48 checks whether the message relates to a user action or to a pre-rendering operation. If the message relates to pre-rendering, agent 48 assigns the message a BG priority and adds the message to the queue structure, at a BG prioritization step 176. Agent 48 schedules the message for execution, at a scheduling step 180.

If, on the other hand, the message was derived from a user action, agent 48 assigns the message a FG priority and adds the message to the queue structure, at a FG prioritization step 184. Agent 48 then checks whether any of the queued messages are both (i) assigned the BG priority and (ii) associated with the same UI display ("scenario") as the new FG message. If so, agent 48 promotes these messages to the FG priority, at a promotion step 192. With or without promotion, agent 48 proceeds to schedule the message at scheduling step 180.
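By way of illustration only, the following Kotlin fragment sketches a queue structure and promotion step of the kind described in FIG. 4: FG messages take precedence over BG messages, and a new FG message promotes queued BG messages of the same UI display so that per-display execution order is preserved. All names (UiMessage, UiTaskQueue, promoteSameDisplay) are hypothetical and are not taken from the specification.

```kotlin
// Hypothetical sketch of a two-priority message queue with per-display promotion.
// Not the agent's actual implementation; names are illustrative only.

enum class Priority { BG, FG }

data class UiMessage(
    val display: String,      // the associated UI display ("scenario")
    val description: String,
    var priority: Priority
)

class UiTaskQueue {
    // One FIFO per priority level; relative order within a priority is preserved.
    private val queues = mapOf(
        Priority.FG to ArrayDeque<UiMessage>(),
        Priority.BG to ArrayDeque<UiMessage>()
    )

    fun enqueue(msg: UiMessage) {
        if (msg.priority == Priority.FG) {
            // Promotion step 192 analogue: promote queued BG messages of the same
            // display, so they still run before the new FG message, in original order.
            promoteSameDisplay(msg.display)
        }
        queues.getValue(msg.priority).addLast(msg)   // prioritization steps 176 / 184 analogue
    }

    private fun promoteSameDisplay(display: String) {
        val bg = queues.getValue(Priority.BG)
        val promoted = bg.filter { it.display == display }
        if (promoted.isNotEmpty()) {
            bg.removeAll { it.display == display }
            promoted.forEach { it.priority = Priority.FG }
            queues.getValue(Priority.FG).addAll(promoted)  // original relative order kept
        }
    }

    // Scheduling step 180 analogue: FG messages are always dequeued before BG messages.
    fun dequeue(): UiMessage? =
        queues.getValue(Priority.FG).removeFirstOrNull()
            ?: queues.getValue(Priority.BG).removeFirstOrNull()
}

fun main() {
    val q = UiTaskQueue()
    q.enqueue(UiMessage("ActivityA", "pre-render header", Priority.BG))
    q.enqueue(UiMessage("ActivityB", "pre-render list", Priority.BG))
    q.enqueue(UiMessage("ActivityA", "react to user tap", Priority.FG))
    // Prints: ActivityA pre-render header, ActivityA react to user tap, ActivityB pre-render list
    generateSequence { q.dequeue() }.forEach { println("${it.display}: ${it.description}") }
}
```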
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.

Claims (16)

1. A method, comprising:
in a user device, which is configured to execute User Interface (UI) tasks that process one or more UI displays presented to a user, assigning to each UI task among the UI tasks (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task;
scheduling the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display; and
executing the UI tasks in accordance with the schedule.
2. The method according to claim 1, wherein one or more of the UI tasks having the BG priority comprise pre-rendering tasks.
3. The method according to claim 1, wherein, at a given time, the UI tasks comprise both (i) one or more UI tasks having the BG priority, and (ii) one or more UI tasks having the FG priority that relate to user actions.
4. The method according to claim 1, wherein executing the UI tasks is performed by a single UI thread per user application.
5. The method according to claim 1, wherein assigning the priority comprises, in response to addition of a new UI task having the FG priority, identifying one or more UI tasks that (i) are associated with a same UI display as the new UI task and (ii) have the BG priority, and promoting the identified UI tasks to the FG priority.
6. The method according to claim 5, wherein scheduling the UI tasks comprises scheduling the promoted UI tasks to be executed before the new UI task.
7. The method according to claim 5, wherein scheduling the UI tasks comprises retaining an original order of execution among the promoted UI tasks.
8. The method according to claim 1, wherein scheduling the UI tasks comprises permitting out-of-order execution of UI tasks associated with different UI displays.
9. A user device, comprising:
an interface for communicating over a network; and
a processor, configured to:
assign, to each User Interface (UI) task from among multiple UI tasks that process one or more UI displays presented to a user, (i) a priority selected from at least a Foreground (FG) priority and a Background (BG) priority, and (ii) an association with a UI display being processed by the UI task;
schedule the UI tasks for execution in accordance with a schedule that (i) gives precedence to the UI tasks having the FG priority over the UI tasks having the BG priority, and (ii) for any UI display, retains in-order execution of the UI tasks associated with the UI display; and
execute the UI tasks in accordance with the schedule.
10. The user device according to claim 9, wherein one or more of the UI tasks having the BG priority comprise pre-rendering tasks.
11. The user device according to claim 9, wherein, at a given time, the UI tasks comprise both (i) one or more UI tasks having the BG priority, and (ii) one or more UI tasks having the FG priority that relate to user actions.
12. The user device according to claim 9, wherein the processor is configured to execute the UI tasks by a single UI thread per user application.
13. The user device according to claim 9, wherein, in response to addition of a new UI task having the FG priority, the processor is configured to identify one or more UI tasks that (i) are associated with a same UI display as the new UI task and (ii) have the BG priority, and to promote the identified UI tasks to the FG priority.
14. The user device according to claim 13, wherein the processor is configured to schedule the promoted UI tasks to be executed before the new UI task.
15. The user device according to claim 13, wherein the processor is configured to retain an original order of execution among the promoted UI tasks.
16. The user device according to claim 9, wherein the processor is configured to permit out-of-order execution of UI tasks associated with different UI displays.
US17/567,187 2019-07-30 2022-01-03 Execution of user interface (ui) tasks having foreground (fg) and background (bg) priorities Abandoned US20220124171A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/567,187 US20220124171A1 (en) 2019-07-30 2022-01-03 Execution of user interface (ui) tasks having foreground (fg) and background (bg) priorities

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962880092P 2019-07-30 2019-07-30
US201962880674P 2019-07-31 2019-07-31
PCT/IB2020/057046 WO2021019415A1 (en) 2019-07-30 2020-07-26 Pre-rendering of application user-interfaces in user devices
US17/567,187 US20220124171A1 (en) 2019-07-30 2022-01-03 Execution of user interface (ui) tasks having foreground (fg) and background (bg) priorities

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/057046 Continuation WO2021019415A1 (en) 2019-07-30 2020-07-26 Pre-rendering of application user-interfaces in user devices

Publications (1)

Publication Number Publication Date
US20220124171A1 true US20220124171A1 (en) 2022-04-21

Family

ID=74230214

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/624,357 Active US11824956B2 (en) 2019-07-30 2020-07-26 Pre-rendering of application user-interfaces in user devices using off-line pre-render mode
US17/567,187 Abandoned US20220124171A1 (en) 2019-07-30 2022-01-03 Execution of user interface (ui) tasks having foreground (fg) and background (bg) priorities
US17/567,183 Abandoned US20220121725A1 (en) 2019-07-30 2022-01-03 Prefetching using a chain of fetch operations

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/624,357 Active US11824956B2 (en) 2019-07-30 2020-07-26 Pre-rendering of application user-interfaces in user devices using off-line pre-render mode

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/567,183 Abandoned US20220121725A1 (en) 2019-07-30 2022-01-03 Prefetching using a chain of fetch operations

Country Status (4)

Country Link
US (3) US11824956B2 (en)
EP (1) EP4004767A4 (en)
CN (1) CN114144777A (en)
WO (1) WO2021019415A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022118131A1 (en) 2020-12-03 2022-06-09 Tensera Networks Preloading of applications having an existing task
CN114339415B (en) * 2021-12-23 2024-01-02 天翼云科技有限公司 Client video playing method and device, electronic equipment and readable medium
CN114995978A (en) * 2022-06-10 2022-09-02 亿咖通(湖北)技术有限公司 Rendering task processing method, device and equipment and storage medium


Family Cites Families (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6807570B1 (en) 1997-01-21 2004-10-19 International Business Machines Corporation Pre-loading of web pages corresponding to designated links in HTML
US20030101234A1 (en) 2001-11-28 2003-05-29 International Business Machines Corporation System and method for indicating whether a document is cached
US7392390B2 (en) 2001-12-12 2008-06-24 Valve Corporation Method and system for binding kerberos-style authenticators to single clients
US6868439B2 (en) 2002-04-04 2005-03-15 Hewlett-Packard Development Company, L.P. System and method for supervising use of shared storage by multiple caching servers physically connected through a switching router to said shared storage via a robust high speed connection
WO2003107186A1 (en) 2002-06-18 2003-12-24 松下電器産業株式会社 Program execution terminal device, program execution method, and program
US20040030882A1 (en) * 2002-08-08 2004-02-12 Forman George Henry Managed application pre-launching
US7360161B2 (en) 2003-12-12 2008-04-15 Sap Aktiengesellschaft Refreshing a transaction screen
WO2005109908A2 (en) 2004-04-30 2005-11-17 Vulcan Inc. Maintaining a graphical user interface state that is based on a selected piece of content
US8224964B1 (en) 2004-06-30 2012-07-17 Google Inc. System and method of accessing a document efficiently through multi-tier web caching
US7437364B1 (en) 2004-06-30 2008-10-14 Google Inc. System and method of accessing a document efficiently through multi-tier web caching
US8037527B2 (en) 2004-11-08 2011-10-11 Bt Web Solutions, Llc Method and apparatus for look-ahead security scanning
US8140997B2 (en) 2005-12-26 2012-03-20 International Business Machines Corporation Manipulating display of multiple display objects
US8019811B1 (en) 2006-04-06 2011-09-13 Versata Development Group, Inc. Application state server-side cache for a state-based client-server application
US7861008B2 (en) 2007-06-28 2010-12-28 Apple Inc. Media management and routing within an electronic device
US20100058248A1 (en) 2008-08-29 2010-03-04 Johnson Controls Technology Company Graphical user interfaces for building management systems
US8812451B2 (en) 2009-03-11 2014-08-19 Microsoft Corporation Programming model for synchronizing browser caches across devices and web services
US8418190B2 (en) 2009-11-24 2013-04-09 Microsoft Corporation Responsive user interface with background application logic for working on an object
US8934645B2 (en) 2010-01-26 2015-01-13 Apple Inc. Interaction of sound, silent and mute modes in an electronic device
US20110252430A1 (en) 2010-04-07 2011-10-13 Apple Inc. Opportunistic Multitasking
US20150205489A1 (en) 2010-05-18 2015-07-23 Google Inc. Browser interface for installed applications
US8832559B2 (en) 2010-06-25 2014-09-09 LeftsnRights, Inc. Content distribution system and method
US8429674B2 (en) 2010-07-20 2013-04-23 Apple Inc. Maintaining data states upon forced exit
GB2495455B (en) 2010-07-26 2013-11-13 Seven Networks Inc Prediction of activity session for mobile network use optimization and user experience enhancement
US8601052B2 (en) 2010-10-04 2013-12-03 Qualcomm Incorporated System and method of performing domain name server pre-fetching
CN102063302B (en) 2010-12-20 2014-07-02 北京握奇数据系统有限公司 Window management method, system and terminal
US9529866B2 (en) 2010-12-20 2016-12-27 Sybase, Inc. Efficiently handling large data sets on mobile devices
US20120167122A1 (en) 2010-12-27 2012-06-28 Nokia Corporation Method and apparatus for pre-initializing application rendering processes
US20130283283A1 (en) 2011-01-13 2013-10-24 Htc Corporation Portable electronic device and control method therefor
GB2493473B (en) 2011-04-27 2013-06-19 Seven Networks Inc System and method for making requests on behalf of a mobile device based on atomic processes for mobile network traffic relief
US8838261B2 (en) 2011-06-03 2014-09-16 Apple Inc. Audio configuration based on selectable audio modes
US8788711B2 (en) 2011-06-14 2014-07-22 Google Inc. Redacting content and inserting hypertext transfer protocol (HTTP) error codes in place thereof
WO2013000059A1 (en) 2011-06-29 2013-01-03 Rockstar Bidco, LP Method and apparatus for pre-loading information over a communication network
US8612418B2 (en) * 2011-07-14 2013-12-17 Google Inc. Mobile web browser for pre-loading web pages
US9384297B2 (en) 2011-07-28 2016-07-05 Hewlett Packard Enterprise Development Lp Systems and methods of accelerating delivery of remote content
US20130067050A1 (en) 2011-09-11 2013-03-14 Microsoft Corporation Playback manager
US8341245B1 (en) 2011-09-26 2012-12-25 Google Inc. Content-facilitated speculative preparation and rendering
US8250228B1 (en) 2011-09-27 2012-08-21 Google Inc. Pausing or terminating video portion while continuing to run audio portion of plug-in on browser
US9438526B2 (en) 2011-12-16 2016-09-06 Telefonaktiebolaget Lm Ericsson (Publ) Network controlled client caching system and method
US9189252B2 (en) 2011-12-30 2015-11-17 Microsoft Technology Licensing, Llc Context-based device action prediction
US8959431B2 (en) 2012-01-16 2015-02-17 Microsoft Corporation Low resolution placeholder content for document navigation
CA2865267A1 (en) 2012-02-21 2013-08-29 Ensighten, Inc. Graphical overlay related to data mining and analytics
US20150193395A1 (en) 2012-07-30 2015-07-09 Google Inc. Predictive link pre-loading
CN111614980B (en) 2012-08-14 2022-04-12 俄亥俄州立创新基金会 System and method for optimizing use of network bandwidth by mobile device
US9898445B2 (en) 2012-08-16 2018-02-20 Qualcomm Incorporated Resource prefetching via sandboxed execution
US9776078B2 (en) 2012-10-02 2017-10-03 Razer (Asia-Pacific) Pte. Ltd. Application state backup and restoration across multiple devices
US10057726B2 (en) 2012-10-02 2018-08-21 Razer (Asia-Pacific) Pte. Ltd. Managing user data on an electronic device
WO2014110294A1 (en) 2013-01-09 2014-07-17 Mcgushion Kevin D Active web page consolidator and internet history management system
US8984058B2 (en) 2013-03-15 2015-03-17 Appsense Limited Pre-fetching remote resources
TW201448604A (en) 2013-06-04 2014-12-16 Dynalab Singapore Co Ltd Method for switching audio playback between foreground area and background area in screen image using audio/video programs
US20140373032A1 (en) 2013-06-12 2014-12-18 Microsoft Corporation Prefetching content for service-connected applications
US9508040B2 (en) 2013-06-12 2016-11-29 Microsoft Technology Licensing, Llc Predictive pre-launch for applications
US9588897B2 (en) 2013-07-19 2017-03-07 Samsung Electronics Co., Ltd. Adaptive application caching for mobile devices
US9565233B1 (en) 2013-08-09 2017-02-07 Google Inc. Preloading content for requesting applications
US10013497B1 (en) * 2013-12-20 2018-07-03 Google Llc Background reloading of currently displayed content
US9513888B1 (en) 2014-01-30 2016-12-06 Sprint Communications Company L.P. Virtual preloads
US10623351B2 (en) 2014-03-31 2020-04-14 Htc Corporation Messaging system and method thereof
CN105094861A (en) 2014-05-06 2015-11-25 腾讯科技(深圳)有限公司 Webpage application program loading method, device and system
US9959506B1 (en) * 2014-06-17 2018-05-01 Amazon Technologies, Inc. Predictive content retrieval using device movements
US9646254B2 (en) 2014-06-20 2017-05-09 Amazon Technologies, Inc. Predicting next web pages
WO2018055506A1 (en) 2016-09-22 2018-03-29 Tensera Networks Ltd. An optimized content-delivery network (cdn) for the wireless last mile
US9979796B1 (en) 2014-07-16 2018-05-22 Tensera Networks Ltd. Efficient pre-fetching notifications
US11483415B2 (en) 2014-07-16 2022-10-25 Tensera Networks Ltd. Background pre-rendering of user applications
US11095743B2 (en) 2014-07-16 2021-08-17 Tensera Networks Ltd. Optimized content-delivery network (CDN) for the wireless last mile
KR102260177B1 (en) 2014-07-16 2021-06-04 텐세라 네트워크스 리미티드 Efficient content delivery over wireless networks using guaranteed prefetching at selected times-of-day
US10432748B2 (en) 2014-07-16 2019-10-01 Tensera Networks Ltd. Efficient content delivery over wireless networks using guaranteed prefetching at selected times-of-day
US11489941B2 (en) 2014-07-16 2022-11-01 Tensera Networks Ltd. Pre-loading of user applications including skipping of selected launch actions
US9509715B2 (en) 2014-08-21 2016-11-29 Salesforce.Com, Inc. Phishing and threat detection and prevention
US20160180762A1 (en) 2014-12-22 2016-06-23 Elwha Llc Systems, methods, and devices for controlling screen refresh rates
US10083494B2 (en) 2015-01-30 2018-09-25 Huawei Technologies Co., Ltd. Systems, devices and methods for distributed content pre-fetching to a user device
JP5963991B1 (en) 2015-02-27 2016-08-03 三菱電機株式会社 User interface execution device and user interface design device
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
KR102367882B1 (en) 2015-03-31 2022-02-25 엘지전자 주식회사 Digital device and method of processing application data thereof
US20180241837A1 (en) 2015-04-21 2018-08-23 Tensera Networks Ltd. Efficient Pre-Fetching Notifications
US10459887B1 (en) 2015-05-12 2019-10-29 Apple Inc. Predictive application pre-launch
US11449560B2 (en) 2015-07-27 2022-09-20 WP Company, LLC Native integration of arbitrary data sources
US10452497B2 (en) 2015-08-14 2019-10-22 Oracle International Corporation Restoration of UI state in transactional systems
KR102401772B1 (en) 2015-10-02 2022-05-25 삼성전자주식회사 Apparatus and method for executing application in electronic deivce
US10613713B2 (en) 2015-10-07 2020-04-07 Google Llc Integration of content in non-browser applications
KR20180069806A (en) * 2015-10-15 2018-06-25 텐세라 네트워크스 리미티드 Presentation of content freshness recognition in a communication terminal
US9860336B2 (en) 2015-10-29 2018-01-02 International Business Machines Corporation Mitigating service disruptions using mobile prefetching based on predicted dead spots
WO2017080604A1 (en) 2015-11-12 2017-05-18 Telefonaktiebolaget Lm Ericsson (Publ) Server, wireless device, methods and computer programs
KR102498451B1 (en) 2016-03-24 2023-02-13 삼성전자주식회사 Electronic device and method for provideing information in the electronic device
US10325610B2 (en) 2016-03-30 2019-06-18 Microsoft Technology Licensing, Llc Adaptive audio rendering
US10089219B1 (en) 2017-01-20 2018-10-02 Intuit Inc. Mock server for testing
KR102340199B1 (en) 2017-06-14 2021-12-16 삼성전자주식회사 Image display apparatus, and operating method for the same
WO2018234967A1 (en) * 2017-06-19 2018-12-27 Tensera Networks Ltd. Silent updating of content in user devices
US20190087205A1 (en) 2017-09-18 2019-03-21 Microsoft Technology Licensing, Llc Varying modality of user experiences with a mobile device based on context
US11397555B2 (en) 2017-10-26 2022-07-26 Tensera Networks Ltd. Background pre-loading and refreshing of applications with audio inhibition
CN109976821B (en) 2017-12-14 2022-02-11 Oppo广东移动通信有限公司 Application program loading method and device, terminal and storage medium
US20200159816A1 (en) 2018-01-23 2020-05-21 Grant Bostrom Methods and systems for automatically delivering content using a refresh script based on content viewability
WO2019171237A1 (en) 2018-03-05 2019-09-12 Tensera Networks Ltd. Application preloading in the presence of user actions
US10613735B1 (en) 2018-04-04 2020-04-07 Asana, Inc. Systems and methods for preloading an amount of content based on user scrolling
CN108647052B (en) 2018-04-28 2020-12-01 Oppo广东移动通信有限公司 Application program preloading method and device, storage medium and terminal
CN108681475B (en) 2018-05-21 2021-11-26 Oppo广东移动通信有限公司 Application program preloading method and device, storage medium and mobile terminal
CN108762839B (en) 2018-05-22 2020-12-18 北京小米移动软件有限公司 Interface display method and device of application program
CN108920156A (en) 2018-05-29 2018-11-30 Oppo广东移动通信有限公司 Application program prediction model method for building up, device, storage medium and terminal
CN108804157A (en) 2018-06-05 2018-11-13 Oppo广东移动通信有限公司 Application program preloads method, apparatus, storage medium and terminal
US10824483B2 (en) 2018-11-20 2020-11-03 R Software Inc. Application programming interface scoring, ranking and selection
US11128676B2 (en) 2019-04-16 2021-09-21 Citrix Systems, Inc. Client computing device providing predictive pre-launch software as a service (SaaS) sessions and related methods
US11481231B2 (en) 2019-10-02 2022-10-25 Citrix Systems, Inc. Systems and methods for intelligent application instantiation
EP4104424A4 (en) 2020-02-13 2024-02-28 Tensera Networks Ltd. Preloading of applications and in-application content in user devices
US20210323908A1 (en) 2020-04-10 2021-10-21 Imbria Pharmaceuticals, Inc. Process for producing tca cycle intermediate conjugates

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080307339A1 (en) * 2006-03-20 2008-12-11 Kidzui, Inc. Child-oriented computing system
US20130275899A1 (en) * 2010-01-18 2013-10-17 Apple Inc. Application Gateway for Providing Different User Interfaces for Limited Distraction and Non-Limited Distraction Contexts
US20110211813A1 (en) * 2010-02-26 2011-09-01 Research In Motion Limited Enhanced banner advertisements
US20120324481A1 (en) * 2011-06-16 2012-12-20 Samsung Electronics Co. Ltd. Adaptive termination and pre-launching policy for improving application startup time
US20140201673A1 (en) * 2013-01-15 2014-07-17 Apple Inc. Progressive tiling
US20160085583A1 (en) * 2014-09-24 2016-03-24 Facebook, Inc. Multi-Threaded Processing of User Interfaces for an Application
US20160103608A1 (en) * 2014-10-09 2016-04-14 Vellum Tech Corporation Virtual keyboard of a computing device to create a rich output and associated methods
US20160259656A1 (en) * 2015-03-08 2016-09-08 Apple Inc. Virtual assistant continuity
US20160344679A1 (en) * 2015-05-22 2016-11-24 Microsoft Technology Licensing, Llc Unified messaging platform and interface for providing user callouts
US20190205159A1 (en) * 2016-09-12 2019-07-04 Huawei Technologies Co., Ltd. Method and apparatus for silently starting application in background and terminal device
US20180129537A1 (en) * 2016-11-10 2018-05-10 Microsoft Technology Licensing, Llc Managing memory usage using soft memory targets
US20190188013A1 (en) * 2017-12-20 2019-06-20 Google Llc Suggesting Actions Based on Machine Learning
US20190196849A1 (en) * 2017-12-21 2019-06-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and Device for Preloading Application, Storage Medium, and Terminal Device
US20210224085A1 (en) * 2018-11-07 2021-07-22 Citrix Systems, Inc. Preloading of Application on a User Device Based on Content Received by the User Device
US20220413695A1 (en) * 2019-11-30 2022-12-29 Huawei Technologies Co., Ltd. Split-screen display method and electronic device
US20210304096A1 (en) * 2020-03-27 2021-09-30 Intel Corporation Device, system and method to dynamically prioritize a data flow based on user interest in a task

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12039345B2 (en) 2020-12-20 2024-07-16 Tensera Networks Ltd. Preloading of applications transparently to user using audio-focus component, and detection of preloading completion
US12099854B2 (en) 2020-12-20 2024-09-24 Tensera Networks Ltd. Techniques for detecting completion of preloading of user applications
US20240273322A1 (en) * 2023-02-10 2024-08-15 Qualcomm Incorporated Protecting against malicious attacks in images

Also Published As

Publication number Publication date
US20220121725A1 (en) 2022-04-21
CN114144777A (en) 2022-03-04
US11824956B2 (en) 2023-11-21
EP4004767A1 (en) 2022-06-01
WO2021019415A1 (en) 2021-02-04
EP4004767A4 (en) 2023-03-08
US20220358177A1 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
US11824956B2 (en) Pre-rendering of application user-interfaces in user devices using off-line pre-render mode
US11915012B2 (en) Application preloading in the presence of user actions
US10101910B1 (en) Adaptive maximum limit for out-of-memory-protected web browser processes on systems using a low memory manager
US20230054174A1 (en) Preloading of applications and in-application content in user devices
US8522249B2 (en) Management of software implemented services in processor-based devices
JP6363796B2 (en) Dynamic code deployment and versioning
US8639772B2 (en) Centralized application resource manager
US9582326B2 (en) Quality of service classes
US20220166845A1 (en) Silent updating of content in user devices
US10289446B1 (en) Preserving web browser child processes by substituting a parent process with a stub process
US11734023B2 (en) Preloading of applications having an existing task
CN106445696B (en) Multi-process interactive processing method and system
US10248321B1 (en) Simulating multiple lower importance levels by actively feeding processes to a low-memory manager
US10747550B2 (en) Method, terminal and storage medium for starting software
US20220237002A1 (en) Scheduling of Application Preloading
US11922187B2 (en) Robust application preloading with accurate user experience
US20220179668A1 (en) Robust Application Preloading with Accurate User Experience
US20240012677A1 (en) Pre-launching an application using interprocess communication
CN110633137A (en) Method and device for starting installation package of application program and electronic equipment
KR20220019946A (en) System and method for reducing cold start latency of functions as a service
US20120229480A1 (en) Regulation of Screen Composing in a Device
WO2022180505A1 (en) Robust application preloading with accurate user experience

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENSERA NETWORKS LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PELED, ROEE;WIX, AMIT;SIGNING DATES FROM 20211123 TO 20211210;REEL/FRAME:058524/0279

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION