WO2019028524A1 - Multiple cache architecture - Google Patents

Multiple cache architecture

Info

Publication number
WO2019028524A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
cache architecture
digital content
partner
cache
Prior art date
2017-08-10
Application number
PCT/AU2018/050845
Other languages
English (en)
Inventor
Gary Fink
Brendan BARLOW
Original Assignee
Unlockd Media Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2017-08-10
Filing date
2018-08-10
Publication date
2019-02-14
Application filed by Unlockd Media Pty Ltd
Publication of WO2019028524A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements
    • G06Q 30/0277 Online advertisement

Definitions

  • the present disclosure relates to a multiple cache architecture and more particularly, to a multiple cache architecture for improving the availability and speed at which digital content can be accessed.
  • Cache is a hardware/software component for storing digital data so it can be retrieved at a later time.
  • Cache is typically utilized in computing so that requests for data can be served faster than if the data were stored in another type of memory (e.g., remote main memory).
  • the data stored in cache memory may be a duplicate copy of data stored in another type of memory.
  • the cache may be located closer in proximity to a processor than the other memory.
  • Some embodiments described herein may provide for a cache architecture comprising: a data storage device; a query engine configured to query an external decision engine for information about where to retrieve digital content and receive information about a content partner upon a first trigger event; a third party SDK to query the content partner for digital content and store the digital content received from the content partner; and an API configured to retrieve the digital content from the third party SDK upon the occurrence of a second trigger event and store the digital content in the data storage device.
  • the decision engine may return digital content directly.
  • the third party SDK may retry to query the content partner for a predetermined number of times if the previous query is unsuccessful.
  • the query engine may query the external decision engine again if the content partner does not return digital content to the cache architecture after the predetermined number of retries.
  • the cache architecture may be implemented in a mobile device.
  • the cache architecture may be implemented in a stationary device.
  • the digital content may be an advertisement.
  • the digital content may be news content.
  • the digital content may be loyalty offers or membership deals.
  • the digital content may be audio content.
  • the digital content may be audio/visual content.
  • the content partner may be an ad server.
  • the first trigger event may be one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, or the detection of some other user interaction by the cache architecture.
  • the second trigger event may be an unlocking of a mobile device.
  • multiple first trigger events may occur before the second trigger event occurs.
  • Some embodiments described herein may provide for a method for loading digital content into a cache architecture comprising: identifying a first trigger event; querying an external decision engine for information about where to retrieve digital content from; receiving information about a content partner; querying a content partner for digital content; storing the digital content received from the content partner as part of a third party SDK; identifying a second trigger event; and retrieving the digital content from the third party SDK for display to the user.
  • the method may further comprise retrying to query the content partner for a predetermined number of times if the previous query is unsuccessful.
  • the method may further comprise querying the external decision engine again if the content partner does not return digital content after the predetermined number of retries.
  • the method may be implemented in a mobile device.
  • the method may be implemented in a stationary device.
  • the digital content may be an advertisement.
  • the digital content may be news content.
  • the digital content may be loyalty offers or membership deals.
  • the digital content may be audio content.
  • the digital content may be audio/visual content.
  • the content partner may be an ad server.
  • the first trigger event may be one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, or the detection of some other user interaction by the cache architecture.
  • the second trigger event may be an unlocking of a mobile device.
  • multiple first trigger events may occur before the second trigger event occurs.
  • the digital content that may be delivered to devices includes any combination of one or more of the following: advertisements, news content, social media content, weather content, sports content, loyalty offerings, membership deals, audio content, visual content, audio-visual content, radio content, television content, games, financial market information and other forms of information.
  • the digital content may be delivered via a number of different approaches including one or more of the following: the Internet, a local area network (LAN), a wide area network (WAN) and a cellular network, or some other telecommunications network.
  • the user device may be connected to the telecommunications network in one or more of the following ways: wireless, wired, coaxial, Ethernet, and fiber optics.
  • the device may be configured to be unlocked by receiving an input from a user of the device.
  • the input provided by the user may be one or more of the following inputs into the device: touch sensitive display, one or more buttons associated with the device, mouse, keyboard, voice activation, motion activation and gesture activation.
  • FIG. 1 is a schematic representation of an embodiment of the cache architecture described, according to some embodiments.
  • FIG. 2 illustrates the operation of an exemplary cache architecture, according to certain embodiments.
  • FIG. 3 is a schematic diagram of an embodiment of the waterfall architecture, according to certain embodiments.
  • the present disclosure relates to a multiple cache architecture and more particularly, to a multiple cache architecture for improving the availability and speed at which digital content can be accessed.
  • the caching solutions described herein may be used for a variety of purposes where reliable digital content retrieval is desired. For example, in some embodiments, it may be desirable to have fast, reliable retrieval of digital content on a computing device (e.g., a mobile device). In some embodiments, the cache architecture described herein may be utilized by a computing device to improve the time required to retrieve and display digital content on a display.
  • the cache architecture described herein may be implemented to ensure (or improve) the availability of digital content on the computing device so there is content available to retrieve (e.g., display) to the user at the appropriate time (e.g., when requested by the computing device, an application running on the computing device, or an input from a user).
  • the input from the user may be the unlocking of the mobile device.
  • the digital content may be pushed to the computing device and stored in the cache architecture.
  • the digital content may be retrieved or pulled, by the computing device over a network (e.g., the internet) from various content sources including e.g., data exchanges and stored into the cache architecture.
  • for the cache architecture described herein to serve content, it may be desirable to utilize a predetermined window of opportunity (WOO).
  • WOO may be about 1 sec (e.g., about 0.5 sec, 0.75 sec, 1 sec, 1.25 sec, 1.5 sec, etc.).
  • the desired content may be readily available in a cache architecture configured to enable the desired speed.
  • content may be retrieved before the content is requested (e.g., before a triggering event). Determining when, how, and/or what content to retrieve may be advantageous for the cache architecture described herein.
  • the digital content may not be reliably retrieved to store in the cache because the request for the digital content may fail.
  • the computing device may be in a standby/sleep/low-power mode, which may result in an inability to execute the desired requests over the network until the computing device is available again (e.g., wakes up).
  • the computing device may not have a stable network connection because of external factors (e.g. poor connection to the network, poor WiFi, poor cellular reception, bandwidth limitations, data shaping by the internet service provider, etc.).
  • the radios in the device may not be started until after the device is awoken by the user interacting with it, which may, in some embodiments, be too late to retrieve the digital data in a timely fashion.
  • digital content that is already cached within the cache architecture may expire if not utilized (e.g., displayed) before the expiry time specified by a content provider. Showing the content after it has expired may not be desirable for any number of reasons.
  • digital content may be loaded to the cache architecture in two different ways - directly by downloading the digital content or indirectly by asking a third party application (e.g., SDK) to download the content.
  • when the digital content is downloaded directly, the content is retrieved from an external device and can be stored in cache on the computing device. In some embodiments, this storage of content may persist in the cache after the computing device is restarted and/or powered off.
  • many large content providers (e.g., the Facebook and Google content networks) may only make their content available indirectly, through an SDK. An SDK is a library of code which may be embedded within a larger application or computing device.
  • the computing device and corresponding cache architecture may include an application or process (e.g., an API) that asks the SDK to load the content and later, after the content is loaded, may call another API to provide (e.g., retrieve and/or display) the content to the user.
  • for example, an API in the cache architecture (e.g., an application that is part of the architecture) may ask the Facebook SDK to load content and, when the content is needed, the cache architecture calls a second API to query the Facebook SDK to provide (e.g., display) the requested content. Since the cache architecture does not have direct access to the content held within the third party SDK, it cannot cache the content in a resilient manner before the content is requested. This may result in the need to request new content from the third party SDK when the computing device is restarted or turned on.
  • content may be requested upon a triggering event (e.g., mobile device activation and/or unlock). However, the application responsible for loading the content may only be in the foreground when displaying content, which means it may be constantly at risk of being stopped by the operating system on the computing device.
  • the stoppage of certain applications may occur in accordance with certain rules (e.g., an importance hierarchy) whereby background processes may be stopped at any time if resources such as processing and memory are needed by foreground processes.
  • a set of interrelated solutions may be developed to implement a cache architecture in which content is cached and ready to be provided to (e.g., displayed to) the user upon a triggering event.
  • a durable cache architecture may be implemented such that direct-served content, which can be downloaded over the network directly, can be stored in persistent storage (e.g., a disk or memory card) on the computing device.
  • this durable cache may survive application and/or device restarts (a minimal sketch of such a durable cache appears at the end of this list).
  • the cache architecture may fill the cache from a content source chosen by an external decision engine.
  • an application associated with the cache architecture may query an external decision engine (e.g. a content server) which content source (e.g. content network) the computing device should query to retrieve content.
  • the decision of which content source to select may be based, at least in part, on a ranking of content sources determined by a third party and/or based on user properties (such as age, gender, interests, phone model, location, time of day, etc.). If a content source does not have content available then the cache architecture may ask the external decision engine for the next best content source, continuing down the list until content is returned or the list is exhausted.
  • FIG. 1 is a schematic representation of an embodiment of the cache architecture described, according to some embodiments.
  • the cache architecture 100 includes a memory 110 for storing digital content.
  • the cache architecture 100 further includes a query engine 120 configured to query a remotely located decision engine 130 to ask the remote decision engine what content partner 140 should be queried for content.
  • the decision engine may return content directly (e.g., provide direct content to the cache architecture).
  • the decision engine 130 makes its choice based on a set of predefined rules.
  • the cache architecture then utilizes a Requesting API 150 to request content from the selected content partner 140 (e.g., by calling a third party SDK 160).
  • if the query is unsuccessful, the third party SDK will retry (e.g., retry up to N times, where N is a number configured in the third party SDK or the Requesting API).
  • the query engine 120 may query the remote decision engine 130 to ask which content partner 140 to query next.
  • these steps may be repeated until digital content is returned to the cache architecture or the decision engine 130 indicates there are no more content partners 140 to query.
  • when the content is requested after a triggering event, the content may be retrieved from the third party SDK by a load API 170 and stored in memory 110.
  • third party content networks may include Facebook, Google's AdMob or Twitter's MoPub.
  • the cache architecture may be configured to directly show content (e.g., an HTML document with CSS, images and javascript), or the cache architecture may embed the third party SDKs within the cache architecture (e.g., embed the MoPub SDK inside an application on the computing device).
  • the cache architecture calls the load API and, once digital content is loaded into memory, a show API may display the content to the user.
  • the decision engine may be responsible for determining what content to provide to the cache architecture.
  • the content server may receive an HTTP/S request from the cache architecture and make a decision about what content should be served to the cache architecture at that time.
  • the cache architecture may send additional data to help the content server make its decision.
  • the data may include any combination of one or more of: latitude and longitude, age, age range, gender, interests, how many days the user has been registered on a particular platform, etc.
  • one or more line items set up as a waterfall may be included.
  • a "waterfall" refers to a set of rules configured in such a way that if the rules for the first line item don't apply, the decision falls through to the next line item.
  • the rules on each line item are applied to the request, one line item at a time, flowing from the first line item to the next.
  • if the decision lands on a particular line item, then the content or instruction associated with that item is returned.
  • line item 2 is identified as a mediated instruction.
  • the content server can return an ad (html or image etc.) directly but it can also return some arbitrary string, an instruction, which the cache architecture may be configured to understand and use to take an action.
  • a mediated instruction therefore, is an instruction that tells the cache architecture to call one of the third party SDKs.
  • the content server might return the string "MoPub: 180:2", which says the cache architecture should query the MoPub SDK for content.
  • the content can be cached for 180 minutes and the cache architecture can retry twice if it fails to load content (a sketch that parses this instruction format appears at the end of this list).
  • FIG. 2 illustrates the operation of an exemplary cache architecture, according to certain embodiments.
  • the digital content is an advertisement but the process can be applied to any type of digital content.
  • a trigger event occurs to initiate the process.
  • the trigger event could be, for example, a dismiss, a screen off detection by the computing device, a screen on detection by the computing device, the detection of some other user interaction by the computing device, etc. (a small trigger-dispatch sketch appears at the end of this list).
  • the cache architecture makes a request (e.g., a call over HTTPS) to the content server. If a response (e.g., an HTTP response) is received from the content server, the process continues, otherwise, the cache architecture logs a content failure at operation 8.
  • if the maximum number of retries has been reached, the process stops at operation 13 until a new trigger event occurs. Otherwise, the process continues back to operation 2 to make another content request.
  • the maximum number of retries may be provided by the third party content partner (e.g., mediated partner) or by the cache architecture.
  • the response is parsed to determine whether it is digital content or a failure message. If the response is a failure message from the content server, this may indicate there is no content in the content server that satisfies the waterfall criteria (e.g., no more line items), so a failure is logged at operation 8.
  • Operation 5 corresponds to the reception of digital content. If the response is actual content, the content is cached into the cache architecture at operation X. If the response is a mediation request (e.g., an instruction to use the third party SDK to deliver the content), then the cache architecture checks to determine if the content is cacheable. In the illustrated embodiment, all third party content is cacheable, so the false path is not illustrated. However, in some embodiments, the false path may result in a failed process or similar event. If the third party content is cacheable, the third party SDK (e.g., AdMob or Facebook SDK) is queried to load content at operation 24. If the content is received, the content (or a reference to the SDK where the content is stored) is cached in memory so it is ready for the user when necessary (e.g., when the computing device is initiated or when the mobile device screen is unlocked).
  • the cycle repeats (i.e., the request at operation 24 is made until it either succeeds or fails). If it fails, the cache architecture retries if the retry limit for that third party has not been reached. If it succeeds, then content is loaded within the third party SDK, which is ready to provide (e.g., display) the content when the user unlocks the mobile device. As illustrated, if the cache architecture queries the third party to load content until it runs out of retries and there is no content (operation 27), then the cache architecture may make another call to the content server and ask it for other content or instructions. In some embodiments, if this occurs, the cache architecture may instruct the content server to skip the third party that just failed (a sketch of this fill loop appears at the end of this list).
  • the process described above may occur multiple times before any content is requested such that multiple pieces of content are cached (i.e., multiple caches are filled).
  • the cache architecture attempts to retrieve content from the first cache and, if it is empty, uses the second cache (a first-non-empty lookup sketch over multiple caches appears at the end of this list).
  • the cache architecture described herein may include 2, 3, 4, 5, 6, 7, 8, 9, 10, or more than 10 caches.
  • FIG. 3 is a schematic diagram of an embodiment of the waterfall architecture, according to certain embodiments.
  • the content server (SaS) is queried to fill the cache.
  • the configuration of the cache architecture assumes that a content request is retried up to nine times before failing and the third party (mediated) partners are retried zero times.
  • the process is initiated by an event that triggers the SDK to load content into one of its caches.
  • the SDK instructs the SaS to get content (i.e. the cache architecture queries the content server to make a decision about what content it should show).
  • the SaS responds to the SDK that it should use "MoPub High Floor" (first line in the SaS waterfall).
  • the SDK queries the MoPub partner for content. Assuming MoPub responds to the SDK that there is no content, the SDK initiates a first retry with SaS to get content. In this iteration, SaS may respond to the SDK to use "AdMob High Floor" (second line in the SaS waterfall).
  • the SDK queries the AdMob partner for content.
  • the SDK initiates a second retry with SaS to get content.
  • SaS may respond to the SDK to use "MoPub Medium Floor” (third line in the SaS waterfall).
  • the SDK queries the MoPub partner for content.
  • the SDK initiates a third retry with SaS to get content.
  • SaS may respond to the SDK to use "AdMob Medium Floor” (fourth line in the SaS waterfall).
  • the SDK queries the AdMob partner for content.
  • the SDK initiates a fourth retry with SaS to get content.
  • SaS may respond to the SDK to use "MoPub Low Floor" (fifth line in the SaS waterfall).
  • the SDK queries the MoPub partner for content. In this instance, assuming MoPub responds to the SDK that there is content, it provides the content to the SDK.
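
The following sketches illustrate, in Kotlin, some of the mechanisms described in the list above. Each is a minimal sketch under stated assumptions, not the implementation disclosed here; every class, interface, and function name is invented for the example. The first sketch corresponds to the durable cache: directly served content is written to persistent storage alongside an expiry time, so it survives application and device restarts and is never shown after it has expired.

```kotlin
import java.io.File
import java.time.Instant

// Minimal sketch of a durable cache: directly served content is persisted to
// disk together with an expiry time, so it survives application and device
// restarts. The class name and file-per-entry layout are illustrative
// assumptions, not the design described in this disclosure.
class DurableCache(private val dir: File) {

    init { dir.mkdirs() }

    /** Store content under a key together with an absolute expiry time. */
    fun put(key: String, content: ByteArray, expiresAt: Instant) {
        File(dir, "$key.bin").writeBytes(content)
        File(dir, "$key.meta").writeText(expiresAt.toEpochMilli().toString())
    }

    /** Return cached content if present and not yet expired, otherwise null. */
    fun get(key: String, now: Instant = Instant.now()): ByteArray? {
        val data = File(dir, "$key.bin")
        val meta = File(dir, "$key.meta")
        if (!data.exists() || !meta.exists()) return null
        val expiresAt = meta.readText().trim().toLongOrNull() ?: return null
        if (now.toEpochMilli() >= expiresAt) {
            // Expired content should not be shown, so drop it from the cache.
            data.delete()
            meta.delete()
            return null
        }
        return data.readBytes()
    }
}

fun main() {
    val cache = DurableCache(File("durable-cache"))
    // Cache a piece of direct-served content for 180 minutes.
    cache.put("ad-1", "<html>creative</html>".toByteArray(), Instant.now().plusSeconds(180L * 60))
    println(cache.get("ad-1")?.decodeToString()) // prints the creative while it is still valid
}
```

Keeping the expiry timestamp next to the payload also addresses the expiry concern noted earlier: content past its provider-specified lifetime is discarded rather than displayed.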
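
The next sketch parses a mediated instruction of the form shown earlier ("MoPub: 180:2") into a partner name, a cache lifetime in minutes, and a retry count. The MediatedInstruction data class and the tolerant whitespace handling are assumptions made for illustration, not a format defined by this disclosure or by any real SDK.

```kotlin
// Sketch: parsing a mediated instruction such as "MoPub:180:2" into a partner
// name, a cache lifetime in minutes, and a retry count. The data class and the
// lenient whitespace handling are illustrative assumptions.
data class MediatedInstruction(
    val partner: String,   // which third party SDK to query, e.g. "MoPub"
    val cacheMinutes: Int, // how long the returned content may be cached
    val maxRetries: Int    // how many times to retry that SDK if loading fails
)

fun parseMediatedInstruction(raw: String): MediatedInstruction? {
    val parts = raw.split(":").map { it.trim() }
    if (parts.size != 3) return null                   // not a mediated instruction
    val minutes = parts[1].toIntOrNull() ?: return null
    val retries = parts[2].toIntOrNull() ?: return null
    return MediatedInstruction(parts[0], minutes, retries)
}

fun main() {
    // The example from the description: query the MoPub SDK, cache the result
    // for 180 minutes, and retry twice if loading fails.
    println(parseMediatedInstruction("MoPub: 180:2"))
    // MediatedInstruction(partner=MoPub, cacheMinutes=180, maxRetries=2)
}
```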
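
The fill loop below follows the waterfall behaviour described above: ask the decision engine for a line item, cache direct content if that is what comes back, otherwise query the named third party SDK up to its retry limit, and on failure ask the decision engine for the next line item while skipping the partner that just failed, until content is obtained or the waterfall is exhausted. All interfaces here are placeholders; real content servers and SDKs such as MoPub, AdMob or the Facebook SDK expose their own APIs, which this sketch does not reproduce.

```kotlin
// Sketch of the waterfall fill loop. Every interface below is an illustrative
// placeholder, not a real content server or SDK API.

sealed interface ServerResponse
data class DirectContent(val payload: ByteArray) : ServerResponse
data class Mediated(val partner: String, val maxRetries: Int) : ServerResponse
object NoMoreLineItems : ServerResponse

interface DecisionEngine {
    /** Ask for the next line item, optionally skipping a partner that just failed. */
    fun nextLineItem(skipPartner: String? = null): ServerResponse
}

interface ThirdPartySdk {
    /** Ask the partner SDK to load content; true means the SDK now holds content. */
    fun loadContent(partner: String): Boolean
}

/** Returns true when direct content was cached or a partner SDK now holds content. */
fun fillCache(
    engine: DecisionEngine,
    sdk: ThirdPartySdk,
    cacheDirect: (ByteArray) -> Unit
): Boolean {
    var skip: String? = null
    while (true) {
        when (val response = engine.nextLineItem(skip)) {
            is DirectContent -> { cacheDirect(response.payload); return true }
            is Mediated -> {
                // One initial attempt plus up to maxRetries retries for this partner.
                repeat(response.maxRetries + 1) {
                    if (sdk.loadContent(response.partner)) return true
                }
                skip = response.partner // ask the server to skip the partner that failed
            }
            NoMoreLineItems -> return false // waterfall exhausted: log a content failure
        }
    }
}

fun main() {
    // Tiny demo with stubbed collaborators: the first line item is a mediated
    // partner that never loads content, the second returns direct content.
    var call = 0
    val engine = object : DecisionEngine {
        override fun nextLineItem(skipPartner: String?): ServerResponse =
            if (call++ == 0) Mediated("MoPub", maxRetries = 2)
            else DirectContent("creative".toByteArray())
    }
    val sdk = object : ThirdPartySdk {
        override fun loadContent(partner: String) = false
    }
    val filled = fillCache(engine, sdk) { println("cached ${it.size} bytes of direct content") }
    println(filled) // true
}
```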
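
The trigger dispatch sketch separates the two kinds of events discussed above: first trigger events (a dismiss, screen off, screen on, or another user interaction) prefetch content into the cache, while the second trigger event (unlocking the device) displays what has already been cached. The enum values and handler are illustrative; on a real device they would be wired to the platform's own callbacks rather than called directly.

```kotlin
// Sketch: dispatching first trigger events (fill the cache) and the second
// trigger event (show cached content on unlock). Names are illustrative.
enum class TriggerEvent { DISMISS, SCREEN_OFF, SCREEN_ON, OTHER_INTERACTION, DEVICE_UNLOCK }

class TriggerHandler(
    private val fillCache: () -> Unit,          // first trigger: prefetch content into the cache
    private val showCachedContent: () -> Unit   // second trigger: display content already cached
) {
    fun onEvent(event: TriggerEvent) = when (event) {
        TriggerEvent.DEVICE_UNLOCK -> showCachedContent()
        else -> fillCache() // multiple first trigger events may fire before an unlock occurs
    }
}

fun main() {
    val handler = TriggerHandler(
        fillCache = { println("first trigger: query decision engine and fill the cache") },
        showCachedContent = { println("second trigger: show cached content on unlock") }
    )
    handler.onEvent(TriggerEvent.SCREEN_OFF)     // fill
    handler.onEvent(TriggerEvent.SCREEN_ON)      // fill again (multiple first triggers allowed)
    handler.onEvent(TriggerEvent.DEVICE_UNLOCK)  // show
}
```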
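
Finally, where several caches are filled, the retrieval rule described above (use the first cache and, if it is empty, fall back to the next) reduces to a first-non-empty lookup over an ordered list of caches. The ContentCache interface in this sketch is a hypothetical placeholder.

```kotlin
// Sketch: retrieving content from an ordered set of caches, falling back to the
// next cache when the previous one is empty. The interface is a placeholder.
interface ContentCache {
    /** Returns and removes a cached item, or null when this cache is empty. */
    fun take(): ByteArray?
}

fun retrieveFromCaches(caches: List<ContentCache>): ByteArray? =
    caches.firstNotNullOfOrNull { it.take() }

fun main() {
    val empty = object : ContentCache { override fun take(): ByteArray? = null }
    val filled = object : ContentCache { override fun take() = "cached creative".toByteArray() }
    // The first cache is empty, so the second cache serves the content.
    println(retrieveFromCaches(listOf(empty, filled))?.decodeToString())
}
```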

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

According to the present invention, a cache architecture comprises: a data storage device; a query engine configured to query an external decision engine for information about where to retrieve digital content and to receive information about a content partner upon a first trigger event; a third party SDK to query the content partner for digital content and store the digital content received from the content partner; and an API configured to retrieve the digital content from the third party SDK upon the occurrence of a second trigger event and store the digital content in the data storage device.
PCT/AU2018/050845 2017-08-10 2018-08-10 Multiple cache architecture WO2019028524A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762543707P 2017-08-10 2017-08-10
US62/543,707 2017-08-10

Publications (1)

Publication Number Publication Date
WO2019028524A1 (fr) 2019-02-14

Family

ID=65273019

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2018/050845 WO2019028524A1 (fr) 2017-08-10 2018-08-10 Multiple cache architecture

Country Status (1)

Country Link
WO (1) WO2019028524A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6826614B1 (en) * 2001-05-04 2004-11-30 Western Digital Ventures, Inc. Caching advertising information in a mobile terminal to enhance remote synchronization and wireless internet browsing
US8805950B1 (en) * 2007-09-12 2014-08-12 Aol Inc. Client web cache
US20130110637A1 (en) * 2011-11-02 2013-05-02 Ross Bott Strategically timed delivery of advertisements or electronic coupons to a mobile device in a mobile network
US20140006538A1 (en) * 2012-06-28 2014-01-02 Bytemobile, Inc. Intelligent Client-Side Caching On Mobile Devices
WO2015017891A1 (fr) * 2013-08-07 2015-02-12 Unlockd Media Pty. Ltd. Systems, devices and methods for displaying digital content on a display device
US20150294342A1 (en) * 2014-04-15 2015-10-15 Xperiel, Inc. Platform for Providing Customizable Brand Experiences

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KHAN, A. J. ET AL.: "CAMEO: A Middleware for Mobile Advertisement Delivery", Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, 2013, pages 125-138, XP055576980 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18844860

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18844860

Country of ref document: EP

Kind code of ref document: A1