US9483191B2 - Multi-tier storage for delivery of services - Google Patents

Multi-tier storage for delivery of services

Info

Publication number
US9483191B2
Authority
US
United States
Prior art keywords
content
storage unit
counters
period
count
Prior art date
Legal status
Active, expires
Application number
US14/211,699
Other versions
US20140297982A1 (en)
Inventor
Robert C. Duzett
Current Assignee
Arris Enterprises LLC
Original Assignee
Arris Enterprises LLC
Priority date
Filing date
Publication date
Priority to US14/211,699
Application filed by Arris Enterprises LLC
Assigned to ARRIS ENTERPRISES, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DUZETT, ROBERT C.
Publication of US20140297982A1
Assigned to BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCHIE U.S. HOLDINGS LLC, ARCHIE U.S. MERGER LLC, ARRIS ENTERPRISES, INC., ARRIS GLOBAL SERVICES, INC., ARRIS GROUP, INC., ARRIS HOLDINGS CORP. OF ILLINOIS, INC., ARRIS INTERNATIONAL LIMITED, ARRIS SOLUTIONS, INC., ARRIS TECHNOLOGY, INC., BIG BAND NETWORKS, INC., GIC INTERNATIONAL CAPITAL LLC, GIC INTERNATIONAL HOLDCO LLC, JERROLD DC RADIO, INC., NEXTLEVEL SYSTEMS (PUERTO RICO), INC., POWER GUARD, INC., TEXSCAN CORPORATION
Publication of US9483191B2
Application granted
Assigned to ARRIS ENTERPRISES LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES INC
Assigned to ARCHIE U.S. HOLDINGS LLC, ARRIS TECHNOLOGY, INC., POWER GUARD, INC., NEXTLEVEL SYSTEMS (PUERTO RICO), INC., TEXSCAN CORPORATION, ARRIS ENTERPRISES, INC., BIG BAND NETWORKS, INC., GIC INTERNATIONAL CAPITAL LLC, ARRIS INTERNATIONAL LIMITED, ARRIS SOLUTIONS, INC., ARRIS HOLDINGS CORP. OF ILLINOIS, INC., GIC INTERNATIONAL HOLDCO LLC, JERROLD DC RADIO, INC., ARCHIE U.S. MERGER LLC, ARRIS GLOBAL SERVICES, INC., ARRIS GROUP, INC.: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT
Assigned to ARRIS ENTERPRISES LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES, INC.
Assigned to JPMORGAN CHASE BANK, N.A.: TERM LOAN SECURITY AGREEMENT. Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., ARRIS TECHNOLOGY, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Assigned to JPMORGAN CHASE BANK, N.A.: ABL SECURITY AGREEMENT. Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., ARRIS TECHNOLOGY, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT: PATENT SECURITY AGREEMENT. Assignors: ARRIS ENTERPRISES LLC
Assigned to WILMINGTON TRUST: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Legal status: Active (adjusted expiration)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0613 Improving I/O performance in relation to throughput
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 Migration mechanisms
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation involving caching operations
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2405 Monitoring of the internal components or processes of the server, e.g. server load
    • H04N 21/2408 Monitoring of the upstream path of the transmission network, e.g. client requests

Definitions

  • FIG. 3 is a flowchart illustrating an example process 300 for operating a multi-tier storage for delivery services.
  • a play count associated with a storage unit entry can be compared to a transfer threshold. For example, when a secondary storage unit entry is hit, a play count associated with the secondary storage unit entry can be incremented, and if the play count eventually meets a transfer threshold, then the content associated with the secondary storage unit entry can be transferred to a primary storage unit.
  • the process 300 can start at 310 where a transfer threshold is initialized for a secondary storage unit.
  • the transfer threshold can be a fixed number.
  • the transfer threshold can be variable based upon factors such as, for example, a capacity or load associated with a content server 125 of FIG. 1, a capacity or load associated with a content distribution network 115 of FIG. 1, the amount of traffic that is expected for a storage unit, or the type of content or data stored within the storage unit, among many other factors.
  • a low transfer threshold (e.g., one (1) play for insertion into a secondary storage unit section of the server and one (1) hit for transfer into the primary storage unit section) can be used, and the transfer threshold can later be raised to the desired level.
  • a storage unit entry hit to a storage unit entry that is within a secondary storage unit can be received.
  • a storage unit entry hit can be the reception of a request for content or data that is associated with the storage unit entry by a server (e.g., content server 125 of FIG. 1 ).
  • a play count associated with the storage unit entry for which a hit is received can be incremented.
  • a count tag associated with the storage unit entry can be updated to reflect the hit to the storage unit entry.
  • each storage unit entry can include a count tag that indicates the number of times the corresponding storage unit entry has been hit or the number of times the content associated with the storage unit entry has been requested.
  • the number that is indicated by the count tag of the storage unit entry can be incremented to account for the hit.
  • an activity history associated with a storage unit entry or the content associated with the storage unit entry can be maintained, for example, within a server or data store. When the secondary storage unit entry is hit, the activity history associated with the storage unit entry can be updated to account for the hit.
  • a determination can be made whether the transfer threshold has been met by the storage unit entry that is hit.
  • the activity history, or current play count, associated with the hit storage unit entry can be compared to the transfer threshold.
  • the activity history, or play count associated with the storage unit entry can indicate a number of times that the storage unit entry has been hit, and this number can be compared to the transfer threshold. If the activity history, or play count, associated with the storage unit entry is less than, or in some cases equal to, the transfer threshold, the determination can be made that the transfer threshold has not been met and the process 300 can return to 320 . If the activity history, or play count associated with the storage unit entry is greater than, or in some cases equal to the transfer threshold, the determination can be made that the transfer threshold has been met and the process 300 can proceed to 350 .
  • the content associated with the storage unit entry for which the transfer threshold has been met can be transferred from the secondary storage unit to a primary storage unit (e.g., primary storage unit 220 of FIG. 2 ).
  • a transfer module 240 of FIG. 2 can write the content of the secondary storage unit entry to the primary storage unit 220 .
  • the content can be removed from the secondary storage unit entry.
  • the content can become the MRU entry of the primary storage unit 220 .
  • a LRU entry within the primary storage unit can be transferred from the primary storage unit to the secondary storage unit.
  • the LRU entry of the primary storage unit can become the MRU entry of the secondary storage unit 230 , and the activity history associated with the storage unit entry can be reset.
  • FIG. 4 is a flowchart illustrating an example process 400 for operating a multi-tier storage using a rolling period window.
  • the process 400 can start at 405 where a transfer threshold is initialized for a secondary storage unit.
  • the transfer threshold can be a fixed number.
  • the transfer threshold can be variable based upon factors such as, for example, a capacity or load associated with a content server 125 of FIG. 1 , a capacity or load associated with a content distribution network 115 of FIG. 1 , the amount of traffic that is expected for a storage unit, the type of content or data stored within the storage unit, among many other factors.
  • the transfer threshold can be associated with a period of time.
  • the period of time can be a fixed length of time, or the period of time can be dynamic based upon factors such as, for example, total accessed block or CPU load.
  • the period of time can be adjusted by altering the number of period counters maintained within an activity history, or count tag of a storage unit entry (e.g., the activity history can include 3, 4, 5, or any other number of period counters in order to maintain a count of content requests for a desired period of time), and/or by altering the length of time associated with a period (e.g., each period counter can maintain a count of content requests for a number of minutes, a number of hours, a number of days, or any other predetermined period of time).
  • a hit to a secondary storage unit entry can be received.
  • a storage unit entry hit can be the reception of a request for content associated with the storage unit entry by a server (e.g., content server 125 of FIG. 1 ).
  • the period of the storage unit entry's last hit can be identified.
  • rolling period counters can be kept for entries in the secondary storage unit (e.g., secondary storage unit 230 of FIG. 2 ), and can be accessed and/or updated when a corresponding storage unit entry is hit.
  • a storage unit entry can include a count tag, and the count tag can include one or more period counters.
  • the period associated with each period counter can be fixed or can be dynamic based upon factors such as, for example, capacity and/or load of a server or a network.
  • Each period counter can indicate and/or include the number of hits to the storage unit entry during a specific period of time.
  • each period counter can be associated with a predetermined maximum number of storage unit requests (e.g., each period counter can be maintained over a predetermined range of requests). For example, each time a storage unit entry is hit, the current period counter can be incremented, and when the number of storage unit requests reaches the predetermined maximum number for the current period, the oldest period counter can be discarded, each of the remaining period counters can be shifted, and a new period counter can be inserted into the corresponding count tag.
  • the number of period counters included in a count tag can be varied and can be based upon many different factors.
  • the first period counter of the plurality of period counters in the count tag can indicate the period of the storage unit entry's last hit.
  • the period counter can also indicate the number of times the storage unit entry was hit during said period of time.
  • a determination can be made whether the current period is the same as the period of the storage unit entry's last hit.
  • the period of the storage unit entry's last hit can be identified from the newest period counter of a count tag associated with the storage unit entry or from another data store or server, and the identified period of the storage unit entry's last hit can be compared to a current period.
  • a current period can be maintained within a content server 125 (e.g., at transfer module 240 of FIG. 2 ).
  • a play count associated with the current period can be updated for the storage unit entry.
  • a count tag associated with the storage unit entry can be updated to reflect the hit to the storage unit entry.
  • the period counter corresponding with the current period (e.g., the newest period counter of the storage unit entry's count tag) can be incremented to account for the hit.
  • an activity history associated with a storage unit entry can be maintained, for example, within a server or data store. When the secondary storage unit entry is hit, the activity history associated with the storage unit entry can be updated to account for the hit. For example, the activity history associated with the storage unit entry can be updated to reflect that an additional hit has been made to the storage unit entry during the current period.
  • when the current period differs from the period of the storage unit entry's last hit, the process 400 can proceed to 430.
  • the period counters of the storage unit entry can be shifted according to the difference between the period of the storage unit entry's last hit and the current period.
  • counts for a predetermined number of periods can be maintained within a count tag for each storage unit entry, and the periods can include the last period in which the storage unit entry was hit as well as a number of periods that chronologically precede the last period in which the storage unit entry was hit.
  • one or more of the period counters for a storage unit entry can be empty (e.g., where the storage unit entry was not hit during the corresponding period).
  • the difference between the current period and the last period in which the storage unit entry was hit can indicate how many of the storage unit entry's period counters are to be discarded and replaced with empty period counters.
  • the number of period counters that are to be discarded and replaced with empty counters can be equivalent to the difference between the current period and the last period in which the storage unit entry was hit, and one of the new, empty counters can correspond with the current period.
  • empty period counters can replace discarded period counters, period counters that are not discarded can be shifted to the end of the count tag, and the period counter that is now at the beginning of the count tag can correspond with the current period.
  • one or more new period counters can be inserted at the beginning of the storage unit entry's count tag to replace the discarded counters or the counters that were shifted out.
  • the newest period counter (e.g., the first counter in the count tag) can then correspond with the current period, and the process 400 can proceed to 425 (a sketch of this counter-shifting logic appears after this list).
  • a sum of the play counts associated with each period counter of the storage unit entry can be calculated.
  • each period counter within a count tag can indicate the number of times the storage unit entry was hit during a corresponding period.
  • the play counts of each of the period counters can be added together to calculate a total number of times the storage unit entry was hit over the period of time covered by the period counters in combination. For example, where each period counter accounts for a five (5) minute period, and a count tag maintains a maximum of five (5) period counters, the number of times the storage unit entry was hit over the last twenty (20) to twenty-five (25) minutes (as low as twenty depending on the current position within the current period) can be calculated by summing the play counts of each of the period counters.
  • the number of period counters maintained within a count tag can be varied to account for larger or smaller periods of time.
  • the time associated with each period counter can be varied to account for larger or smaller periods of time.
  • a determination can be made whether the predetermined threshold has been met at the hit storage unit entry.
  • the sum of the play counts of each of the period counters associated with the storage unit entry can be compared to the predetermined threshold. If the sum of the play counts is less than, or in some cases equal to, the predetermined threshold, the determination can be made that the predetermined threshold has not been met and the process 400 can return to 410 .
  • in such cases, the updated activity information (e.g., the updated period counters) associated with the secondary storage unit entry can be stored until the secondary storage unit entry is hit again at 410.
  • the content associated with the storage unit entry for which the predetermined threshold has been met can be transferred from the secondary storage unit (e.g., secondary storage unit 230 of FIG. 2 ) to a primary storage unit (e.g., primary storage unit 220 of FIG. 2 ).
  • the content can be removed from the secondary storage unit entry. For example, the content can become the MRU entry of a primary storage unit 220 .
  • a LRU entry within the primary storage unit can be transferred from the primary storage unit to the secondary storage unit 230 .
  • the LRU entry of the primary storage unit can become the MRU entry of the secondary storage unit 230 , and the activity history associated with the storage unit entry can be reset.
  • FIG. 5 is a block diagram illustrating an example hardware configuration 500 operable to provide multi-tier storage for delivery services. While a content server 125 is shown, it should be understood that many different kinds of network devices can implement a multi-tier storage for delivery services.
  • the configuration 500 can include a processor 510 , a memory 520 , a primary storage unit 530 , a secondary storage unit 540 , and an input/output device 550 . Each of the components 510 , 520 , 530 , 540 and 550 can, for example, be interconnected using a system bus 560 .
  • the processor 510 is capable of processing instructions for execution within the configuration 500 . In one implementation, the processor 510 is a single-threaded processor. In another implementation, the processor 510 is a multi-threaded processor.
  • the processor 510 is capable of processing instructions stored in the memory 520 or on the storage units 530 or 540.
  • the memory 520 stores information within the configuration 500 .
  • the memory 520 is a computer-readable medium.
  • the memory 520 is a volatile memory unit.
  • the memory 520 is a non-volatile memory unit.
  • the storage units 530 and 540 are capable of providing mass storage within the configuration 500.
  • the storage units 530 and 540 are each a computer-readable medium.
  • the storage units 530 and 540 can, for example, include a hard disk device, an optical disk device, flash memory, random-access memory, or some other large capacity storage device. It should be understood that more storage units can be available in addition to the primary storage unit 530 and secondary storage unit 540.
  • the input/output device 550 provides input/output operations for the configuration 500 .
  • the input/output device 550 can include one or more of a plain old telephone interface (e.g., an RJ11 connector), a network interface device (e.g., an Ethernet card), a serial communication device (e.g., RS-232 port), and/or a wireless interface device (e.g., 802.11 card).
  • the input/output device can include driver devices configured to receive input data and send output data to other input/output devices, such as one or more client devices 105 a-c of FIG. 1, as well as sending communications to, and receiving communications from, one or more networks (e.g., content delivery network(s) 120 of FIG. 1, content distribution network(s) 115 of FIG. 1, etc.).
  • Other implementations can also be used, such as mobile computing devices, mobile communication devices, set-top box television client devices, etc.
  • Methods, systems, and computer readable media can be operable to facilitate the transfer of content associated with storage unit entries between two or more storage units.
  • the transfer of content between two or more storage units can be based upon a count of the number of hits to the storage unit entry.
  • the transfer of content between two or more storage units can be further based upon a predetermined threshold associated with a period.
  • Such instructions can, for example, comprise interpreted instructions, such as script instructions, e.g., JavaScript or ECMAScript instructions, or executable code, or other instructions stored in a computer readable medium.
  • Implementations of the subject matter and the functional operations described in this specification can be provided in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification are performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output, thereby tying the process to a particular machine (e.g., a machine programmed to perform the processes described herein).
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto optical disks; and CD ROM and DVD ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
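As a rough, non-authoritative sketch of the rolling-period-window bookkeeping walked through in the bullets above (process 400), the fragment below shifts an entry's period counters by the number of periods elapsed since its last hit, counts the new hit in the current period, and compares the window total to the transfer threshold; the function name record_hit and its argument names are assumptions for illustration only.

    # Illustrative sketch of the process 400 bookkeeping: counters are shifted by
    # the number of periods elapsed since the entry's last hit, the current-period
    # counter is incremented, and the rolling-window sum is compared to the
    # transfer threshold. Names are assumptions, not patent text.
    def record_hit(period_counts, last_hit_period, current_period, transfer_threshold):
        """period_counts[0] is the newest period; returns (counts, last_period, promote?)."""
        elapsed = current_period - last_hit_period
        if elapsed > 0:
            # discard the oldest counters and insert empty counters for the periods
            # that passed without a hit, keeping the window length constant
            kept = period_counts[:max(len(period_counts) - elapsed, 0)]
            period_counts = [0] * min(elapsed, len(period_counts)) + kept
        period_counts[0] += 1                      # count the hit in the current period
        window_total = sum(period_counts)          # hits over the whole rolling window
        return period_counts, current_period, window_total >= transfer_threshold

    # Example: 5 periods of history, threshold of 4 hits within the window.
    counts, last = [2, 1, 0, 0, 0], 7
    counts, last, promote = record_hit(counts, last, current_period=9, transfer_threshold=4)
    print(counts, promote)   # [1, 0, 2, 1, 0] True -> 4 hits in the window meet the threshold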

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Methods, systems, and computer readable media can be operable to facilitate the transfer of content between two or more storage units. The transfer of content between two or more storage units can be based upon a count of the number of hits to a storage unit entry associated with the content. In embodiments, the transfer of content between two or more storage units can be further based upon a predetermined threshold associated with a period.

Description

CROSS REFERENCE TO RELATED APPLICATION
This application is a non-provisional application claiming the benefit of U.S. Provisional Application Ser. No. 61/800,689, entitled “Hybrid Promotion Cache for Video Delivery Services,” which was filed on Mar. 15, 2013, and is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This disclosure relates to providing a multi-tier storage system for delivery of services.
BACKGROUND
The advent of content delivery solutions places an increasing value on high-throughput, high-density caching servers because of high network and storage demands. This increasing demand presents the challenge of developing storage that is fast, dense, reliable, and inexpensive. Currently, there is no storage medium that sufficiently meets all of these requirements. Random access memory (RAM) is fast but capacity-limited and very expensive; hard disk drives (HDDs) are high capacity and inexpensive, but slow; flash-based solid-state drives (SSDs) are relatively fast and moderately high-capacity, but very expensive, particularly the single-level-cell type that can support the write traffic of a typical storage system. Therefore, an improvement to data caching is desired, wherein the improvement can meet high-throughput and high-capacity needs at a low cost.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating an example network environment operable to provide multi-tier storage for delivery services.
FIG. 2 is a block diagram illustrating an example server operable to provide multi-tier storage for delivery services.
FIG. 3 is a flowchart illustrating an example operational scenario for multi-tier storage for delivery services.
FIG. 4 is a flowchart illustrating an example operational scenario for multi-tier storage for delivery services using a rolling period window.
FIG. 5 is a block diagram illustrating an example hardware configuration operable to provide multi-tier storage for delivery services.
Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
Methods, systems, and computer readable media can be operable to facilitate the transfer of content associated with, or stored within, a storage unit entry (e.g., data associated with content such as information, video, audio, etc.) between two or more storage units. For example, a storage unit entry can include data corresponding with stored content and identification data and/or a history field (e.g., data that serves to identify and/or indicate a status of the content stored within the storage unit entry). A storage unit can include a storage device (e.g., SSD, HDD, etc.), a memory, a cache, or any other component that is operable to store data, and a storage unit entry can be a location for storing data within a storage unit. The transfer of content between two or more storage units can be based upon a count of the number of hits to the storage unit entry, or the number of requests for the content associated with the storage unit entry. In embodiments, the transfer of content between two or more storage units can be further based upon a predetermined threshold associated with a period.
Systems and methods of this disclosure can operate to implement a multi-tier storage using a mixture of storage components to achieve storage efficiencies while reducing costs. In embodiments, a multi-tier storage can produce higher streaming densities at lower costs than existing storage components used alone, thereby yielding a storage solution that simultaneously meets the goals of high throughput (e.g., rate of data reception/input or delivery/output), high density, and low cost. For example, a server (e.g., a video-on-demand server) can include a primary storage unit for storing content that is frequently requested and a secondary storage unit for storing content that is less frequently requested. The primary storage unit can be a storage medium that has a high throughput capability (e.g., SSD), and the secondary storage unit can be a storage medium that has a high storage capacity (e.g., HDD), thereby providing the server with greater bandwidth or throughput with which to provide popular content as well as greater storage capacity with which to store less-popular content.
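As a rough illustration only (a sketch under assumptions, not the patented implementation), the following Python fragment models a server with a small, fast primary tier in front of a larger secondary tier, promoting content to the primary tier once its play count reaches a transfer threshold; the names TieredStore, request, and transfer_threshold are hypothetical.

    # Hypothetical sketch of a two-tier content store: a small, fast primary tier
    # (e.g., SSD) for popular content and a large secondary tier (e.g., HDD) for
    # less-popular content.
    from collections import OrderedDict

    class TieredStore:
        def __init__(self, transfer_threshold=3):
            self.primary = OrderedDict()    # content_id -> content, MRU at the end
            self.secondary = OrderedDict()  # content_id -> [content, play_count]
            self.transfer_threshold = transfer_threshold

        def request(self, content_id):
            if content_id in self.primary:               # served from the fast tier
                self.primary.move_to_end(content_id)
                return self.primary[content_id]
            if content_id in self.secondary:             # served from the slow tier
                entry = self.secondary[content_id]
                entry[1] += 1                            # bump the play count
                self.secondary.move_to_end(content_id)   # entry becomes the MRU entry
                if entry[1] >= self.transfer_threshold:  # popular: promote to primary
                    del self.secondary[content_id]
                    self.primary[content_id] = entry[0]
                return entry[0]
            return None                                  # miss: caller pulls from origin

    # Usage: repeated requests for the same title eventually move it to the fast tier.
    store = TieredStore(transfer_threshold=3)
    store.secondary["movie-42"] = ["<movie bytes>", 0]
    for _ in range(3):
        store.request("movie-42")
    assert "movie-42" in store.primary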
In embodiments, a method can be used to ascertain the dynamic popularity of content associated with a storage unit entry by introducing a time-bounded popularity method. In embodiments, this method can introduce a rolling period window whereby content can be transferred from one storage unit to another when a play count (e.g., a count of the number of times a storage unit entry or content associated with the storage unit entry is requested) exceeds a transfer threshold. The rolling period window approach can be tuned by configuring various parameters (e.g., the number of periods for which to retain information, the length of each period, a transfer threshold defining a maximum number of storage unit entry hits over a period of time before the content is transferred from one storage unit to another).
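The tunable parameters described above might be grouped roughly as follows; the field names and default values are illustrative assumptions rather than values taken from the patent.

    # Hypothetical grouping of the rolling-period-window tuning parameters.
    from dataclasses import dataclass

    @dataclass
    class RollingWindowConfig:
        num_periods: int = 5          # how many periods of history to retain
        period_length_s: int = 300    # length of each period, in seconds
        transfer_threshold: int = 10  # hits within the window that trigger a transfer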
In embodiments, a period can be defined in terms of total play requests to respond to traffic variations. In such embodiments, a rolling total play requests can be used to form a dynamic total play requests window, and this window can be adjusted according to traffic and/or storage unit behavior. It should be understood that this approach can be used in place of, in addition to, or in conjunction with a rolling time period counter approach.
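One possible reading of a request-count-based period is sketched below, under the assumption that a period closes after a fixed number of play requests; the name RequestCountClock and the default of 1000 requests per period are hypothetical.

    # A period advances by total play requests rather than wall-clock time, so the
    # rolling window adapts to traffic volume instead of elapsed time.
    class RequestCountClock:
        def __init__(self, requests_per_period=1000):
            self.requests_per_period = requests_per_period
            self.total_requests = 0

        def tick(self):
            """Call once per play request; returns the current period index."""
            self.total_requests += 1
            return self.total_requests // self.requests_per_period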
FIG. 1 is a block diagram illustrating an example network environment 100 operable to provide multi-tier storage for delivery services. One or more client devices 105 a-c (e.g., computer 105 a, mobile device or tablet 105 b, internet protocol (IP) television 105 c, etc.) can be connected to a customer premise equipment (CPE) device 110 a-b (e.g., modem 110 a, gateway 110 b, etc.). In embodiments, CPE devices 110 a-b can be connected to a content distribution network 115.
In embodiments, client device(s) 105 a-c can communicate with one or more content delivery networks 120 via a connection to a content distribution network 115. For example, client device(s) 105 a-c can request content (e.g., data, video, etc.) from a content server 125 by transmitting a request to a content distribution network 115, and the request can be routed to the content server 125 via a content delivery network 120. In embodiments, the content distribution network 115 can take the form of an all-coaxial, all-fiber, hybrid fiber-coaxial (HFC) network, an over-the-air network, or a telephone network, for example, among many others. It should be understood that the content server 125 can represent a local content server at a headend (e.g., a cable modem termination system) or can be provided by a service provider via a connection to the content delivery network(s) 120. In embodiments, content from the content server 125 can be delivered to a CPE device 110 a-b through a content delivery network 120 or a content distribution network 115. It should be understood that content from the content server 125 can be delivered directly to a client device 105 a-c through a content distribution network 115.
In embodiments, content stored on the content server 125 can be processed using various mechanisms. For example, video content stored on the content server 125 can be processed using either an MPEG-2 or an MPEG-4 coder-decoder (CODEC) to produce an MPEG transport stream. The MPEG transport stream can then be transmitted by the content server 125 to a client device 105 a-c over a content distribution network 115 and/or a content delivery network 120. In embodiments, the MPEG transport stream can be converted to a signal that can be transported through a content distribution network 115 and/or a content delivery network 120.
In embodiments, the content server 125 can include a plurality of storage units. In embodiments, an initial storage unit load can be implemented to increase performance of the content server 125. In such embodiments, the peak-miss and write loads can be avoided when a storage unit within the content server 125 is empty, or near empty. In embodiments, controlled pre-placement of content (e.g., cache-warming), managed pacing of pull-thru into empty storage units, avoidance of storage unit restarts during primetime, and/or other approaches can be used in conjunction with multi-tier storage methods.
In embodiments, content included within the content server 125 can be managed by a content manager 130. The content manager 130 can be operable to allocate node capacity (e.g., capacity of the content server 125) to content objects. In embodiments, the allocation can be based on a best-fit, random, or hybrid allocation process. In embodiments, the content manager 130 can be further operable to select nodes for streaming and failover based upon requests received from any of the client devices 105 a-c. In embodiments, the content manager 130 can also identify instances of exceptional asymmetries in content allocation and/or use of streaming resources and demand for scaling the allocated resources. In embodiments, the content manager 130 and content server 125 can reside in the same physical server.
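As an aside, a plain reading of "best-fit" allocation might look like the short sketch below, which places each content object on the node whose remaining capacity exceeds the object's size by the smallest margin; this is only an assumed interpretation for illustration, not necessarily the content manager 130's actual policy, and the function name is hypothetical.

    # Best-fit pass: choose the node whose free capacity fits the object most tightly.
    def best_fit_allocate(object_size, node_free_capacity):
        candidates = {n: free for n, free in node_free_capacity.items() if free >= object_size}
        if not candidates:
            return None                               # no node can hold the object
        return min(candidates, key=candidates.get)    # tightest fit wins

    print(best_fit_allocate(40, {"node-a": 100, "node-b": 50, "node-c": 30}))  # node-b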
FIG. 2 is a block diagram illustrating an example content server 125 operable to provide multi-tier storage for delivery services. In embodiments, the content server 125 can include a network interface 210, a primary storage unit 220, a secondary storage unit 230, and a transfer module 240. In embodiments, a network interface 210 can be used to receive an inquiry for a storage unit entry within the content server 125. In embodiments, the network interface 210 can be used to output data associated with a storage unit entry (e.g., storage unit entries A-F). In embodiments, the network interface 210 can be used to input data to be stored within the content server 125. It should be understood that the network interface 210 can be implemented as multiple interfaces.
In embodiments, a primary storage unit 220 can store a plurality of storage unit entries (e.g., storage unit entries A-C). The plurality of storage unit entries that are stored within the primary storage unit 220 can be storage unit entries that are frequently requested. In embodiments, the primary storage unit 220 can be smaller and faster than a secondary storage unit 230. For example, the primary storage unit 220 can provide for efficient access to more popular content or more frequently requested content. In embodiments, the primary storage unit 220 can be a solid-state drive (SSD). It should be understood that the primary storage unit 220 can include various storage mediums (e.g., data cache, static memory, hard disk, flash memory, etc.).
In embodiments, a secondary storage unit 230 can store a plurality of storage unit entries (e.g., storage unit entries D-F). The plurality of storage unit entries that are stored within the secondary storage unit 230 can comprise storage unit entries that are associated with data that is requested by a client device 105 a-c of FIG. 1. For example, each secondary storage unit entry can include data corresponding with requested content and a history field that can store and update information associated with the requested content (e.g., a number of times the content has been requested). In embodiments, the secondary storage unit 230 can be used to store data or storage unit entries associated with content that is less popular or less frequently accessed than content stored in the primary storage unit 220. For example, when a secondary storage unit 230 is full, data that is associated with content that is requested by a client device 105 a-c, or otherwise new data, can be inserted into the secondary storage unit as a most-recently used (MRU) entry, and a least-recently used (LRU) entry can be discarded from the secondary storage unit. In embodiments, miss data can be added to the secondary storage unit 230. In embodiments, the secondary storage unit 230 can be a hard disk drive (HDD). It should be understood that the secondary storage unit 230 can include various storage mediums (e.g., data cache, static memory, hard disk, flash memory, etc.).
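A minimal sketch of the MRU-insertion/LRU-eviction behavior described above, assuming a simple ordered map stands in for the secondary storage unit; the helper name insert_secondary is hypothetical.

    # New or miss data enters a full secondary storage unit as the MRU entry and
    # the LRU entry is discarded.
    from collections import OrderedDict

    def insert_secondary(secondary, capacity, content_id, content):
        if content_id in secondary:
            secondary.move_to_end(content_id)            # already present: just make it MRU
            return
        if len(secondary) >= capacity:
            secondary.popitem(last=False)                # discard the LRU entry
        secondary[content_id] = content                  # new entry enters as the MRU entry

    secondary = OrderedDict()
    for i in range(4):
        insert_secondary(secondary, capacity=3, content_id=f"clip-{i}", content=b"...")
    assert list(secondary) == ["clip-1", "clip-2", "clip-3"]   # clip-0 was evicted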
In embodiments, when content is inserted into the secondary storage unit 230, the activity history associated with the content can be set or reset. For example, when content is inserted into the secondary storage unit 230, the activity history of the content can be updated to include the most recent time at which the content or a storage unit entry associated with the content was hit. As another example, when content is inserted into the secondary storage unit 230, the activity history of the content can be set or reset so that the activity history indicates that the content or a storage unit entry associated with the content has received no hits. In embodiments, the activity history of content, such as the number of times the content is hit or requested (e.g., content associated with a storage unit entry is requested by a client device), can be maintained within a server (e.g., content server 125) or data store as a play count. In embodiments, when content is inserted into the secondary storage unit 230, the activity history of the content can be retrieved from a server or data store and can be included as a history field in a count tag associated with the content. For example, the play count for content can be set to the most recent play count value associated with the content or a storage unit entry associated with the content. In embodiments, the activity history for content can be periodically reset. For example, the play count for content can be maintained for a predetermined period of time (e.g., in a server or data store) and then reset or attenuated (e.g., scaled down) upon the expiration of the predetermined period of time. In embodiments, a secondary storage unit entry can include a count tag, wherein the count tag comprises one or more period counters. When content is inserted into the secondary storage unit, period counters associated with the storage unit entry can be set to indicate that the content has yet to be hit more than once, or that the content has been hit once during the current period. It should be understood that period counters associated with a storage unit entry can be stored at a server or data store.
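As a minimal sketch of the activity history described above, the following assumes a count tag holding a fixed number of rolling period counters that is set or reset when content is inserted into the secondary storage unit; the CountTag name, its fields, and the choice of five periods are assumptions rather than a required structure.

    # Illustrative sketch only: a count tag for one secondary storage unit entry.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CountTag:
        num_periods: int = 5                 # how many period counters are retained
        counters: List[int] = field(default_factory=list)
        last_hit_period: int = -1            # period index of the most recent hit

        def reset_on_insert(self, current_period: int, count_first_hit: bool = True):
            """Set or reset the activity history when content is inserted into the
            secondary storage unit: either no hits yet, or one hit in the current period."""
            self.counters = [0] * self.num_periods
            if count_first_hit:
                self.counters[0] = 1         # newest counter corresponds to the current period
            self.last_hit_period = current_period

    tag = CountTag()
    tag.reset_on_insert(current_period=42)
    print(tag.counters, tag.last_hit_period)   # [1, 0, 0, 0, 0] 42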
In embodiments, a transfer module 240 can compare an activity history associated with content stored in the secondary storage unit 230 to a predetermined threshold. For example, a piece of content's activity history can include a variety of information that serves to indicate a number of times that the content or a storage unit entry associated with the content has been hit and/or the times at which the storage unit entry has been hit. In embodiments, a piece of content's activity history can be maintained as a count tag associated with the content. For example, a storage unit entry's count tag can be updated and/or incremented each time content associated with the storage unit entry is hit, thus maintaining a count of the number of times the content is hit and/or an identification of the time at which the content was hit. In embodiments, a piece of content's activity history can be maintained in a data store, and a tag associated with the content can point to the stored activity history. In embodiments, if a hit to a secondary storage unit entry (e.g., storage unit entries D-F) pushes that storage unit entry's activity history (could be as simple as a play count) beyond a predetermined transfer threshold, content associated with the storage unit entry can be transferred to the primary storage unit 220.
In embodiments, a transfer module 240 can transfer content from the secondary storage unit 230 to the primary storage unit 220. For example, when an activity history associated with a secondary storage unit entry (e.g., storage unit entries D-F) reaches a predetermined threshold, the transfer module 240 can write the content of the secondary storage unit entry to the primary storage unit 220. In embodiments, after the content is written to the primary storage unit 220, the content can be removed from the secondary storage unit entry. For example, the content can become the MRU entry of the primary storage unit 220. In embodiments, when content is transferred to the primary storage unit 220, a LRU entry within the primary storage unit can be transferred from the primary storage unit to the secondary storage unit. For example, the LRU entry of the primary storage unit can become the MRU entry of the secondary storage unit 230, and the activity history associated with the storage unit entry can be reset. The migration of a LRU block from the primary storage unit back into the secondary storage unit allows the storage unit entry to continue to be hit until it eventually falls out of the bottom of the secondary storage unit, while also providing the opportunity for the content associated with the storage unit entry to be transferred back to the primary storage unit if warranted.
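The promotion/demotion exchange described above can be sketched as follows. This is an illustrative simplification in which both tiers are modeled as ordered maps and the demoted entry's play count is reset; all names (promote, play_counts, etc.) are hypothetical.

    # Illustrative sketch only: promote content to the primary tier and demote
    # the primary tier's LRU entry back into the secondary tier with a reset history.
    from collections import OrderedDict

    def promote(key, primary, secondary, play_counts, primary_capacity):
        """Move `key` from the secondary store to the primary store (as MRU);
        if the primary store is full, demote its LRU entry to the secondary store."""
        content = secondary.pop(key)                       # remove from the secondary tier
        if len(primary) >= primary_capacity:
            lru_key, lru_content = primary.popitem(last=False)
            secondary[lru_key] = lru_content               # demoted entry becomes secondary MRU
            play_counts[lru_key] = 0                       # reset its activity history
        primary[key] = content                             # promoted entry becomes primary MRU

    primary = OrderedDict(A="a", B="b", C="c")
    secondary = OrderedDict(D="d", E="e", F="f")
    play_counts = {k: 0 for k in "ABCDEF"}
    promote("E", primary, secondary, play_counts, primary_capacity=3)
    print(list(primary), list(secondary))                  # ['B', 'C', 'E'] ['D', 'F', 'A']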
FIG. 3 is a flowchart illustrating an example process 300 for operating a multi-tier storage for delivery services. In embodiments, a play count associated with a storage unit entry can be compared to a transfer threshold. For example, when a secondary storage unit entry is hit, a play count associated with the secondary storage unit entry can be incremented, and if the play count eventually meets a transfer threshold, then the content associated with the secondary storage unit entry can be transferred to a primary storage unit. The process 300 can start at 310 where a transfer threshold is initialized for a secondary storage unit. In embodiments, the transfer threshold can be a fixed number. In embodiments, the transfer threshold can be variable based upon factors such as, for example, a capacity or load associated with a content server 125 of FIG. 1, a capacity or load associated with a content distribution network 115 of FIG. 1, the amount of traffic that is expected for a storage unit, the type of content or data stored within the storage unit, among many other factors. In embodiments, to avoid a slow ramp to steady-state behaviors, it can be beneficial to begin with a low transfer threshold (e.g. one (1) play for insertion into a secondary storage unit section of the server and one (1) hit for transfer into the primary storage unit section) until the respective storage units are mostly full, at which point the transfer threshold can be raised to the desired level. It should be understood that such caching improvements can be used in conjunction with the multi-tier storage methods described herein.
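One possible realization of the warm-up behavior described above is sketched below: the threshold stays at one hit until both tiers are mostly full, then jumps to a steady-state value. The 0.9 fill ratio, the steady-state value of eight, and the function name are illustrative assumptions.

    # Illustrative sketch only: keep the transfer threshold low until the tiers fill up.
    def transfer_threshold(primary_used, primary_capacity,
                           secondary_used, secondary_capacity,
                           warmup_threshold=1, steady_threshold=8, full_ratio=0.9):
        """Return 1 hit while either tier is still filling, otherwise the steady-state threshold."""
        primary_full = primary_used >= full_ratio * primary_capacity
        secondary_full = secondary_used >= full_ratio * secondary_capacity
        return steady_threshold if (primary_full and secondary_full) else warmup_threshold

    print(transfer_threshold(100, 1000, 5000, 10000))   # 1 (still warming up)
    print(transfer_threshold(950, 1000, 9500, 10000))   # 8 (steady state)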
At 320, a storage unit entry hit to a storage unit entry that is within a secondary storage unit (e.g., secondary storage unit 230 of FIG. 2) can be received. In embodiments, a storage unit entry hit can be the reception of a request for content or data that is associated with the storage unit entry by a server (e.g., content server 125 of FIG. 1).
At 330, a play count associated with the storage unit entry for which a hit is received can be incremented. In embodiments, a count tag associated with the storage unit entry can be updated to reflect the hit to the storage unit entry. For example, each storage unit entry can include a count tag that indicates the number of times the corresponding storage unit entry has been hit or the number of times the content associated with the storage unit entry has been requested. When a secondary storage unit entry is hit, the number that is indicated by the count tag of the storage unit entry can be incremented to account for the hit. In embodiments, an activity history associated with a storage unit entry or the content associated with the storage unit entry can be maintained, for example, within a server or data store. When the secondary storage unit entry is hit, the activity history associated with the storage unit entry can be updated to account for the hit.
At 340, a determination can be made whether the transfer threshold has been met by the storage unit entry that is hit. In embodiments, the activity history, or current play count, associated with the hit storage unit entry can be compared to the transfer threshold. For example, the activity history, or play count, associated with the storage unit entry can indicate a number of times that the storage unit entry has been hit, and this number can be compared to the transfer threshold. If the activity history, or play count, associated with the storage unit entry is less than, or in some cases equal to, the transfer threshold, the determination can be made that the transfer threshold has not been met, and the process 300 can return to 320. If the activity history, or play count, associated with the storage unit entry is greater than, or in some cases equal to, the transfer threshold, the determination can be made that the transfer threshold has been met, and the process 300 can proceed to 350.
At 350, the content associated with the storage unit entry for which the transfer threshold has been met can be transferred from the secondary storage unit to a primary storage unit (e.g., primary storage unit 220 of FIG. 2). In embodiments, a transfer module 240 of FIG. 2 can write the content of the secondary storage unit entry to the primary storage unit 220. In embodiments, after the content is written to the primary storage unit 220, the content can be removed from the secondary storage unit entry. For example, the content can become the MRU entry of the primary storage unit 220. In embodiments, when content is transferred to the primary storage unit 220, a LRU entry within the primary storage unit can be transferred from the primary storage unit to the secondary storage unit. For example, the LRU entry of the primary storage unit can become the MRU entry of the secondary storage unit 230, and the activity history associated with the storage unit entry can be reset.
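The per-hit flow of steps 330-350 can be sketched as follows, using a single play count per entry; the helper names are hypothetical, and transfer_to_primary stands in for whatever promotion routine (such as the exchange described with respect to FIG. 2) is actually used. The play-count reset after a transfer is a simplification of the reset described above.

    # Illustrative sketch only: handle one hit to a secondary storage unit entry (process 300).
    def on_secondary_hit(key, play_counts, transfer_threshold, transfer_to_primary):
        """Increment the entry's play count and promote its content once the threshold is met."""
        play_counts[key] = play_counts.get(key, 0) + 1     # step 330
        if play_counts[key] >= transfer_threshold:         # step 340
            transfer_to_primary(key)                       # step 350
            play_counts[key] = 0                           # simplified history reset
            return True
        return False

    promoted = []
    counts = {}
    for _ in range(3):
        on_secondary_hit("D", counts, transfer_threshold=3,
                         transfer_to_primary=promoted.append)
    print(counts["D"], promoted)                           # 0 ['D']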
FIG. 4 is a flowchart illustrating an example process 400 for operating a multi-tier storage using a rolling period window. The process 400 can start at 405 where a transfer threshold is initialized for a secondary storage unit. In embodiments, the transfer threshold can be a fixed number. In embodiments, the transfer threshold can be variable based upon factors such as, for example, a capacity or load associated with a content server 125 of FIG. 1, a capacity or load associated with a content distribution network 115 of FIG. 1, the amount of traffic that is expected for a storage unit, the type of content or data stored within the storage unit, among many other factors. In embodiments, the transfer threshold can be associated with a period of time. For example, the period of time can be a fixed length of time, or the period of time can be dynamic based upon factors such as, for example, total accessed blocks or CPU load. In embodiments, the period of time can be adjusted by altering the number of period counters maintained within an activity history, or count tag, of a storage unit entry (e.g., the activity history can include 3, 4, 5, or any other number of period counters in order to maintain a count of content requests for a desired period of time), and/or by altering the length of time associated with a period (e.g., each period counter can maintain a count of content requests for a number of minutes, a number of hours, a number of days, or any other predetermined period of time).
At 410, a hit to a secondary storage unit entry can be received. In embodiments, a storage unit entry hit can be the reception of a request for content associated with the storage unit entry by a server (e.g., content server 125 of FIG. 1).
At 415, the period of the storage unit entry's last hit can be identified. In embodiments, rolling period counters can be kept for entries in the secondary storage unit (e.g., secondary storage unit 230 of FIG. 2), and can be accessed and/or updated when a corresponding storage unit entry is hit. In embodiments, a storage unit entry can include a count tag, and the count tag can include one or more period counters. In embodiments, the period associated with each period counter can be fixed or can be dynamic based upon factors such as, for example, capacity and/or load of a server or a network. Each period counter can indicate and/or include the number of hits to the storage unit entry during a specific period of time. In embodiments, each period counter can be associated with a predetermined maximum number of storage unit requests (e.g., each period counter can be maintained over a predetermined range of requests). For example, each time a storage unit entry is hit, the current period counter can be incremented, and when the number of storage unit requests reaches the predetermined maximum number for the current period, the oldest period counter can be discarded, each of the remaining period counters can be shifted, and a new period counter can be inserted into the corresponding count tag. In embodiments, the number of period counters included in a count tag can be varied and can be based upon many different factors. In embodiments, the first period counter of the plurality of period counters in the count tag can indicate the period of the storage unit entry's last hit. In embodiments, the period counter can also indicate the number of times the storage unit entry was hit during said period of time.
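The request-count-based variant mentioned above, where each period counter covers a predetermined number of requests rather than a length of time, might be sketched as follows; the roll-over policy shown here, which rolls when the entry's own current counter reaches the per-period budget, is an assumption made only for illustration.

    # Illustrative sketch only: period counters that roll over every N hits instead of
    # every N minutes; counters are kept newest-first.
    def record_hit(counters, requests_per_period):
        """Record one hit in the newest counter (index 0), rolling the counters
        when the current period's request budget is used up."""
        if counters[0] >= requests_per_period:
            counters.pop()                 # discard the oldest counter
            counters.insert(0, 0)          # start a new counter for the new period
        counters[0] += 1
        return sum(counters)               # rolling total over the retained periods

    counters = [0, 0, 0]                    # three retained periods, newest first
    for _ in range(7):
        total = record_hit(counters, requests_per_period=3)
    print(counters, total)                  # [1, 3, 3] 7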
At 420, a determination can be made whether the current period is the same as the period of the storage unit entry's last hit. For example, the period of the storage unit entry's last hit can be identified from the newest period counter of a count tag associated with the storage unit entry or from another data store or server, and the identified period of the storage unit entry's last hit can be compared to a current period. In embodiments, a current period can be maintained within a content server 125 (e.g., at transfer module 240 of FIG. 2).
If, at 420, the determination is made that the current period is the same as the period of the storage unit entry's last hit, the process 400 can proceed to 425. At 425, a play count associated with the current period can be updated for the storage unit entry. In embodiments, a count tag associated with the storage unit entry can be updated to reflect the hit to the storage unit entry. For example, the period counter corresponding with the current period (e.g., the newest period counter of the storage unit entry's count tag) can be incremented to account for the hit to the storage unit entry. In embodiments, an activity history associated with a storage unit entry can be maintained, for example, within a server or data store. When the secondary storage unit entry is hit, the activity history associated with the storage unit entry can be updated to account for the hit. For example, the activity history associated with the storage unit entry can be updated to reflect that an additional hit has been made to the storage unit entry during the current period.
Returning to 420, if the determination is made that the current period is not the same as the period of the storage unit entry's last hit, the process 400 can proceed to 430. At 430, the period counters of the storage unit entry can be shifted according to the difference between the period of the storage unit entry's last hit and the current period. In embodiments, counts for a predetermined number of periods can be maintained within a count tag for each storage unit entry, and the periods can include the last period in which the storage unit entry was hit as well as a number of periods that chronologically precede the last period in which the storage unit entry was hit. It should be understood that one or more of the period counters for a storage unit entry can be empty (e.g., where the storage unit entry was not hit during the corresponding period). In embodiments, the difference between the current period and the last period in which the storage unit entry was hit can indicate how many of the storage unit entry's period counters are to be discarded and replaced with empty period counters. For example, the number of period counters that are to be discarded and replaced with empty counters can be equivalent to the difference between the current period and the last period in which the storage unit entry was hit, and one of the new, empty counters can correspond with the current period. In embodiments, empty period counters can replace discarded period counters, period counters that are not discarded can be shifted toward the end of the count tag, and the period counter that is now at the beginning of the count tag can correspond with the current period. As an example of such a shift, suppose the period length is five (5) minutes, the number of period counters retained in a count tag is five (5), and the difference in start times between the current period and the last period in which the storage unit entry was hit is fifteen (15) minutes. The number of period counters to be discarded is the start-time difference between periods, fifteen (15), divided by the period length, five (5), so the three (3) oldest period counters of the count tag are discarded. The remaining period counters are shifted toward the end of the count tag (the previously newest period counter landing in the next-to-last spot), and three (3) empty period counters, corresponding with the three (3) periods between the last period of a hit and the current period (current period inclusive), are added to the count tag, with the first counter (now an empty counter) corresponding to the current period.
At 435, one or more new period counters can be inserted at the beginning of the storage unit entry's count tag to replace the discarded counters or the counters that were shifted out. In embodiments, the newest period counter (e.g., the first counter in the count tag) can correspond with the current period. After the one or more new period counters are inserted, the process 400 can proceed to 425.
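The discard-shift-insert behavior of steps 430 and 435 can be sketched as a single realignment of the counter list; the example call reproduces the five-minute worked example described at 430 above, and the function name and newest-first ordering are assumptions.

    # Illustrative sketch only: realign a count tag's period counters so that
    # index 0 corresponds to the current period.
    def shift_counters(counters, last_hit_period, current_period):
        """`counters` is ordered newest-first; periods are integer period indices."""
        elapsed = current_period - last_hit_period          # whole periods since the last hit
        if elapsed <= 0:
            return list(counters)                           # still in the same period
        kept = counters[:max(len(counters) - elapsed, 0)]   # discard the `elapsed` oldest counters
        return [0] * min(elapsed, len(counters)) + kept     # empty counters fill the front

    # Period length 5 minutes, 5 counters retained, last hit 15 minutes (3 periods) ago:
    counters = [4, 2, 0, 1, 3]          # newest-first counts for the last five periods
    shifted = shift_counters(counters, last_hit_period=100, current_period=103)
    print(shifted)                       # [0, 0, 0, 4, 2] -- the three oldest counts discarded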
At 440, a sum of the play counts associated with each period counter of the storage unit entry can be calculated. In embodiments, each period counter within a count tag can indicate the number of times the storage unit entry was hit during a corresponding period. The play counts of each of the period counters can be added together to calculate a total number of times the storage unit entry was hit over the period of time covered by the period counters in combination. For example, where each period counter accounts for a five (5) minute period, and a count tag maintains a maximum of five (5) period counters, the number of times the storage unit entry was hit over the last twenty (20) to twenty-five (25) minutes (as low as twenty (20) minutes, depending on how far the current period has progressed) can be calculated by summing the play counts of each of the period counters. In embodiments, the number of period counters maintained within a count tag can be varied to account for larger or smaller periods of time. In embodiments, the time associated with each period counter can be varied to account for larger or smaller periods of time.
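The summation at 440, together with the twenty-to-twenty-five-minute window arithmetic from the example above, can be illustrated as follows (the counter values are arbitrary):

    # Illustrative sketch only: rolling play count and the span of time it covers.
    PERIOD_MINUTES = 5
    counters = [2, 0, 3, 1, 4]                           # newest-first, five retained periods
    rolling_play_count = sum(counters)                   # total hits across the window
    min_window = (len(counters) - 1) * PERIOD_MINUTES    # 20 minutes if the current period just began
    max_window = len(counters) * PERIOD_MINUTES          # 25 minutes if the current period is nearly over
    print(rolling_play_count, min_window, max_window)    # 10 20 25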
At 445, a determination can be made whether the predetermined threshold has been met at the hit storage unit entry. In embodiments, the sum of the play counts of each of the period counters associated with the storage unit entry can be compared to the predetermined threshold. If the sum of the play counts is less than, or in some cases equal to, the predetermined threshold, the determination can be made that the predetermined threshold has not been met and the process 400 can return to 410. In embodiments, the updated activity information (e.g., the updated period counters) for the secondary storage unit entry can be stored until the secondary storage unit entry is hit again at 410.
If, at 445, the determination is made that the sum of the play counts of each of the period counters associated with the storage unit entry is greater than, or in some cases equal to, the predetermined threshold, the determination can be made that the predetermined threshold has been met and the process 400 can proceed to 450. At 450, the content associated with the storage unit entry for which the predetermined threshold has been met can be transferred from the secondary storage unit (e.g., secondary storage unit 230 of FIG. 2) to a primary storage unit (e.g., primary storage unit 220 of FIG. 2). In embodiments, after the content is written to the primary storage unit 220, the content can be removed from the secondary storage unit entry. For example, the content can become the MRU entry of a primary storage unit 220. In embodiments, when content is transferred to the primary storage unit 220, a LRU entry within the primary storage unit can be transferred from the primary storage unit to the secondary storage unit 230. For example, the LRU entry of the primary storage unit can become the MRU entry of the secondary storage unit 230, and the activity history associated with the storage unit entry can be reset.
FIG. 5 is a block diagram illustrating an example hardware configuration 500 operable to provide multi-tier storage for delivery services. While a content server 125 is shown, it should be understood that many different kinds of network devices can implement a multi-tier storage for delivery services. The configuration 500 can include a processor 510, a memory 520, a primary storage unit 530, a secondary storage unit 540, and an input/output device 550. Each of the components 510, 520, 530, 540, and 550 can, for example, be interconnected using a system bus 560. The processor 510 is capable of processing instructions for execution within the configuration 500. In one implementation, the processor 510 is a single-threaded processor. In another implementation, the processor 510 is a multi-threaded processor. The processor 510 is capable of processing instructions stored in the memory 520 or on the storage units 530 or 540.
The memory 520 stores information within the configuration 500. In one implementation, the memory 520 is a computer-readable medium. In one implementation, the memory 520 is a volatile memory unit. In another implementation, the memory 520 is a non-volatile memory unit.
In some implementations, the storage units 530 and 540 are capable of providing mass storage within the configuration 500. In one implementation, the storage units 530 and 540 are computer-readable media. In various different implementations, the storage units 530 and 540 can, for example, include a hard disk device, an optical disk device, flash memory, random-access memory, or some other large-capacity storage device. It should be understood that more storage units can be available in addition to the primary storage unit 530 and secondary storage unit 540.
The input/output device 550 provides input/output operations for the configuration 500. In some implementations, the input/output device 550 can include one or more of a plain old telephone interface (e.g., an RJ11 connector), a network interface device (e.g., an Ethernet card), a serial communication device (e.g., RS-232 port), and/or a wireless interface device (e.g., 802.11 card). In additional and/or other implementations, the input/output device can include driver devices configured to receive input data and send output data to other input/output devices, such as one or more client devices 105 a-c of FIG. 1, as well as sending communications to, and receiving communications from one or more networks (e.g., content delivery network(s) 120 of FIG. 1, content distribution network(s) 115 of FIG. 1, etc.). Other implementations, however, can also be used, such as mobile computing devices, mobile communication devices, set-top box television client devices, etc.
Those skilled in the art will appreciate that the invention improves upon methods and apparatuses for implementing a multi-tiered storage unit within a server. Methods, systems, and computer readable media can be operable to facilitate the transfer of content associated with storage unit entries between two or more storage units. The transfer of content between two or more storage units can be based upon a count of the number of hits to the storage unit entry. In embodiments, the transfer of content between two or more storage units can be further based upon a predetermined threshold associated with a period.
The subject matter of this disclosure, and components thereof, can be realized by instructions that upon execution cause one or more processing devices to carry out the processes and functions described above. Such instructions can, for example, comprise interpreted instructions, such as script instructions, e.g., JavaScript or ECMAScript instructions, or executable code, or other instructions stored in a computer readable medium.
Implementations of the subject matter and the functional operations described in this specification can be provided in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification are performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output thereby tying the process to a particular machine (e.g., a machine programmed to perform the processes described herein). The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto optical disks; and CD ROM and DVD ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results, unless expressly noted otherwise. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.

Claims (11)

We claim:
1. A method of providing content from a first device to a second device upon a request for the content by the second device, the method comprising:
receiving a request associated with content in a secondary storage unit;
incrementing a count of the number of received requests associated with the content, wherein the count of the number of received requests associated with the content is identified from a field associated with the content, wherein the field associated with the content comprises one or more counters, each of the one or more counters being associated with a period of time, wherein each of the one or more counters maintains a sub-count of the number of received requests associated with the content, the sub-count being the number of received requests associated with the content over the period of time associated with the counter, and wherein the count of the number of received requests associated with the content is determined by summing the sub-counts of all of the counters;
identifying a period of time associated with the newest counter of the field associated with the content;
when the identified period of time is not the same as a current period of time:
removing a number of the one or more counters from the field, the number of counters being based upon the difference between the identified period of time and the current period of time;
if any of the one or more counters remain after removing the number of the one or more counters, shifting the remaining counters to the end of the field;
replacing the removed counters with empty counters, the empty counters being inserted from the beginning of the field; and
incrementing a counter corresponding with the current period;
when the count of the number of received requests associated with the content exceeds a predetermined threshold, transferring the content from the secondary storage unit to a primary storage unit, wherein the primary storage unit has a higher throughput rate than the secondary storage unit.
2. The method of claim 1, wherein:
the field associated with the content comprises one or more counters, each of the one or more counters being associated with a range of a number of requests and each of the one or more counters maintains a sub-count of the number of received requests associated with the content over the corresponding range of requests associated with the counter; and
the count of the number of received requests associated with the content is determined by summing the sub-counts of all of the counters.
3. The method of claim 1, further comprising:
transferring content associated with a storage unit entry from the primary storage unit to the secondary storage unit.
4. The method of claim 1, wherein the primary storage unit is a solid-state drive.
5. The method of claim 1, wherein the secondary storage unit is a hard disk drive.
6. An apparatus comprising:
an interface configured to receive a hit associated with content, the content being within a secondary storage unit;
a transfer module configured to:
increment a count of the number of received requests associated with the content, wherein the count of the number of received requests associated with the content is identified from a field associated with the content, wherein the field associated with the content comprises one or more counters, each of the one or more counters being associated with a period of time, and wherein each of the one or more counters maintains a sub-count of the number of received requests associated with the content, the sub-count being the number of received requests associated with the content over the period of time associated with the counter;
determine the count of the number of received requests associated with the content by summing the sub-counts of all of the counters;
identify a period of time associated with the newest counter of the field associated with the content;
when the identified period of time is not the same as a current period of time:
remove a number of the one or more counters, the number of counters being based upon the difference between the identified period of time and the current period of time;
if any of the one or more counters remain after removing the number of the one or more counters, shift the remaining counters to the end of the field;
replace the removed counters with empty counters; and
increment a counter corresponding with the current period;
when the count of the number of received requests associated with the content exceeds a predetermined threshold, transfer the content from the secondary storage unit to a primary storage unit.
7. The apparatus of claim 6, wherein the transfer module is further configured to:
transfer content associated with a storage unit entry from the primary storage unit to the secondary storage unit.
8. One or more non-transitory computer readable media having instructions operable to cause one or more processors to perform the operations comprising:
receiving a request associated with a content within a secondary storage unit;
incrementing a count of the number of received requests associated with the content, wherein the count of the number of received requests associated with the content is identified from a field associated with the content, wherein the field associated with the content comprises one or more counters, each of the one or more counters being associated with a period of time, wherein each of the one or more counters maintains a sub-count of the number of received requests associated with the content, the sub-count being the number of received requests associated with the content over the period of time associated with the counter, and wherein the count of the number of received requests associated with the content is determined by summing the sub-counts of all of the counters;
identifying a period associated with the newest counter of the field associated with the content;
when the identified period is not the same as a current period:
removing a number of the one or more counters, the number of counters being based upon the difference between the identified period and the current period;
if any of the one or more counters remain after removing the number of the one or more counters, shifting the remaining counters to the end of the field;
replacing the removed counters with empty counters; and
incrementing a counter corresponding with the current period;
when the count of the number of received requests associated with the content exceeds a predetermined threshold, transferring the content from the secondary storage unit to a primary storage unit.
9. The one or more non-transitory computer-readable media of claim 8, wherein the identified period is a range between a minimum and a maximum number of requests associated with the content and the current period is a range which the current number of requests for the content is within.
10. The one or more non-transitory computer-readable media of claim 8, further comprising:
transferring content associated with a storage unit entry from the primary storage unit to the secondary storage unit.
11. The one or more non-transitory computer-readable media of claim 8, wherein the secondary storage unit is a hard disk drive and the primary storage unit is a solid-state drive.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361800689P 2013-03-15 2013-03-15
US14/211,699 US9483191B2 (en) 2013-03-15 2014-03-14 Multi-tier storage for delivery of services
