WO2012177267A1 - Write-cost optimization for a storage architecture for a content delivery network - Google Patents

Write-cost optimization for a storage architecture for a content delivery network

Info

Publication number
WO2012177267A1
Authority
WO
WIPO (PCT)
Prior art keywords
content object
content
ssd
cdn
recited
Prior art date
Application number
PCT/US2011/041913
Other languages
English (en)
Inventor
Nathan F. Raciborski
Bradley B. Harvell
Original Assignee
Limelight Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Limelight Networks, Inc. filed Critical Limelight Networks, Inc.
Priority to PCT/US2011/041913 priority Critical patent/WO2012177267A1/fr
Priority to US13/316,289 priority patent/US8321521B1/en
Priority to US13/662,202 priority patent/US20130110984A1/en
Publication of WO2012177267A1 publication Critical patent/WO2012177267A1/fr
Priority to US14/195,645 priority patent/US8965997B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0635 Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • CDNs are in the business of delivering content for others. A CDN will cache and/or host content for its customers. Efficiently delivering content for a large number of customers is difficult: it would not be practical to store every content object serviced by the CDN on every edge server. Often, caches on the edge servers store popular or important content at the edges of the CDN. Popular content is less likely to suffer delivery latency, while less popular content is more likely to take longer to locate and deliver.
  • CDNs are optimized for affordable delivery and hosting of content and processing at varying quality of service (QoS).
  • Caching and storage servers at the edge (“edge servers”) of the CDN and in each POP are optimized to deliver content for the customers who purchase service from the CDN.
  • The demands on edge servers are ever increasing as the number of customers grows along with the size and volume of content objects.
  • SSDs have very fast seek times in comparison to spinning disks. This advantage comes with serious disadvantages. Specifically, SSDs are around ten times more expensive per byte than spinning media. Additionally, the underlying storage cells are EEPROM or flash, which degrades as more writes occur. Caches constantly add and remove content, such that the lifetime of an SSD would be unacceptable where reliability is important. These disadvantages have precluded adoption of SSDs by CDNs.
  • SSD manufacturers try to work around the write limitation of flash through several techniques (a toy sketch follows below). Wear leveling distributes writes evenly across the flash. Spare flash is reserved to replace worn-out cells: as cells go bad, replacement cells are seamlessly substituted so that the SSD appears to be without defect, and the higher the stated reliability, the more spare flash is reserved. Error correction codes mask bad cells, although the error correction bits reduce the amount of information that can be stored on the SSD. These conventional solutions address far less write-intensive activity than CDN caching.
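  • The interplay of wear leveling and spare substitution can be pictured with a toy model. The sketch below is illustrative only, with an invented erase budget and block counts; it does not reflect any manufacturer's firmware.

```python
# Toy model of wear leveling plus spare-block substitution.
from dataclasses import dataclass

ERASE_LIMIT = 3000  # assumed per-block program/erase budget


@dataclass
class Block:
    erases: int = 0
    retired: bool = False


class ToyFlash:
    def __init__(self, active_blocks=8, spare_blocks=2):
        self.blocks = [Block() for _ in range(active_blocks)]
        self.spares = [Block() for _ in range(spare_blocks)]

    def write(self) -> bool:
        live = [b for b in self.blocks if not b.retired]
        if not live:
            return False  # all blocks and spares exhausted: drive worn out
        target = min(live, key=lambda b: b.erases)  # wear leveling
        target.erases += 1
        if target.erases >= ERASE_LIMIT:
            target.retired = True  # block worn out...
            if self.spares:
                self.blocks.append(self.spares.pop())  # ...spare seamlessly substituted
        return True
```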
  • the present disclosure provides a method and system for cache optimization in a hybrid solid state drive and magnetic storage cache architecture for a content delivery network (CDN).
  • CDN has a number of geographically distributed points of presence (POPs) across the Internet.
  • Customers of the CDN pay for storage of content objects.
  • Cache management in a POP analyzes information related to content objects to determine if storage will be on a solid state drive (SSD) instead of a magnetic drive.
  • the information used in this analysis is from the application layer or above in the open systems interconnection (OSI) model.
  • the content objects are delivered from either the SSD or magnetic storage to end users.
  • In one embodiment, the present disclosure provides a method for cache optimization in a hybrid solid state drive and magnetic storage cache architecture.
  • a content object is received at a content delivery network (CDN) from a customer for storage.
  • The CDN has a plurality of points of presence (POPs) geographically distributed across the Internet.
  • Information related to the content object is analyzed to determine if storage at one POP of the plurality of POPs will be on a solid state drive (SSD).
  • the information is from the application layer or above in the open systems interconnection (OSI) model.
  • the content object is stored on the SSD.
  • A request for the content object is received from an end user at the one POP, and the request corresponds to a universal resource identifier (URI). If the content object is determined to be stored on the SSD rather than a magnetic drive, the content object is retrieved from the SSD and delivered to the end user.
  • the present disclosure provides an edge server of a content delivery network (CDN) having a plurality of points of presence (POPs) geographically distributed across the Internet.
  • The edge server includes a solid state drive (SSD) that stores a content object, a magnetic drive that does not store the content object, and a network interface.
  • the network interface receives a request for the content object from an end user.
  • The request corresponds to a universal resource identifier (URI).
  • the network interface returns the content object from the SSD to the end user.
  • the edge server includes a cache manager operating in the application layer or above in the open systems interconnection (OSI) model.
  • the cache manager loads information related to the content object that is stored by the CDN for a customer and analyzes the information to designate the SSD for storage of the content object.
  • One or more machine-readable media having machine-executable instructions are configured to perform the machine-implementable method for cache optimization in a hybrid solid state drive and spinning drive cache architecture.
  • The one or more machine-readable media comprise code for: receiving a content object at a content delivery network (CDN) from a customer for storage, the CDN having a plurality of points of presence (POPs) geographically distributed across the Internet; analyzing information related to the content object to determine if storage at one POP of the plurality of POPs will be on a solid state drive (SSD), where the information is from the application layer or above in the open systems interconnection (OSI) model; storing the content object on the SSD; receiving a request for the content object from an end user at the one POP, where the request corresponds to a universal resource identifier (URI); determining that the content object is stored on the SSD rather than a magnetic drive; retrieving the content object from the SSD; and delivering the content object to the end user. A minimal sketch of this flow appears below.
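  • The sketch below renders that flow in Python. The helper names, the dictionary-backed drive stores, and the 0.8 threshold are all invented for illustration; this is not the patent's implementation.

```python
# Hedged sketch of the claimed method: placement is decided from
# application-layer information (size, path, format, customer profile),
# not from block-level statistics. All names and values are assumptions.
SSD_SCORE_THRESHOLD = 0.8


def ingest(content_object, score_content, ssd, magnetic):
    """Analyze application-layer information and pick SSD or magnetic storage."""
    score = score_content(content_object)  # e.g., from size, path, popularity
    target = ssd if score >= SSD_SCORE_THRESHOLD else magnetic
    target[content_object["uri"]] = content_object


def serve(uri, ssd, magnetic, fetch_from_origin):
    """Return the object from SSD if present, else magnetic, else origin."""
    if uri in ssd:
        return ssd[uri]  # determined to be stored on the SSD
    if uri in magnetic:
        return magnetic[uri]
    return fetch_from_origin(uri)  # cache miss


ssd, magnetic = {}, {}
ingest({"uri": "/promo.mov", "size_mb": 3}, lambda obj: 0.9, ssd, magnetic)
print(serve("/promo.mov", ssd, magnetic, lambda uri: None))  # served from SSD
```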
  • FIG. 1 depicts a block diagram of an embodiment of a content distribution system
  • FIG. 2 depicts a block diagram of an embodiment of a content delivery network
  • FIG. 3 depicts a block diagram of an embodiment of an edge server
  • FIGs. 4A, 4B, 4C & 4D illustrate diagrams of embodiments of a storage architecture for a CDN
  • FIG. 5 illustrates a flowchart of an embodiment of a process for distributing content with a CDN
  • FIG. 6 illustrates a flowchart of an embodiment of a process for ingesting new content into the CDN
  • FIG. 7 illustrates a flowchart of an embodiment of a process for serving content to an end user
  • FIG. 8 illustrates a flowchart of an embodiment of a process for maintaining balance between the SSD and magnetic caches
  • FIG. 9 depicts a block diagram of an embodiment of a computer system
  • FIG. 10 depicts a block diagram of an embodiment of a special-purpose computer system.
  • Referring to FIG. 1, a block diagram of an embodiment of a content distribution system 100 is shown.
  • the content originator 106 offloads delivery of the content objects to a content delivery network (CDN) 110 in this embodiment.
  • the content originator 106 produces and/or distributes content objects and includes a content provider 108, a content site 116, and an origin server 112.
  • The CDN 110 can cache and/or host content in various embodiments for third parties to offload delivery and typically provides better quality of service (QoS) to a broad spectrum of end user systems 102 distributed worldwide.
  • the content originator 106 is the customer of the CDN 110 and an end user 128 benefits from improvements in QoS.
  • the content distribution system 100 locates the content objects (or portions thereof) and distributes the content objects to an end user system 102.
  • the content objects are dynamically cached within the CDN 110 and/or hosted by the CDN 110.
  • a content object is any content file, content stream or a range defining a segment of a content file or content stream and could include, for example, video, pictures, data, audio, software, and/or text.
  • the content object could be live, delayed or stored.
  • the range defining a segment could be defined as a byte range or time range within the playback.
  • the CDN 110 includes a number of points of presence (POPs) 120, which are geographically distributed through the content distribution system 100 to deliver content with lower latency.
  • Various embodiments may have any number of POPs 120 within the CDN 110 that are generally distributed in various locations around the Internet 104 so as to be proximate to end user systems 102.
  • Multiple POPs 120 use the same IP address such that an Anycast routing scheme is used to find a POP likely to be close to the end user, in a network sense, for each request.
  • a wide area network (WAN) and/or local area network (LAN) 114 or other backbone may couple the POPs 120 with each other and also couple the POPs 120 with other parts of the CDN 110.
  • Distributed storage, processing and caching are provided by the CDN 110.
  • The content originator 106 is the source or re-distributor of content objects, i.e., the so-called origin server 112.
  • the content site 116 is an Internet web site accessible by the end user system 102.
  • the content site 116 could be a web site where the content is viewable with a web browser. In other embodiments, the content site 116 could be accessible with application software other than a web browser.
  • the content provider 108 directs content requests to a CDN 110 after they are made or formulates the delivery path by embedding the delivery path into the universal resource indicators (URIs) for a web page. In any event, the request for content is handed over to the CDN 110 in this embodiment by using an Anycast IP address corresponding to two or more POPs 120.
  • the CDN 110 hosts content objects and/or web pages to be the origin server.
  • When the request for a content object is passed to the CDN 110, the request is associated with a particular POP 120 within the CDN 110 using the Anycast routing scheme, but other embodiments could use routing, redirection or DNS to shunt requests to a particular POP 120.
  • the CDN 110 processes requests for content in the application layer of the open systems interconnection (OSI) model with URIs, URLs and HTTP.
  • the particular POP 120 may retrieve the portion of the content object from the content provider 108 where it is acting as the origin server.
  • the content provider 108 may directly provide the content object to the CDN 110 and its associated POPs 120 through pre-population of caches (i.e., in advance of the first request) or hosting.
  • a storage policy could be defined to specify the conditions under which pre-population is performed.
  • the content objects are provided to the CDN 110 and stored in one or more CDN servers such that the portion of the requested content may be hosted from the CDN 110.
  • the CDN servers include edge servers in each POP 120 that actually serve end user requests.
  • the origin server 112 holds a copy of each content object for the content originator 106. Periodically, the content of the origin server 112 may be reconciled with the CDN 110 through a caching, hosting and/or pre-population algorithm, for example, through a storage policy.
  • Some content providers could use an origin server within the CDN 110 to host the content and avoid the need to maintain a copy.
  • the content object is stored within the particular POP 120 and is served from that POP to the end user system 102.
  • the end user system 102 receives the content object and processes it for use by the end user 128.
  • the end user system 102 could be a personal computer, media player, handheld computer, tablet, pad, Internet appliance, phone, smart phone, IPTV set top, streaming radio or any other device that receives and plays content objects.
  • a number of the end user systems 102 could be networked together. Although this embodiment only shows a single content originator 106 and a single CDN 110, it is to be understood that there could be many of each in various embodiments.
  • Referring to FIG. 2, a block diagram of an embodiment of a CDN 110 is shown. Although only one POP 120 is shown in detail, there are a number of POPs 120 similarly configured throughout the CDN 110.
  • the POPs 120 communicate through a WAN/LAN 114 and/or the Internet 104 when locating content objects.
  • An interface from the Internet 104 to the POP 120 accepts requests for content objects from end user systems 102.
  • The requests come from an Internet protocol (IP) address of the end user device 128 in the form of a URI that causes an HTTP GET command.
  • The requests for content files from the CDN 110 pass through the application layer.
  • Switch fabric 240 assigns the request to one of the edge servers 230 according to a routing scheme such as round robin, load balancing, etc.
  • the switch fabric 240 is aware of which edge servers 230 have what capabilities and assigns requests within the group having the capability to store and serve the particular content object referenced in the URI.
  • a protocol such as cache array routing protocol (CARP) is used in this embodiment to disperse the URIs between the group of edge servers 230 to spread out loading. Every time that a particular URI is requested from the group, it is assigned to the same edge server 230 using CARP.
  • the edge servers 230 gathered in a particular group as neighbors can be the other servers in the current POP 120, less loaded servers in the current POP 120, servers having the capability to process the content object, a subset of servers assigned to a customer using the CDN to serve the content object, or some other grouping of servers in the POP 120.
  • the switch fabric 240 assigns the request to one of the edge servers 230, which performs CARP to either service the request itself or reassign it to a neighboring edge server 230.
  • the switch fabric 240 sends each packet flow or request to an edge server 230 listed in the configuration of the switch fabric 240.
  • This embodiment does not have awareness of the particular capabilities of any edge server 230. The assignment can be performed by choosing the edge server 230 with the fewest connections or the fastest response time, but the switch fabric 240 in this embodiment assigns the packet flow somewhat arbitrarily using round-robin or random methodologies.
  • an algorithm like CARP is used by the chosen edge server 230 to potentially reassign the packet flow between a group of edge servers 230 to the one edge server 230 dictated by the algorithm.
  • the switch fabric 240 could choose a second edge server 230-2 being the next in the round robin rotation.
  • the second edge server 230-2 would perform CARP on the request and find that the first edge server 230-1 is being assigned this type of request.
  • the request would be reassigned to the first edge server 230-1 to be fulfilled.
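  • The deterministic URI-to-server assignment just described can be sketched with highest-random-weight (rendezvous) hashing. The hash construction below is illustrative only and is not the exact CARP specification; the server names are placeholders.

```python
# Every edge server computes the same owner for a given URI, so a request
# landing on any server in the group can be reassigned to the owning neighbor.
import hashlib


def owner(uri, servers):
    def weight(server):
        digest = hashlib.sha256(f"{server}|{uri}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(servers, key=weight)


edge_servers = ["edge-230-1", "edge-230-2", "edge-230-3"]
uri = "http://ACME.llnw.com/videos/HD_movie.mpeg"
print(owner(uri, edge_servers))  # same answer on every server in the group
```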
  • the CDN 110 is used to host content for others.
  • Content providers 108 upload content to a CDN origin server 248.
  • the content object can be stored in the CDN origin server 248.
  • the CDN origin server 248 serves the content object within the CDN 110 to various edge servers 230 in various POPs 120. After the content provider 108 places a content object on the CDN origin server 248 it need not be hosted on the origin server 112 redundantly.
  • The CDN origin server 248 could be integral to an edge server 230.
  • Some embodiments include an optional storage array 234 in the POP 120 or elsewhere in the CDN 110.
  • the storage array 234 can provide hosting, storage and/or caching.
  • Edge servers 230 can revert to the storage array 234 for certain content, for example, very large files or infrequently requested files. Flushing of a cache of an edge server 230 could move the content to the storage array 234 until it is ultimately flushed from the storage array 234, after which subsequent requests would be fulfilled by an origin server 112 to repopulate the cache in the POP 120.
  • Requests from end user systems 102 are assigned to an edge server 230 that may cache, store or host the requested content object.
  • the edge server 230 receiving a request does not have the content object stored for immediate serving. This so-called "cache miss" triggers a process within the CDN 110 to effectively find the content object (or portion thereof) while providing adequate QoS.
  • the content may be found in neighboring edge servers 230 in the same POP 120, in another POP 120, in a CDN origin server 248, in a POP storage array 234, or even an external origin server 112.
  • the various edge and origin servers 230, 248 are grouped for various URIs uniquely.
  • One URI may look to one group of servers 230, 248 on a cache miss while another URI will look to a different group of servers 230, 248.
  • One embodiment uses a policy-based storage scheme.
  • Customers of the CDN 110 can specify a policy that allows great flexibility in how their data is stored and cached.
  • the policy can specify SSD or spinning media, edge caching or storage array caching, and under what circumstances to store or cache in the various options.
  • a customer may specify a policy that will enforce a class of storage that exclusively uses SSD for caching and/or hosting because of the reduced carbon footprint or speed.
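  • A policy of the kind just described might take the following shape; the field names and matching rule below are invented for the sketch, not the CDN's actual schema.

```python
# Hypothetical customer storage policy: which media class, caching versus
# hosting, and where in the hierarchy the content may live.
example_policy = {
    "customer": "ACME",
    "match": {"domain": "ACME.llnw.com", "suffix": ".mov"},
    "media": "ssd_only",   # e.g., chosen for speed or reduced carbon footprint
    "mode": "cache",       # "cache" or "host"
    "tier": "edge",        # edge server versus POP storage array
    "prepopulate": True,   # push in advance of the first request
}


def policy_applies(policy, uri):
    """Check a URI against the policy's simple domain/suffix matcher."""
    match = policy["match"]
    return match["domain"] in uri and uri.endswith(match["suffix"])


print(policy_applies(example_policy, "http://ACME.llnw.com/promo.mov"))  # True
```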
  • The edge server 230 in this embodiment includes both magnetic drive(s) 312 and SSD(s) 308. Where there are multiple drives 308, 312, they can be arranged in RAID arrays, e.g., a RAID array of magnetic drives 312 and/or a RAID array of SSDs 308.
  • The magnetic drives 312 are spinning hard drives of a 1.8, 2.5 or 3.5 inch configuration with a SATA, SAS, PATA, SCSI, Firewire™, USB 2.0, USB 3.0, Ethernet, Thunderbolt™, PCI Express, RAID, or other interface.
  • the SSD 308 could be 1.8 inch, 2.5 inch, 3.5 inch, mini-PCI or PCI configurations with the appropriate interfaces.
  • the SSD(s) 308 and magnetic drives 312 could be integral to a chassis with other components of the edge server 230 or in a separate rack slice.
  • a hardware processor 304 is coupled to the storage drives 308, 312 and a network interface 316.
  • the processor 304 choreographs providing storage, hosting and caching on the drives 308, 312 under the command of software.
  • a cache manager 320 is software in the application layer that customizes the processor 304 to make the edge server 230 a special-purpose computer that is suitable for use in a CDN 110.
  • The cache manager 320 could coexist with other application software to allow the edge server 230 to provide other services and processing, for example, media serving, Flash™ or Silverlight™ serving, DRM, encryption, encoding, other software as a service (SAS) or cloud computing.
  • the other services may also use the drives 308, 312.
  • The interfaces to the drives are bandwidth-limited by either the interface itself and/or the drive 308, 312 on the interface, along with the throughput of the network interface 316.
  • Any bottleneck between the storage and the Internet 104 caps the maximum data flow out of the edge server 230.
  • Magnetic drives 312 are inexpensive storage, but have poor seek times because the drive head has to move to the data location on the spinning platter to read information. As more simultaneous data requests are serviced by the magnetic drives 312, seek times can become the biggest bottleneck reducing the aggregate data flow from the drives 312. SSDs 308 have better random access times with very little seek time, allowing them to service a larger number of simultaneous requests.
  • SSDs 308 suffer, however, from a limited number of writes to each cell of flash memory.
  • the bottleneck of the network interface 316 and processing power is solved by having many edge servers 230 acting in parallel to divide up the load on the POP 120.
  • the cache manager 320 maintains content scoring 324 for the content objects requested from the edge server 230. Where a new content object is requested for the first time, a score is assigned using available information such as size and popularity of other content from the same content originator, path, file format, encoding, etc.
  • the content scoring 324 changes over time as popularity is assessed. Popularity may have a periodicity associated with it, for example, movie content objects are more likely to be viewed in prime time and on Saturday than other times of day or week. Considering popularity hourly could result in the movie being written to the cache during prime time each night only to be flushed daily.
  • Another dimension to the cache manager 320 deciding which type of drive to use in caching or hosting a content object is how the drive operates, i.e., a drive model 328.
  • Different drive architectures and technologies have different advantages and disadvantages.
  • The drive model 328 captures the cost per byte of storage; the interface bandwidth and when it degrades from saturation; the average seek time and how it degrades with the number of simultaneous requests being serviced; the impact of writing on the life of the drive; and the degradation curve as more reads and/or writes occur simultaneously.
  • a new type of drive added to the edge server 230 would cause a new drive model 328 to be available to the cache manager 320.
  • These drive models 328 could be loaded externally from off-line analysis and/or could be devised from historical performance.
  • A drive model 328 generally applicable to a new SSD 308 could be used initially, but it would be updated as the drive degrades over time; drives nearing likely failure because of excessive writing would be used infrequently for tasks that would result in additional writes.
  • a SSD 308 might initially be used for caching, but transitioned to hosting as it nears end of life.
  • Another factor tracked by the drive model 328 is the degradation in throughput of a SSD 308 as it is over utilized.
  • SSDs 308 perform background housekeeping functions to even out wear level and more compactly utilize blocks of flash memory. Under constant loading, the amount of housekeeping increases, which limits the amount of storage operations that can also be serviced.
  • The cache manager 320 can estimate the amount of housekeeping being performed by monitoring performance, and can throttle back storage operations for a particular SSD 308 in favor of other SSDs 308 or magnetic drives 312 so as to not saturate the internal processing of the SSD 308 in a way that would curtail storage operations. In a simple example, the cache manager 320 could take a SSD 308 offline for a period of time each day (see the sketch below).
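  • One way to picture that throttling decision: compare observed SSD throughput against the drive model's rating and rest the drive when the shortfall suggests heavy internal housekeeping. The saturation factor and throughput numbers below are invented example values.

```python
class DriveModel:
    """Minimal stand-in for a drive model 328: just a rated throughput."""

    def __init__(self, rated_mb_per_s):
        self.rated_mb_per_s = rated_mb_per_s


def should_throttle(observed_mb_per_s, model, saturation=0.6):
    """True when the SSD should be rested in favor of other drives.

    A large shortfall versus the model suggests garbage collection and
    wear leveling are consuming the drive's internal bandwidth.
    """
    return observed_mb_per_s < saturation * model.rated_mb_per_s


ssd_model = DriveModel(rated_mb_per_s=500)
print(should_throttle(450, ssd_model))  # False: drive is keeping up
print(should_throttle(200, ssd_model))  # True: divert writes elsewhere
```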
  • As a magnetic drive 312 degrades, the treatment by the cache manager 320 could be very different based upon the drive model 328. Magnetic drives 312 have sectors that start failing, and as that happens, the drive 312 may read and re-read a sector to reliably recover its contents. As sectors go bad and read times increase, less popular content might be stored on that magnetic drive 312, while magnetic drives 312 with less wear would receive more popular content.
  • certain classes of information need more reliable storage, which is appreciated by the cache manager 320.
  • Hosted content may not have any backup copy.
  • Cached content always has a backup on an origin server. Where there is no backup, a drive model 328 for the most reliable storage would be used. For example, a magnetic drive 312 that is beyond infant mortality, but not so worn as to result in likely errors could be favored for content that is hosted without any backup.
  • Deciding what to store where is performed by the cache manager 320.
  • Servicing content requests is performed by a content serving function 332.
  • The various SSDs 308 and magnetic drives 312 are referenced by the content serving function 332 to find a content object. Where it is not found, a storage array 234 or origin server 112 is referenced.
  • the cache manager 320 may decide to store in its edge server 230 a content object that was requested, but not found locally. Once the content serving function 332 finds the content object, it is returned to the end user 128 through the network interface 316.
  • Referring to FIG. 4A, a diagram of an embodiment of a storage architecture 400-1 for a CDN 110 is shown.
  • the storage architecture 400-1 is maintained by the cache manager 320 and has multiple segments shown from left to right with the higher QoS generally being on the left and the lower QoS being on the right. There is correspondingly greater expense storing on the left-most segments with respect to the right-most segments.
  • the cache manager 320 oversees SSD caching 412, magnetic drive hosting 416 and magnetic drive caching 420. Hosting of content provides better QoS because there are no cache misses to contend with, but may store content that is very infrequently requested.
  • The sizes of the caches 412, 420 provide an additional constraint on the cache manager 320 in deciding what to cache where.
  • Very large objects like high-definition movies could quickly saturate a particular SSD 308 and quickly churn many writes.
  • a 250 GB SSD 308 could store around fifteen different HD movies at the same time from a library of thousands. If the single SSD 308 was the only cache, the top fifteen movies would constantly be changing resulting in many different movies being written to the SSD 308.
  • The cache manager 320 could store small content objects (relative to the size of the SSD 308) and popular content objects in the SSD cache 412 and place others in the magnetic drive cache 420.
  • the content scoring 324 is considered in deciding between the SSD and magnetic drive caches 412, 420.
  • the cache manager 320 considers the aggregate effect of writing to the SSD 308 against the number of content objects that can be stored in the SSD 308 by referring to the drive model 328. Storage on the SSD 308 results in much higher QoS to the end users 128 as those content objects are sourced much more quickly on average as the POP 120 becomes heavily loaded. It is difficult for the cache manager 320 to predict future requests for content objects. Using past requests as a guide, the cache manager 320 makes educated guesses that are stored as content scoring 324.
  • Originator profiles 336 include popularity curves for all content and/or defined groups of content. For example, there could be an originator profile 336 for the ACME-video-news.com domain for *.mov files that indicates that the new *.mov files are very popular for one day until the news cycle has moved on. A new *.mov file would be presumed to be as popular as those previously loaded from the ACME-video-news.com domain.
  • the originator profile 336 could also give initial predictions of how soon content will change. Quickly changing content might not be a good candidate for SSD 308 as writing and replacing information will be frequent.
  • Another factor stored in the originator profile 336 is the request periodicity. For example, a restaurant serving breakfast may be very popular each morning, but not popular at other times during the day. Traditional caching algorithms without access to application layer information would push out that content each day only to have it requested again the next day. Integrating popularity over a day-long period will keep the content in the cache, but an hour-long period would push out the content. As another example, a site showing weather alerts might have a four hour period because old weather alerts are very infrequently referenced.
  • the originator profiles 336 can be for an entire domain, certain directories, or categories of content defined in any number ways.
  • small *.gif files for a particular domain may remain popular for a month at a time as the *.gif file corresponds to a monthly calendar icon on the web page such that a month long integration time for determining popularity would be appropriate.
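  • The effect of the integration period can be made concrete: the same request history scores very differently under an hourly window than under a daily one. The sketch below uses made-up timestamps for a breakfast-hour site.

```python
def popularity(request_times, now, window_hours):
    """Requests per hour, integrated over the trailing window."""
    recent = [t for t in request_times if 0 <= now - t <= window_hours]
    return len(recent) / window_hours


# A breakfast menu requested around 07:00 on three consecutive days
# (times are hours since the start of the trace).
requests = [7.0, 7.2, 7.5, 31.0, 31.3, 55.1, 55.4]
now = 60.0  # mid-afternoon on day three

print(popularity(requests, now, 1))   # 0.0   -> an hourly window flushes the content
print(popularity(requests, now, 24))  # ~0.08 -> a daily window keeps it cached
```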
  • Some embodiments will query other edge servers 230 in the current POP 120, or other POPs 120, if there is no scoring found in the current POP 120.
  • Some embodiments treat the content scoring 324 as a distributed database that is automatically reconciled throughout the CDN 110. Other embodiments could reconcile missing scoring, but could favor regional scoring to appreciate what could be geographic preference for a particular content object.
  • the magnetic drives 312 both host 416 and cache 420 content.
  • Magnetic drive hosting 416 is an option available to the content originators 106. Hosted information is divided among the edge servers 230 in each POP 120 and stored on the magnetic drive(s) 312. The magnetic drives 312 also hold a magnetic drive cache 420. Items too large or infrequently requested or otherwise having low content scoring 324 are stored in the magnetic drive cache 420 instead of the SSD cache 412. If the content scoring 324 is very low, it is likely that the content object will be aged out of the cache until requested again.
  • Referring to FIG. 4B, a diagram of another embodiment of a storage architecture 400-2 for a CDN 110 is shown. This embodiment includes a SSD premium cache 408 that is favored over the SSD cache 412. Customers can select to add SSD caching for some or all of their content in the SSD 308.
  • Some of the SSD 308 is reserved for the SSD premium cache 408. Remaining capacity of the SSD 308 is used as a SSD cache 412 for content not designated for the SSD premium cache 408. Some embodiments only have SSD premium cache 408 on the SSD 308 without other caching.
  • Referring to FIG. 4C, a diagram of yet another embodiment of a storage architecture 400-3 for a CDN 110 is shown.
  • This embodiment varies from the embodiment of FIG. 4B in that a portion of the SSD 308 is used to host certain content designated by content originators 106.
  • the hosted content is stored in the SSD hosting 404 irrespective of content scoring in this embodiment.
  • the content originator 106 can force SSD storage on the SSD 308.
  • Some embodiments allow the content originator 106 to specify content for hosting, but leave it to the cache manager 320 to decide between SSD 308 and magnetic drive 312 for that storage.
  • the content scoring 324 is used along with drive models 328 to decide between the SSD 308 and magnetic drive 312 much in the way that the cache manager 320 decides between SSD cache 412 and magnetic drive cache 420.
  • Referring to FIG. 4D, a diagram of still another embodiment of a storage architecture 400-4 for a CDN 110 is shown.
  • This embodiment has both SSD hosting 404 and SSD caching 412 on the SSD 308.
  • the magnetic drive(s) 312 has magnetic drive hosting 416 and magnetic drive caching 420.
  • a storage array cache 424 in the storage array 234 in the POP 120 can be used for hosting and caching under the control of the cache manager. Moving storage away from the edge generally increases latency, but prevents having to go to an origin server 112, 248 to retrieve missing content. In some cases, the edge may actually be slower than storage in a server elsewhere in the POP 120 or CDN 110.
  • Referring to FIG. 5, a flowchart of an embodiment of a process 500 for distributing content with a CDN 110 is shown.
  • the depicted portion of the process 500 begins in block 504 where the content originator 106 or customer configures hosting and/or caching with the CDN 110.
  • the customer can select hosting and/or caching.
  • Premium hosting and/or caching can be selected by the customer to use SSD 308.
  • the selection can be for individual content objects or for groups of content objects. Groups can be designated by path, domain, size, score, format, encoding, or any other way to designate a group.
  • a customer can specify SSD hosting for content objects having a score above a first threshold and SSD caching above a second threshold. For content objects below the second threshold, magnetic drives 312 are used for storage.
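  • Expressed as code, that two-threshold rule might look like the following; the threshold values are arbitrary examples, not values from the patent.

```python
SSD_HOSTING_THRESHOLD = 900  # first threshold: above this, host on SSD
SSD_CACHING_THRESHOLD = 500  # second threshold: above this, cache on SSD


def placement(score):
    """Map a content score to a storage option per the customer's thresholds."""
    if score > SSD_HOSTING_THRESHOLD:
        return "ssd_hosting"
    if score > SSD_CACHING_THRESHOLD:
        return "ssd_caching"
    return "magnetic_drive"


print(placement(950))  # ssd_hosting
print(placement(700))  # ssd_caching
print(placement(120))  # magnetic_drive
```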
  • preloading or partial loading is performed for content objects at block 508.
  • Hosted content objects are loaded in the CDN origin server 248 and possibly in POP storage arrays 234 and edge servers 230.
  • Some embodiments allow partially loading content into hosts and caches. For example, a predetermined time segment or size of a segment can be loaded so that the content object can start playing quickly while the remainder is gathered from higher latency sources.
  • Important portions of a content object are determined in any number of ways such that those portions can be hosted or pre-populated into caches. For example, the first few seconds of a video could be stored in SSD 308 for quick access while the remainder is found in magnetic drives 312.
  • the frames used while fast forwarding or rewinding a streamed video could be hosted or cached in SSD 308 while the actual playback would be from magnetic drives 312.
  • a storage policy could be defined for certain video codecs that periodically store complete frames with delta frames stored in-between. The complete frames are used for fluid fast forwarding and other things such that SSD 308 could be specified in the policy for the complete frames.
  • a popular large file could be handled differently through a properly designed policy. Where a large file is heavily requested, the interface to the SSD 308 or magnetic drive 312 or the network interface(s) could be overwhelmed by the flow of data. The popular large file could be divided among several drives 308, 312 to spread out that load in a manageable way.
  • An automated algorithm could determine how many segments to divide the content object into based upon its size and/or popularity.
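  • A minimal sketch of such an automated segmentation algorithm follows; the sizing heuristic and drive names are invented, and a production system would also weigh the drive models.

```python
import math


def segment_count(size_gb, requests_per_hour):
    """More segments for bigger, hotter files; at least one. Invented heuristic."""
    return max(1, math.ceil(size_gb / 4) + math.ceil(requests_per_hour / 1000))


def assign_segments(n_segments, drives):
    """Round-robin segments across drives to spread the interface load."""
    return {seg: drives[seg % len(drives)] for seg in range(n_segments)}


drives = ["ssd-308-1", "ssd-308-2", "magnetic-312-1"]
print(assign_segments(segment_count(size_gb=25.0, requests_per_hour=3000), drives))
```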
  • the cache manager 320 works new content into the process by scoring the content and determining which of the multiple storage options to use for a particular content object. For some content this will include loading content and for others it simply means that the edge servers 230 are informed to load the content object if ever requested at which point, the cache manager 320 will decide where to store the content from the several options. At some point after the content originator 106 becomes a customer of the CDN 110, end users 128 will start requesting the content object in block 516.
  • the requested content object is returned to the end user 128 from the POP 120.
  • the cache manager 320 performs housekeeping functions in block 524 to update the drive models 328, originator profiles 336, and content scoring 324. Additionally, content could be moved around between the various hosting and caching options on the different physical hardware.
  • The customer intake in blocks 504, 508 and 512 happens in parallel with the serving of content by the content serving function 332 in blocks 516 and 520 and the housekeeping by the cache manager 320 in block 524.
  • Referring to FIG. 6, a flowchart of an embodiment of a process 512 for ingesting new content into the CDN 110 is shown.
  • This ingest process 512 corresponds to block 512 of FIG. 5.
  • The customer configuration is retrieved to determine the hosting and caching options.
  • Various embodiments support specifying SSD 308 or magnetic drive 312 caching and/or hosting on individual content objects or groups of content objects.
  • the SSD hosting is populated through to the SSDs 308 of the edge servers 230 in the various POPs 120. Where only hosting is specified without the SSD option, the content is propagated to magnetic drives 312 in the various edge servers 230 in the various POPs 120.
  • Hosting may be staged into a hierarchy where the CDN origin server 248, the POP storage array 234, the magnetic drive 312 in an edge server 230, and the SSD 308 in an edge server 230 are utilized as content becomes increasingly requested.
  • Cached content is scored in block 618 with periodicity for integration of the popularity initially presumed, along with other factors such as the popularity curve from information stored in the originator profiles 336.
  • the Table shows an example of a portion of the content scoring 324.
  • Another example in the Table involves a content object that is split and scored separately.
  • A small initial portion of 10 MB for .../HD_movie.mpeg is scored highly for a video content object.
  • The remainder of .../HD_movie.mpeg has a low score.
  • the initial portion could be stored on SSD 308 and played with little latency while the remainder is retrieved more slowly, for example.
  • Some embodiments allow pre-population of the SSD premium cache 408 in block 620, which can be specified in a storage policy. For example, a content originator may have just updated their logo image and know that it will be popular so they would select it for pre- population in the SSD premium cache 408.
  • Content objects or groups of content objects that can be cached by the CDN 110 are specified to the edge servers 230 in block 624. For example, a customer may specify that all requests for content from an ACME.llnw.com domain would be cached, relying on an origin server at ACME.com.
  • Pre-population of the non-premium caches 412, 420, 424 in the hierarchy is performed in block 628 based upon initial scoring.
  • Referring to FIG. 7, a flowchart of an embodiment of a process 520 for serving content to an end user 128 is shown.
  • the serving process 520 corresponds to block 520 in FIG. 5.
  • the depicted portion of the process begins in block 704 where a request for content has been routed to a POP 120 which further routes the request using the switch fabric 240 to an edge server 230.
  • The edge server 230 determines if the content object is stored locally, either in the SSD 308 or the magnetic drive 312.
  • The found content object is slotted into the appropriate cache in block 716.
  • scores above 900 may be stored in the SSD cache 412 and scores below 100 stored in the storage array cache 424 with everything else stored in the magnetic drive cache 420.
  • These thresholds can be moved around based upon the drive models, specifically the relative sizes of each different type of storage.
  • Some embodiments can track the number of SSD writes per request as a figure of merit to determine scoring based upon the configuration of a particular edge server 230. If the ratio is low such that there are few writes for a large number of requests relative to other content objects, the content file is a good candidate for the SSD cache 412. The content object is written into the appropriate cache in block 720.
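  • That figure of merit is a simple ratio, as in the hedged sketch below; the 0.1 cut-off is an invented example and would in practice be tuned per edge-server configuration.

```python
def good_ssd_candidate(ssd_writes, requests, max_writes_per_request=0.1):
    """Few SSD rewrites per delivered request marks a good SSD-cache resident."""
    if requests == 0:
        return False
    return ssd_writes / requests <= max_writes_per_request


print(good_ssd_candidate(ssd_writes=2, requests=500))  # True: cheap to keep on SSD
print(good_ssd_candidate(ssd_writes=30, requests=60))  # False: churns the flash
```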
  • The content object is sourced in block 724. Spooling out from the cache can happen simultaneously with spooling into the cache, such that the end user 128 need not wait for a complete load before receiving the initial portion of the content object.
  • the content scoring 324 is updated to reflect another request for the content object.
  • Referring to FIG. 8, a flowchart of an embodiment of a housekeeping process 524 for maintaining balance between the various caches is shown. The housekeeping process 524 corresponds to block 524 in FIG. 5. The depicted portion of the process begins in block 804 where the content scoring 324 is loaded. Only a subset of the content, as determined in block 808, is analyzed at a given time.
  • Past flushes and reloads into the SSD 308 are good candidates for analysis.
  • Content objects that are often reloaded into the SSD cache 412 are scrutinized to determine whether increasing the period over which popularity is measured would decrease the ratio of writes per request. If so, the period of analysis is changed in the originator profile 336 in block 816.
  • Inferences to other similar content objects could be drawn in block 820.
  • The inference could be simply an average of this analysis; for example, for large *.jpg files in a given path, finding that a one-day periodicity works for most of the content objects could lead to a general presumption to use the one-day period for all new *.jpg files in that path.
  • According to the new scoring, content is moved between the various caches 412, 420, 424 in block 824. This period tuning is sketched below.
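  • The period-tuning analysis can be pictured as a small simulation: count the SSD reloads a given popularity window would have produced, and adopt the window with the fewest writes per request. The reload model and candidate windows below are assumptions for illustration.

```python
def simulated_writes(request_times, window_hours):
    """Count cache (re)loads: a request rewrites the SSD when the previous
    request is further back than the popularity window (content was flushed)."""
    writes, last = 0, None
    for t in sorted(request_times):
        if last is None or t - last > window_hours:
            writes += 1
        last = t
    return writes


def tune_period(request_times, candidate_windows):
    """Adopt the window minimizing SSD writes per request."""
    n = max(1, len(request_times))
    return min(candidate_windows,
               key=lambda w: simulated_writes(request_times, w) / n)


daily_requests = [7.0, 31.0, 55.0]           # one request each morning
print(tune_period(daily_requests, [1, 24]))  # 24: a daily window avoids reloads
```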
  • Referring to FIG. 9, an exemplary environment with which embodiments may be implemented is shown, with a computer system 900 that can be used by a designer 904 to design, for example, electronic designs.
  • the computer system 900 can include a computer 902, keyboard 922, a network router 912, a printer 908, and a monitor 906.
  • the monitor 906, processor 902 and keyboard 922 are part of a computer system 926, which can be a laptop computer, desktop computer, handheld computer, mainframe computer, etc.
  • the monitor 906 can be a CRT, flat screen, etc.
  • a designer 904 can input commands into the computer 902 using various input devices, such as a mouse, keyboard 922, track ball, touch screen, etc. If the computer system 900 comprises a mainframe, a designer 904 can access the computer 902 using, for example, a terminal or terminal interface. Additionally, the computer system 926 may be connected to a printer 908 and a server 910 using a network router 912, which may connect to the Internet 918 or a WAN.
  • the server 910 may, for example, be used to store additional software programs and data.
  • software implementing the systems and methods described herein can be stored on a storage medium in the server 910.
  • the software can be run from the storage medium in the server 910.
  • software implementing the systems and methods described herein can be stored on a storage medium in the computer 902.
  • the software can be run from the storage medium in the computer system 926. Therefore, in this embodiment, the software can be used whether or not computer 902 is connected to network router 912.
  • Printer 908 may be connected directly to computer 902, in which case, the computer system 926 can print whether or not it is connected to network router 912.
  • Referring to FIG. 10, an embodiment of a special-purpose computer system 1000 is shown.
  • the enterprise platform 104 is one example of a special-purpose computer system 1000.
  • the third-party ad creation tool 108 may run on the enterprise platform 104 or another special-purpose computer system.
  • The above methods may be implemented by computer-program products that direct a computer system to perform the actions of the above-described methods and components.
  • Each such computer-program product may comprise sets of instructions (codes) embodied on a computer-readable medium that directs the processor of a computer system to perform corresponding actions.
  • The instructions may be configured to run in sequential order, in parallel (such as under different processing threads), or in a combination thereof. After the computer-program products are loaded on a general-purpose computer system 926, it is transformed into the special-purpose computer system 1000.
  • Special-purpose computer system 1000 comprises a computer 902, a monitor 906 coupled to computer 902, one or more additional user output devices 1030 (optional) coupled to computer 902, one or more user input devices 1040 (e.g., keyboard, mouse, track ball, touch screen) coupled to computer 902, an optional communications interface 1050 coupled to computer 902, and a computer-program product 1005 stored in a tangible computer-readable memory in computer 902.
  • Computer-program product 1005 directs system 1000 to perform the above-described methods.
  • Computer 902 may include one or more processors 1060 that communicate with a number of peripheral devices via a bus subsystem 1090.
  • Peripheral devices may include user output device(s) 1030, user input device(s) 1040, communications interface 1050, and a storage subsystem, such as random access memory (RAM) 1070 and non-volatile storage drive 1080 (e.g., disk drive, optical drive, solid state drive), which are forms of tangible computer-readable memory.
  • Computer-program product 1005 may be stored in non-volatile storage drive 1080 or another computer-readable medium accessible to computer 902 and loaded into memory 1070.
  • Each processor 1060 may comprise a microprocessor, such as a microprocessor from Intel® or Advanced Micro Devices, Inc.®, or the like.
  • the computer 902 runs an operating system that handles the communications of product 1005 with the above-noted components, as well as the communications between the above-noted components in support of the computer-program product 1005.
  • Exemplary operating systems include Windows or the like from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, and the like.
  • User input devices 1040 include all possible types of devices and mechanisms to input information to computer system 902. These may include a keyboard, a keypad, a mouse, a scanner, a digital drawing pad, a touch screen incorporated into the display, audio input devices such as voice recognition systems, microphones, and other types of input devices.
  • User input devices 1040 are typically embodied as a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, or a voice command system.
  • User input devices 1040 typically allow a user to select objects, icons, text and the like that appear on the monitor 906 via a command such as a click of a button or the like.
  • User output devices 1030 include all possible types of devices and mechanisms to output information from computer 902. These may include a display (e.g., monitor 906), printers, non-visual displays such as audio output devices, etc.
  • Communications interface 1050 provides an interface to other communication networks and devices and may serve as an interface to receive data from and transmit data to other systems, WANs and/or the Internet 918.
  • Embodiments of communications interface 1050 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), an (asynchronous) digital subscriber line (DSL) unit, a FireWire® interface, a USB® interface, a wireless network adapter, and the like.
  • Communications interface 1050 may be coupled to a computer network, to a FireWire® bus, or the like.
  • communications interface 1050 may be physically integrated on the motherboard of computer 902, and/or may be a software program, or the like.
  • RAM 1070 and non-volatile storage drive 1080 are examples of tangible computer-readable media configured to store data such as computer-program product embodiments of the present invention, including executable computer code, human-readable code, or the like.
  • Other types of tangible computer-readable media include floppy disks, removable hard disks, optical storage media such as CD-ROMs, DVDs, bar codes, semiconductor memories such as flash memories, read-only-memories (ROMs), battery-backed volatile memories, networked storage devices, and the like.
  • RAM 1070 and non-volatile storage drive 1080 may be configured to store the basic programming and data constructs that provide the functionality of various embodiments of the present invention, as described above.
  • Instruction sets providing that functionality may be stored in RAM 1070 and non-volatile storage drive 1080. These instruction sets or code may be executed by the processor(s) 1060.
  • RAM 1070 and non-volatile storage drive 1080 may also provide a repository to store data and data structures used in accordance with the present invention.
  • RAM 1070 and non-volatile storage drive 1080 may include a number of memories including a main random access memory (RAM) to store instructions and data during program execution and a read-only memory (ROM) in which fixed instructions are stored.
  • RAM 1070 and non-volatile storage drive 1080 may include a file storage subsystem providing persistent (non-volatile) storage of program and/or data files.
  • RAM 1070 and non-volatile storage drive 1080 may also include removable storage systems, such as removable flash memory.
  • Bus subsystem 1090 provides a mechanism to allow the various components and subsystems of computer 902 to communicate with each other as intended. Although bus subsystem 1090 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple busses or communication paths within the computer 902.
  • the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described above, and/or a combination thereof.
  • the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
  • embodiments may be implemented by hardware, software, scripting languages, firmware, middleware, microcode, hardware description languages, and/or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a machine readable medium such as a storage medium.
  • A code segment or machine-executable instruction may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a script, a class, or any combination of instructions, data structures, and/or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, and/or memory contents.
  • Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.; a minimal sketch of such coupling follows this list.
  • any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein.
  • software codes may be stored in a memory.
  • Memory may be implemented within the processor or external to the processor.
  • the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other storage medium and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
  • the term “storage medium” may represent one or more memories for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information.
  • machine-readable medium includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and/or various other storage mediums capable of storing, containing, or carrying instruction(s) and/or data.
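
As an illustration of the coupling and termination semantics described above, the following minimal Python sketch (not part of the patent; the producer/consumer scenario and all names are invented for this example) shows two code segments running concurrently and exchanging information by message passing, each terminating with a return:

```python
import threading
import queue

# A shared queue couples the two code segments by message passing (one of
# the means enumerated above, alongside memory sharing, token passing, and
# network transmission).
messages: "queue.Queue[dict]" = queue.Queue()

def producer_segment() -> None:
    """Code segment that passes information onward as messages."""
    for object_id in ("video-001", "image-042"):  # hypothetical object IDs
        messages.put({"object_id": object_id, "action": "cache"})
    messages.put(None)  # Sentinel signalling the consumer to finish.

def consumer_segment() -> None:
    """Code segment that receives the information and acts on it."""
    while True:
        msg = messages.get()
        if msg is None:
            return  # Termination of the process corresponds to a return.
        print(f"handling {msg['action']} for {msg['object_id']}")

# The two segments run concurrently, as the flowchart discussion permits.
t = threading.Thread(target=producer_segment)
t.start()
consumer_segment()
t.join()
```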

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and system for cache optimization in a hybrid cache architecture having a solid-state drive (SSD) and a magnetic storage device for a content delivery network (CDN) are disclosed. The CDN has a number of points of presence (POPs) geographically distributed across the Internet. Customers of the CDN pay for storage of content objects. Cache management in a POP analyzes information about the content objects to determine whether they will be stored on the SSD instead of the magnetic disk. The information used in this analysis comes from the application layer or a higher layer of the Open Systems Interconnection (OSI) model. Content objects are delivered to end users from either the SSD or the magnetic storage device.
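
To make the mechanism in the abstract concrete, here is a minimal Python sketch of the kind of per-object decision the cache management might make from application-layer information. The field names, thresholds, and the popularity heuristic are assumptions made for this illustration; the patent does not prescribe any particular values or fields.

```python
from dataclasses import dataclass

@dataclass
class ContentObjectInfo:
    """Application-layer (OSI layer 7 or above) information about a content
    object; these fields are illustrative, not taken from the patent."""
    size_bytes: int
    requests_per_hour: float   # popularity observed at the POP
    content_type: str          # e.g. "video/mp4", "image/png"
    customer_paid_ssd: bool    # customer tier purchased SSD storage

# Illustrative thresholds -- the patent does not fix any such values.
SSD_MAX_OBJECT_BYTES = 50 * 1024 * 1024
SSD_MIN_REQUESTS_PER_HOUR = 100.0

def store_on_ssd(info: ContentObjectInfo) -> bool:
    """Decide whether a content object is cached on the SSD
    instead of the magnetic disk."""
    if info.customer_paid_ssd:
        return True
    # Small, hot objects benefit most from SSD read latency while
    # incurring the fewest SSD write-wear and capacity costs.
    return (info.size_bytes <= SSD_MAX_OBJECT_BYTES
            and info.requests_per_hour >= SSD_MIN_REQUESTS_PER_HOUR)

# Example: a popular short video clip would be placed on the SSD.
clip = ContentObjectInfo(8_000_000, 450.0, "video/mp4", False)
assert store_on_ssd(clip)
```

Whichever device the decision selects, the content object is then served to end users from that device, so the choice trades SSD write cost against read latency for popular objects.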
PCT/US2011/041913 2009-10-02 2011-06-24 Write-cost optimization of a storage architecture for a content delivery network WO2012177267A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/US2011/041913 WO2012177267A1 (fr) 2011-06-24 2011-06-24 Write-cost optimization of a storage architecture for a content delivery network
US13/316,289 US8321521B1 (en) 2011-06-24 2011-12-09 Write-cost optimization of CDN storage architecture
US13/662,202 US20130110984A1 (en) 2011-02-01 2012-10-26 Write-cost optimization of cdn storage architecture
US14/195,645 US8965997B2 (en) 2009-10-02 2014-03-03 Content delivery network cache grouping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/041913 WO2012177267A1 (fr) 2011-06-24 2011-06-24 Write-cost optimization of a storage architecture for a content delivery network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/023410 Continuation-In-Part WO2012105967A1 (fr) 2008-09-19 2011-02-01 Asset management architecture for content delivery networks

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/316,289 Continuation US8321521B1 (en) 2009-10-02 2011-12-09 Write-cost optimization of CDN storage architecture

Publications (1)

Publication Number Publication Date
WO2012177267A1 (fr)

Family

ID=47422860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/041913 WO2012177267A1 (fr) 2009-10-02 2011-06-24 Write-cost optimization of a storage architecture for a content delivery network

Country Status (1)

Country Link
WO (1) WO2012177267A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100082879A1 (en) * 2008-09-26 2010-04-01 Mckean Brian D Priority command queues for low latency solid state drives
US20100131671A1 (en) * 2008-11-24 2010-05-27 Jaspal Kohli Adaptive network content delivery system
US20100199036A1 (en) * 2009-02-02 2010-08-05 Atrato, Inc. Systems and methods for block-level management of tiered storage
US20100262633A1 (en) * 2009-04-14 2010-10-14 International Business Machines Corporation Managing database object placement on multiple storage devices

Similar Documents

Publication Publication Date Title
US8321521B1 (en) Write-cost optimization of CDN storage architecture
US11665259B2 (en) System and method for improvements to a content delivery network
US8370520B2 (en) Adaptive network content delivery system
US8612668B2 (en) Storage optimization system based on object size
US8527645B1 (en) Distributing transcoding tasks across a dynamic set of resources using a queue responsive to restriction-inclusive queries
US8886769B2 (en) Selective content pre-warming in content delivery networks based on user actions and content categorizations
EP2359536B1 (fr) Système de distribution de contenu de réseau adaptatif
US8219711B2 (en) Dynamic variable rate media delivery system
US20190044850A1 (en) Dynamically optimizing content delivery using manifest chunking
US9178928B2 (en) Scalable content streaming system with server-side archiving
US9235587B2 (en) System and method for selectively routing cached objects
Koch et al. Category-aware hierarchical caching for video-on-demand content on YouTube
US10341454B2 (en) Video and media content delivery network storage in elastic clouds
WO2012177267A1 (fr) Write-cost optimization of a storage architecture for a content delivery network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11868082

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11868082

Country of ref document: EP

Kind code of ref document: A1