CN107277561A - Content distribution network - Google Patents

Content distribution network

Info

Publication number
CN107277561A
CN107277561A (application CN201610215447.XA)
Authority
CN
China
Prior art keywords
node
media data
content
cache
caching nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610215447.XA
Other languages
Chinese (zh)
Inventor
孙振岗 (Sun Zhengang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING UNION VOOLE TECHNOLOGY Co Ltd
Original Assignee
BEIJING UNION VOOLE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING UNION VOOLE TECHNOLOGY Co Ltd filed Critical BEIJING UNION VOOLE TECHNOLOGY Co Ltd
Priority to CN201610215447.XA priority Critical patent/CN107277561A/en
Publication of CN107277561A publication Critical patent/CN107277561A/en
Pending legal-status Critical Current

Links

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 — Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 — Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 — Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23103 — Content storage operation using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H04N 21/23106 — Content storage operation involving caching operations
    • H04N 21/23109 — Content storage operation by placing content in organized collections, e.g. EPG data repository
    • H04N 21/25 — Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/262 — Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N 21/26258 — Content or additional data distribution scheduling for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application provides a content distribution network comprising a content publishing node, a core storage node, a central management node, a central database, and two or more edge caching nodes connected by a network. The content publishing node receives the media data to be published together with its attribute parameters, saves a cache-node push list to the central database, and sends the media data to the core storage node. The core storage node receives and stores the media data. The central management node receives user service requests and redirects each request to a suitable edge caching node. The edge caching nodes download and cache media data according to the cache-node push list, and return media data to the user either from the local cache or via a back-to-source download. By these means, the application can greatly reduce the management complexity of the core storage.

Description

Content distribution network
Technical field
This application relates to the field of information technology, and in particular to a content distribution network.
Background art
With the rapid development of the Internet and the popularization of smart TVs, network data traffic has multiplied, and network congestion that degrades the service experience can occur. A content delivery network (CDN, Content Delivery Network) is a system that improves the efficiency of content transmission over the Internet; it markedly improves the quality of service for delivering multimedia content such as video, and is therefore increasingly widely applied. A CDN relies on edge servers deployed in various regions, together with the load-balancing, content-distribution, and scheduling functions of a central platform, so that users obtain the required content from a nearby server, thereby reducing network congestion and improving the response speed and hit rate of user accesses.
In existing CDN systems, so that users can obtain the required content from a nearby edge server, the central platform must manage and maintain the download state of the media data: it must record which edge servers have downloaded and cached each media item, and must also handle download failures on the edge servers. This greatly increases the management complexity of the central platform and can to some extent reduce its service performance.
The content of the invention
This application provides a content distribution network to solve the problem that existing central storage has high management complexity, which affects system service performance.
A content distribution network disclosed in this application comprises a content publishing node, a core storage node, a central management node, a central database, and two or more edge caching nodes connected by a network. The content publishing node is connected to the core storage node and the central database; each edge caching node is connected to the central management node, the core storage node, and the central database; and the central management node is connected to the core storage node. The content publishing node receives the media data to be published, scans it to obtain its attribute parameters, determines a cache-node push list for the media data according to a preset node push strategy, sends both to the central database, and sends the media data to the core storage node. The core storage node receives and stores the media data uploaded by the content publishing node. The central management node receives the service requests sent by users through clients and, according to the load reported by each edge caching node and a preset load-balancing strategy, redirects each request to a suitable edge caching node; it also receives back-to-source download requests from the edge caching nodes and provides them with the download address of the core storage node. The central database receives and stores the attribute parameters and cache-node push list of the media data received by the content publishing node. The edge caching nodes download media data from the core storage node according to the cache-node push list stored in the central database, cache it locally, and update the push list in the central database; according to the client service requests redirected by the central management node, they return to the client either locally cached media data or media data downloaded back to source.
Preferably, the network further comprises one or more layers of intermediate caching nodes arranged between the core storage node and the edge caching nodes. The intermediate caching nodes, the core storage node, and the edge caching nodes form a tree-shaped storage structure with the core storage node as the root and the edge caching nodes as the leaves. The intermediate caching nodes download media data from the core storage node according to the cache-node push list stored in the central database, cache it according to a preset media caching policy, and update the push list in the central database. When an edge caching node or a lower-layer intermediate caching node needs to download media data back to source, the intermediate caching node serves it directly from its local cache or downloads it back to source from the node in the layer above.
Preferably, the edge caching nodes can also publish media data. When media data is published through an edge caching node, the node uploads the media data level by level through the intermediate caching nodes above it to the core storage node, and saves the attribute parameters and cache-node push list of the media data to the central database.
Preferably, an edge caching node, which provides content services directly to clients, comprises a regional load balancer, a push server, a local database, and one or more media cache nodes with corresponding streaming media servers. The regional load balancer obtains the state of each streaming media server in the edge caching node, reports it to the central management node, and distributes the client service requests redirected by the central management node to a suitable streaming media server according to those states. The push server downloads media data from the core storage node according to the cache-node push list stored in the central database, caches it on the corresponding media cache node according to a preset media caching policy, and updates the push list in the central database. The streaming media server returns to the client the media data cached on the media cache node or downloaded back to source. The local database uses the same database management system as the central database, synchronizes the content of the central database, and provides data services to the push server.
Preferably, the core storage node comprises a content index node and one or more content storage nodes. The content index node manages and allocates the content storage nodes: when it receives a media storage request from the content publishing node, it allocates a content storage node for the media data according to the current storage state of all content storage nodes. After a content storage node receives and stores the media data, it reports the storage change to the content index node.
Preferably, the central management node comprises a state collection server and a global service load balancer connected by a network. The state collection server is connected to the content index node of the core storage node and to the regional load balancers of the edge caching nodes, and obtains the media storage state of the core storage node and the state of each edge caching node; it also receives back-to-source download requests from the edge caching nodes and, according to the storage state of each content storage node reported by the content index node, provides the edge caching nodes with download addresses. The global service load balancer is connected to the Internet domain-name service and is deployed in a distributed, dynamically scalable manner; it receives user service requests and, according to the edge caching node states obtained by the state collection server and a preset load-balancing strategy, redirects each request to a suitable edge caching node.
Preferably, the preset media caching policy includes: storing the media data in slices of a preset block size; calculating the caching mapping between each data block of the media data and the local media cache nodes with a consistent-hashing algorithm, based on the number of local media cache nodes and the block numbers; and checking whether the number of available local media cache nodes has changed and, if so, recalculating the caching mapping with the consistent-hashing algorithm.
Preferably, the push server comprises: a list query module, which at a preset push interval queries the cache-node push list in the central database for push records related to this node; a download queue maintenance module, which sorts the newly added push records by push priority and release date into a download queue and notifies the media cache nodes to perform the media download, downloading in parallel according to a time-period selection parameter; and a download monitoring module, which checks the integrity of the media data after the download completes and updates the cache-node push list in the central database.
Preferably, the content publishing node comprises a media verification unit, a media scanning unit, and a media storage unit. The media verification unit receives the media data to be published and checks its integrity. The media scanning unit scans the media data to obtain its attribute parameters, determines its cache-node push list according to the preset node push strategy, and saves the attribute parameters and push list to the central database. The media storage unit sends a media storage request to the content index node of the core storage node, connects to the content storage node returned by the content index node, and uploads the media data.
Preferably, the content index node also checks the backup status of each media item at a preset backup interval and, when an item needs backing up, instructs one content storage node to back up the media data of another. Additionally or alternatively, the content distribution network further comprises a system configuration node, connected by the network to the content publishing node and the central management node, which configures and stores a unified number, IP address, geographical location, and operator for each node, configures and manages the node push strategy for the content publishing node, and configures and manages the load-balancing strategy for the central management node, the load-balancing strategy including node screening conditions, the way they are combined, and the screening order.
Compared with the prior art, this application has the following advantages:
In the preferred embodiments of this application, the central database stores the cache-node push list, and the updates to that list are performed by the edge caching nodes, which greatly reduces the complexity of the core storage. Furthermore, because each edge caching node actively fetches the media data after it is published, the heavy back-to-source load that arises when many users download the same media item at once is effectively avoided, greatly reducing the bandwidth pressure on the core storage.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of a content distribution network in an embodiment of this application;
Fig. 2 is a schematic structural diagram of another content distribution network in an embodiment of this application;
Fig. 3 is a schematic flowchart of media publishing in an embodiment of this application;
Fig. 4 is a schematic flowchart of media pushing in an embodiment of this application;
Fig. 5 is a schematic diagram of the connections between the SCS and the other nodes in an embodiment of this application;
Fig. 6 is a schematic diagram of the composition of the central management node and its connections to the other nodes in an embodiment of this application;
Fig. 7 is a schematic diagram of the media data storage structure in an embodiment of this application.
Detailed description
To make the above objects, features, and advantages of this application clearer and easier to understand, the application is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a schematic structural diagram of a content distribution network in an embodiment of this application is shown, comprising a content publishing node 11, a core storage node 12, a central management node 13, a central database 14, and two or more edge caching nodes 20, connected to one another by a network, wherein:
The content publishing node 11 receives the media data to be published, scans it to obtain its attribute parameters, determines a cache-node push list according to the preset node push strategy and saves it to the central database, and stores the media data on the core storage node;
In this preferred embodiment the content publishing node 11 may comprise a media verification unit, a media scanning unit, and a media storage unit. The media verification unit receives the media data to be published and checks its integrity; the media scanning unit scans the media data to obtain its attribute parameters, determines its cache-node push list according to the preset node push strategy, and saves the attribute parameters and push list to the central database 14; the media storage unit sends a media storage request to the core storage node 12, connects to the address and port returned by the core storage node 12, and uploads the media data.
In this scheme, media data may be uploaded to the content publishing node 11 by a third party for publication, published locally by system maintenance staff, or published by other means.
The main difference between third-party publishing and local publishing is that third-party publishing does not require the coding system to re-encode the media, whereas local publishing requires the media file to be re-encoded and a unified file identifier (FID, File Identity) to be generated.
Taking local publishing as an example, the media publishing flow of this application, shown in Fig. 3, comprises:
(1) The publishing staff sends a file-numbering request to the coding system;
(2) The coding system generates a unified file identifier (FID) for the media file to be published;
(3) The coding system uploads the encoded file to the media publishing node (preferably via the FTP protocol);
(4) On detecting the newly uploaded media data, the media publishing node checks whether the upload has completed and verifies the media's integrity;
(5) After the upload completes, the media publishing node scans the media data, generates files such as the m3u8 playlist and the index, and writes the media's attribute parameters into the central database;
(6) After the central database completes the insert operation, the media data is uploaded to the core storage node;
(7) After the upload completes, the media publishing node notifies the central management node that the media data is formally under management;
(8) According to the preset node push strategy, the cache nodes to push to (e.g. edge caching nodes) are set, the state of the media data in the central database is updated, and the media data is pushed out in advance.
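The numbered flow above can be sketched as a small simulation. The dict-based database, the byte-string media, and `generate_fid` hashing the file content are illustrative stand-ins, not the patent's actual interfaces (the text derives the FID from attribute values, and the real nodes communicate over a network):

```python
import hashlib

def generate_fid(file_bytes: bytes) -> str:
    # Step (2): the coding system derives a unified file identifier (FID).
    # Hashing the content with MD5 is a stand-in for the attribute-based FID.
    return hashlib.md5(file_bytes).hexdigest()

def publish(media: bytes, central_db: dict, core_storage: dict,
            push_targets: list) -> str:
    # Steps (4)-(8): verify and scan the upload, record the attribute
    # parameters and the cache-node push list in the central database,
    # then upload the media data to core storage.
    fid = generate_fid(media)
    central_db[fid] = {"size": len(media),             # step (5): attributes
                       "push_list": list(push_targets)}  # step (8): push list
    core_storage[fid] = media                          # step (6): core upload
    return fid

db, core = {}, {}
fid = publish(b"example-media-payload", db, core, ["edge-1", "edge-2"])
```

After `publish` returns, the edge caching nodes named in `push_list` would fetch the media on their own schedule, which is what keeps download-state tracking out of the central platform.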
The core storage node 12 receives the media storage request sent by the content publishing node 11, and receives and stores on hard disk the media data uploaded by the content publishing node 11;
In a specific implementation, the core storage node 12 may comprise a content index node (CIN, Content Index Node) and one or more content storage nodes (CSN, Content Storage Node). The CIN manages the CSNs and allocates storage for media data: when uploading media data, the content publishing node 11 first sends a request to the CIN, and the CIN, according to the current storage state of all CSNs, returns a CSN to which the media data is uploaded. After the upload completes, the CSN reports to the CIN; the CIN may then instruct another CSN to back up the media data held on the current CSN.
The central management node 13 receives the service requests sent by users through clients and, according to the load reported by each edge caching node 20 and the preset load-balancing strategy, redirects each request (e.g. by HTTP redirection) to a suitable edge caching node 20; it also receives back-to-source download requests from the edge caching nodes 20 (i.e. the media download requests the edge caching nodes 20 send towards the core storage node 12) and provides the corresponding edge caching node 20 with the download address of the core storage node 12;
The central database 14 stores the attribute parameters and cache-node push list of the media data received by the content publishing node 11;
The edge caching nodes 20 download media data from the core storage node 12 according to the cache-node push list stored in the central database 14, cache it locally, and then update the push list in the central database 14; according to the client service requests redirected by the central management node 13, they return to the client either locally cached media data or media data downloaded back to source.
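The redirection decision can be sketched as follows. The selection rule here (least-loaded node below a load threshold) and the threshold value are assumptions for illustration; in the patent the load-balancing strategy, including its screening conditions and order, is configurable and not fixed:

```python
def redirect(edge_loads: dict, max_load: float = 0.8) -> str:
    # edge_loads maps edge-node name -> load reported to the central
    # management node (0.0 idle .. 1.0 saturated). Filter out nodes above
    # the (hypothetical) policy threshold, then pick the least loaded.
    candidates = {node: load for node, load in edge_loads.items()
                  if load < max_load}
    if not candidates:
        raise RuntimeError("no edge caching node available under current policy")
    return min(candidates, key=candidates.get)

# A request arrives while edge-a is nearly saturated:
target = redirect({"edge-a": 0.9, "edge-b": 0.3, "edge-c": 0.5})
```

The client would then be sent an HTTP redirect to `target`, and all subsequent media traffic flows between the client and that edge caching node, not through the central platform.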
In this preferred embodiment, the preset media caching policy may specifically include:
(1) The media data is stored on the cache nodes (CACHE) in slices of a preset block size (64M is used as an example below);
(2) According to the number of local CACHE nodes and the block numbers, the caching mapping between each data block of the media data and the local CACHE nodes is calculated with a consistent-hashing algorithm;
In a specific implementation, a hash value can be calculated from the unique identifier (FID) of the media data file and the block number, establishing the caching mapping between that data block and a CACHE node.
The FID can be obtained by applying an MD5 operation to the attribute values of the media data file; the file index is stored keyed on the FID, and each file is stored only once, which not only saves storage space but also makes it easy to verify the file content.
(3) Whether the number of available local CACHE nodes has changed is checked (e.g. a CACHE node going down, or a new CACHE node being added); if so, the caching mapping is recalculated with the consistent-hashing algorithm.
The caching mapping policy above is issued to each local CACHE node; each CACHE node maps the hash value calculated from the file FID plus the 64M slice number into the mapping, which locates a specific local CACHE node. When there is a media caching task, the 64M blocks are therefore stored across different CACHE nodes according to the policy.
Because the media data is stored on the cache nodes in slices of the preset block size, with different slices on different cache nodes in the cluster, the single-disk I/O bottleneck produced by heavy access to hot media is avoided.
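The consistent-hashing mapping of steps (2) and (3) can be sketched with a hash ring. The ring structure, the virtual-node count, and the key format are assumptions; the text only specifies hashing the FID plus the 64M slice number. The sketch also demonstrates the property that motivates step (3): when a CACHE node disappears, only the blocks that node held need to be remapped.

```python
import hashlib
from bisect import bisect_right

BLOCK_SIZE = 64 * 1024 * 1024  # 64M slices, as in this embodiment

def _hash(key: str) -> int:
    # MD5 is used here because the FID itself is MD5-based in the text.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class CacheRing:
    def __init__(self, nodes, vnodes=64):
        # Each CACHE node is placed at `vnodes` positions on the ring for
        # balance; the virtual-node count is an assumption.
        self.ring = sorted((_hash(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self.ring]

    def node_for(self, fid: str, block_no: int) -> str:
        # Hash FID + block number, then take the first ring position
        # clockwise from that hash.
        idx = bisect_right(self._keys, _hash(f"{fid}:{block_no}"))
        return self.ring[idx % len(self.ring)][1]

ring3 = CacheRing(["cache-1", "cache-2", "cache-3"])
ring2 = CacheRing(["cache-1", "cache-2"])  # e.g. cache-3 went down
placement = [ring3.node_for("fid-abc", b) for b in range(8)]
```

Because the ring positions of the surviving nodes do not move, any block that was mapped to `cache-1` or `cache-2` keeps the same placement after `cache-3` is removed, so a node failure invalidates only that node's slices.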
Referring to Fig. 2, a schematic structural diagram of another content distribution network in an embodiment of this application is shown. The main difference from the embodiment of Fig. 1 is that one or more layers of intermediate caching nodes 30 are arranged between the core storage node 12 and the edge caching nodes 20; the intermediate caching nodes 30, the core storage node 12, and the edge caching nodes 20 form a tree-shaped storage structure with the core storage node 12 as the root and the edge caching nodes 20 as the leaves.
In this preferred embodiment the intermediate caching nodes 30 download media data from the core storage node 12 according to the cache-node push list stored in the central database 14, cache it according to the preset media caching policy, and update the push list in the central database 14; when an edge caching node 20 or a lower-layer intermediate caching node 30 needs to download media data back to source, they serve it directly from the local cache or download it back to source from the node in the layer above.
When a CDN system is used for video distribution, the volume of video media data is so large that the edge caching nodes cannot store all of it (otherwise cost would rise sharply), so in actual operation the probability of back-to-source downloads is high, and once the user base exceeds a certain scale the pressure on the core storage becomes severe. This preferred embodiment effectively reduces the core storage load and its bandwidth through the layered storage described above, saving system operating cost. Moreover, because a multi-level backup scheme is used (i.e. the media data is stored not only on the core storage node but also on the nodes at each level), an edge caching node can download from a nearby upper-layer node, greatly improving media download speed and providing users with a smoother service and a better experience.
It should be noted that in this preferred embodiment, when system maintenance staff publish media data by uploading through an edge caching node 20, the media data may be uploaded directly to the content publishing node 11, or uploaded level by level to the content publishing node through the intermediate caching nodes 30.
In a further preferred embodiment, the CDN system may also include a system configuration node (CMS, CDN Management System), which configures a unified number and IP address for the servers of each node, manages IP address ranges, operators, and regional information (or geographical location), configures and manages the coverage-steering configuration strategy and load-balancing strategy required by the global service load balancer, and configures and manages the node push strategy required by the content publishing node.
In another further preferred embodiment, the core storage node 12 may comprise one content index node and two or more content storage nodes, wherein:
The main functions of the content index node (CIN, Content Index Node) include:
(1) Providing registration for the CSNs: each CSN sends all locally stored media information to the CIN, and the CIN records the storage state of all media;
(2) Registering with the central management node (CCN, Content Center Node), providing its own IP address and port to the CCN for distribution to each cache node;
(3) Providing media lookup for the media cache nodes (CACHE) of the intermediate or edge caching nodes: when a CACHE downloads data back to source, the CIN returns the corresponding CSN;
(4) Checking the backup status of each media item on the local cache nodes at a preset backup interval and, when an item needs backing up, instructing one CSN to back up another CSN's media data.
The main functions of the content storage node (CSN, Content Storage Node) include:
(1) Registering with the CIN: when a CSN starts, it scans its local hard disk and reports all media information to the CIN, along with its own service IP address and port, and notifies the CIN of changes whenever local media are added or removed;
(2) Receiving and storing the uploaded media data; published media are ultimately stored on the CSN's hard disk;
(3) Providing download service for the intermediate or edge caching nodes: when a media download request is received, it supplies the data;
(4) Executing the backup instructions issued by the CIN.
Further, the central management node 13, as the global scheduling node of the CDN system, may comprise a state collection server (SCS, State Collection Service) and one or more global service load balancers (GSLB, Global Service Load Balance) connected by a network, wherein:
The connections between the state collection server SCS and the other nodes are shown in Fig. 5; its main functions include:
(1) Providing registration for the CIN, the intermediate caching nodes (i.e. second-level cache nodes), and the regional load balancers (SLB, Service Load Balance) of the edge caching nodes;
(2) Providing upper-layer node verification for the SLBs, and returning CIN information to the SLBs;
(3) Collecting the SLB states and forwarding them to the GSLBs; the GSLBs can be divided into several classes, such as GSLB-CONF, GSLB-HTTP, GSLB-VVCP, and GSLB-P2SP;
(4) Receiving back-to-source download requests from the intermediate caching nodes (each comprising a second-level SLB and multiple CACHE nodes) and the edge caching nodes (each comprising an SLB, multiple CACHE nodes, and their VSS nodes), and providing download addresses and ports according to the storage state of each CSN reported by the CIN.
The global service load balancer (GSLB, Global Service Load Balance) is connected to the Internet domain name system (DNS, Domain Name System), as shown in Fig. 6, and is deployed in a distributed, dynamically scalable manner; its main functions include:
(1) Connecting to the SCS to obtain the state of all SLBs across the network;
(2) Providing the externally facing service window for the whole CDN system: it serves externally through a public domain name and port, receives the service requests of user terminals, and directs (redirects) each request to a suitable service node (edge caching node).
This preferred embodiment uses the above-mentioned technological means that SCS and GSLB is separately positioned, and its advantage includes:It can avoid influenceing the problem of total system is serviced because of single traffic failure;Can strengthening system cluster service ability;Flexible expansion can be realized by DNS by being arranged on the GSLB of periphery, can avoid failure when running into attack to greatest extent;Using distributed load equalizing mode, each GSLB can undertake loadbalancing tasks, it is to avoid single-point GSLB high load conditions occur.
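The GSLB's redirection function described above can be sketched minimally as follows. The patent does not fix a particular load-balancing strategy, so the least-loaded choice, the `redirect` function name, and the dict-based representation of edge-node loads are all illustrative assumptions:

```python
def redirect(service_request, edge_nodes):
    """GSLB-style redirection sketch: direct a client's service request
    to the least-loaded available edge caching node.

    `edge_nodes` maps node address -> reported load (a hypothetical
    representation of the states collected via the SCS)."""
    addr = min(edge_nodes, key=edge_nodes.get)   # lowest reported load
    return {"request": service_request, "redirect_to": addr}
```

In a real deployment the choice would also weigh the node screening conditions mentioned in claim 10, not load alone.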
Further, central database 14 stores the attribute parameter information of the media data that content publisher node 11 uploads for publication. After the upload of media data to the core storage node completes, a push list can be written and distributed to the intermediate caching nodes and edge caching nodes, which then carry out the push tasks. After each cache node completes its push task, it writes the completion status into the central database (updating the cache-node push list), for use when this cache group or a subordinate cache requests the media. The central database can be managed with a mature database management system (such as MySQL).
Further, intermediate caching nodes 30, as caching service nodes between the edge caching nodes 20 and the core storage node 12, can specifically include an SLB, a push server (PUSH), one or more media cache nodes (CACHE), and a local database (DB). They mainly carry the back-to-source pressure of the edge caching nodes: through the push function of the core storage node, media data is first pushed to the intermediate caching nodes (one or more levels of intermediate cache); when an edge caching node needs to download data back to source, the intermediate caching node returns the data directly, thereby relieving the pressure on the core storage node.
While serving lower-level nodes (lower-layer intermediate caching nodes or edge caching nodes), an intermediate caching node handles media data that must be downloaded back to source (i.e., media data not cached locally) by caching while downloading: for example, it can download and cache in minimum slice units of 16 KB, so that every piece of data downloaded is cached and the network is fully utilized.
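The cache-while-downloading behavior can be sketched as follows. The 16 KB minimum slice unit comes from the description; the `source` callable, the in-memory dict cache, and the function names are illustrative assumptions:

```python
SLICE = 16 * 1024  # minimum slice unit (16 KB), per the description

def fetch_and_cache(source, cache, media_id, size):
    """Download a media file slice by slice, caching every slice as it
    arrives, so that a later request for the same range can be served
    locally without a new back-to-source download.

    `source(offset, length)` is a hypothetical back-to-source reader;
    `cache` maps (media_id, offset) -> bytes."""
    for offset in range(0, size, SLICE):
        length = min(SLICE, size - offset)
        key = (media_id, offset)
        if key not in cache:            # slice already cached: no download
            cache[key] = source(offset, length)
        yield cache[key]
```

Because the generator yields slices as they arrive, the node can forward data to the requester and populate its cache in the same pass, which is the "network fully utilized" point made above.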
Edge caching nodes 20, as the nodes that directly provide content service to clients, can specifically include a service load balancer (SLB, Service Load Balance), a push server (PUSH), one or more media cache nodes (CACHE) with their corresponding streaming media servers (VSS, Streaming Server), and a local database (DB). Scheduled by the SLB, the VSS returns to the client either data cached in the local CACHE or data downloaded back to source.
While serving users, an edge caching node preferably handles media data that must be downloaded back to source (i.e., media data not cached locally) by caching while downloading, for example in minimum slice units of 16 KB, so that every piece of data downloaded is cached; this fully utilizes the network and also improves service efficiency.
Further, while an edge caching node is downloading data back to source, if it receives another service request for the same media data, it does not issue a new back-to-source download request to the upper-layer node; instead, after the media data returned by the upper-layer node arrives, it is returned simultaneously to all users who need it. This merged-request processing further saves distribution bandwidth. Likewise, an intermediate caching node downloading data back to source can adopt the same merged-request processing to save network bandwidth.
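The merged-request processing can be sketched with a small coalescer: the first request for a medium becomes the "leader" and performs the single back-to-source download, while concurrent requests for the same medium wait and share its result. The class and attribute names are assumptions; the patent only describes the behavior:

```python
import threading

class BackToSourceCoalescer:
    """Merge concurrent requests for the same media: only the first
    triggers a back-to-source download; the rest wait and share it."""

    def __init__(self, download):
        self._download = download        # back-to-source download function
        self._lock = threading.Lock()
        self._inflight = {}              # media_id -> (Event, result box)

    def get(self, media_id):
        with self._lock:
            entry = self._inflight.get(media_id)
            if entry is None:            # no download in flight: lead it
                entry = (threading.Event(), {})
                self._inflight[media_id] = entry
                leader = True
            else:
                leader = False
        event, box = entry
        if leader:
            box["data"] = self._download(media_id)  # single upstream request
            with self._lock:
                del self._inflight[media_id]
            event.set()                  # wake all waiting requests
        else:
            event.wait()                 # share the leader's result
        return box["data"]
```

This is the same idea as the "singleflight" pattern used in other CDN software; here it is only a sketch of the bandwidth-saving behavior the paragraph describes.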
The major functions of the service load balancer (SLB, Service Load Balance) include:
(1) registers with the SCS, providing its local service IP and port, and reports the load of this group's streaming media servers (VSS);
(2) receives the registrations of the local CACHE nodes, collects and manages the information they report, and manages this group of CACHE nodes;
(3) provides the streaming-media-server list: when a client makes a request, the SLB can return a list of available streaming-media-server addresses to the client;
(4) manages the media cache policy: for example, according to a consistent hashing algorithm, media can be distributed to the CACHE nodes in this group in 64 MB blocks; when the CACHE nodes in the group change, the mapping is recalculated and reissued;
(5) redirects service requests: when a CACHE download request arrives, it is redirected to a CACHE node in this group;
(6) balances the load of client requests: for a single-connection client request, the SLB can return the least-loaded node in the group to the client.
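Function (4) above, mapping 64 MB media blocks to CACHE nodes with consistent hashing, can be sketched as follows. The 64 MB block size is from the description; the virtual-node count, hash function, and class names are illustrative assumptions:

```python
import bisect
import hashlib

BLOCK = 64 * 1024 * 1024   # 64 MB block size, per the description

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring mapping media blocks to CACHE nodes: when
    the group's nodes change, only the blocks that landed on the
    removed/added node move, so most cached data stays valid."""

    def __init__(self, nodes, vnodes=100):
        # each node contributes `vnodes` points on the ring
        self._ring = sorted(
            (_hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, media_id, offset):
        block = offset // BLOCK                      # 64 MB block number
        i = bisect.bisect(self._keys, _hash(f"{media_id}:{block}"))
        return self._ring[i % len(self._ring)][1]    # next point clockwise
```

The SLB would recompute and reissue such a ring whenever the set of CACHE nodes in the group changes, which is exactly the recalculation step the description mentions.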
The media cache node (CACHE) is mainly used to:
(1) register with the SLB, periodically report the states of its streaming media servers, and receive the consistent-hash strategy and the upper-layer-node and core-storage address information issued by the SLB;
(2) receive streaming-media-server registrations: after a streaming media server signs in to the CACHE, it periodically reports its own load information;
(3) query media from the CIN or SLB: when the CACHE needs to download media, it queries the CIN or SLB for the media download address;
(4) provide media download: when a streaming media server downloads media, the media data is obtained through the CACHE;
(5) provide caching for the streaming media servers;
(6) download data from any reachable storage node or cache node in the CDN platform.
The push server (PUSH) is mainly used to:
(1) periodically check the database deployed on the cache node and determine whether any media need to be pushed;
(2) upon finding media that need pushing, start the download and notify the CACHE in this node to download the media;
(3) upon completing a media push, update the cache-node push list in the database.
The push server PUSH can specifically include the following functional modules:
a list query module, for looking up, at a preset push time cycle (for example, 1 minute), the media push records related to this node in the cache-node push list of the central database;
a download queue maintenance module, for sorting the media push records by push priority and issue date, adding them to the download queue, and notifying the media cache node CACHE to perform the media data download process, wherein the degree of parallel downloading can be selected according to the time period;
a state update module, for checking the integrity of the media data after the download completes and updating the cache-node push list of the central database.
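The three modules above can be sketched together as one push cycle. The record fields and function parameters (`query_push_list`, `download`, `verify`, `mark_done`) are hypothetical stand-ins for the central database and CACHE interactions:

```python
from dataclasses import dataclass

@dataclass
class PushRecord:
    media_id: str
    priority: int       # higher priority pushes first
    issue_date: str     # ISO date; earlier dates first within a priority

def build_download_queue(records):
    """Download-queue maintenance: sort push records by push priority
    (descending) then by issue date (ascending), as described."""
    return sorted(records, key=lambda r: (-r.priority, r.issue_date))

def run_push_cycle(query_push_list, download, verify, mark_done):
    """One push cycle: query the central database's push list, download
    each medium in queue order, check its integrity, and update the
    cache-node push list only for media that verified correctly."""
    for rec in build_download_queue(query_push_list()):
        data = download(rec.media_id)
        if verify(rec.media_id, data):   # integrity check before marking done
            mark_done(rec.media_id)
```

The list query module would invoke `run_push_cycle` once per preset push time cycle (e.g., every minute).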
The local database (DB) uses the same database management system as the central database; it synchronizes the content of the central database and provides data services for the local PUSH.
The streaming media server (VSS, Streaming Server) supports mainstream transport protocols and is mainly used to: (1) register with the local CACHE and report information such as local load and connection count; (2) receive client playback requests and support streaming playback; (3) authenticate users for playback; (4) obtain media data from the CACHE and receive redirection information sent by the CACHE.
Below, the media data push process of the above intermediate caching nodes 30 and edge caching nodes 20 is described with reference to Fig. 4; it specifically includes:
(1) At a preset push time cycle (for example, 1 minute), look up the media push records related to this node in the central database (when an intermediate caching node 30 or edge caching node 20 is provided with a local database kept synchronized with the central database in master-slave fashion, the records can be obtained directly from the local database).
(2) Sort the media push records by push priority and issue date, add them to the download queue, and notify the CACHE nodes to perform the media data download process. When performing the media data download, the degree of parallelism can be selected according to the time period: for example, in network idle periods a highly concurrent download mode can be selected, while in periods with many network users a low-parallelism download mode can be selected.
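The time-period selection in step (2) can be expressed as a tiny policy function. The specific idle hours and concurrency values below are illustrative assumptions, not values from the patent:

```python
def parallel_downloads(hour, idle_hours=range(1, 7), high=8, low=2):
    """Choose the download concurrency for the current hour: a highly
    concurrent mode in network-idle periods (assumed here to be
    01:00-06:59), a low-parallelism mode when many users are online.
    The hour range and the concurrency values 8/2 are illustrative."""
    return high if hour in idle_hours else low
```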
The present application preferably uses the storage organization shown in Fig. 7: with the core storage as the apex, media data is transmitted downward level by level, forming an approximate tree.
When a cache node performing the push process (an intermediate caching node or an edge caching node) has an upper-layer intermediate caching node, it first sends the download request to that upper-layer intermediate caching node; if the upper-layer intermediate caching node holds the media data, the media file is returned directly; otherwise, the request goes back to source and the media data is downloaded from the core storage node.
Because the CDN is used for video distribution, the corresponding media data is large and the edge caching nodes cannot possibly store all of it, so in actual playback the probability of a back-to-source download is very high; with a star-shaped distribution, the pressure on the core storage worsens as the user base grows. The present application stores data in the above hierarchy, which has the following advantages: the pressure on the core storage is reduced; the machine-room bandwidth of the core storage is reduced, saving cost; multi-level backup is achieved, since media data is stored not only in the core storage node but also in the nodes at every level; and the download speed is improved, because edge caching nodes download from upper-layer nodes, which accelerates media downloads and provides users with smoother service.
(3) After the download completes, check the integrity of the media file, then update the cache-node push list in the central database.
Through the above technical means, the present application, on the one hand, lets the core storage node maintain the media state and its download state (such as download exceptions) with reduced management complexity; on the other hand, after media data is actively pushed to the edge nodes, users can download it directly, which lowers the back-to-source frequency and the network pressure on the core storage; in addition, distributing media data in advance can reduce network congestion, in particular avoiding the back-to-source pressure produced when a large number of users download the same media at once.
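The tree-shaped back-to-source resolution described above can be sketched as a recursive walk up the tree, with every level caching the result on the way back down (which is also how the multi-level backup arises). The `CacheNode` class and its fields are illustrative:

```python
class CacheNode:
    """A node in the approximate tree of Fig. 7: an edge or intermediate
    cache with an optional parent; the root stands for core storage."""

    def __init__(self, name, parent=None, store=None):
        self.name = name
        self.parent = parent
        self.store = store if store is not None else {}

    def fetch(self, media_id):
        """Return the media, walking up the tree on a cache miss and
        caching at every level on the way back down."""
        if media_id in self.store:
            return self.store[media_id]
        if self.parent is None:
            raise KeyError(media_id)        # not even in core storage
        data = self.parent.fetch(media_id)  # back to source, level by level
        self.store[media_id] = data         # cache locally for next time
        return data
```

After one fetch through an edge node, the same medium sits in the edge cache, every intermediate cache on the path, and the core, so later requests never reach the core.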
To further address the problems of few core storage nodes and insufficient bandwidth, the present application also provides a scheme for publishing media data through the edge caching nodes 20, which specifically includes:
(1) Media publishing staff or other personnel upload the media file through an edge caching node;
(2) The edge caching node scans the media file to obtain its attribute parameter information and generates the cache-node push list according to the preset node push strategy;
(3) The edge caching node uploads the media file directly to the core storage node (or level by level through the upper-layer intermediate caching nodes), while saving the attribute parameters of the media file and its cache-node push list to the central database.
Through the above means, the present application not only relieves the pressure and bandwidth limits of the core storage node and saves cost (the number of core storage nodes can be kept as small as possible), but also enables highly concurrent media publishing and accelerates publishing: media uploaded to any edge caching node can be downloaded across the whole network. Moreover, on the basis of these technical means, a network-disk service can be built on the CDN system, providing users with cloud storage.
It should be noted that the above system embodiments are preferred embodiments, and the units and modules involved are not necessarily essential to the present application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts the embodiments may refer to one another.
The content distribution network provided by the present application has been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and the description of the above embodiments is intended only to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, changes can be made to the specific embodiments and the scope of application according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A content distribution network, characterized by comprising, connected through a network, a content publisher node, a core storage node, a central administration node, a central database, and two or more edge caching nodes; the content publisher node is connected with the core storage node and the central database respectively; the edge caching nodes are connected with the central administration node, the core storage node, and the central database respectively; and the central administration node is connected with the core storage node; wherein:
the content publisher node is used to receive the media data to be published, scan the media data to obtain its attribute parameter information, determine the cache-node push list of the media data according to a preset node push strategy, send both to the central database, and send the media data to the core storage node;
the core storage node is used to receive and store the media data uploaded by the content publisher node;
the central administration node is used to receive the service requests sent by users through clients and, according to the load reported by each edge caching node and a preset load balancing strategy, redirect each service request to a suitable edge caching node; and to receive the back-to-source download requests of the edge caching nodes and provide them with the download address of the core storage node;
the central database is used to receive and store the attribute parameter information of the media data received by the content publisher node and its cache-node push list;
the edge caching nodes are used to download the media data from the core storage node according to the cache-node push list stored in the central database, cache it locally, and update the cache-node push list of the central database; and, according to the client service requests redirected by the central administration node, return to the client either locally cached media data or media data downloaded back to source.
2. The content distribution network according to claim 1, characterized by further comprising one or more layers of intermediate caching nodes arranged between the core storage node and the edge caching nodes; the intermediate caching nodes, together with the core storage node and the edge caching nodes, form a tree-like storage structure with the core storage node as the root node and the edge caching nodes as leaf nodes;
the intermediate caching nodes are used to download the media data from the core storage node according to the cache-node push list stored in the central database, cache it according to a preset media cache policy, and update the cache-node push list of the central database; and, when an edge caching node or a lower-layer intermediate caching node needs to download media data back to source, return the media data directly from the local cache or download it back to source from an upper-layer node.
3. The content distribution network according to claim 2, characterized in that the edge caching nodes are further used to publish media data; when media data is published through an edge caching node, the edge caching node uploads the media data to be published to the core storage node level by level through its upper-layer intermediate caching nodes, and saves the attribute parameters of the media data and its cache-node push list to the central database.
4. The content distribution network according to claim 1, characterized in that the edge caching nodes directly provide content service for clients and specifically comprise a service load balancer, a push server, a local database, and one or more media cache nodes with their corresponding streaming media servers; wherein:
the service load balancer is used to obtain the state of each streaming media server in the edge caching node and report it to the central administration node, and, according to the states of the streaming media servers, assign the client service requests redirected by the central administration node to suitable streaming media servers;
the push server is used to download the media data from the core storage node according to the cache-node push list stored in the central database, cache it to the corresponding media cache nodes according to a preset media cache policy, and update the cache-node push list of the central database;
the streaming media server is used to return to the client the media data cached in the media cache node or the media data downloaded back to source;
the local database uses the same database management system as the central database, synchronizes the content of the central database, and provides data services for the push server.
5. The content distribution network according to claim 4, characterized in that the core storage node is provided with a content index node and one or more content storage nodes; the content index node is used to manage and allocate the content storage nodes: upon receiving a media data storage request sent by the content publisher node, the content index node allocates a content storage node for the media data according to the current storage states of all the content storage nodes; after a content storage node receives and stores the media data, it reports the storage change to the content index node.
6. The content distribution network according to claim 5, characterized in that the central administration node specifically comprises a state collection server and a global service load balancer connected through a network, wherein:
the state collection server is connected with the content index node of the core storage node and the service load balancers of the edge caching nodes, for obtaining the media storage situation of the core storage node and the state of each edge caching node; and for receiving the back-to-source download requests of the edge caching nodes and providing download addresses for them according to the storage states of the content storage nodes reported by the content index node;
the global service load balancer is connected with the Internet domain name servers and deployed in a distributed, dynamically scalable manner, for receiving the service requests of users and, according to the states of the edge caching nodes obtained by the state collection server and a preset load balancing strategy, redirecting each service request to a suitable edge caching node.
7. The content distribution network according to claim 4, characterized in that the preset media cache policy comprises:
storing the media data in slices of a preset block size;
calculating, according to the number of local media cache nodes and the numbers of the data blocks, the cache mapping between each data block of the media data and the local media cache nodes using a consistent hashing algorithm;
judging whether the number of available local media cache nodes has changed, and if so, recalculating the cache mapping using the consistent hashing algorithm.
8. The content distribution network according to claim 4, characterized in that the push server specifically comprises:
a list query module, for looking up, at a preset push time cycle, the media push records related to this node in the cache-node push list of the central database;
a download queue maintenance module, for sorting the media push records by push priority and issue date, adding them to the download queue, and notifying the media cache node to perform the media data download process, wherein the degree of parallel downloading is selected according to the time period when the media data is downloaded;
a download data monitoring module, for checking the integrity of the media data after the download completes and updating the cache-node push list of the central database.
9. The content distribution network according to claim 5, characterized in that the content publisher node specifically comprises a media verification unit, a media scanning unit, and a media storage unit, wherein:
the media verification unit is used to receive the media data to be published and check its integrity;
the media scanning unit is used to scan the media data, obtain its attribute parameter information, determine the cache-node push list of the media data according to the preset node push strategy, and save the attribute parameter information and the cache-node push list to the central database;
the media storage unit is used to send a media storage request to the content index node of the core storage node, connect to the content storage node returned by the content index node, and upload the media data.
10. The content distribution network according to claim 6, characterized in that:
the content index node is further used to check the backup status of each media data item at a preset backup time cycle and, when a media data item needs backup, designate one content storage node to back up the media data of another content storage node;
and/or,
the content distribution network further comprises a system configuration node, connected through a network with the content publisher node and the central administration node respectively, for configuring and storing a unified number, IP address, geographical position, and operator for each node, configuring and managing the node push strategy for the content publisher node, and configuring and managing the load balancing strategy for the central administration node, the load balancing strategy including node screening conditions, their combination modes, and the screening order.
CN201610215447.XA 2016-04-08 2016-04-08 Content distributing network Pending CN107277561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610215447.XA CN107277561A (en) 2016-04-08 2016-04-08 Content distributing network


Publications (1)

Publication Number Publication Date
CN107277561A true CN107277561A (en) 2017-10-20

Family

ID=60052059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610215447.XA Pending CN107277561A (en) 2016-04-08 2016-04-08 Content distributing network

Country Status (1)

Country Link
CN (1) CN107277561A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108616600A (en) * 2018-05-11 2018-10-02 深圳市网心科技有限公司 Resource regulating method, client server, node device, network system and medium
CN109040337A (en) * 2018-07-19 2018-12-18 网宿科技股份有限公司 A kind of information query method, Edge Server and information query system
CN109151512A (en) * 2018-09-12 2019-01-04 中国联合网络通信集团有限公司 The method and device of content is obtained in CDN network
CN109194718A (en) * 2018-08-09 2019-01-11 玄章技术有限公司 A kind of block chain network and its method for scheduling task
WO2019114129A1 (en) * 2017-12-13 2019-06-20 平安科技(深圳)有限公司 Scheduling device and method for push server and computer-readable storage medium
CN109981461A (en) * 2017-12-27 2019-07-05 华为技术有限公司 A kind of data transmission method, apparatus and system
CN110536179A (en) * 2019-06-28 2019-12-03 三星电子(中国)研发中心 A kind of content distribution system and method
CN110769266A (en) * 2019-10-22 2020-02-07 山东云缦智能科技有限公司 Method for realizing high availability and high concurrency of CMAF low-delay live broadcast
CN111064713A (en) * 2019-02-15 2020-04-24 腾讯科技(深圳)有限公司 Node control method and related device in distributed system
CN111327922A (en) * 2020-03-18 2020-06-23 湖南快乐阳光互动娱乐传媒有限公司 Data updating method, system and medium after video content updating
CN112019604A (en) * 2020-08-13 2020-12-01 上海哔哩哔哩科技有限公司 Edge data transmission method and system
CN112235201A (en) * 2020-08-31 2021-01-15 贵阳忆联网络有限公司 Method and system for realizing edge deployment
CN112241413A (en) * 2019-07-18 2021-01-19 腾讯科技(深圳)有限公司 Pre-push content management method and device and computer equipment
CN112513830A (en) * 2019-07-15 2021-03-16 华为技术有限公司 Back-source method and related device in content distribution network
CN112688980A (en) * 2019-10-18 2021-04-20 上海哔哩哔哩科技有限公司 Resource distribution method and device, and computer equipment
CN112738149A (en) * 2019-10-29 2021-04-30 贵州白山云科技股份有限公司 Data transmission system and method
CN112751885A (en) * 2019-10-29 2021-05-04 贵州白山云科技股份有限公司 Data transmission system and method
CN112866310A (en) * 2019-11-12 2021-05-28 北京金山云网络技术有限公司 CDN back-to-source verification method and verification server, and CDN cluster
CN112929319A (en) * 2019-12-05 2021-06-08 中国电信股份有限公司 Content service method, system, apparatus and computer-readable storage medium
CN112995251A (en) * 2019-12-13 2021-06-18 北京金山云网络技术有限公司 Source returning method and device, electronic equipment and storage medium
CN114390053A (en) * 2022-01-12 2022-04-22 中国联合网络通信集团有限公司 Service content scheduling method, device, equipment and storage medium
CN114979271A (en) * 2022-05-11 2022-08-30 浪潮云信息技术股份公司 CDN cache layered scheduling method based on edge cloud computing
CN115022177A (en) * 2022-06-08 2022-09-06 阿里巴巴(中国)有限公司 CDN system, back-to-source method, CDN node and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101005372A (en) * 2006-01-19 2007-07-25 思华科技(上海)有限公司 Cluster cache service system and its realizing method
CN101257396A (en) * 2007-03-02 2008-09-03 中国科学院声学研究所 System for distributing multi-field content based on P2P technique as well as corresponding method
CN101588468A (en) * 2008-05-20 2009-11-25 华为技术有限公司 A kind of media playing method, device and system based on P2P
CN101594292A (en) * 2008-05-30 2009-12-02 中兴通讯股份有限公司 Content delivery method, service redirection method and system, node device
CN102368776A (en) * 2011-11-25 2012-03-07 中国科学技术大学 Optimization function module of node list in content distribution/delivery network (CDN)
CN103685551A (en) * 2013-12-25 2014-03-26 乐视网信息技术(北京)股份有限公司 Method and device for updating CDN (content delivery network) cache files
CN104185036A (en) * 2014-09-10 2014-12-03 北京奇艺世纪科技有限公司 Video file source returning method and device
CN104618506A (en) * 2015-02-24 2015-05-13 庄奇东 Crowd-sourced content delivery network system, method and device



Similar Documents

Publication Publication Date Title
CN107277561A (en) Content distributing network
US9015416B2 (en) Efficient cache validation and content retrieval in a content delivery network
US10432708B2 (en) Content delivery network
US10785341B2 (en) Processing and caching in an information-centric network
CN101257396B (en) System for distributing multi-field content based on P2P technique as well as corresponding method
US7272613B2 (en) Method and system for managing distributed content and related metadata
CA2859163C (en) Content delivery network
US20100312861A1 (en) Method, network, and node for distributing electronic content in a content distribution network
US9176779B2 (en) Data access in distributed systems
CN101540775B (en) Method and device for distributing contents and network system for distributing contents
US8099402B2 (en) Distributed data storage and access systems
EP1892921A2 (en) Method and system for managing distributed content and related metadata
US8954976B2 (en) Data storage in distributed resources of a network based on provisioning attributes
US9432452B2 (en) Systems and methods for dynamic networked peer-to-peer content distribution
CN104320410A (en) All-service CDN system based on HTTP and working method thereof
CN1941736A (en) Content distribution system and method for redirecting user requests
CN101005369A (en) Distributed content delivery network and method for distributed content delivery and upload
CN102035815B (en) Data acquisition method, access node and system
JP2013525931A (en) Dynamic binding used for content delivery
WO2010060106A1 (en) Adaptive network content delivery system
CN104714965A (en) Static resource deduplication method, and static resource management method and device
CN107277092A (en) Content distributing network and its data download method
US10324980B2 (en) Method and system for caching of video files
CN109618003B (en) Server planning method, server and storage medium
CN102868542B (en) Method and system for controlling quality of service in a service delivery network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20200602