CN110351290A - Data cache method and system - Google Patents
- Publication number: CN110351290A (application CN201910647297.3A)
- Authority
- CN
- China
- Prior art keywords
- node
- cache
- edge
- data
- on-demand data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY » H04—ELECTRIC COMMUNICATION TECHNIQUE » H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/1013—Network architectures, gateways, control or user entities (under H04L65/00, real-time applications; H04L65/10, architectures or entities)
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for multicast or broadcast (under H04L65/60, network streaming; H04L65/61, one-way streaming services)
- H04L67/1097—Protocols in which an application is distributed across nodes in the network, for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS] (under H04L67/00, network services or applications; H04L67/10, distributed applications)
- H04L67/52—Network services specially adapted for the location of the user terminal (under H04L67/50, network services)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Transfer Between Computers (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a data caching method and system. The method comprises: dividing all edge nodes in a CDN system into one or more node clusters, each edge node holding a content directory; when a target edge node needs to obtain on-demand data from the central node, calculating the on-demand rate of that data, the target edge node being the edge node that provides access service for the user; judging whether the on-demand rate is greater than a first cache threshold; if so, obtaining the on-demand data from the central node while sending a pre-cache message for the data to the other edge nodes of the node cluster where the target edge node is located, so that those edge nodes update their respective content directories; and, after caching of the on-demand data is complete, sending a cache-complete message for the data to the other edge nodes, so that they update their content directories again. The invention relieves the on-demand load on the central node.
Description
Technical field
The present invention relates to the field of communications, and in particular to a data caching method and system.
Background technique
A content delivery network (Content Delivery Network, hereinafter CDN) uses a distributed architecture: one central node and multiple edge nodes, where each edge node serves the users nearest to it. This both relieves the load on the central node and saves network transmission bandwidth. When the edge node nearest a user does not hold the video content the user needs, that edge node must cache the central node's video content in advance in order to serve the user; the content caching method and its effectiveness are therefore particularly important to a CDN system.
At present, CDN content caching is implemented as follows: each edge node caches video content independently; when the on-demand popularity of a video meets the caching requirement and the edge node has not yet cached it, the edge node requests the video from the central node and downloads it into its cache, thereby caching the central node's content.
In this existing CDN content caching method, edge nodes directly cache video content of every popularity grade in full. Low-popularity content is therefore duplicated across the storage of every edge node, wasting a large amount of storage space.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art, and proposes a data caching method and system.
To achieve the object of the present invention, a data caching method is provided, the method comprising:
dividing all edge nodes in a CDN system into one or more node clusters according to geographic region, at least one node cluster comprising multiple edge nodes, each edge node holding a content directory, the content directory recording the data-caching information of every edge node in the current node cluster;
when a target edge node needs to obtain on-demand data from the central node, calculating the on-demand rate of the on-demand data, the target edge node being the edge node that provides access service for the user;
judging whether the on-demand rate is greater than a first cache threshold;
if so, obtaining the on-demand data from the central node while sending a pre-cache message for the on-demand data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories;
after caching of the on-demand data is complete, sending a cache-complete message for the on-demand data to the other edge nodes, so that the other edge nodes update their respective content directories.
Preferably, when the on-demand rate is less than or equal to the first cache threshold, the method further comprises:
judging whether the on-demand rate is greater than or equal to a second cache threshold, the second cache threshold being less than the first cache threshold;
if so, the target edge node querying its own content directory to detect whether one or more of the edge nodes in the node cluster where the target edge node is located have already cached, or are pre-caching, the on-demand data;
when no edge node has cached or is pre-caching the on-demand data, obtaining the on-demand data from the central node while sending the pre-cache message to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories;
after caching of the on-demand data is complete, sending the cache-complete message to the other edge nodes, so that the other edge nodes update their respective content directories.
Preferably, when it is detected that one or more edge nodes in the node cluster where the target edge node is located have cached, or are caching, the on-demand data, data caching is stopped.
Preferably, the content directory includes: the identifier of each edge node in the cluster, cached on-demand data identifiers, a table mapping edge nodes to cached on-demand data, pre-cached on-demand data identifiers, and a table mapping edge nodes to pre-cached on-demand data.
Preferably, calculating the on-demand rate of the on-demand data comprises:
counting the number of requests for the on-demand data per unit time;
when the target edge node obtains the on-demand data from the central node for the first time, the request count for the on-demand data is obtained from the central node.
A data caching system is also provided, the system comprising: a cluster division module, a computing module, a first judgment module and a broadcast module.
The cluster division module divides all edge nodes in a CDN system into one or more node clusters according to geographic region, at least one node cluster comprising multiple edge nodes, each edge node holding a content directory, the content directory recording the data-caching information of every edge node in the current node cluster.
The computing module calculates, when a target edge node needs to obtain on-demand data from the central node, the on-demand rate of the on-demand data, the target edge node being the edge node that provides access service for the user.
The first judgment module judges whether the on-demand rate is greater than the first cache threshold, and triggers the broadcast module when it is.
The broadcast module obtains the on-demand data from the central node while sending a pre-cache message for the on-demand data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories;
after caching of the on-demand data is complete, it sends a cache-complete message for the on-demand data to the other edge nodes, so that the other edge nodes update their respective content directories.
Preferably, the system further includes: a second judgment module and a query module.
The second judgment module judges, when the on-demand rate is less than or equal to the first cache threshold, whether the on-demand rate is greater than or equal to a second cache threshold, the second cache threshold being less than the first cache threshold; if so, it sends the judgment result to the query module.
The query module makes the target edge node query its own content directory, to detect whether one or more of the edge nodes in the node cluster where the target edge node is located have already cached, or are pre-caching, the on-demand data; when no edge node has cached or is pre-caching the on-demand data, it triggers the broadcast module.
Preferably, the system further includes: a cache-disabling module.
The cache-disabling module stops data caching when the query module finds that one or more edge nodes in the node cluster where the target edge node is located have cached, or are caching, the on-demand data.
Preferably, the content directory includes: the identifier of each edge node in the cluster, cached on-demand data identifiers, a table mapping edge nodes to cached on-demand data, pre-cached on-demand data identifiers, and a table mapping edge nodes to pre-cached on-demand data.
Preferably, the on-demand rate is the number of requests for the on-demand data per unit time.
The invention has the following advantages. In the data caching method and system provided by the invention, all edge nodes in a CDN system are divided into one or more node clusters; when a target node needs to obtain on-demand data from the central node, the on-demand rate of the data is calculated, and when the rate exceeds the first cache threshold the data is obtained from the central node while a pre-cache message is sent to the other edge nodes of the target edge node's cluster; after caching is complete, a cache-complete message is sent to the other edge nodes so that they update their respective content directories. By grouping edge nodes into clusters and distributing content caching across each cluster's edge nodes according to the differing on-demand rates of the data, the invention caches more on-demand content without increasing edge-node storage, thereby relieving the on-demand load on the central node.
Detailed description of the invention
Fig. 1 is a flow chart of a data caching method provided by an embodiment of the present invention;
Fig. 2 is another flow chart of the data caching method provided by an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of a data caching system provided by an embodiment of the present invention;
Fig. 4 is another structural schematic diagram of the data caching system provided by an embodiment of the present invention.
Specific embodiment
To help those skilled in the art better understand the technical solution of the present invention, the data caching method and system provided by the invention are described in detail below with reference to the accompanying drawings.
Embodiment one
Fig. 1 shows a flow chart of a data caching method provided by an embodiment of the present invention. In this embodiment, the data caching method comprises the following steps.
Step 100: start.
Step 101: divide all edge nodes in the CDN system into one or more node clusters according to geographic region; at least one node cluster comprises multiple edge nodes, and each edge node holds a content directory recording the data-caching information of every edge node in the current node cluster.
Specifically, the content directory includes: the identifier of each edge node in the cluster, cached on-demand data identifiers, a table mapping edge nodes to cached on-demand data, pre-cached on-demand data identifiers, and a table mapping edge nodes to pre-cached on-demand data.
Here, a cached on-demand data identifier is a content or product identification number that identifies each piece of cached on-demand data; a pre-cached on-demand data identifier is a content or product identification number that identifies each piece of on-demand data currently being pre-cached; an edge node identifier is an identifier that distinguishes each edge node; the table mapping edge nodes to cached on-demand data records the correspondence between each edge node and cached on-demand data identifiers; and the table mapping edge nodes to pre-cached on-demand data records the correspondence between each edge node and pre-cached on-demand data identifiers.
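The content directory described above can be modelled as a small data structure. The sketch below is illustrative only: the class and method names are assumptions, while the recorded information (which nodes have cached, or are pre-caching, which on-demand data) comes from the text.

```python
# Hedged sketch of the per-node content directory. The patent specifies only
# the kinds of information recorded; all names here are assumptions.

class ContentDirectory:
    """Tracks which edge nodes in the cluster hold, or are pre-caching, which on-demand data."""

    def __init__(self):
        self.cached = {}       # content_id -> set of node ids that have cached it
        self.pre_caching = {}  # content_id -> set of node ids currently pre-caching it

    def mark_pre_cache(self, node_id, content_id):
        # Handles a received pre-cache message.
        self.pre_caching.setdefault(content_id, set()).add(node_id)

    def mark_cached(self, node_id, content_id):
        # Handles a received cache-complete message: the node moves from
        # "pre-caching" to "cached".
        self.pre_caching.get(content_id, set()).discard(node_id)
        self.cached.setdefault(content_id, set()).add(node_id)

    def held_anywhere(self, content_id):
        # True if any node in the cluster has cached, or is pre-caching, the data.
        return bool(self.cached.get(content_id) or self.pre_caching.get(content_id))
```

A directory like this is what the target edge node later consults (Embodiment two) before fetching low-rate content from the central node.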
Step 102: when a target edge node needs to obtain on-demand data from the central node, calculate the on-demand rate of the data; the target edge node is the edge node that provides access service for the user.
Specifically, calculating the on-demand rate of the on-demand data includes counting the number of requests for the data per unit time; when the target edge node obtains the on-demand data from the central node for the first time, the request count is obtained from the central node.
Step 103: judge whether the on-demand rate is greater than the first cache threshold; if so, execute step 104; otherwise, execute step 106.
Step 104: obtain the on-demand data from the central node while sending a pre-cache message for the data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories.
Step 105: after caching of the on-demand data is complete, send a cache-complete message for the data to the other edge nodes, so that the other edge nodes update their respective content directories.
Step 106: end.
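Steps 102-106 above can be condensed into a short decision routine. This is a sketch under stated assumptions: the function and parameter names are illustrative, and the fetch and broadcast operations are abstracted as callables.

```python
# Minimal sketch of steps 103-106: when the on-demand rate exceeds the first
# cache threshold, fetch from the central node and notify the cluster before
# and after caching. All names are assumptions.

def cache_if_popular(content_id, rate, p1, fetch_from_central, broadcast):
    """Return True if this node cached the content."""
    if rate <= p1:
        return False                           # step 103 fails -> step 106 (end)
    broadcast(("PRE_CACHE", content_id))       # step 104: pre-cache message first
    fetch_from_central(content_id)             # step 104: download from central node
    broadcast(("CACHE_COMPLETE", content_id))  # step 105: cache-complete message
    return True
```

Broadcasting the pre-cache message before the download completes is what lets peer directories reflect in-flight caching, which Embodiment two relies on.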
In the data caching method provided by this embodiment, all edge nodes in the CDN system are divided into one or more node clusters; when the target node needs to obtain on-demand data from the central node, the on-demand rate of the data is calculated, and when it exceeds the first cache threshold the data is obtained from the central node while a pre-cache message is sent to the other edge nodes of the target edge node's cluster; after caching is complete, a cache-complete message is sent so that the other edge nodes update their respective content directories. By grouping edge nodes into clusters and distributing content caching across each cluster according to the differing on-demand rates of the data, more on-demand content can be cached without increasing edge-node storage, relieving the on-demand load on the central node.
Embodiment two
Fig. 2 shows another flow chart of the data caching method provided by an embodiment of the present invention. In this embodiment, the data caching method comprises the following steps.
Step 200: start.
Step 201: divide all edge nodes in the CDN system into one or more node clusters according to geographic region; at least one node cluster comprises multiple edge nodes, and each edge node holds a content directory recording the data-caching information of every edge node in the current node cluster.
Step 202: when a target edge node needs to obtain on-demand data from the central node, calculate the on-demand rate of the data; the target edge node is the edge node that provides access service for the user.
Step 203: judge whether the on-demand rate is greater than the first cache threshold; if so, execute step 204; otherwise, execute step 207.
Step 204: obtain the on-demand data from the central node while sending a pre-cache message for the data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories.
Step 205: after caching of the on-demand data is complete, send a cache-complete message for the data to the other edge nodes, so that the other edge nodes update their respective content directories.
Step 206: end.
Step 207: judge whether the on-demand rate is greater than or equal to the second cache threshold, the second cache threshold being less than the first cache threshold; if so, execute step 208; otherwise, execute step 206.
Step 208: the target edge node queries its own content directory.
Step 209: detect whether one or more of the edge nodes in the node cluster where the target edge node is located have already cached, or are pre-caching, the on-demand data; if not, execute step 204; if so, execute step 210.
Step 210: stop data caching and return to step 202.
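The two-threshold decision in steps 203-210 reduces to three outcomes per on-demand item. The sketch below is illustrative: `directory_holds` stands in for the content-directory query of steps 208-209, and all names are assumptions.

```python
# Hedged sketch of the decision in steps 203-210 for one on-demand item.

def decide(rate, p1, p2, directory_holds):
    """Return the action the target edge node takes: 'cache', 'skip' or 'ignore'."""
    if rate > p1:
        return "cache"    # hot content: cache unconditionally (steps 203-205)
    if rate >= p2:
        # Lukewarm content: cache only if no peer in the cluster has cached
        # it or is pre-caching it (steps 207-210).
        return "skip" if directory_holds else "cache"
    return "ignore"       # cold content: below both thresholds, do not cache
```

The "skip" branch is what prevents low-popularity content from being duplicated on every edge node, which the Background section identifies as the waste in the prior art.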
In the data caching method provided by this embodiment, when the target edge node needs to obtain on-demand data from the central node, the on-demand rate is calculated for the data. When the rate exceeds the first cache threshold, the data is obtained from the central node and a pre-cache message is sent to the other edge nodes of the target edge node's cluster, so that they update their respective content directories; after caching is complete, a cache-complete message is sent so that they update their directories again. When the rate is below the first cache threshold but at or above the second, the target edge node queries its own content directory to detect whether any edge node in its cluster has cached, or is pre-caching, the data; only when no edge node has done so does it obtain the data from the central node, again notifying the other edge nodes to update their content directories. Thus, for data with a high on-demand rate the target edge node fetches directly from the central node, ensuring the data is pushed to users quickly; for data with a lower on-demand rate, it fetches from the central node only after confirming, via its own content directory, that no edge node in the cluster has cached or pre-cached the data. Without increasing edge-node storage, this caches more video content and relieves the on-demand load on the central node.
Embodiment three
Corresponding to the above data caching method, the present invention further provides a data caching system. As shown in Fig. 3, the data caching system includes: a cluster division module, a computing module, a first judgment module and a broadcast module.
The cluster division module divides all edge nodes in the CDN system into one or more node clusters according to geographic region; at least one node cluster comprises multiple edge nodes, and each edge node holds a content directory recording the data-caching information of every edge node in the current node cluster.
Specifically, the content directory includes: the identifier of each edge node in the cluster, cached on-demand data identifiers, a table mapping edge nodes to cached on-demand data, pre-cached on-demand data identifiers, and a table mapping edge nodes to pre-cached on-demand data.
The computing module calculates, when the target edge node needs to obtain on-demand data from the central node, the on-demand rate of the data; the target edge node is the edge node that provides access service for the user.
Specifically, the on-demand rate is the number of requests for the on-demand data per unit time. To calculate it, the computing module can use the request count the target edge node itself records for the data, or obtain the count from the central node.
The first judgment module judges whether the on-demand rate is greater than the first cache threshold, and triggers the broadcast module when it is.
The broadcast module obtains the on-demand data from the central node while sending a pre-cache message for the data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories; after caching of the on-demand data is complete, it sends a cache-complete message for the data to the other edge nodes, so that they update their directories again.
In the data caching system provided by this embodiment, the cluster division module divides all edge nodes in the CDN system into one or more node clusters; when the target node needs to obtain on-demand data from the central node, the computing module calculates the data's on-demand rate, and the first judgment module triggers the broadcast module when the rate exceeds the first cache threshold. The broadcast module obtains the data from the central node while sending a pre-cache message to the other edge nodes of the target edge node's cluster and, after caching is complete, sends a cache-complete message so that the other edge nodes update their respective content directories. By grouping edge nodes into clusters and distributing content caching across each cluster according to the differing on-demand rates of the data, more on-demand content can be cached without increasing edge-node storage, relieving the on-demand load on the central node.
In another embodiment of the invention, as shown in Fig. 4, the data caching system further includes: a second judgment module and a query module.
The second judgment module judges, when the on-demand rate is less than or equal to the first cache threshold, whether the on-demand rate is greater than or equal to the second cache threshold, the second cache threshold being less than the first; if so, it sends the judgment result to the query module.
The query module makes the target edge node query its own content directory, to detect whether one or more of the edge nodes in its node cluster have already cached, or are pre-caching, the on-demand data; when no edge node has cached or is pre-caching the data, it triggers the broadcast module.
With the second judgment module and the query module, the data caching system provided by this embodiment makes the target edge node, when the on-demand rate of the data is low, first query its own content directory to confirm that no edge node in the current node cluster has cached, or is pre-caching, the data. This ensures that more video content can be cached without increasing edge-node storage, relieving the on-demand load on the central node.
In yet another embodiment of the present invention, the system further includes: a cache-disabling module.
The cache-disabling module stops data caching when the query module finds that one or more edge nodes in the node cluster where the target edge node is located have cached, or are caching, the on-demand data.
In the present invention, the on-demand data may be a file, video or audio. The process of caching video content with the data caching method and system provided by the invention is illustrated below with reference to Fig. 4:
1. When the target edge node needs to cache high-demand video content from the central node, it judges from the video's on-demand rate p whether this node should cache it. If p exceeds the first cache threshold P1, the edge node requests the video content from the central node for caching and, at the same time, broadcasts a pre-cache message for the video to the other edge nodes in the cluster.
2. When the target edge node finishes caching, it broadcasts a cache-complete message for the video to the other edge nodes in the cluster; on receiving this message, the other edge nodes write the update into their respective content directories.
3. If P1 ≥ p ≥ P2 (the second cache threshold), the target edge node first queries its own content directory. If no other edge node in the cluster has cached the video and no pre-cache message for it has been received from another node, this edge node broadcasts a pre-cache message for the video to the other edge nodes in the cluster and simultaneously begins requesting and caching the video from the central node.
4. When caching completes, the edge node broadcasts a cache-complete message for the video to the other nodes in the cluster; on receiving it, the other edge nodes write the update into their respective content directories.
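The four-step workflow above can be simulated end to end with two peer nodes exchanging the broadcast messages. This is a sketch under stated assumptions: the `EdgeNode` class, message names, and directory representation are all illustrative.

```python
# End-to-end sketch of the workflow: one edge node caches a hot video,
# broadcasts PRE_CACHE then CACHE_COMPLETE, and each peer records the
# update in its own content directory. All names are assumptions.

class EdgeNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.directory = {}   # (peer_id, content_id) -> "pre-caching" or "cached"
        self.peers = []

    def broadcast(self, kind, content_id):
        for peer in self.peers:
            peer.on_message(self.node_id, kind, content_id)

    def on_message(self, sender, kind, content_id):
        # Steps 2 and 4: write the update into this node's content directory.
        state = "pre-caching" if kind == "PRE_CACHE" else "cached"
        self.directory[(sender, content_id)] = state

    def cache_hot_content(self, content_id):
        self.broadcast("PRE_CACHE", content_id)       # step 1
        # ... the download from the central node would happen here ...
        self.broadcast("CACHE_COMPLETE", content_id)  # step 2
```

After `cache_hot_content`, every peer's directory shows the video as cached on the target node, so a later low-rate request on any peer would take the "skip" branch instead of re-fetching.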
It is to be understood that the above embodiments are merely exemplary implementations adopted to illustrate the principle of the present invention; the invention is not limited thereto. Those skilled in the art can make various variations and improvements without departing from the spirit and essence of the invention, and such variations and improvements are also considered to fall within the protection scope of the invention.
Claims (10)
1. A data caching method, characterized in that the method comprises:
dividing all edge nodes in a CDN system into one or more node clusters according to geographic region, at least one node cluster comprising multiple edge nodes, each edge node holding a content directory, the content directory recording the data-caching information of every edge node in the current node cluster;
when a target edge node needs to obtain on-demand data from a central node, calculating the on-demand rate of the on-demand data, the target edge node being the edge node that provides access service for the user;
judging whether the on-demand rate is greater than a first cache threshold;
if so, obtaining the on-demand data from the central node while sending a pre-cache message for the on-demand data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories;
after caching of the on-demand data is complete, sending a cache-complete message for the on-demand data to the other edge nodes, so that the other edge nodes update their respective content directories.
2. The data caching method according to claim 1, characterized in that, when the on-demand rate is less than or equal to the first cache threshold, the method further comprises:
judging whether the on-demand rate is greater than or equal to a second cache threshold, the second cache threshold being less than the first cache threshold;
if so, the target edge node querying its own content directory to detect whether one or more of the edge nodes in the node cluster where the target edge node is located have already cached, or are pre-caching, the on-demand data;
when no edge node has cached or is pre-caching the on-demand data, obtaining the on-demand data from the central node while sending the pre-cache message to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories;
after caching of the on-demand data is complete, sending the cache-complete message to the other edge nodes, so that the other edge nodes update their respective content directories.
3. The data caching method according to claim 2, characterized in that, when it is detected that one or more edge nodes in the node cluster where the target edge node is located have cached, or are caching, the on-demand data, data caching is stopped.
4. The data caching method according to any one of claims 1-3, characterized in that the content directory includes: the identifier of each edge node in the cluster, cached on-demand data identifiers, a table mapping edge nodes to cached on-demand data, pre-cached on-demand data identifiers, and a table mapping edge nodes to pre-cached on-demand data.
5. The data caching method according to claim 4, wherein calculating the on-demand rate of the on-demand data comprises:
counting the number of requests for the on-demand data per unit time;
wherein, when the target edge node obtains the on-demand data from the central node for the first time, the number of requests for the on-demand data is obtained from the central node.
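One plausible realization of claim 5 is a sliding-window counter, seeded from the central node's count on the first fetch. The window mechanics and the seed-expiry rule are assumptions of this sketch; the patent only specifies "requests per unit time".

```python
from collections import deque


class OnDemandRateCounter:
    """Requests per unit time for one piece of on-demand data (claim 5 sketch)."""

    def __init__(self, window_seconds=60, seed_count=0):
        # seed_count models the count handed over by the central node on the
        # first fetch; it is dropped once local history spans a full window.
        self.window = window_seconds
        self.times = deque()   # timestamps of locally observed requests
        self.seed = seed_count

    def record_request(self, now):
        self.times.append(now)

    def rate(self, now):
        # Evict requests older than one window; the first eviction means local
        # history now covers the window, so the seeded count is discarded.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
            self.seed = 0
        return len(self.times) + self.seed
```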
6. A data caching system, characterized in that the system comprises: a cluster division module, a calculation module, a first judgment module, and a broadcast module;
the cluster division module is configured to divide all edge nodes in a CDN system into one or more node clusters by geographic region, wherein at least one node cluster comprises a plurality of edge nodes, each edge node has a content directory, and the content directory records the data caching information of each edge node in the current node cluster;
the calculation module is configured to calculate an on-demand rate of on-demand data when a target edge node needs to obtain the on-demand data from a central node, the target edge node being the edge node that provides access service to the user;
the first judgment module is configured to determine whether the on-demand rate is greater than a first cache threshold, and to trigger the broadcast module when it is greater than the first cache threshold;
the broadcast module is configured to obtain the on-demand data from the central node while sending a pre-cache message for the on-demand data to the other edge nodes of the node cluster where the target edge node is located, so that the other edge nodes update their respective content directories; and, after the on-demand data has been cached, to send a cache-completion message for the on-demand data to the other edge nodes, so that the other edge nodes update their respective content directories.
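The cluster division module of claim 6 groups edge nodes by geographic region. A minimal sketch, assuming each node carries a region label (a real deployment might instead use network topology or coordinates):

```python
from collections import defaultdict


def divide_into_clusters(edge_nodes):
    """Group edge nodes into node clusters by geographic region.

    edge_nodes maps a node identifier to its region label; the returned dict
    maps each region to the list of node identifiers forming that cluster.
    """
    clusters = defaultdict(list)
    for node_id, region in edge_nodes.items():
        clusters[region].append(node_id)
    return dict(clusters)
```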
7. The data caching system according to claim 6, further comprising: a second judgment module and a query module;
the second judgment module is configured to determine, when the on-demand rate is less than or equal to the first cache threshold, whether the on-demand rate is greater than or equal to a second cache threshold, the second cache threshold being less than the first cache threshold, and, if so, to send the judgment result to the query module;
the query module is configured to cause the target edge node to query its own content directory, to detect whether one or more of the edge nodes in the node cluster where the target edge node is located have cached, or are pre-caching, the on-demand data, and to trigger the broadcast module when none of the edge nodes has cached or is pre-caching the on-demand data.
8. The data caching system according to claim 7, further comprising: a cache-stopping module;
the cache-stopping module is configured to stop data caching when the query module finds that one or more edge nodes in the node cluster where the target edge node is located have cached, or are caching, the on-demand data.
9. The data caching system according to any one of claims 6-8, wherein the content directory comprises: identifiers of the edge nodes in the cluster; identifiers of cached on-demand data; a mapping table of edge nodes to cached on-demand data; identifiers of pre-cached on-demand data; and a mapping table of edge nodes to pre-cached on-demand data.
10. The data caching system according to claim 9, wherein the on-demand rate is the number of requests for the on-demand data per unit time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910647297.3A CN110351290B (en) | 2019-07-17 | 2019-07-17 | Data caching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110351290A true CN110351290A (en) | 2019-10-18 |
CN110351290B CN110351290B (en) | 2021-08-06 |
Family
ID=68175607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910647297.3A Active CN110351290B (en) | 2019-07-17 | 2019-07-17 | Data caching method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110351290B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101291425A (en) * | 2008-06-17 | 2008-10-22 | 中兴通讯股份有限公司 | Method and system realizing content dynamically publishing based on hotness of user's demand |
CN101447937A (en) * | 2009-02-27 | 2009-06-03 | 北京理工大学 | Rapid data positioning method based on path division and multi-distributed-directory |
CN101527736A (en) * | 2009-04-09 | 2009-09-09 | 中兴通讯股份有限公司 | Service content processing method and updating method in distributed file system and device thereof |
CN102098310A (en) * | 2011-02-22 | 2011-06-15 | 中国联合网络通信集团有限公司 | Streaming media content service method and system |
CN104320410A (en) * | 2014-11-11 | 2015-01-28 | 南京优速网络科技有限公司 | All-service CDN system based on HTTP and working method thereof |
CN104348841A (en) * | 2013-07-23 | 2015-02-11 | 中国联合网络通信集团有限公司 | Content delivery method, analysis and management and control system and content delivery network system |
US8966033B2 (en) * | 2009-08-17 | 2015-02-24 | At&T Intellectual Property I, L.P. | Integrated proximity routing for content distribution |
CN104683485A (en) * | 2015-03-25 | 2015-06-03 | 重庆邮电大学 | C-RAN based internet content caching and preloading method and system |
CN107872478A (en) * | 2016-09-26 | 2018-04-03 | 中国移动通信有限公司研究院 | A kind of content buffering method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN110351290B (en) | 2021-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106850581B (en) | Distribution backup method, system and server for interactive live broadcast streaming media data | |
CN100518105C (en) | Method, system and content distribution network for monitoring network | |
CN107181734B (en) | Streaming media cache replacement method of CDN-P2P network architecture | |
CN107231395A (en) | Date storage method, device and system | |
EP2719142B1 (en) | Locating and retrieving segmented content | |
US8880650B2 (en) | System and method for storing streaming media file | |
CN103164202B (en) | A kind of gray scale dissemination method and device | |
CN105357246B (en) | Caching method based on information centre's network and system | |
KR20130088774A (en) | System and method for delivering segmented content | |
CN101764831A (en) | Method and system for sharing stream media data, and stream media node | |
CN101212646A (en) | System and method for implementing video-on-demand with peer-to-peer network technique | |
CN104113735A (en) | Distributed video monitoring storing system and method thereof | |
US20120016916A1 (en) | Method and Apparatus for Processing and Updating Service Contents in a Distributed File System | |
CN102546711A (en) | Storage adjustment method, device and system for contents in streaming media system | |
WO2010133140A1 (en) | Multimedia network with content segmentation and service method thereof | |
CN105635196A (en) | Method and system of file data obtaining, and application server | |
WO2012075970A1 (en) | Method, device and system for obtaining media content | |
CN104580274A (en) | Content replacement method, system and node in CDN | |
CN110445626B (en) | Block packing and broadcasting method and system, equipment and storage medium | |
CN102497389B (en) | Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV | |
US8423593B2 (en) | Content distribution system | |
CN101998144A (en) | Content management method and system | |
US10705978B2 (en) | Asynchronous tracking for high-frequency and high-volume storage | |
CN104602035A (en) | Streaming media on-demand method and streaming media on-demand system | |
CN110351290A (en) | Data cache method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||