CN103369368A - Video cloud on-demand cache scheduling method supporting multi-code-rate version - Google Patents
Video cloud on-demand cache scheduling method supporting multi-code-rate version
- Publication number
- CN103369368A, CN2013102530563A, CN201310253056A
- Authority
- CN
- China
- Prior art keywords
- version
- request
- cache
- user
- cache group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a video cloud on-demand cache scheduling method that supports multiple code-rate versions. Exploiting the correlation among versions of the same video content encoded at different code rates, a cache-group sharing scheduling algorithm is adopted to reduce cache resource overhead. For each video content, the streaming media server stores video files in several code-rate versions to accommodate differing user requirements. When a user requests the file of a particular code-rate version, the request is served directly if that version is already cached in the cache server; otherwise the cache server is searched for a cached adjacent code-rate version, and if one is found, the request first joins that version's cache group, shortening the user's waiting time.
Description
Technical field
The invention belongs to the field of video-on-demand technology and relates to video cloud on-demand and cache-group scheduling; in particular, it relates to a video cloud on-demand cache scheduling method that supports multiple code-rate versions.
Background art
With the continuing development and popularization of Internet technology, video-on-demand applications are experiencing the impact of large-scale concurrent user access on service performance. Because client configurations and bandwidths vary widely, demand for multiple code-rate versions of the same video has grown sharply; cache scheduling for large-volume video cloud on-demand systems that support multiple code-rate versions is therefore critically important. Through a novelty search, the applicant retrieved the following patents in the video-on-demand caching field related to the present invention:
1. Chinese patent 201010019401.3: a cooperative caching method in a video-on-demand system, and the video-on-demand system;
2. Chinese patent 200910084617.5: a disk cache replacement method in a P2P video-on-demand system;
3. Chinese patent 200710053576.4: a video-on-demand caching method based on P2P technology;
4. Chinese patent 200810068259.4: a video-on-demand system, its data caching method, and a dispatch server.
In patent 1 above, the inventors propose a cooperative caching method and video-on-demand system that, while guaranteeing service capacity for local users, redirects requests from users in other districts to suitable servers, thereby improving server cache efficiency and overall system service capacity.
In patent 2 above, the inventors propose a disk cache replacement method for a P2P video-on-demand system. Each client periodically sends its latest cache information to neighboring nodes; for every data block that is a candidate for replacement, the number of nodes currently demanding the block and the number that originally demanded it are obtained, a priority is derived for each block, and replacement is then performed accordingly. The invention improves cooperation among nodes in the grid and reduces the load on the media source server.
In patent 3 above, the inventors propose a video-on-demand caching method that obtains the number of viewers requesting a program, judges whether the requested program is popular, and writes unpopular programs to hard disk. Because the hard disk need not be read frequently, cache efficiency is improved.
In patent 4 above, the inventors propose a P2P-based video-on-demand system, its data caching method, and a dispatch server. The method selects candidate seed nodes from the user nodes and stores their information, generates active caching tasks from the programs on the candidate seed nodes and in the VOD system, and assigns each task to the corresponding candidate seed node so that it caches the program, thereby improving the service quality and reliability of the video-on-demand system.
According to the novelty search above, the problem with the prior art is that none of it considers the diversity of client configurations and bandwidths, or the correlation between on-demand requests for different code-rate versions of the same video content.
Summary of the invention
To overcome the above deficiencies of the prior art, the object of the present invention is to provide a video cloud on-demand cache scheduling method supporting multiple versions that considers the correlation between files of the same video content and duration but different code rates, thereby achieving better video-on-demand service quality.
To achieve this object, the technical solution adopted by the present invention is:
A video cloud on-demand cache scheduling method supporting multiple code-rate versions: for each video content, the streaming media server stores video files in several code-rate versions to accommodate differing user requests. When a user requests the video file of a particular code-rate version, the request is served directly if that version is cached in the cache server; if it is not cached, the request is added to a cache group of another code-rate version of the same video.
Based on a streaming media service framework on a cloud computing platform, the present invention adopts the following cache-group sharing scheduling algorithm to reduce cache resource overhead:
Step1: Move each request in the request waiting queue (WaitingQueue) whose waiting time T is less than Tmax into the user request queue (RequestQueue), and also place the user requests arriving in the current time slice into the user request queue;
Step2: Judge whether all requests in the user request queue have been served by the streaming media server; if so, skip to Step9; otherwise take one unprocessed user request q_u from the user request queue;
Step3: Predict whether the server's bandwidth and cache space resources would be overloaded after accepting q_u. If overloaded, judge whether this request has already been in the request waiting queue: if so, deny service; otherwise put the request into the request waiting queue; then skip to Step2. If not overloaded, skip to Step4;
Step4: Check whether the code-rate version (Version) of the requested file exists; if it does, find the actual-demand cache group, join it, and skip to Step2;
Step5: Check whether the adjacent lower code-rate version of the requested file exists; if it does, find and join that temporary cache group, then skip to Step2. If not, check whether the adjacent higher code-rate version exists; if it does, find and join that temporary cache group, then skip to Step2;
Step6: Check whether the number of requests for the same code-rate version of a video file in the joined temporary cache group has reached N; if so, skip to Step8;
Step7: Check whether any request for the same code-rate version of a video file has stayed in the joined temporary cache group for time T; if so, skip to Step8; if the time is less than T, skip to Step2;
Step8: Predict whether the server's bandwidth and cache space resources would be overloaded. If not overloaded, fetch the video file of the requested code-rate version from the cloud storage and open a new cache group for these requests, so that they find and join the newly opened actual-demand cache group, then skip to Step2. If overloaded, skip to Step2;
Step9: Compute the request waiting queue parameters and all cache group parameters, then skip to Step10;
Step10: End.
The concepts used in the above algorithm steps are defined as follows:
Request waiting queue (WaitingQueue): when a user request cannot be satisfied in the current time slice, it must be placed in the request waiting queue to await processing in the next time slice;
Maximum waiting duration Tmax: Tmax is a natural number greater than 1; when a request has stayed in the request waiting queue for this duration without being served, it is removed from the queue, i.e., the request is refused;
User request queue (RequestQueue): user requests to be processed in the current time slice must be placed in the user request queue, where the system processes them in order;
Multi-code-rate versions Version[1-n]: a video with the same content and duration has n versions at different code rates, where n is a natural number greater than 1. The versions are numbered Version[1] to Version[n] in ascending order of code rate; Version[n] is the adjacent higher code-rate version of Version[n-1], and Version[n-2] is the adjacent lower code-rate version of Version[n-1];
Actual-demand cache group: the cache group for the video file of the code-rate version the user actually requested;
Temporary cache group: a cache group whose content is the same as that of the actual-demand cache group but whose code-rate version differs;
N: a natural number not less than 1, the total number of requests for a given code-rate version of a video file in a temporary cache group; when the total exceeds N, an actual-demand cache group is opened for service;
T: a natural number not less than 1, the duration for which requests for a given code-rate version of a video file have been served in a temporary cache group; when this duration exceeds T, an actual-demand cache group is opened for service.
In said Step9, the request waiting queue parameter comprises the waiting time, and the cache group parameters comprise the temporary request count, the actual request count, and the request arrival time; each is computed as the corresponding difference plus 1, after which the algorithm skips to Step10.
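The queue handling in Steps 1-3 can be sketched in Python. This is a minimal, illustrative sketch only: the function names, the tuple-based waiting queue, and the boolean overload predictor are assumptions, not part of the patent.

```python
from collections import deque

def fill_request_queue(waiting, arrivals, t_max):
    """Step 1: requests that have waited less than Tmax move from the
    WaitingQueue into the RequestQueue; the rest are refused.  Requests
    arriving in the current time slice are appended afterwards."""
    request_queue = deque(req for req, waited in waiting if waited < t_max)
    refused = [req for req, waited in waiting if waited >= t_max]
    request_queue.extend(arrivals)
    return request_queue, refused

def admit(request, overloaded, ever_waited):
    """Step 3: on predicted overload, a request that was already in the
    waiting queue is refused; a first-time request goes to wait."""
    if not overloaded:
        return 'accept'
    return 'refuse' if request in ever_waited else 'wait'
```

Processing then proceeds request by request from the front of the queue, which matches the "take one unprocessed user request" wording of Step 2.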
Compared with the prior art, the advantage of the invention is that it considers the correlation between files of the same video content and duration but different code rates, and proposes a cache scheduling method for multi-code-rate video on demand, thereby providing better video-on-demand service quality.
Description of drawings
Figure 1 is a schematic diagram of the main process by which the present invention implements the caching function within a streaming media service framework on a cloud computing platform.
Figure 2 is a system architecture diagram of the caching function implemented by the present invention within a streaming media service framework on a cloud computing platform.
Figure 3 shows the main functional modules of the system and their cooperative relationships.
Figure 4 is a flow chart of the cache scheduling algorithm for video cloud on-demand according to the present invention.
Embodiment
The present invention is described in further detail below with reference to the drawings and embodiments.
The inventive video cloud on-demand cache scheduling method supporting multiple versions comprises two techniques: (1) a streaming media service framework for video cloud on-demand that considers the correlation between files of the same video content but different code-rate versions; (2) a cache-group sharing scheduling algorithm that reduces cache resource overhead.
As shown in Figure 1, the present invention considers the correlation between files of the same video content but different code-rate versions: when a user requests a particular version of a video and the cache server holds no cache group for the requested code-rate version, the request can be added to a cache group of another code-rate version of the same video.
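Assuming versions are numbered 1..n in ascending order of code rate, as defined above, the version lookup of Steps 4-5 can be sketched as follows; the function name and the set-of-pairs cache representation are illustrative assumptions, not from the patent.

```python
def find_group(cached, video, version, n_versions):
    """Pick a cache group for a request for (video, version):
    ('actual', v)    if the requested version is cached (Step 4),
    ('temporary', v) if an adjacent version is cached (Step 5),
    None             if a new group must be considered."""
    if (video, version) in cached:                               # Step 4
        return ('actual', version)
    if version > 1 and (video, version - 1) in cached:           # adjacent lower first
        return ('temporary', version - 1)
    if version < n_versions and (video, version + 1) in cached:  # then adjacent higher
        return ('temporary', version + 1)
    return None
```

Note the asymmetry: the adjacent lower code-rate version is preferred over the adjacent higher one, mirroring the order of checks in Step 5.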
The present invention implements the caching function on a streaming media service framework under a cloud computing platform; its system architecture is shown in Figure 2:
A user's on-demand request first reaches the front-end management node, which locates the media file requested by the user according to a caching policy and returns the location to the user. The user then accesses the corresponding server according to the media location, where the request is received by the VOD Server on that VOD server machine. The VOD Server serves the user by reading data from the requested file; it obtains the data from the cache module, which, according to the file the VOD Server reads and the data timestamp, fetches data either from the cache or from the cloud storage, and caches data in the cache space according to the VOD Server's access pattern.
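The cache-or-cloud decision made by the cache module might look like the sketch below. The names `CacheModule` and `cloud_fetch` are assumed for illustration, and real block management, timestamps, and eviction are omitted.

```python
class CacheModule:
    """Serves reads from the local cache when possible; on a miss it
    fetches the block from cloud storage and caches the result."""
    def __init__(self, cloud_fetch):
        self.cache = {}               # (filename, offset) -> data block
        self.cloud_fetch = cloud_fetch

    def read(self, filename, offset):
        key = (filename, offset)
        if key not in self.cache:     # miss: go to cloud storage
            self.cache[key] = self.cloud_fetch(filename, offset)
        return self.cache[key]        # hit: serve directly from cache
```

Repeated reads of the same block then reach cloud storage only once, which is the saving the cache module is meant to provide.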
Figure 3 shows the main functional modules of the system and their cooperative relationships:
The main function of the user request forwarding module is to apply the caching policy to incoming user requests: according to the policy it selects the appropriate server for each received request, forms the resource URL, and returns it to the user. The data reading module and the cache module run on each cache server and complete the VOD Server's cache-based data reading process. After obtaining the resource URL from the request forwarding module, the user requests the corresponding VOD server to carry out the on-demand service. When the VOD Server on that machine receives the user's on-demand request, it opens and reads the file according to the requested file and the VOD Server's configured path, and closes the file after the service is complete.
The system implements a data reading module with the caching function by extending FUSE, the user-space file system on Linux, and mounting it on a directory. If the streaming media service software's file directory is pointed at the FUSE mount directory, the streaming server's data-read requests no longer obtain data directly through the system's I/O but instead enter the program implemented with FUSE, so that data access and read operations are handled by user-defined code. To implement data caching, a file system must be customized on top of FUSE. When a file operation request arrives, it is forwarded through VFS (the Linux Virtual File System) and the FUSE module and finally reaches the custom file system. Because the caching mode is adopted, the system does not read the disk array directly for the requested data; the request is forwarded on to the cache module, which selects the appropriate data-reading method to provide service.
Figure 4 shows the cache management method of the cache module in the present invention, i.e., the flow chart of the cache scheduling algorithm for video cloud on-demand. The detailed implementation process is as follows:
Step1: Initialize the user request waiting queue WaitingQueue and the user request queue RequestQueue. When a user request arrives, it first joins the request waiting queue; while its waiting time is still within Tmax, it is forwarded to the user request queue RequestQueue;
Step2: Place the user requests arriving in the current time slice into the user request queue RequestQueue. If every request in RequestQueue has been served by the streaming media server, skip to Step7; otherwise take out one request as the object of the next step;
Step3: Predict the bandwidth and cache space resources of the server after it accepts the request. The management node of the cache group server statistically averages the bandwidth required per request to estimate the server bandwidth, and budgets the required cache space from the size of the resources attached to the request;
Step4: Check whether the code-rate version Version of the requested file exists. The cache group server stores multiple code-rate versions of the same content, with version numbers assigned in ascending order of code rate, and each video file carries a label with its file name and code-rate version number; the check is made by examining the version-number labels of files with the same name in the cache groups. If the version exists, find the actual-demand cache group and join it, then skip to Step2; otherwise go to the next step;
Step5: Using the version numbers, check whether the adjacent lower code-rate version of the requested file exists; if so, find and join the temporary cache group and skip to Step2. If not, check whether the adjacent higher code-rate version exists; if so, find and join the temporary cache group and skip to Step2. If neither exists, go to the next step;
Step6: The cache group management node checks whether the number of requests for the same code-rate version of a video file in the joined temporary cache group has reached N, or whether this request has been in the temporary cache group for time T. If either holds, it predicts whether the server's bandwidth and cache space resources would be overloaded. If not overloaded, it fetches the video file of the requested code-rate version from the cloud storage and opens a new cache group for these requests, so that they find and join the newly opened actual-demand cache group, then skips to Step2; if overloaded, it skips directly to Step2;
Step7: The cache group management node is responsible for updating the user request waiting queue, the user request queue, and all cache group parameters. The request waiting queue parameter comprises the waiting time; the cache group parameters comprise the temporary request count, the actual request count, and the request arrival time; each is computed as the corresponding difference plus 1. Then go to the next step;
Step8: End.
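The promotion rule applied in Step 6 above (Steps 6-8 of the summary algorithm) can be sketched as a single predicate; the function and parameter names are illustrative assumptions, not from the patent.

```python
def should_open_actual_group(request_count, longest_stay,
                             n_threshold, t_threshold, overloaded):
    """A new actual-demand cache group is opened when a temporary group
    has accumulated N requests for a code-rate version, or some request
    has stayed in it for time T, provided the server's bandwidth and
    cache space would not be overloaded."""
    triggered = request_count >= n_threshold or longest_stay >= t_threshold
    return triggered and not overloaded
```

The two thresholds trade off differently: N promotes versions with bursty popularity, while T bounds how long any single request is served from a neighboring version's group.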
Claims (3)
1. A video cloud on-demand cache scheduling method supporting multiple code-rate versions, characterized in that: for each video content, the streaming media server stores video files in several code-rate versions to accommodate differing user requests; when a user requests the video file of a particular code-rate version, the request is served directly if that version is cached in the cache server; if it is not cached, the request is added to a cache group of another code-rate version of the same video.
2. The video cloud on-demand cache scheduling method supporting multiple code-rate versions according to claim 1, characterized in that, based on a streaming media service framework on a cloud computing platform, the following cache-group sharing scheduling algorithm is adopted to reduce cache resource overhead:
Step1: Move each request in the request waiting queue (WaitingQueue) whose waiting time T is less than Tmax into the user request queue (RequestQueue), and also place the user requests arriving in the current time slice into the user request queue;
Step2: Judge whether all requests in the user request queue have been served by the streaming media server; if so, skip to Step9; otherwise take one unprocessed user request q_u from the user request queue;
Step3: Predict whether the server's bandwidth and cache space resources would be overloaded after accepting q_u. If overloaded, judge whether this request has already been in the request waiting queue: if so, deny service; otherwise put the request into the request waiting queue; then skip to Step2. If not overloaded, skip to Step4;
Step4: Check whether the code-rate version (Version) of the requested file exists; if it does, find the actual-demand cache group, join it, and skip to Step2;
Step5: Check whether the adjacent lower code-rate version of the requested file exists; if it does, find and join that temporary cache group, then skip to Step2. If not, check whether the adjacent higher code-rate version exists; if it does, find and join that temporary cache group, then skip to Step2;
Step6: Check whether the number of requests for the same code-rate version of a video file in the joined temporary cache group has reached N; if so, skip to Step8;
Step7: Check whether any request for the same code-rate version of a video file has stayed in the joined temporary cache group for time T; if so, skip to Step8; if the time is less than T, skip to Step2;
Step8: Predict whether the server's bandwidth and cache space resources would be overloaded. If not overloaded, fetch the video file of the requested code-rate version from the cloud storage and open a new cache group for these requests, so that they find and join the newly opened actual-demand cache group, then skip to Step2. If overloaded, skip to Step2;
Step9: Compute the request waiting queue parameters and all cache group parameters, then skip to Step10;
Step10: End.
The concepts used in the above algorithm steps are defined as follows:
Request waiting queue (WaitingQueue): when a user request cannot be satisfied in the current time slice, it must be placed in the request waiting queue to await processing in the next time slice;
Maximum waiting duration Tmax: Tmax is a natural number greater than 1; when a request has stayed in the request waiting queue for this duration without being served, it is removed from the queue, i.e., the request is refused;
User request queue (RequestQueue): user requests to be processed in the current time slice must be placed in the user request queue, where the system processes them in order;
Multi-code-rate versions Version[1-n]: a video with the same content and duration has n versions at different code rates, where n is a natural number greater than 1. The versions are numbered Version[1] to Version[n] in ascending order of code rate; Version[n] is the adjacent higher code-rate version of Version[n-1], and Version[n-2] is the adjacent lower code-rate version of Version[n-1];
Actual-demand cache group: the cache group for the video file of the code-rate version the user actually requested;
Temporary cache group: a cache group whose content is the same as that of the actual-demand cache group but whose code-rate version differs;
N: a natural number not less than 1, the total number of requests for a given code-rate version of a video file in a temporary cache group; when the total exceeds N, an actual-demand cache group is opened for service;
T: a natural number not less than 1, the duration for which requests for a given code-rate version of a video file have been served in a temporary cache group; when this duration exceeds T, an actual-demand cache group is opened for service.
3. The video cloud on-demand cache scheduling method supporting multiple code-rate versions according to claim 2, characterized in that: in said Step9, the request waiting queue parameter comprises the waiting time, and the cache group parameters comprise the temporary request count, the actual request count, and the request arrival time; each is computed as the corresponding difference plus 1, after which the algorithm skips to Step10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310253056.3A CN103369368B (en) | 2013-06-24 | 2013-06-24 | Video cloud on-demand cache scheduling method supporting multi-code-rate version |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310253056.3A CN103369368B (en) | 2013-06-24 | 2013-06-24 | Video cloud on-demand cache scheduling method supporting multi-code-rate version |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103369368A true CN103369368A (en) | 2013-10-23 |
CN103369368B CN103369368B (en) | 2015-04-15 |
Family
ID=49369756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310253056.3A Active CN103369368B (en) | 2013-06-24 | 2013-06-24 | Video cloud on-demand cache scheduling method supporting multi-code-rate version |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103369368B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104202356A (en) * | 2014-08-07 | 2014-12-10 | 西安交通大学 | A video file deploying method of a video cloud on-demand system based on a multi-code-rate version |
WO2015153723A1 (en) * | 2014-04-01 | 2015-10-08 | Invoke Ltd. | A method and system for real-time cloud storage of video content |
CN106612269A (en) * | 2015-10-27 | 2017-05-03 | 中兴通讯股份有限公司 | Multimedia resource issuing method and device |
CN110113669A (en) * | 2019-06-14 | 2019-08-09 | 北京达佳互联信息技术有限公司 | Obtain method, apparatus, electronic equipment and the storage medium of video data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1404302A (en) * | 2001-09-05 | 2003-03-19 | 北京中科大洋科技发展股份有限公司 | TV program making system and method with double-bit rate video stream |
CN101026769A (en) * | 2007-01-12 | 2007-08-29 | 西安交通大学 | Multi-path media synchronous display control method |
CN102685179A (en) * | 2011-03-18 | 2012-09-19 | 丛林网络公司 | Modular transparent proxy cache |
US20120238619A1 (en) * | 2011-03-16 | 2012-09-20 | Miragen Therapeutics | Micro-rna for the regulation of cardiac apoptosis and contractile function |
2013
- 2013-06-24 CN CN201310253056.3A patent/CN103369368B/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015153723A1 (en) * | 2014-04-01 | 2015-10-08 | Invoke Ltd. | A method and system for real-time cloud storage of video content |
CN104202356A (en) * | 2014-08-07 | 2014-12-10 | 西安交通大学 | A video file deploying method of a video cloud on-demand system based on a multi-code-rate version |
CN106612269A (en) * | 2015-10-27 | 2017-05-03 | 中兴通讯股份有限公司 | Multimedia resource issuing method and device |
WO2017071524A1 (en) * | 2015-10-27 | 2017-05-04 | 中兴通讯股份有限公司 | Multimedia resource publishing method and apparatus |
CN110113669A (en) * | 2019-06-14 | 2019-08-09 | 北京达佳互联信息技术有限公司 | Obtain method, apparatus, electronic equipment and the storage medium of video data |
Also Published As
Publication number | Publication date |
---|---|
CN103369368B (en) | 2015-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101237429B (en) | Streaming media live broadcasting system, method and device based on a content distribution network | |
CN101616170B (en) | Method for supplying media stream service and system thereof | |
US8832295B2 (en) | Peer-assisted fractional-storage streaming servers | |
US20100094967A1 (en) | Large Scale Distributed Content Delivery Network | |
US20100037225A1 (en) | Workload routing based on greenness conditions | |
CN104025553B (en) | Optimization engine and correlation technique in mobile cloud accelerator | |
WO2013159703A1 (en) | Offline download method, multimedia file download method and system thereof | |
CN100553331C (en) | Content distribution and storage system and method in a P2P-based video network | |
CN102571959A (en) | System and method for downloading data | |
CN103152423A (en) | Cloud storage system and data access method thereof | |
CN102439934A (en) | Method and system for managing multilevel caches of edge server in cdn | |
CN101136911A (en) | Method to download files using P2P technique and P2P download system | |
CN102724314B (en) | A kind of distributed caching client based on metadata management | |
CN102316097B (en) | Streaming media scheduling and distribution method capable of reducing wait time of user | |
CN103108008A (en) | Method of downloading files and file downloading system | |
CN103369368B (en) | Video cloud on-demand cache scheduling method supporting multi-code-rate version | |
CN105376218A (en) | Stream media system and method for fast responding to user request | |
CN102006328B (en) | Peer-to-peer (P2P) streaming media distributed network system and data transmission method thereof | |
CN102291629A (en) | P2P (peer-to-peer) proxy on-demand system and implementation method applied to IPTV (Internet protocol television) | |
EP2252057B1 (en) | Method and system for storing and distributing electronic content | |
CN102710790B (en) | Memcached implementation method and system based on metadata management | |
CN102497389B (en) | Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV | |
Lee et al. | Video quality adaptation for limiting transcoding energy consumption in video servers | |
CN103905923A (en) | Content caching method and device | |
Liao et al. | A novel data replication mechanism in P2P VoD system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |