CN104469539A - A cooperation buffering method, streaming media managing subsystem and server - Google Patents

A cooperation buffering method, streaming media managing subsystem and server

Info

Publication number
CN104469539A
CN104469539A · Application CN201310423243.1A
Authority
CN
China
Prior art keywords
cache
cache group
server
service
media data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310423243.1A
Other languages
Chinese (zh)
Inventor
董振江
孙奇
马书超
李俊
孙健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
ZTE Corp
Original Assignee
University of Science and Technology of China USTC
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC, ZTE Corp filed Critical University of Science and Technology of China USTC
Priority claimed from application CN201310423243.1A
Publication of CN104469539A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106: Content storage operation involving caching operations
    • H04N21/23109: Content storage operation by placing content in organized collections, e.g. EPG data repository
    • H04N21/60: Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client
    • H04N21/61: Network physical structure; Signal processing
    • H04N21/6106: Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6125: Signal processing involving transmission via Internet

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention discloses a cooperative caching method. The method includes: a management subsystem obtains service information and cache information from one or more servers in a server cluster, and generates a service status table from the service information and cache information; when a service request is received, the management subsystem looks up the service status table according to the streaming media data requested, and, on determining that the cluster contains servers caching the requested streaming media data, redirects to a first server among them, so that the first server provides the streaming media data service to the requesting party. The present invention further discloses a streaming media management subsystem and server. The technical solution of the present invention not only increases the disk transmission rate and cache utilization, but also achieves load balancing.

Description

Cooperative caching method, streaming media management subsystem, and server
Technical field
The present invention relates to streaming media service technology, and in particular to a cooperative caching method, a streaming media management subsystem, and a server.
Background technology
The rapid development of network technology has greatly promoted the popularization of multimedia applications, and streaming media has become one of the mainstream applications of today's Internet. A streaming media service system uses real-time streaming to provide multimedia services to users, such as video on demand, distance learning, and video conferencing: audiovisual multimedia data is played over the network in real time without being downloaded first, which shortens the waiting time, reduces the storage requirements of the client, and improves the real-time quality of the media service. IPTV (Internet Protocol Television) is a streaming media service system that uses a broadband IP network, with a household television set or computer as the main terminal device, transmits digital television signals over the IP protocol, and provides streaming media services including digital television programs.
IPTV places high demands on system storage, system I/O (Input/Output), computing capability, and network output capacity; because of variations in program popularity, it also places high demands on the file distribution mode and system memory size. Once network congestion occurs, it causes network delay and jitter, which degrades the user's service experience. Network congestion arises from the contradiction between limited resources and the demands of streaming media applications.
Existing streaming media technology uses standalone caching and redirection to alleviate the contradiction between limited resources and the demands of streaming media applications, but because each local server handles request access without considering the cluster as a whole, cache utilization is low and the load becomes unbalanced; moreover, frequent redirection may create a network bottleneck between servers.
Summary of the invention
In view of this, the main purpose of the embodiments of the present invention is to provide a cooperative caching method, a streaming media management subsystem, and a server that not only improve the disk transmission rate and cache utilization, but also achieve load balancing.
To achieve the above purpose, the technical solution of the embodiments of the present invention is realized as follows:
An embodiment of the present invention provides a cooperative caching method, applied to a streaming media service system comprising a management subsystem and a server cluster, where the server cluster comprises one or more servers. The method includes: the management subsystem obtains service information and cache information from the one or more servers in the server cluster, and generates a service status table from the service information and the cache information; when the management subsystem receives a service request, it looks up the service status table according to the streaming media data requested; on determining that the cluster contains servers caching the requested streaming media data, it redirects to a first server among them, so that the first server provides the streaming media data service to the service requester.
Preferably, the first server provides the streaming media data service to the service requester as follows: determine whether the requested streaming media data is currently being sent; if not, start sending the requested streaming media data to the service requester; if it is being sent and the amount already sent exceeds a set threshold, start sending the already-downloaded streaming media data to the service requester.
Preferably, the method further includes: the server divides its local cache evenly into one or more cache groups, each cache group being the unit for storing streaming media data, where each cache group contains three or more cache blocks. The cache groups are linked into a working linked list and an idle linked list, where the cache state of each cache group in the working linked list includes a construction state, a running state, and an idle state. The construction state is the state in which the size of the cache blocks in the group still needs adjustment; the server determines the cache block size from the average time interval of the service requesters' requests. The running state is the state in which, after the cache block size has been adjusted, the cache group serves externally in sliding-window form. The idle state is the state in which the sliding window has stopped sliding forward.
Preferably, the method further includes: when the management subsystem determines that none of the cache groups of any server in the cluster caches the streaming media data requested by the service request, it determines from the service status table a second server whose current cache load is below a set threshold, and redirects to the second server, so that the second server provides the streaming media data service to the service requester.
Preferably, the second server provides the streaming media data service to the service requester as follows: it queries the origin streaming media library for the requested streaming media data and downloads it into its local cache; when the amount of streaming media data cached locally reaches a set threshold, it starts sending the requested streaming media data to the service requester.
Preferably, the method further includes: when the second server determines that it has enough memory, it takes one cache group from the idle linked list to serve the service requester and adds that cache group to the working linked list; when it determines that memory is insufficient, it sorts the cache groups by their request rate over a recent period, identifies and releases the cache group with the fewest requests, uses that group to serve the service requester, and adds it to the working linked list.
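As a rough sketch of the allocation rule above (not from the patent; class and field names are illustrative), a server could take a group from the idle list when memory allows, and otherwise evict the coldest working group:

```python
from collections import deque

class CacheManager:
    """Sketch of the claimed cache-group allocation: take a group from the
    idle linked list if memory allows; otherwise release the working group
    with the fewest recent requests and reuse it for the new requester."""

    def __init__(self, idle_groups):
        self.idle = deque(idle_groups)   # idle linked list
        self.working = []                # working linked list

    def acquire_group(self, has_free_memory):
        if has_free_memory and self.idle:
            group = self.idle.popleft()
        else:
            # sort working groups by recent request count; evict the coldest
            self.working.sort(key=lambda g: g["requests"])
            group = self.working.pop(0)
            group["requests"] = 0        # released and repurposed
        self.working.append(group)
        return group
```

The request counter standing in for the patent's "request rate over a period of time" is an assumption; a real implementation would age or window the counts.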
Preferably, the service information at least includes: the load state of each of the one or more servers in the cluster, and the identity (ID) numbers of the programs currently served. The cache information at least includes, for each server in the cluster: the starting position of the cache group serving the current program, the length of the cached data, the utilization of the cache object, the cache state, the utilization of the cache space, and the end time of the cache group.
An embodiment of the present invention further provides a cooperative caching method, applied to a streaming media service system comprising a management subsystem and a server cluster, the cluster comprising one or more servers. The method includes: after a service request is received, determining whether the requested streaming media data has already been downloaded into the local cache; if not, querying the origin streaming media library for the requested data, downloading it into the local cache, and, once the locally cached data reaches a set threshold, starting to send the requested data to the service requester; if it has been downloaded, determining whether the requested data is currently being sent; if not, starting to send it to the service requester; if it is being sent and the amount already sent exceeds a set threshold, starting to send the already-downloaded data to the service requester.
Preferably, the method includes: dividing the local cache evenly into one or more cache groups, each cache group being the unit for storing streaming media data, where each cache group contains three or more cache blocks; and linking the cache groups into a working linked list and an idle linked list, where the cache state of each cache group in the working linked list includes a construction state, a running state, and an idle state. The construction state is the state in which the size of the cache blocks in the group still needs adjustment; the server determines the cache block size from the average time interval of the service requesters' requests. The running state is the state in which, after the cache block size has been adjusted, the cache group serves externally in sliding-window form. The idle state is the state in which the sliding window has stopped sliding forward.
Preferably, the server determines the size of the cache blocks in the cache group from the average time interval of the service requesters' requests as follows. The average arrival interval when the nth request for streaming media file i arrives is determined by:

ARAI_i^n = ( Σ_{j=1}^{n-1} I_i^j ) / (n − 1) for n > 1, and ARAI_i^n = ∞ for n = 1,

where n is the total number of requests for file i at the current time T, and I_i^j is the jth time interval of file i, determined by:

I_i^j = ∞ for j = 1; I_i^j = T_i^{j+1} − T_i^j for 1 < j < n; I_i^j = T − T_i^j for j = n,

where T_i^j is the arrival time of the jth request for file i and T_i^{j+1} is the arrival time of the (j+1)th request. When the total number of requests for file i is n = 1, the interval is defined as ∞. When the average arrival interval is large enough, the cache group is released. When it lies in a middle range, the cache group keeps the construction state: the predetermined service time P is extended to P', and if P' < 2F, the average interval is re-determined and the cache block size in the group is adjusted to the product of P' and the bit rate; if P' ≥ 2F, the cache block size in the group is kept. Here the predetermined service time P is determined by the current cache block size and the bit rate, and F is a set service time. When the average arrival interval is small, the cache block size in the group is kept.
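Under one plausible reading of the formula above (a single request gives an infinite interval; otherwise the n − 1 consecutive arrival gaps are averaged), the interval computation could be sketched as follows; the function name is illustrative:

```python
import math

def avg_arrival_interval(arrival_times):
    """Average request arrival interval (ARAI) for one streaming file.

    arrival_times: sorted arrival times T_i^1 .. T_i^n of the n requests.
    With a single request the interval is treated as infinite, matching
    the n = 1 case of the claim's formula.
    """
    n = len(arrival_times)
    if n <= 1:
        return math.inf
    # n - 1 consecutive gaps between arrivals, then their mean
    gaps = [arrival_times[j + 1] - arrival_times[j] for j in range(n - 1)]
    return sum(gaps) / (n - 1)
```

The claim's handling of the first and last intervals (the j = 1 and j = n cases) admits other readings; this sketch simply averages the observed gaps.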
Preferably, the method further includes: when a cache group is in the construction state, the server determines the average interval ARAI_i^n and serves the service requester according to its size: when the interval is large, the cache group is released, removed from the working linked list, and added to the idle linked list, and the server provides the service directly; when the interval lies in a middle range, the cache group keeps the construction state, the predetermined service time P is extended to P', and if P' < 2F the average interval is re-determined; when P' ≥ 2F, the cache group enters the running state (here P is determined by the current cache size and bit rate, and F is the set service time); when the interval is small, the cache group enters the running state. When the cache group is in the running state, the data in the sliding window is used to serve each service requester, where the sliding window occupies at least one cache block; the service requesters are traversed periodically to determine the cache block each requester is currently using and its cache state.
Preferably, the method further includes pre-reading and filling data: when it is determined that the last service requester in the sliding window has left the rear cache block, the sliding window switches to the next cache block that has been filled with data, where the rear cache block is the last cache block of the sliding window; when the departure time of the last service requester is greater than or equal to a set filling time R, data equal in size to a single cache block is read from disk and filled into the rear cache block, so that the sliding window can switch to that cache block.
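A minimal sketch of the window-advance rule above (not the patent's implementation; `read_block` stands in for the disk read, and block bookkeeping is simplified to a list):

```python
class SlidingWindow:
    """Sketch of the pre-read rule: when the last client leaves the rear
    cache block, fill the next block from disk if needed and slide the
    window forward onto it."""

    def __init__(self, blocks, read_block):
        self.blocks = blocks          # cache blocks of one cache group
        self.rear = 0                 # index of the window's rear block
        self.read_block = read_block  # disk read, one block at a time

    def on_client_left_rear(self, remaining_clients_in_rear):
        if remaining_clients_in_rear == 0 and self.rear + 1 < len(self.blocks):
            nxt = self.rear + 1
            if self.blocks[nxt] is None:                 # not yet filled
                self.blocks[nxt] = self.read_block(nxt)  # pre-read from disk
            self.rear = nxt
        return self.rear
```

The set filling time R of the description is omitted here; a fuller version would schedule the disk read once the last requester's departure time reaches R.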
Preferably, the method further includes: when a cache group is in the idle state, judging whether the file it caches is unique; if so, releasing the cache group after all service requests have finished, removing it from the working linked list, and adding it to the idle linked list; otherwise, keeping the content until the cache group's end time, then releasing the cache group, removing it from the working linked list, and adding it to the idle linked list.
Preferably, the method further includes: the management subsystem queries the service status table for cache groups in the construction state according to the service request, and judges whether a construction-state cache group on any server in the cluster caches the requested streaming media data; on determining that no construction-state cache group on any server in the cluster caches the requested data, it queries the table for cache groups in the running state and judges whether any server in the cluster caches the requested streaming media data.
An embodiment of the present invention further provides a streaming media management subsystem, applied to a streaming media service system comprising the management subsystem and a server cluster, the cluster comprising one or more servers. The management subsystem comprises a generation unit, a query unit, a first determination unit, and a second determination unit. The generation unit obtains service information and cache information from the one or more servers in the cluster and generates a service status table from them. The query unit, on receiving a service request, looks up the service status table according to the streaming media data requested. The first determination unit, on determining that the cluster contains servers caching the requested streaming media data, redirects to a first server among them, so that the first server provides the streaming media data service to the service requester. The second determination unit, on determining that no cache group of any server in the cluster caches the requested data, determines from the service status table a second server whose current cache load is below a set threshold and redirects to it, so that the second server provides the streaming media data service to the service requester.
An embodiment of the present invention further provides a streaming media server, applied to a streaming media service system comprising a management subsystem and a server cluster, the cluster comprising one or more servers. The server comprises a determination module, a download module, a first sending module, a second sending module, and a third sending module. The determination module, after a service request is received, determines whether the requested streaming media data has been downloaded into the local cache. The download module, when the requested data has not been downloaded, queries the origin streaming media library for it and downloads it into the local cache. The first sending module, when the streaming media data cached locally reaches a set threshold, starts sending the requested data to the service requester. The second sending module, when the requested data has been downloaded, determines whether it is currently being sent and, if not, starts sending it to the service requester. The third sending module, when the data is being sent and the amount already sent exceeds a set threshold, starts sending the already-downloaded data to the service requester.
Preferably, the server further comprises a division module, which divides the local cache evenly into one or more cache groups as the unit for storing streaming media data, each cache group containing three or more cache blocks, and links the cache groups into a working linked list and an idle linked list, where the cache state of each cache group in the working linked list includes a construction state (the cache block size still needs adjustment; the server determines it from the average time interval of the service requesters' requests), a running state (after the block size is adjusted, the group serves externally in sliding-window form), and an idle state (the sliding window has stopped sliding forward).
In the technical solution provided by the embodiments of the present invention, the management subsystem obtains service information and cache information from the one or more servers in the server cluster and generates a service status table from them; when it receives a service request, it looks up the service status table according to the requested streaming media data, and, on determining that the cluster contains servers caching the requested data, redirects to a first server among them so that the first server serves the service requester. By having an independent management subsystem uniformly manage the service requests of the whole server cluster and serve requesters according to the service information and cache information each server reports, the invention achieves load balancing and improves the disk transmission rate and cache utilization.
Further, because each cache group uses pre-reading across three or more cache blocks, the embodiments of the present invention not only avoid the connection delays that can occur under high system concurrency, but also improve disk read efficiency, thereby improving the service capability of such applications.
Brief description of the drawings
Fig. 1 is a flow diagram of the cooperative caching method of a method embodiment of the present invention;
Fig. 2 is a schematic diagram of the sliding window of a method embodiment sliding forward;
Fig. 3 is a schematic diagram of a cache block of a method embodiment starting to fill with data;
Fig. 4 is a timeline diagram of the action sequence when a method embodiment performs cache switching in a randomized manner;
Fig. 5 is a schematic structural diagram of the streaming media service system of an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the streaming media management subsystem of an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the streaming media server of an embodiment of the present invention.
Detailed description of the embodiments
The technical solution of the present invention is further elaborated below with reference to the drawings and specific embodiments.
The method embodiments of the present invention are applied to a streaming media service system comprising a management subsystem and a server cluster, the server cluster comprising one or more servers. Fig. 1 is a flow diagram of the cooperative caching method of an embodiment of the present invention; as shown in Fig. 1, the cooperative caching method comprises:
Step 101: the management subsystem obtains service information and cache information from the one or more servers in the server cluster, and generates a service status table from the service information and the cache information.
Here, the service information at least includes: the load state of the server and the IDs of the programs currently served.
Here, the cache information at least includes, for each server: the starting position of the cache group serving the current program, the length of the cached data, the utilization of the cache space, the cache state, the utilization of the cache object, and the end time of the cache group. From a cache group's starting position and cached data length, the position of the cached data within the whole file (head, tail, etc.) can be determined.
Here, to generate the service status table from the service information and cache information, the management subsystem stores the information in a specific data structure (such as a linked list or hash table); those skilled in the art can implement this with various prior-art techniques, which are not repeated here.
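As the paragraph above suggests a linked list or hash table, the service status table could be sketched as a plain hash map keyed by server; all field names here are illustrative, not from the patent:

```python
def build_service_table(reports):
    """Service status table built from per-server reports: each report
    carries the server's load, the IDs of programs it serves, and its
    cache-group information."""
    table = {}
    for r in reports:
        table[r["server_id"]] = {
            "load": r["load"],
            "programs": set(r["programs"]),
            "cache_groups": r["cache_groups"],
        }
    return table

def find_caching_server(table, program_id):
    """Return the first server already caching the requested program,
    or None if no server in the cluster caches it."""
    for sid, info in table.items():
        if program_id in info["programs"]:
            return sid
    return None
```

"First" here means first in report order; the patent does not specify a tie-breaking rule among servers caching the same data.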
Step 102: when the management subsystem receives a service request, it looks up the service status table according to the streaming media data requested.
Here, the service request at least includes the ID of the program the service requester is asking for.
Step 103: on determining that the server cluster contains servers caching the requested streaming media data, the management subsystem redirects to a first server among them.
Here, the management subsystem determines, from the cache group starting positions and cached data lengths recorded in the service status table, whether any server in the cluster caches the requested streaming media data.
Here, redirecting to the first server comprises: the management subsystem sends a redirect message to the first server, the message at least containing the server's port information and the ID of the requested program, the port information being determined by the management subsystem looking up the service status table by the requested program ID; the first server then returns a response message to the management subsystem.
Step 104: the first server provides the streaming media data service to the service requester.
Here, the first server provides the streaming media data service to the service requester as follows:
determine whether the requested streaming media data is currently being sent; if not, start sending the requested data to the service requester; if it is being sent and the amount already sent exceeds a set threshold, start sending the already-downloaded data to the service requester.
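The decision rule of step 104 can be condensed to a small sketch (the return labels and the "wait" fallback for the below-threshold case are assumptions; the patent only specifies the two positive branches):

```python
def serve_from_first_server(being_sent, sent_bytes, threshold):
    """Step 104 decision: start a fresh send if the data is not already
    being streamed; if it is, and the amount already streamed exceeds the
    set threshold, serve the new requester from the downloaded data."""
    if not being_sent:
        return "start_new_send"
    if sent_bytes > threshold:
        return "send_downloaded_data"
    return "wait"  # assumed behavior until enough data has been sent
```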
The method also comprises Step 105: when the management subsystem determines that no cache group of any server in the cluster caches the requested streaming media data, it determines from the service status table a second server whose current cache load is below a set threshold, and redirects to that second server.
Here, the management subsystem determines the second server whose current cache load is below the set threshold from the server load states and cache space utilizations recorded in the service status table.
Here, redirecting to the second server comprises: the management subsystem sends a redirect message to the second server, the message at least containing the server's port information, determined by the management subsystem looking up the service status table; the second server then returns a response message to the management subsystem.
Correspondingly, step 106: the second server provides the streaming media data service to the service requester, comprising:
querying the requested streaming media data from the source streaming media library and downloading the queried streaming media data into the local cache; and, when the streaming media data cached in the local cache reaches a set threshold, starting to send the requested streaming media data to the service requester.
Further, in order to avoid the connection delays that may occur under high system concurrency, the method embodiment of the present invention adopts a pre-reading mode with three or more caches. Specifically, the server evenly divides the local cache into one or more cache groups, taking the cache group as the unit for storing streaming media data; each cache group contains three or more cache blocks.
The cache groups are linked into a working linked list and an idle linked list, wherein the cache state of each cache group in the working linked list comprises a construction state, a running state and an idle state, wherein:
the construction state is the state in which the size of the cache blocks in the cache group needs to be adjusted, the server determining the size of the cache blocks in the cache group according to the average time interval of the service requester's requests;
the running state is the state in which, after the size of the cache blocks in the cache group has been adjusted, the cache group serves externally in sliding-window form;
the idle state is the state in which the sliding window stops sliding forward.
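The three states and the movement of a cache group between the two linked lists can be sketched as below; this is a minimal illustration under stated assumptions, and every name (GroupState, GroupNode, move_group) is hypothetical rather than an identifier from the patent:

```c
#include <assert.h>
#include <stddef.h>

/* The three cache-group states described above. */
typedef enum { CONSTRUCTION_STATE, RUNNING_STATE, IDLE_STATE } GroupState;

/* A cache group as a node of a singly linked list (working or idle). */
typedef struct GroupNode {
    GroupState state;
    struct GroupNode *next;
} GroupNode;

/* Pop the head of one list and push it onto another, e.g. moving a
 * reclaimed group from the working linked list to the idle linked list. */
static GroupNode *move_group(GroupNode **from, GroupNode **to) {
    GroupNode *g = *from;
    if (g == NULL)
        return NULL;
    *from = g->next;   /* unlink from the source list */
    g->next = *to;     /* link into the target list   */
    *to = g;
    return g;
}
```

A caller would hold two head pointers, one per list, and call move_group when a group is allocated to serve a request or reclaimed.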
Further, the server determining the size of the cache blocks in the cache group according to the average time interval of the service requester's requests comprises the following steps:
when …, releasing the cache group;
when …, keeping the cache group in the construction state and extending the predetermined service time P to P'; when P' < 2F, determining the average time interval and adjusting the size of the cache blocks in the cache group to the product of P' and the bit rate; when P' ≥ 2F, keeping the size of the cache blocks in the cache group unchanged; wherein the predetermined service time P is determined by the size of the cache blocks in the current cache group and the bit rate, and F is the set service time;
when …, keeping the size of the cache blocks in the cache group unchanged.
Here, the condition P' < 2F reserves a certain amount of time for the switching of cache blocks.
To simplify the description of the subsequent formulas, symbols are defined as shown in Table 1.
Table 1
Here, the average time interval ARAI_i^n when the n-th service request for streaming media file i arrives is determined by formula (1):
ARAI_i^n = ( Σ_{j=1}^{n-1} I_i^j ) / (n − 1),  for n > 1;  ARAI_i^n = ∞,  for n = 1    (1)
In formula (1), n is the total number of requests for streaming media file i at the current time T, and I_i^j is the j-th time interval of streaming media file i; the time interval I_i^j is determined by formula (2):
I_i^j = ∞,  for j = 1;  I_i^j = T_i^{j+1} − T_i^j,  for 1 < j < n;  I_i^j = T − T_i^j,  for j = n    (2)
In formula (2), T_i^j denotes the arrival time of the j-th request for streaming media file i, and T_i^{j+1} denotes the arrival time of the (j+1)-th request for streaming media file i; when the total number n of requests for streaming media file i is 1, ARAI_i^n is defined as ∞.
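Read literally, the ∞ branch of formula (2) at j = 1 would make the mean in formula (1) infinite for every n, so the sketch below assumes that branch applies only to the n = 1 case and averages the n − 1 finite inter-arrival gaps; that interpretation, and all names, are assumptions made here for illustration:

```c
#include <assert.h>
#include <math.h>

/* Average request-arrival interval per formulas (1)-(2), assuming the
 * infinite j = 1 branch covers only the n = 1 case. `t` holds the
 * arrival times T_i^1..T_i^n in increasing order. */
static double arai(const double *t, int n) {
    if (n <= 1)
        return INFINITY;          /* formula (1), n = 1 */
    double sum = 0.0;
    for (int j = 1; j < n; j++)
        sum += t[j] - t[j - 1];   /* finite branch of formula (2) */
    return sum / (n - 1);         /* mean over the n - 1 intervals */
}
```

With three requests at times 0, 10 and 30, the two gaps are 10 and 20, giving an average interval of 15.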
Further, when the second server adopts cache groups, the method further comprises the following steps:
Step 201: when it is determined that there is enough memory, reading one cache group from the idle linked list to serve the service requester, and adding this cache group to the working linked list.
Here, a cache of the size set by the predetermined service time P needs to be allocated across two adjacent cache blocks in the cache group; the new request of the service requester received by the server will be served by this cache group, and the cache group enters the construction state. For example, suppose each cache block in the cache group is 2 Mb, so two adjacent cache blocks total 4 Mb; if the bit rate of the stream is 0.1 Mb/s, the predetermined service time is P = 40 s. The purpose of setting the predetermined service time is that, when the cache group enters the running state, switching between the cache blocks is required, and the switching moment is determined by the predetermined service time.
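The arithmetic of the example above can be checked with a one-line helper; the function name and the choice to measure sizes in megabits are assumptions made here for illustration:

```c
#include <assert.h>
#include <math.h>

/* Predetermined service time P = (data held in two adjacent cache
 * blocks) / (stream bit rate); block size in Mb, bit rate in Mb/s. */
static double predetermined_service_time(double block_size_mb,
                                         double bitrate_mbps) {
    return 2.0 * block_size_mb / bitrate_mbps;  /* two adjacent blocks */
}
```

With 2 Mb blocks and a 0.1 Mb/s stream this yields the 40 s of the example.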
Step 202: when it is determined that there is not enough memory, there are two implementations: in the first, the second server serves the service requester directly; in the second, the cache groups are first sorted according to their request rate over a recent period, the cache group with the fewest requests is determined and released, that cache group is used as the cache group serving the service requester, and it is added to the working linked list.
The subsequent processing flow of the second implementation is identical to that of step 201; compared with step 201, it merely adds the step of releasing a cache group.
Further, before the determining that the streaming media data requested by the service request is cached in any server in the server cluster, the cooperative caching method further comprises:
the management subsystem querying, according to the service request, the cache groups whose state in the service status table is the construction state, and judging whether a cache group in the construction state in any server in the server cluster caches the streaming media data requested by the service request;
when it is determined that no cache in the construction state in any server in the server cluster caches the streaming media data requested by the service request, querying the caches whose state in the service status table is the running state, and judging whether the streaming media data requested by the service request is cached in any server in the server cluster.
Further, when a cache group is in the construction state, the first server determines the average time interval and serves the service requester according to the average time interval, comprising:
when …, releasing the cache group, removing the cache group from the working linked list, adding it to the idle linked list, and providing the service directly by the server;
when …, keeping the cache group in the construction state and extending the predetermined service time P to P'; if P' < 2F, determining the average time interval; when P' ≥ 2F, the cache group enters the running state; wherein the predetermined service time P is determined by the current cache size and the bit rate, and F is the set service time;
when …, the cache group enters the running state;
when the cache group is in the running state, serving each service requester with the data cached in the sliding window, wherein the sliding window occupies at least one cache block; in addition, each service requester is traversed periodically to determine the cache block currently used by each service requester and its cache state.
Here, after determining the cache block currently used by each service requester and its cache state, the server also needs to send them to the management subsystem, so that the management subsystem can generate the service status table.
Further, when a cache in a server is in the running state, the method further comprises a pre-reading and empty-cache filling process. Specifically:
when it is determined that the last service requester in the sliding window has left the rear cache block, the sliding window switches to the next cache block that has been filled with data.
Here, the sliding window occupies at least one cache block, and generally occupies two or more; when the sliding window occupies one cache block, that cache block is the rear cache block; when the sliding window occupies two or more cache blocks, the rear cache block is the last cache block of the sliding window.
When it is determined that the departure time of the last service requester is greater than or equal to the set filling time R, data equal to the size of a single cache block is read from disk and filled into the rear cache block, so that the sliding window can switch to this cache block.
Here, the filling time R may be a random number, in particular a random number in the interval (0, P'/2); the filling time R may be preset, or may be set before the sliding window switches to the next cache block filled with data; those skilled in the art may vary the filling time accordingly according to the prior art, which will not be repeated here.
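A minimal sketch of drawing R from the open interval (0, P'/2) and of the refill test follows; the use of rand() and all function names are illustrative assumptions, not the patent's implementation:

```c
#include <assert.h>
#include <stdlib.h>

/* Draw a filling time R uniformly from the open interval (0, P'/2). */
static double draw_fill_time(double p_prime) {
    /* (rand() + 1) / (RAND_MAX + 2) lies strictly inside (0, 1). */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return u * p_prime / 2.0;
}

/* The rear cache block is refilled once the last requester's departure
 * time has reached the filling time R. */
static int should_fill(double departure_time, double r) {
    return departure_time >= r;
}
```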
Taking a cache group of four cache blocks as an example, the pre-reading and empty-cache filling process of the method embodiment is described below. Fig. 2 is a schematic diagram of the sliding window of the method embodiment running forward. In Fig. 2, the symbols T_0b, T_1b, T_2b and T_3b are the service start times of cache blocks (Buffers) 0, 1, 2 and 3 respectively; the symbols T_0e, T_1e, T_2e and T_3e are the service end times of cache blocks 0, 1, 2 and 3 respectively, i.e. the switching times of the cache blocks; the symbols T_0f, T_1f, T_2f and T_3f are the filling start times of cache blocks 0, 1, 2 and 3 respectively. As shown in Fig. 2, the cache group has four cache blocks 0, 1, 2 and 3, of which the sliding window occupies three, blocks 0, 1 and 2, all filled with data; the sliding window runs forward and the cache group is in the running state; cache block 3, not occupied by the sliding window, is an empty cache that has not yet started filling.
Fig. 3 is a schematic diagram of a cache block starting to fill with data in the method embodiment. As shown in Fig. 3, the cache group again has four cache blocks 0, 1, 2 and 3, of which the sliding window occupies blocks 1 and 2; the last service requester in the sliding window has left cache block 0, and the server generates a filling time R; when T_0f = R, cache block 0 starts filling with data.
Fig. 4 is a schematic diagram, on a timeline, of the cache-switching actions of the method embodiment using the randomized mode. As shown in Fig. 4, the service start time T_0b of cache block 0 corresponds to the service end time T_3e of cache block 3; the service end time T_0e of cache block 0 corresponds to the service start time T_1b of cache block 1; the service end time T_1e of cache block 1 corresponds to the service start time T_2b of cache block 2; and the service end time T_2e of cache block 2 corresponds to the service start time T_3b of cache block 3. That is, the service end time of each cache block corresponds to the service start time of the next cache block, so the sliding window switches between cache blocks at the service end time of the previous block, which is the service start time of the next.
The cache group in the embodiment of the present invention refers to three or more dynamically allocated memory spaces of identical size. Here, taking a cache group of four cache blocks as an example, when a streaming media service request is received, the server may call the malloc function to dynamically allocate address space for the four cache blocks. Part of the structure of the cache group is as follows:
As can be seen from the structure of the four cache blocks, it records the total size of the four cache blocks, the remaining data amount, the cache block currently in service, the start addresses of the cache blocks, and the address and size of the data to be sent next. The remaining data amount (LeftSize) of the cache block currently in service and the address of the data to be sent next (SendPtr) are updated after each send to the last service requester in the sliding window, i.e. LeftSize -= DATA_SIZE and SendPtr += DATA_SIZE, where DATA_SIZE is the amount of data sent each time. When LeftSize drops to 0, the cache block is switched: the block in use (BufferInUse) is updated to the newly used block and SendPtr is pointed at the start address of the block switched to, i.e. SendPtr = BufferPtr[BufferInUse].
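The structure listing itself is not reproduced in this text, so the following is a hedged reconstruction from the field names the paragraph mentions (LeftSize, SendPtr, BufferInUse, BufferPtr); the exact layout, the send_step helper and the DATA_SIZE value are assumptions:

```c
#include <assert.h>

#define NUM_BLOCKS 4
#define DATA_SIZE  1024u   /* bytes per send; an illustrative value */

/* Reconstructed cache-group structure; the patent's actual layout may
 * differ in field order, types and additional members. */
typedef struct {
    unsigned TotalSize;               /* total size of the four blocks       */
    unsigned LeftSize;                /* data remaining in the serving block */
    int      BufferInUse;             /* index of the block in service       */
    char    *BufferPtr[NUM_BLOCKS];   /* start address of each block         */
    char    *SendPtr;                 /* address of the data sent next       */
} CacheGroup;

/* One send to the last requester in the sliding window: LeftSize and
 * SendPtr advance, and when LeftSize reaches 0 the group switches to
 * the next cache block, pointing SendPtr at its start address. */
static void send_step(CacheGroup *g, unsigned block_size) {
    g->LeftSize -= DATA_SIZE;
    g->SendPtr  += DATA_SIZE;
    if (g->LeftSize == 0) {
        g->BufferInUse = (g->BufferInUse + 1) % NUM_BLOCKS;
        g->SendPtr  = g->BufferPtr[g->BufferInUse];  /* SendPtr = BufferPtr[BufferInUse] */
        g->LeftSize = block_size;
    }
}
```

The wrap-around `% NUM_BLOCKS` corresponds to the circular arrangement of cache blocks the text mentions as a possible variant.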
In this method embodiment a cache group of four cache blocks is taken as an example; those skilled in the art may, as needed and following the manner of the embodiment of the present invention, set the cache group to other multi-cache configurations such as three or five cache blocks, and may even adjust some cache blocks into a circular structure for managing the cache group.
Further, when the cache group is in the running state, whether the cache group enters the idle state is determined according to the end time of the cache group; the end time E_i^j of the cache group is determined by formula (3):
E_i^j = S_i^j + L_i,  for j = 1;  E_i^j = T_i^latest + D_i^j,  for j > 1    (3)
In formula (3), E_i^j is the end time of the j-th cache block allocated for streaming media file i, i.e. the time at which the cache block enters the idle state; S_i^j is the start time of the j-th cache block, i.e. the arrival time of the last request served by the cache block; L_i is the service duration of streaming media file i, determined by the cache block size and the bit rate W of the stream; T_i^latest is the start time of the most newly allocated cache block of streaming media file i; D_i^j is the difference between the start times of the j-th cache block and the (j−1)-th cache block of streaming media file i; the j-th and (j−1)-th cache blocks of streaming media file i are adjacent cache blocks caching different content of streaming media file i, but are not necessarily contiguous in content. D_i^j is determined by formula (4):
D_i^j = L_i,  for j = 1;  D_i^j = S_i^j − S_i^{j−1},  for j > 1    (4)
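Formulas (3) and (4) can be sketched as a single helper; the function name and flat parameter list are assumptions made here for illustration:

```c
#include <assert.h>

/* End time of the j-th cache block per formulas (3)-(4): the start-time
 * gap D is the file duration L for j = 1 and S^j - S^{j-1} otherwise;
 * the end time is S^1 + L for the first block and T_latest + D^j for
 * later ones. */
static double block_end_time(int j, double s_j, double s_prev,
                             double t_latest, double duration) {
    double d = (j == 1) ? duration : s_j - s_prev;    /* formula (4) */
    return (j == 1) ? s_j + duration : t_latest + d;  /* formula (3) */
}
```

For example, a first block starting at t = 10 with a 40 s duration ends at t = 50; a second block starting at t = 60 (gap 50 from the first) with the latest allocation at t = 70 ends at t = 120.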
After the end time of the cache group is determined, the cache group starts to enter the idle state, and the server reclaims the cache group, which specifically comprises the following steps:
judging whether the file cached by the cache group is unique;
if so, releasing the cache group after all service requests have ended, removing the cache group from the working linked list and adding it to the idle linked list;
otherwise, keeping the content until the end time of the cache group and then releasing the cache group, removing the cache group from the working linked list and adding it to the idle linked list.
Through the above process, the server can switch a cache group from the working linked list to the idle linked list.
Fig. 5 is a schematic diagram of the composition of the streaming media service system of the embodiment of the present invention. As shown in Fig. 5, the streaming media service system comprises a management subsystem and a server cluster, and the server cluster comprises one or more servers, wherein:
the management subsystem is configured to obtain service information and cache information of the one or more servers in the server cluster, and to generate a service status table according to the service information and the cache information; and, when a service request is received, to query the service status table according to the streaming media data requested by the service request; and, when it is determined that a server caching the requested streaming media data exists in the server cluster, to redirect to a first server among the servers caching the requested data, so that the first server provides the streaming media data service to the service requester;
the first server is configured to determine whether the requested streaming media data is currently being sent; when it is not being sent, to start sending the requested streaming media data to the service requester; and when the requested streaming media data is being sent and the amount of streaming media data already sent exceeds a set threshold, to start sending the downloaded streaming media data to the service requester.
Here, the service information at least comprises: the load state of each of the one or more servers in the server cluster and the ID of the program it currently serves;
the cache information at least comprises: for each server in the server cluster, the start position of the cache group of the currently served program, the length of the cached data, the cache object, the cache state, the utilization of the cache space and the end time of the cache group.
Preferably, the management subsystem is further configured to, when it is determined that the streaming media data requested by the service request is cached in none of the cache groups of the servers in the server cluster, determine, according to the service status table, a second server whose current cache load is below a set threshold, and redirect to the second server, so that the second server provides the streaming media data service to the service requester;
the second server is configured to query the requested streaming media data from the source streaming media library and to download the queried streaming media data into the local cache; and, when the streaming media data cached in the local cache reaches a set threshold, to start sending the requested streaming media data to the service requester.
Fig. 6 is a schematic diagram of the composition of the streaming media management subsystem of the embodiment of the present invention, applied to a streaming media service system, wherein the streaming media service system comprises a management subsystem and a server cluster, and the server cluster comprises one or more servers. As shown in Fig. 6, the management subsystem comprises a generation unit 61, a query unit 62, a first determining unit 63 and a second determining unit 64, wherein:
the generation unit 61 is configured to obtain service information and cache information of the one or more servers in the server cluster, and to generate a service status table according to the service information and the cache information;
the query unit 62 is configured to, when a service request is received, query the service status table according to the streaming media data requested by the service request;
the first determining unit 63 is configured to, when it is determined that a server caching the requested streaming media data exists in the server cluster, redirect to a first server among the servers caching the requested data, so that the first server provides the streaming media data service to the service requester;
the second determining unit 64 is configured to, when it is determined that the requested streaming media data is cached in none of the cache groups of the servers in the server cluster, determine, according to the service status table, a second server whose current cache load is below a set threshold, and redirect to the second server, so that the second server provides the streaming media data service to the service requester.
Fig. 7 is a schematic diagram of the composition of the streaming media server of the embodiment of the present invention, applied to a streaming media service system, wherein the streaming media service system comprises a management subsystem and a server cluster, and the server cluster comprises one or more servers. As shown in Fig. 7, the server comprises a determination module 71, a download module 72, a first sending module 73, a second sending module 74 and a third sending module 75, wherein:
the determination module 71 is configured to, after a service request is received, determine whether the requested streaming media data has been downloaded into the local cache;
the download module 72 is configured to, when the requested streaming media data has not been downloaded into the local cache, query the requested streaming media data from the source streaming media library and download the queried streaming media data into the local cache;
the first sending module 73 is configured to, when the streaming media data cached in the local cache reaches a set threshold, start sending the requested streaming media data to the service requester;
the second sending module 74 is configured to, when the requested streaming media data has been downloaded into the local cache, determine whether the requested streaming media data is currently being sent, and, when it is not being sent, start sending the requested streaming media data to the service requester;
the third sending module 75 is configured to, when the requested streaming media data is being sent and the amount of streaming media data already sent exceeds a set threshold, start sending the downloaded streaming media data to the service requester.
Further, the server also comprises a division module, configured to evenly divide the local cache into one or more cache groups, taking the cache group as the unit for storing streaming media data, wherein each cache group contains three or more cache blocks; and to link the cache groups into a working linked list and an idle linked list, wherein the cache state of each cache group in the working linked list comprises a construction state, a running state and an idle state, wherein:
the construction state is the state in which the size of the cache blocks in the cache group needs to be adjusted, the server determining the size of the cache blocks in the cache group according to the average time interval of the service requester's requests;
the running state is the state in which, after the size of the cache blocks in the cache group has been adjusted, the cache group serves externally in sliding-window form;
the idle state is the state in which the sliding window stops sliding forward.
Those skilled in the art will understand that the functions implemented by the units or modules of the management subsystem and the server of the embodiment of the present invention can be understood with reference to the related description of the foregoing cooperative caching method. Those skilled in the art will also understand that the units or modules of the management subsystem and the server of the embodiment of the present invention may be implemented by the processors of the management subsystem and the server, or by specific logic circuits.
The above is only the preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.

Claims (17)

1. A cooperative caching method, characterized in that it is applied to a streaming media service system, the streaming media service system comprising a management subsystem and a server cluster, the server cluster comprising one or more servers; the method comprises:
the management subsystem obtaining service information and cache information of the one or more servers in the server cluster, and generating a service status table according to the service information and the cache information;
when the management subsystem receives a service request, querying the service status table according to the streaming media data requested by the service request, and, when it is determined that a server caching the requested streaming media data exists in the server cluster, redirecting to a first server among the servers caching the requested data, so that the first server provides the streaming media data service to the service requester.
2. The method according to claim 1, characterized in that the first server providing the streaming media data service to the service requester comprises:
determining whether the requested streaming media data is currently being sent; when it is not being sent, starting to send the requested streaming media data to the service requester; when the requested streaming media data is being sent and the amount of streaming media data already sent exceeds a set threshold, starting to send the downloaded streaming media data to the service requester.
3. The method according to claim 1, characterized in that the method further comprises:
the server evenly dividing the local cache into one or more cache groups, taking the cache group as the unit for storing streaming media data, wherein each cache group contains three or more cache blocks;
linking the cache groups into a working linked list and an idle linked list, wherein the cache state of each cache group in the working linked list comprises a construction state, a running state and an idle state, wherein:
the construction state is the state in which the size of the cache blocks in the cache group needs to be adjusted, the server determining the size of the cache blocks in the cache group according to the average time interval of the service requester's requests;
the running state is the state in which, after the size of the cache blocks in the cache group has been adjusted, the cache group serves externally in sliding-window form;
the idle state is the state in which the sliding window stops sliding forward.
4. The method according to claim 3, characterized in that the method further comprises:
when the management subsystem determines that the streaming media data requested by the service request is cached in none of the cache groups of the servers in the server cluster, determining, according to the service status table, a second server whose current cache load is below a set threshold, and redirecting to the second server, so that the second server provides the streaming media data service to the service requester.
5. The method according to claim 4, characterized in that the second server providing the streaming media data service to the service requester comprises:
querying the requested streaming media data from the source streaming media library and downloading the queried streaming media data into the local cache; and, when the streaming media data cached in the local cache reaches a set threshold, starting to send the requested streaming media data to the service requester.
6. The method according to claim 4, characterized in that the method further comprises:
when the second server determines that there is enough memory, reading one cache group from the idle linked list to serve the service requester, and adding this cache group to the working linked list;
when it is determined that there is not enough memory, sorting the cache groups according to their request rate over a recent period, determining and releasing the cache group with the fewest requests, using that cache group as the cache group serving the service requester, and adding it to the working linked list.
7. The method according to any one of claims 1 to 6, characterized in that the service information at least comprises: the load state of each of the one or more servers in the server cluster and the identity (ID) of the program it currently serves;
the cache information at least comprises: for each server in the server cluster, the start position of the cache group of the currently served program, the length of the cached data, the cache object, the cache state, the utilization of the cache space and the end time of the cache group.
8. A cooperative caching method, characterized in that it is applied to a streaming media service system, the streaming media service system comprising a management subsystem and a server cluster, the server cluster comprising one or more servers; the method comprises:
after a service request is received, determining whether the requested streaming media data has been downloaded into the local cache;
when it has not been downloaded into the local cache, querying the requested streaming media data from the source streaming media library and downloading the queried streaming media data into the local cache; and, when the streaming media data cached in the local cache reaches a set threshold, starting to send the requested streaming media data to the service requester;
when it has been downloaded into the local cache, determining whether the requested streaming media data is currently being sent; when it is not being sent, starting to send the requested streaming media data to the service requester; when the requested streaming media data is being sent and the amount of streaming media data already sent exceeds a set threshold, starting to send the downloaded streaming media data to the service requester.
9. The method according to claim 8, characterized in that the method comprises:
evenly dividing the local cache into one or more cache groups, taking the cache group as the unit for storing streaming media data, wherein each cache group contains three or more cache blocks;
linking the cache groups into a working linked list and an idle linked list, wherein the cache state of each cache group in the working linked list comprises a construction state, a running state and an idle state, wherein:
the construction state is the state in which the size of the cache blocks in the cache group needs to be adjusted, the server determining the size of the cache blocks in the cache group according to the average time interval of the service requester's requests;
the running state is the state in which, after the size of the cache blocks in the cache group has been adjusted, the cache group serves externally in sliding-window form;
the idle state is the state in which the sliding window stops sliding forward.
10. method according to claim 9, is characterized in that, described server determines the size of cache blocks in described buffer memory group according to the average time interval of described service requester request, comprising:
Average time interval when the n-th service requester request of files in stream media i arrives determined by following formula:
ARAI i n = &Sigma; j = 1 n - 1 I i j n - 1 , n > 1 &infin; , n = 1 ;
Wherein, the request total amount of files in stream media i when n is current time T, for a jth time interval of files in stream media i; The described time interval determined by following formula:
I i j = &infin; , j = 1 T i j + 1 - T i j , 1 < j < n T - T i j , j = n ;
Wherein, T i jrepresent a jth request of files in stream media i the time of advent, T i j+1represent jth+1 request of files in stream media i the time of advent; As the total quantity n=1 that files in stream media i asks, be defined as ∞;
When $\mathrm{ARAI}_i^n$ satisfies […], the cache group is released;
When $\mathrm{ARAI}_i^n$ satisfies […], the cache group remains in the construction state and the predetermined serviceable time P is extended to P'; when P' < 2F, the average time interval […] is determined and the size of the cache blocks in the cache group is adjusted to the product of P' and the bitrate; when P' ≥ 2F, the size of the cache blocks in the cache group is kept unchanged; wherein the predetermined serviceable time P is determined by the size of the cache blocks in the current cache group and by the bitrate, and F is a set service time;
When $\mathrm{ARAI}_i^n$ satisfies […], the size of the cache blocks in the cache group is kept unchanged.
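The average-interval formulas in claim 10 can be exercised with a short computation (a sketch; function names are illustrative, and per the n = 1 clause a single request yields an infinite average):

```python
import math

def intervals(arrivals, now):
    """Time intervals I_i^j for one streaming media file.
    arrivals: sorted arrival times T_i^1..T_i^n; now: current time T."""
    n = len(arrivals)
    if n <= 1:
        return [math.inf]            # a single request: interval defined as infinity
    gaps = [arrivals[j + 1] - arrivals[j] for j in range(n - 1)]
    gaps.append(now - arrivals[-1])  # the still-open interval for j = n
    return gaps

def arai(arrivals, now):
    """Average request arrival interval ARAI_i^n: mean of the first n-1 intervals."""
    n = len(arrivals)
    if n <= 1:
        return math.inf
    return sum(intervals(arrivals, now)[: n - 1]) / (n - 1)
```

For requests arriving at times 0, 4 and 10 with current time 12, the intervals are [4, 6, 2] and the average over the first two is 5.0.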
11. The method according to claim 9, characterized in that the method further comprises:
When a cache group is in the construction state, the server serves the service requesters according to the magnitude of the average time interval, comprising:
When $\mathrm{ARAI}_i^n$ satisfies […], the cache group is released, taken out of the working linked list and joined to the idle linked list, and service is provided directly by the server;
When $\mathrm{ARAI}_i^n$ satisfies […], the cache group remains in the construction state and the predetermined serviceable time P is extended to P'; if P' < 2F, the average time interval […] is determined; when P' ≥ 2F, the cache group enters the running state; wherein the predetermined serviceable time P is determined by the current cache size and the bitrate, and F is a set service time;
When […], the cache group enters the running state;
When the cache group is in the running state, the data cached within the sliding window is used to serve each service requester, wherein the sliding window occupies at least one cache block, and the service requesters are traversed periodically to determine the cache block currently in use by each service requester and the cache state.
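The periodic traversal in the running state can be illustrated as follows (a sketch under the assumption that each requester is tracked by its playback offset; all names are illustrative):

```python
def sweep(requesters, block_size):
    """Periodic traversal of the service requesters: map each requester's
    current playback offset to the index of the cache block it is reading,
    so the server knows which block each requester currently uses."""
    return {name: offset // block_size for name, offset in requesters.items()}

def window_span(block_indices):
    """Range of cache blocks the sliding window must cover to serve all
    active requesters (the window occupies at least one block)."""
    if not block_indices:
        return (0, 0)
    return (min(block_indices), max(block_indices))
```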
12. The method according to claim 9, characterized in that the method further comprises pre-reading and filling data, comprising:
When it is determined that the last service requester in the sliding window has left the rear cache block, the sliding window switches to the next cache block that has been filled with data, wherein the rear cache block is the last cache block of the sliding window;
When it is determined that the time until the last service requester leaves is greater than or equal to a set filling time R, data equal in size to a single cache block is read from disk and filled into the rear cache block, so that the sliding window can switch to that cache block.
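The pre-read decision above can be sketched as follows (an illustration, not the patent's code; the backing store is assumed to expose a `read(offset, size)` method):

```python
def should_prefill(time_until_leave, fill_time_r):
    """Trigger the disk read as soon as the time before the last requester
    leaves the rear block is at least the set filling time R."""
    return time_until_leave >= fill_time_r

def prefill_rear_block(disk, file_offset, block_size):
    """Read exactly one cache block's worth of data from the backing store,
    so the sliding window can switch to this block next."""
    return disk.read(file_offset, block_size)
```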
13. The method according to any one of claims 9 to 12, characterized in that the method further comprises:
When the cache group is in the idle state, judging whether the file cached by the cache group is unique;
If so, releasing the cache group after all service requests have ended, taking the cache group out of the working linked list and joining it to the idle linked list;
Otherwise, releasing the cache group after keeping its content until the end time of the cache group, taking the cache group out of the working linked list and joining it to the idle linked list.
14. The method according to any one of claims 9 to 12, characterized in that the method further comprises:
The management subsystem queries the service status table, according to the service request, for cache groups whose state is the construction state, and judges whether a construction-state cache group of any server in the server cluster has cached the streaming media data requested by the service request;
When it is determined that no construction-state cache group of any server in the server cluster has cached the streaming media data requested by the service request, cache groups whose state is the running state are queried in the service status table, to judge whether any server in the server cluster has cached the streaming media data requested by the service request.
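The two-pass lookup of claim 14 — construction-state groups first, then running-state groups — can be sketched over a simplified service status table (the row layout and names are assumptions):

```python
def find_server(table, file_id):
    """Two-pass lookup over the service status table: prefer a server whose
    construction-state cache group holds the file, then fall back to
    running-state groups. table: iterable of (server, state, cached_file)."""
    for wanted in ("constructing", "running"):
        for server, state, cached in table:
            if state == wanted and cached == file_id:
                return server
    return None  # no server caches the file; the caller must pick by load
```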
15. A streaming media management subsystem, characterized in that it is applied to a streaming media service system, the streaming media service system comprising the management subsystem and a server cluster, the server cluster comprising one or more servers; the management subsystem comprises a generation unit, a query unit, a first determination unit and a second determination unit, wherein:
the generation unit is configured to obtain service information and cache information of the one or more servers in the server cluster, and to generate a service status table according to the service information and the cache information;
the query unit is configured to, when a service request is received, query the service status table according to the streaming media data requested by the service request;
the first determination unit is configured to, when it is determined that a server caching the requested streaming media data exists in the server cluster, redirect to a first server among the servers caching the requested data, so that the first server provides the streaming media data service to the service requester;
the second determination unit is configured to, when it is determined that the requested streaming media data is not cached in the cache groups of any server in the server cluster, determine, according to the service status table, a second server whose current cache load is below a set threshold, and redirect to the second server, so that the second server provides the streaming media data service to the service requester.
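The redirection policy of the two determination units can be condensed into one selection function (a sketch; the status-table layout and the threshold are illustrative assumptions):

```python
def redirect(status, file_id, load_threshold):
    """status: {server: {"files": set of cached files, "load": cache load}}.
    Prefer any server already caching the requested file; otherwise pick the
    least-loaded server whose cache load is below the set threshold."""
    cached = [s for s, st in status.items() if file_id in st["files"]]
    if cached:
        return cached[0]                       # first determination unit
    candidates = [s for s, st in status.items() if st["load"] < load_threshold]
    return min(candidates, key=lambda s: status[s]["load"], default=None)
```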
16. A streaming media server, characterized in that it is applied to a streaming media service system, the streaming media service system comprising a management subsystem and a server cluster, the server cluster comprising one or more servers; wherein the server comprises a determination module, a download module, a first sending module, a second sending module and a third sending module, wherein:
the determination module is configured to, after a service request is received, determine whether the requested streaming media data has been downloaded into the local cache;
the download module is configured to, when the requested streaming media data has not been downloaded into the local cache, query the requested streaming media data from the source streaming media library and download the queried streaming media data into the local cache;
the first sending module is configured to, when the streaming media data cached in the local cache reaches a set threshold, start sending the requested streaming media data to the service requester;
the second sending module is configured to, when the requested streaming media data has been downloaded into the local cache, determine whether the requested streaming media data is currently being sent, and when it is not being sent, start sending the requested streaming media data to the service requester;
the third sending module is configured to, when the requested streaming media data is being sent and the amount of data already sent exceeds a set threshold, start sending the downloaded streaming media data to the service requester.
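The division of labour among the three sending modules can be condensed into one decision function (a sketch; the flag and parameter names are assumptions, not the patent's identifiers):

```python
def send_action(downloaded, being_sent, buffered, sent, threshold):
    """Decide which sending path the server takes for a request.
    downloaded: requested data fully in the local cache;
    being_sent: another session is already streaming the same data;
    buffered / sent: bytes cached locally / bytes already sent elsewhere."""
    if not downloaded:
        # first sending module: wait until the local cache reaches the threshold
        return "send" if buffered >= threshold else "wait"
    if not being_sent:
        # second sending module: downloaded and idle, send immediately
        return "send"
    # third sending module: join once enough data has already gone out
    return "send" if sent > threshold else "wait"
```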
17. The server according to claim 16, characterized in that the server further comprises a division module;
the division module is configured to evenly divide the local cache into one or more cache groups, with the cache group serving as the unit for storing streaming media data, wherein each cache group includes three or more cache blocks; and to link the cache groups into a working linked list and an idle linked list, wherein the cache state of each cache group in the working linked list comprises a construction state, a running state and an idle state; wherein,
the construction state is the state in which the size of the cache blocks in a cache group needs to be adjusted, wherein the server determines the size of the cache blocks in the cache group according to the average time interval of the service requesters' requests;
the running state is the state in which, after the size of the cache blocks in the cache group has been adjusted, the cache group provides service externally in the form of a sliding window;
the idle state is the state in which the sliding window stops sliding forward.
CN201310423243.1A 2013-09-16 2013-09-16 A cooperation buffering method, streaming media managing subsystem and server Pending CN104469539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310423243.1A CN104469539A (en) 2013-09-16 2013-09-16 A cooperation buffering method, streaming media managing subsystem and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310423243.1A CN104469539A (en) 2013-09-16 2013-09-16 A cooperation buffering method, streaming media managing subsystem and server

Publications (1)

Publication Number Publication Date
CN104469539A true CN104469539A (en) 2015-03-25

Family

ID=52914785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310423243.1A Pending CN104469539A (en) 2013-09-16 2013-09-16 A cooperation buffering method, streaming media managing subsystem and server

Country Status (1)

Country Link
CN (1) CN104469539A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1874489A (en) * 2006-06-28 2006-12-06 华中科技大学 Network organization method of overlapped multichannels in video on demand system of peer-to-peer network
US20100083328A1 (en) * 2008-09-24 2010-04-01 Alcatel-Lucent Client configuration and management for fast channel change of multimedia services
CN102497389A (en) * 2011-11-11 2012-06-13 中国科学技术大学 Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Kan, LI Jun, YANG Jian: "A Request Scheduling Strategy for Shared-Storage Video Server Clusters", Journal of Chinese Computer Systems (Mini-Micro Systems) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114095759A (en) * 2020-08-03 2022-02-25 海能达通信股份有限公司 Streaming media redirection method and related device
CN114095759B (en) * 2020-08-03 2024-01-12 海能达通信股份有限公司 Stream media redirection method and related device
CN114071840A (en) * 2022-01-18 2022-02-18 南京秦之邦科技有限公司 Remote control system and method for urban lamps

Similar Documents

Publication Publication Date Title
US10425474B2 (en) Selective access of multi-rate data from a server and/or peer
CN104284201A (en) Video content processing method and device
US20140165119A1 (en) Offline download method, multimedia file download method and system thereof
US7254617B2 (en) Distributed cache between servers of a network
CN101277211A (en) Method and apparatus for buffering data
CN103581245A (en) Content delivery method and system of content delivery network
CN101136911A (en) Method to download files using P2P technique and P2P download system
CN102882939A (en) Load balancing method, load balancing equipment and extensive domain acceleration access system
CN107277093A (en) Content distributing network and its load-balancing method
CN104967873A (en) Streaming live scheduling method, system and scheduling server
CN105592163B (en) A kind of communication means and system
CN205430501U (en) Mobile terminal web advertisement video and positive video seamless handover device
CN102244644A (en) Method and device for releasing multimedia file
CN101188736A (en) Stream media ordering system and method with STB as the server
CN111597259B (en) Data storage system, method, device, electronic equipment and storage medium
KR101066872B1 (en) System and method for content delivery using cache server, and cache server thereof
CN103825916A (en) Resource downloading method and resource downloading system
CN105227665B (en) A kind of caching replacement method for cache node
CN102497389A (en) Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV
CN104469539A (en) A cooperation buffering method, streaming media managing subsystem and server
CN102223288A (en) Method, system and device for scheduling resources
CN110784534B (en) Data service method, device and system and electronic equipment
CN101958934B (en) Electronic program guide incremental content synchronization method, device and system
CN100596191C (en) Stream media ordering system and method with TV set as the server
CN103970593A (en) Multi-process interaction method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325

WD01 Invention patent application deemed withdrawn after publication