CN101026744A - Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method - Google Patents

Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method

Info

Publication number
CN101026744A
Authority
CN
China
Prior art keywords
media
streaming
station
slice
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200710096229XA
Other languages
Chinese (zh)
Other versions
CN100579208C (en)
Inventor
谢主中
陈俊楷
李继优
喻德
陶宏
彭宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ud Network Co ltd
Ut Starcom China Co ltd
Original Assignee
UTStarcom Telecom Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by UTStarcom Telecom Co Ltd filed Critical UTStarcom Telecom Co Ltd
Priority to CN200710096229A priority Critical patent/CN100579208C/en
Publication of CN101026744A publication Critical patent/CN101026744A/en
Priority to PCT/CN2008/000466 priority patent/WO2008119235A1/en
Application granted granted Critical
Publication of CN100579208C publication Critical patent/CN100579208C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention realizes sharing of media content slices within a media station and between media stations, based on memory buffering of slices, on storing streaming media content on disk according to heat statistics, and on intelligent distribution and user configuration driven by heat statistics. It provides streaming media services to as many end users as possible while reducing network traffic and the IO frequency of disk access as far as possible. The invention also discloses three heat-statistics-based memory buffering methods for the distributed streaming media distribution system, as well as scheduling and distribution methods that use the media content slice as the memory buffer unit within a media station and between media stations. The system and methods increase the memory-buffer hit rate for media content and reduce the disk IO frequency, thereby prolonging the service life of disks and improving the reliability and stability of the system.

Description

Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method
Technical Field
The invention relates to the field of multimedia network communication, and in particular to a distributed high-bit-rate streaming media transmission system.
Background
With the development of multimedia network communication technology, high-bit-rate multimedia streaming, especially high-bit-rate video streaming, has evolved from serving thousands of concurrent users to serving millions. For example, high-bit-rate streaming services represented by IPTV have reached the stage of millions of users, and conventional distribution by powerful centralized machines or clustered machines can no longer meet such demands. To this end, Chinese patent publication No. CN1713721 proposed a "distributed multimedia streaming system and method and apparatus for media content distribution".
However, in the prior art, whether a clustered streaming server or a distributed streaming media distribution system is used, the limited memory of the basic streaming service unit leads to a low hit rate for cached streaming media files or slices of streaming media files (a streaming media program such as a film, a television program, or a piece of music is divided into smaller segments called "slices"), a high disk IO access frequency, and consequently a high disk failure rate and a high system maintenance cost.
Disclosure of Invention
To address these defects of existing streaming media systems during streaming service, the invention provides a method for sharing and distributing streaming media slices in a distributed streaming media system, based on intelligent cache (Cache) management and scheduling driven by heat statistics.
The distributed streaming media distribution system of the invention is composed of a plurality of areas, one of which serves as the system headquarters. Each area comprises a home media station and at least one edge media station. The home media station stores streaming media content and network-copies and distributes the stored streaming media content to the edge media stations according to heat statistics. Each edge media station communicates with the home media station through the network and, based on user requests and heat statistics, stores slices of the hottest streaming media content in its memory buffer and on disk so as to provide streaming services.
The area serving as the system headquarters additionally includes: a media location register for recording the location information of the media content slices of the streaming media distribution system; a media asset manager for managing the media assets of the streaming media distribution system; and a content manager responsible for the content management of the streaming media distribution system.
The edge media station comprises: a media director for receiving streaming media service requests from the outside and determining the position, within the edge media station, of the slices of the requested streaming media service; and at least one media engine for memory-buffering or disk-storing the slices of the streaming media service, providing streaming service with the slice as the service unit, switching the streaming service under the control of the media director, and distributing and sharing slices with the home media station or other edge media stations.
The media director comprises: a streaming service director for receiving streaming media service requests from the outside and for controlling and switching the streaming service of the media engines; a storage manager for managing the positions and information of the media content slices stored on the disks of all media engines within one media station; an intelligent cache manager for managing the positions and information of the media content slices cached in all memories within the media station; and a DHT node manager, based on a distributed hash table (DHT), for publishing the slice information of the media content buffered in the media station and receiving the slice information published by other media stations.
The media engine comprises: a streaming service unit for providing streaming service with the media content slice as the unit and for performing streaming service and switching control in cooperation with the streaming service director; a memory cache management unit for local content cache management within the media engine and for reporting and updating the cached media content slice information to the intelligent cache manager and the DHT node manager; and a disk storage unit for storing the media content slices and forming clustered storage within the media station under the management of the storage manager.
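For orientation only, the sketch below models this component hierarchy as plain data structures. It is not taken from the patent; the class and field names (MediaStation, MediaDirector, MediaEngine and their attributes) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MediaEngine:
    """One media engine: streaming service unit + memory cache management unit + disk storage unit."""
    engine_id: str
    memory_cached_slices: List[str] = field(default_factory=list)  # held by the memory cache unit
    disk_stored_slices: List[str] = field(default_factory=list)    # held by the disk storage unit

@dataclass
class MediaDirector:
    """One media director: streaming service director, storage manager,
    intelligent cache manager, and DHT node manager modeled as simple indexes."""
    disk_slice_index: Dict[str, str] = field(default_factory=dict)   # storage manager: slice -> engine_id
    cache_slice_index: Dict[str, str] = field(default_factory=dict)  # intelligent cache manager: slice -> engine_id
    dht_neighbors: List[str] = field(default_factory=list)           # DHT node manager: adjacent media stations

@dataclass
class MediaStation:
    station_id: str
    directors: List[MediaDirector]   # a master-slave dual-backup pair
    engines: List[MediaEngine]       # one or more engines with identical functions

station = MediaStation("edge_a1",
                       directors=[MediaDirector(), MediaDirector()],
                       engines=[MediaEngine("ME-1"), MediaEngine("ME-2")])
print(station.station_id, len(station.directors), len(station.engines))
```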
Memory buffering of media content slices is based on heat statistics: media content slices with relatively high heat are cached in order to improve the cache hit rate.
The media content slices buffered in the memory of each media engine in the media station realize service switching under the control of the media director so as to achieve slice service sharing.
Information about the memory-buffered media content slices is shared between media stations through DHT management, and the memory-buffered media content slices themselves are shared between media stations by copying.
Wherein each of the home media station and the edge media station is an independent cluster streaming server.
The media director is a master-slave dual-backup pair of media directors; there are multiple media engines, all with the same function, and service load balancing among them is achieved under the control of the media director.
In the memory buffering method for streaming media in the distributed streaming media distribution system, the system comprises a media station, and the streaming media is memory-buffered within the media station based on heat statistics. The following three methods are included.
First, the memory buffer unit is a slice of the streaming media content, and the heat statistics are based on a mechanism of preset heat and long-window heat statistics. The preset heat and long-window heat of all slices are compared and ranked, and the slices whose long-window heat ranking is above a preset threshold are retained in the memory buffer.
Secondly, the memory buffer unit is a slice of the streaming media content, and the heat statistics are based on the heat-change frequency. The heat-change frequencies of all slices within a certain time are compared and ranked, and the slices that exceed a preset heat-change-frequency threshold and are ranked highest are retained in the memory buffer.
Thirdly, the memory buffer unit is the data element, using an aging policy that evicts the data elements least accessed within a certain time period, combined with an intra-slice association mechanism; a data element is one of the several disk operation units into which a slice is divided. Each data element is evaluated by ranking its recent access heat and by its contextual position within the slice: data elements with a high heat ranking, and data elements predicted to be used soon because of the intra-slice association, are retained in the memory buffer.
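A minimal sketch of the ranking used by the first two methods follows. It is illustrative only: the Slice fields, the thresholds, and the keep_top_n parameter are assumptions, not values or identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    slice_id: str
    preset_heat: float        # heat assigned when the program was uploaded
    long_window_heat: float   # heat accumulated over a long statistics window
    heat_change_freq: float   # recent change frequency of the heat value

def retain_by_long_window_heat(slices, keep_top_n):
    """Method 1: rank by preset heat plus long-window heat and keep the top slices buffered."""
    ranked = sorted(slices, key=lambda s: s.preset_heat + s.long_window_heat, reverse=True)
    return {s.slice_id for s in ranked[:keep_top_n]}

def retain_by_heat_frequency(slices, freq_threshold, keep_top_n):
    """Method 2: among slices whose heat-change frequency exceeds a threshold, keep the highest ranked."""
    hot = [s for s in slices if s.heat_change_freq > freq_threshold]
    ranked = sorted(hot, key=lambda s: s.heat_change_freq, reverse=True)
    return {s.slice_id for s in ranked[:keep_top_n]}

# Example: decide which slice ids stay in the memory buffer.
catalog = [
    Slice("movieA_001", preset_heat=0.9, long_window_heat=0.7, heat_change_freq=0.1),
    Slice("movieB_001", preset_heat=0.2, long_window_heat=0.3, heat_change_freq=0.8),
    Slice("movieC_001", preset_heat=0.1, long_window_heat=0.1, heat_change_freq=0.05),
]
print(retain_by_long_window_heat(catalog, keep_top_n=2))                     # {'movieA_001', 'movieB_001'}
print(retain_by_heat_frequency(catalog, freq_threshold=0.5, keep_top_n=2))   # {'movieB_001'}
```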
In the method for scheduling and managing streaming media in a distributed streaming media distribution system according to the present invention, the distributed streaming media distribution system includes a media station, and the media station includes a pair of media directors and at least one media engine. The method includes: (1) a receiving step in which the media director receives from a user a streaming media request for streaming media content having a plurality of slices; (2) a querying step of querying whether a slice of the streaming media content is present in a media engine of the media station; (3) a judging step of judging, when the slice is found in a media engine of the media station, whether that media engine has streaming media service capability; (4) a selecting step in which, if the media engine is judged to have streaming service capability, the media director selects it as the streaming service engine; and (5) an executing step in which the selected media engine performs the streaming service.
In the executing step, when the streaming service of the slice is close to its end, the selected media engine notifies the media director, which then executes the querying step, the judging step, the selecting step, and the executing step for the next slice.
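The following is a minimal sketch of this in-station querying/judging/selecting/executing loop, under assumed interfaces; the MediaEngine and MediaDirector classes, the capacity limit, and the fallback of copying a missing slice into a free engine are illustrative assumptions rather than the patented implementation.

```python
class MediaEngine:
    """Stand-in for a media engine with a memory buffer and a limited number of streams."""
    def __init__(self, name, cached_slices, max_streams=2):
        self.name = name
        self.cached = set(cached_slices)
        self.active = 0
        self.max_streams = max_streams

    def has_capacity(self):
        return self.active < self.max_streams

    def stream(self, slice_id):
        self.active += 1
        print(f"{self.name} streams {slice_id}")

class MediaDirector:
    """Walks the querying, judging, selecting, and executing steps for each slice in turn."""
    def __init__(self, engines):
        self.engines = engines

    def serve(self, slice_ids):
        for slice_id in slice_ids:
            # querying + judging steps: is the slice buffered in an engine that can still serve it?
            engine = next((e for e in self.engines
                           if slice_id in e.cached and e.has_capacity()), None)
            if engine is None:
                # not buffered (or engine busy): pick an engine with capacity and copy the slice in
                engine = next(e for e in self.engines if e.has_capacity())
                engine.cached.add(slice_id)
            # selecting + executing steps; near the end of this slice the next one is scheduled
            engine.stream(slice_id)

director = MediaDirector([MediaEngine("ME-1", {"slice1"}), MediaEngine("ME-2", {"slice2"})])
director.serve(["slice1", "slice2", "slice3"])
```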
In the method for scheduling and distributing streaming media in a distributed streaming media distribution system of the present invention, each media station comprises a pair of media directors and at least one media engine. The method comprises: (1) a selecting step of selecting a target media station for the streaming service of streaming media content having a plurality of slices; (2) a determining step in which the DHT node in the media director confirms the one or more source media stations where a slice of the streaming media content is located; (3) a requesting step of selecting, according to the DHT table result and the routing position information, the source media station closest to the target media station and sending it a copy request; (4) a receiving step of accepting the copy request from the target media station, provided the media engine of the source media station has streaming media service capability; (5) a copying step of copying the slice to the media engine of the target media station; and (6) an executing step in which the media engine of the target media station performs the streaming service.
In the executing step, when the streaming service of the slice is close to its end, the media engine of the target media station notifies the media director, which then executes the determining step, the requesting step, the receiving step, the copying step, and the executing step for the next slice.
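As a brief sketch of the requesting step, the helper below picks the source station closest to the target from the stations reported by the DHT lookup. The function name and the distance map are assumptions introduced only for illustration.

```python
def pick_nearest_source(dht_results, distance_to_target):
    """dht_results: station ids that report holding the slice (from the DHT table);
    distance_to_target: station id -> routing distance to the target station."""
    return min(dht_results, key=lambda station: distance_to_target[station])

# Example: three stations hold the slice; the closest one receives the copy request.
sources = ["home_a", "edge_a1", "edge_b1"]
distance = {"home_a": 3, "edge_a1": 1, "edge_b1": 2}
print(pick_nearest_source(sources, distance))  # -> edge_a1
```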
The distributed streaming media system and the method for caching, scheduling and distributing the media content can greatly improve the hit rate of the streaming media file slices buffered by the memory and effectively reduce the access frequency of the disk IO, thereby prolonging the service life of the disk and ensuring the reliability and stability of the system.
Drawings
The various aspects of the present invention will become more apparent to the reader after reading the detailed description of the invention with reference to the attached drawings. Wherein,
FIG. 1 shows a schematic diagram of a distributed streaming media distribution system of the present invention;
FIG. 2 is a schematic diagram showing the use of an intelligent Cache manager and streaming service switching control within a media station in accordance with the present invention;
FIG. 3 is a schematic diagram illustrating intelligent Cache management based on DHT node topology in the distributed streaming media distribution system according to the present invention;
FIG. 4 is a schematic diagram illustrating sharing of Cache slices and copy among media stations in the distributed streaming media distribution system according to the present invention; and
FIG. 5 is a schematic flow chart illustrating that the media station of the present invention performs streaming service by using a Cache integrated management and scheduling method.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a distributed streaming media distribution system based on a network topology architecture, which can implement multi-level streaming media distribution and service.
As shown in fig. 1, the distributed streaming media distribution system is composed of a plurality of areas (i.e., area 1 to area n), where one area can be used as a system headquarters, and here, area 1 is used as the system headquarters.
In each area, a home media station 20 is included, together with a plurality of edge media stations 30a1~30an that communicate with it over the network. The home media station 20 and the edge media stations 30a1~30an each comprise a pair of Media Directors (MD) A and a plurality of Media Engines (ME) Ba1~Ban, e.g. media engine Ba1, media engine Ba2, up to media engine Ban. Here, each edge media station 30a1~30an is an independent clustered streaming server.
Here, area 1 is the system headquarters; compared with the other areas that are not the system headquarters, area 1 additionally includes: a Media Location Register (MLR) 11 for recording the location information of the media content slices of the streaming media distribution system; a Media Asset Manager (MAM) 12 for managing the media assets of the streaming media distribution system; and a Content Manager (CM) 13 responsible for the content management of the streaming media distribution system.
The media director A receives the streaming media service requests transmitted from the outside and, through the intelligent cache manager (explained in detail below), queries whether the slice of the streaming media file is present in the edge media station 30a1~30an and, if so, determines its location. The media engines Ba1~Ban cache and store the slices of the streaming media service and realize slice switching and control according to each media engine's streaming media service capability.
Here, the media director A may preferably be a master-slave dual-backup pair of media directors. Thus, when the master media director fails, the slave media director can take over the services and users seamlessly, guaranteeing the reliability of the system. The media engines Ba1~Ban are a set of load-balanced media engines, i.e. media engines with the same functionality; under the scheduling of the media director A, the load on each media engine Ba1~Ban is kept relatively balanced, so that no media engine is overloaded while another sits idle.
FIG. 2 is a schematic diagram of the control flow of the present invention using the intelligent Cache manager and the stream service switching in the media station.
As shown in FIG. 2, each edge media station 30a1~30an uses its plurality of media engines Ba1~Ban to cache, based on heat statistics, a number of slices of hot streaming media files, providing a capacity of up to tens of thousands of streaming services for one access area.
More specifically, the media director A includes: a streaming service director 43 for receiving streaming service requests transmitted from the outside and for controlling and switching the streaming service of the media engines; a storage manager 44 that manages the location and information of the disk-stored media content slices in all media engines within one media station; an intelligent cache manager 42 for managing the positions and information of the media content slices cached in all memories within the media station; and a DHT node manager 41, based on a Distributed Hash Table (DHT), for publishing the memory-buffered media content slice information of the media station and receiving the slice information published by other media stations.
The media engines Ba1~Ban each contain disk storage units (denoted Disk in FIG. 2) Da1~Dam, Cache units (denoted Cache in FIG. 2) Ca1~Cam, and streaming service units La1~Lam. The Cache units Ca1~Cam implement local content cache management within one media engine and report and update the cached media content slice information to the intelligent cache manager 42 and the DHT node manager 41. The streaming service units La1~Lam provide streaming service in units of media content slices and perform streaming service and switching control in cooperation with the streaming service director. The number of media engines Ba1~Ban can be expanded to more than 100 according to the user scale of the area, giving the system high expandability.
It is noted that, within each edge media station 30a1~30an, the intelligent Cache manager 42 performs Cache management based on heat statistics and schedules the Cache units Ca1~Cam in each media engine for caching; combined with the streaming service switching and control mechanism between the media engines Ba1~Ban and with distributed storage management, this realizes clustered streaming service within the edge media station 30a1~30an.
The media director A, the intelligent Cache manager 42, and the media engines Ba1~Ban are all implemented as software modules.
A DHT (Distributed Hash Table) is a common data indexing method in which the index is distributed across different locations: each DHT node maintains a DHT table and receives the information published by adjacent DHT nodes, so that the nodes form a network for sharing information.
On the other hand, Cache management and scheduling based on heat statistics includes the following three modes. (1) Caching the hottest streaming media files based on preset heat and long-window heat statistics. Based on the heat preset when a program is uploaded and on the heat accumulated over a long statistics window, the hottest streaming media files are cached in slices across the media engines of a media station. This mode has the highest Cache priority of the three and mainly covers the latest popular movies and streaming programs with good ratings and long-lasting popularity. (2) Replacing the latest, hottest streaming media file slices in the Cache based on heat-change frequency. This mode quickly captures the slices of the currently hottest streaming media files according to changes in heat frequency; it has the middle Cache priority of the three and mainly covers live broadcasts of sports events or VOD programs that suddenly become hot because of breaking events. (3) Cache management and scheduling for streaming media files whose heat statistics and heat frequency reach neither mode (1) nor mode (2). This mode uses a least-recently-accessed aging policy in units of data elements (a slice of a streaming media file is divided into several disk operation units, called data elements) together with an intra-slice association mechanism, so that data elements with intra-slice association and high recent access frequency are retained in the buffer as much as possible, while unassociated data elements with low recent access frequency are preferentially evicted.
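Mode (3) can be sketched as a small per-data-element cache: least recently accessed elements age out first, while the element that immediately follows a just-accessed element in the same slice is protected because it is predicted to be needed next. This is only an illustrative model under those assumptions; the class name, the (slice_id, index) keying, and the "protect the next element" rule are not taken from the patent.

```python
from collections import OrderedDict

class DataElementCache:
    """Per-data-element LRU aging with a simple intra-slice association rule."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()   # (slice_id, element_index) -> payload, oldest first

    def access(self, slice_id, index, payload=None):
        key = (slice_id, index)
        if key in self.lru:
            self.lru.move_to_end(key)          # refresh recency
        else:
            self.lru[key] = payload
        # The next element of the same slice is predicted to be used soon, so protect it.
        self._evict_if_needed(protected={(slice_id, index + 1)})

    def _evict_if_needed(self, protected):
        while len(self.lru) > self.capacity:
            evicted = False
            for key in list(self.lru):         # iterate from least recently used
                if key not in protected:
                    del self.lru[key]          # evict the unassociated, least recently used element
                    evicted = True
                    break
            if not evicted:                    # everything protected: fall back to plain LRU
                self.lru.popitem(last=False)

cache = DataElementCache(capacity=3)
for i in range(4):
    cache.access("movieA_slice1", i)
print(list(cache.lru))  # the most recently used elements remain buffered
```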
Referring to FIG. 2, suppose a user requests streaming service for a specified streaming media file, that the file is divided into m slices, and that these m slices are distributed, under Cache allocation and management, across the media engines Ba1~Ban of the media station. Under these conditions, the working principle of using the intelligent Cache manager 42 within the media station and performing streaming service switching and control is as follows. First, the media director A of the media station receives the streaming service request from the user and queries the intelligent Cache manager 42 as to whether slice 1 of the streaming media file is in any of the media engines Ba1~Ban belonging to the media station. Second, the intelligent Cache manager 42 queries and determines that slice 1 is in media engine Ba1, and since media engine Ba1 has streaming service capability, the media director A selects media engine Ba1 to execute streaming service 1. Third, when the streaming service of slice 1 is nearing its end, media engine Ba1 feeds back to the media director A that the streaming service of slice 1 is about to end; the media director A queries through the intelligent Cache manager 42 and finds that the next slice (slice 2) is in media engine Ba2, and since media engine Ba2 has streaming service capability, this information is returned to media engine Ba1. Finally, media engine Ba1 switches to media engine Ba2 immediately upon ending the streaming service of slice 1, and media engine Ba2 provides streaming service 2 to the user through the same virtual IP address and port as media engine Ba1; the subsequent switching of slice streaming services (up to streaming service m) proceeds in the same way. The user's set-top box does not need to participate when a streaming media file with m slices (streaming services 1 to m) is streamed.
Fig. 3 shows a schematic diagram of intelligent Cache management based on DHT node topology in the distributed streaming media distribution system of the present invention.
As shown in fig. 3, in the distributed streaming media distribution system, the home media station a, the home media station b, and the home media station c can directly communicate with each other. More specifically, the home media station a includes an edge media station 1, an edge media station 2, and an edge media station 3; home media station b comprises edge media station 4, edge media station 5 and edge media station 6; the home media station c comprises an edge media station 7, an edge media station 8 and an edge media station 9.
As can be seen from the schematic diagram of the media station using the intelligent Cache manager 42 and streaming service switching control shown in FIG. 2, the information contained in the DHT node of home media station a is shared with the DHT nodes of edge media stations 1, 2, and 3; the information contained in the DHT node of home media station b is shared with the DHT nodes of edge media stations 4, 5, and 6; and the information contained in the DHT node of home media station c is shared with the DHT nodes of edge media stations 7, 8, and 9.
FIG. 3 will now be described in detail through the DHT-node-based process of adding or deleting a slice of a streaming media file. When the DHT node in edge media station 1 needs to publish information about a Cache addition or deletion, it publishes only to its home media station a; home media station a then publishes the Cache addition or deletion to edge media stations 2 and 3 (i.e., all its edge media stations except the publishing source, edge media station 1), and at the same time to the home media stations b and c directly connected to it. It should be noted that edge media stations 2 and 3 stop propagating after receiving the information from home media station a, while home media stations b and c continue to publish to their respective subordinate edge media stations: home media station b publishes to edge media stations 4, 5, and 6, and home media station c publishes to edge media stations 7, 8, and 9. To avoid repeated distribution in a loop, home media stations b and c do not forward the information published by home media station a to each other; that is, if the publishing source is neither the home media station itself nor one of its own edge media stations, the home media station does not forward the information to other home media stations. In this way, the Cache content information sharing among media stations shown in FIG. 3 is highly flexible and expandable, requires no search when the shared information is used, and avoids the bottleneck of traditional centralized management.
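A compact sketch of these relay rules follows, assuming a fully connected set of home stations; the propagate function, the topology dictionary, and the (sender, receiver) hop list are illustrative assumptions only.

```python
def propagate(update, source, topology):
    """topology: {home_station: [its edge stations]}; home stations are assumed fully connected.
    Returns the list of (sender, receiver) hops for one Cache add/delete announcement."""
    hops = []
    # Find the home station the source belongs to (the source may itself be a home station).
    home = source if source in topology else next(h for h, edges in topology.items() if source in edges)
    if source != home:
        hops.append((source, home))               # an edge station publishes only to its home station
    for edge in topology[home]:
        if edge != source:
            hops.append((home, edge))             # the home station relays to its other edge stations
    for other_home in topology:
        if other_home != home:
            hops.append((home, other_home))       # and to the directly connected home stations,
            for edge in topology[other_home]:
                hops.append((other_home, edge))   # which relay only downwards, never to each other
    return hops

topology = {"home_a": ["edge1", "edge2", "edge3"],
            "home_b": ["edge4", "edge5", "edge6"],
            "home_c": ["edge7", "edge8", "edge9"]}
for hop in propagate("slice 1 added", "edge1", topology):
    print(hop)
```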
Fig. 4 shows a schematic diagram of a distributed streaming media distribution system of the present invention for sharing slices and implementing copy between media stations.
Referring to FIG. 4, home media station a corresponds to edge media station a1, edge media station a2, and edge media station b1. If the streaming media file for which the user requests streaming service includes slice 1, slice 2 and slice m, slice 1 is cached in the media engine of edge media station a1, slice 2 is cached in the media engine of edge media station b1, and slice m is cached in the media engine of home media station a. When the streaming service is implemented in edge media station a2, the following process is performed:
(1) confirming, by the DHT node in the media director of the edge media station a2, the media engine where slice 1 of the streaming media file is located;
(2) selecting the edge media station a1 with the shortest distance from the edge media station a2 to send a request for copying slice 1 according to the DHT table result and the routing position information;
(3) receiving the copy request from the edge media station a2 in the event that the media engine of the edge media station a1 has copy services capability;
(4) copy slice 1 into the media engine of edge media station a2;
(5) slice 1 is streamed by the media engine of edge media station a2;
(6) the above steps (1) to (5) are performed for the slice 2 and the slice m in this order.
It should be noted that although slice m is cached in the media engine of home media station a rather than edge media station a1 or b1, the process of copying it is exactly the same as for slice 1.
Fig. 5 is a schematic flow chart illustrating that the media station of the present invention performs streaming service by using a Cache integrated management and scheduling method.
As shown in FIG. 5, the edge media station combines the intra-station and inter-station Cache management and scheduling methods to execute the streaming service for a streaming media file. The specific flow can be embodied by the following steps (a simplified sketch of the fallback logic appears after the steps):
(1) a media director of the edge media station receives a streaming media request of a user (step S500);
(2) the intelligent cache manager searches for the next slice of the streaming media file (step S502), the intelligent cache manager being within the media director of the edge media station;
when a slice is present within the edge media station,
(a) judging and determining whether the slice exists (step S504);
(b) if the slice is in a certain media engine of the edge media station, determining whether the media engine has the capability of performing streaming service (step S506);
(c) if the media engine has the capability of streaming service, designating the media engine caching the slice to perform streaming service (step S508);
(d) when the streaming service for the slice is about to end, returning to the intelligent cache manager (step S510) and executing step (2);
(e) if the media engine in step (b) does not have the ability of performing the streaming service, selecting the media engine with the streaming service ability and the cache space to copy (step S512);
(f) designating the selected media engine for streaming service (step S514); and
(g) when the streaming service is about to end, return to the intelligent cache manager (step S516) and perform step (2).
When the slice is not within the edge media station,
(i) querying, based on the DHT node, the home media station and all the edge media stations belonging to it (step S518);
(ii) judging and determining whether the slice exists (step S520);
(iii) if the slice exists in neither the home media station nor any edge media station, read the slice from the storage system, select a media engine to cache the slice and perform the streaming service (step S526), and then return to step (2);
(iv) if the slice exists in the home media station or the edge media station, a copy request is sent to the media station which caches the slice (step S522);
(v) judging whether a service permission of the opposite terminal is obtained before timeout (step S524);
(vi) if the service permission is obtained, executing the steps (e) - (g); and
(vii) if the service permission is not obtained, step (iii) is performed.
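The fallback behavior of steps S518 to S526 can be summarized with the sketch below. The helper objects (local_engines, dht, storage) and the timeout value are placeholders assumed for illustration; they do not name real interfaces from the patent.

```python
def obtain_slice(slice_id, local_engines, dht, storage, timeout_s=2.0):
    """Sketch of the FIG. 5 fallback: try the local station, then a remote copy, then the storage system."""
    engine = local_engines.find(slice_id)                 # steps S504-S506: slice cached locally?
    if engine is not None:
        return engine

    remote = dht.lookup(slice_id)                         # step S518: query home/edge stations via DHT
    if remote is not None:
        granted = remote.request_copy(slice_id, timeout=timeout_s)   # steps S522-S524: wait for permission
        if granted:
            return local_engines.copy_from(remote, slice_id)         # steps S512-S514: copy, then serve
    # step S526: slice cached nowhere (or the grant timed out) -> read it from the storage system
    return local_engines.load_from_storage(storage, slice_id)
```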
As described above, sharing Cache content information among media stations avoids the inefficiency of searching information under traditional centralized management, and Cache content sharing is realized through network copying, thereby reducing disk IO access and prolonging the service life of the hard disks.
Hereinbefore, specific embodiments of the present invention are described with reference to the drawings. However, those skilled in the art will appreciate that various modifications and substitutions can be made to the specific embodiments of the present invention without departing from the spirit and scope of the invention. Such modifications and substitutions are intended to be included within the scope of the present invention as defined by the appended claims.

Claims (21)

1. A distributed stream media distribution system is characterized in that the distributed stream media distribution system is composed of a plurality of areas, wherein one area is a system headquarters, each area comprises a home media station and at least one edge media station,
wherein the home media station is used for storing the streaming media content, and for network-copying and distributing the stored streaming media content to the edge media station according to heat statistics,
and the edge media station communicates with the home media station through a network and stores slices of the hottest streaming media content in a memory buffer and on disk based on user requests and heat statistics, so as to provide streaming services.
2. The distributed streaming media distribution system of claim 1, further comprising, as a region of a system headquarters:
the media position register is used for recording the position information of the media content slice of the streaming media distribution system;
a media asset manager for managing media assets of the streaming media distribution system; and
and the content manager is used for being responsible for content management of the streaming media distribution system.
3. The distributed streaming media distribution system of claim 1, wherein the edge media station comprises:
a media director for receiving a streaming media service request transmitted from the outside, and determining the position of the slice of the streaming media service in the edge media station; and
and the media engine is used for buffering or storing the slices of the streaming media service in a memory, realizing the streaming service taking the slices as streaming service units, switching the streaming service under the control of the media director, and realizing the distribution and sharing of the slices between the media director and a home media station or other edge media stations.
4. The distributed streaming media distribution system of claim 1, wherein the media director comprises:
the stream service director is used for receiving a stream media service request sent from the outside and controlling and switching the stream service of the media engine;
the storage manager manages the position and information of the slices of the media contents stored in the magnetic disks in all the media engines in one media station;
the intelligent cache manager is used for managing the slice positions and information of the media contents cached in all the memories in the media station; and
the DHT node manager based on the DHT is used for publishing the slice information of the media contents buffered in the media station and receiving the slice information published by other media stations.
5. The distributed streaming media distribution system of claim 1, wherein the media engine comprises:
the streaming service unit is used for providing streaming service taking media content slices as units and carrying out streaming service and switching control in cooperation with a streaming service director;
the memory cache management unit is used for realizing local content cache management in the media engine and reporting and updating the cached media content slice information to the intelligent cache manager and the DHT node manager; and
the disk storage unit is used for storing the slices of the media content and forming clustered storage in the media station under the management of the storage manager.
6. The distributed streaming media distribution system of claim 1, wherein the in-memory buffered media content slices are based on heat statistics, caching media content slices that are relatively hot to improve cache hit rates.
7. The distributed streaming media distribution system of claim 1, wherein the media content slices buffered in memory in each media engine in the media station enable service switching under the control of the media director, so as to achieve slice service sharing.
8. The distributed streaming media distribution system of claim 1, wherein the memory-buffered media content slices between the media stations are shared by DHT management, and the memory-buffered media content slices between the media stations are shared by copying.
9. The distributed streaming media distribution system of claim 1 wherein the home media station, the edge media station are each independent clustered streaming servers.
10. The distributed streaming media distribution system of claim 1,
the media director is a pair of master-slave dual back-up media directors,
the media engines are multiple, and the multiple media engines are media engines with the same function.
11. A method for buffering the memory of streaming media in a distributed streaming media distribution system, the distributed streaming media distribution system comprises a media station,
and performing memory buffering on the streaming media in the media station based on the heat statistics.
12. The method of claim 11, wherein the memory buffer unit is a slice of the streaming media content, and the heat statistics are based on a mechanism of preset heat and long-time heat statistics.
13. The method of claim 11,
the memory buffer unit is a slice of the streaming media content, and the heat degree statistic is based on the heat degree change frequency.
14. The method of claim 11,
the unit of memory buffer is based on an aging policy and an associated mechanism within the slice that is least accessed for a certain period of time in units of data elements, wherein the data elements are a plurality of disk operating units divided by the slice.
15. The method of claim 13, wherein the predetermined heat and long-term heat statistics for all slices are sorted by comparison, and slices with long-term heat ranking above a predetermined threshold will be retained in memory for buffering.
16. The method of claim 14, wherein the frequency of the heat change of all slices within a certain time is sorted by comparison, and the slices that exceed the frequency of the heat change of the preset threshold and are ranked first are retained in the memory for buffering.
17. The method of claim 15, wherein each data element is evaluated by ranking its most recent access heat and combining the contextual relationship of the data element within the slice; data elements with a high heat ranking, and data elements predicted to be used soon due to the intra-slice association, are retained in memory for buffering.
18. A method for scheduling and managing streaming media in a distributed streaming media distribution system, wherein the distributed streaming media distribution system includes a media station, and the media station includes a pair of media directors and at least one media engine, the method comprising:
(1) a receiving step in which the media director receives a streaming media request for streaming media content having a plurality of slices from a user;
(2) a query step of querying whether a slice of the streaming media content is present in a media engine of the media station;
(3) a judging step of judging whether the media engine has streaming media service capability when the slice is found to exist in a media engine of the media station;
(4) a selecting step of selecting the media engine as a streaming service engine by the media director under the condition that the media engine is judged to have the streaming service capability; and
(5) an executing step of performing the streaming service by the selected media engine.
19. The method of claim 18, wherein in the executing step, the selected media engine notifies the media director to execute the querying step, the determining step, the selecting step, and the executing step for a next slice when the streaming service of the one slice is approaching an end.
20. A method for scheduling and distributing streaming media in a distributed streaming media distribution system, wherein a media station comprises a pair of media directors and at least one media engine, the method comprising:
(1) a selection step of selecting a target media station for streaming media service of streaming media content having a plurality of slices;
(2) a determination step of confirming, by the DHT node in the media director, one or more source media stations where a slice of the streaming media content is located;
(3) a request step of selecting the media station with the shortest distance to the target media station from the one or more source media stations to send a copy request according to the DHT table result and the routing position information;
(4) a receiving step of receiving the copy request from the target media station in case the media engine of the media station has a streaming media service capability;
(5) a copy step of copying the slice to the media engine of the target media station; and
(6) performing, by the media engine of the target media station, a streaming service.
21. The method of claim 19, wherein in the executing step, the media engine of the target media station notifies the media director to execute the determining step, the requesting step, the receiving step, the copying step, and the executing step for a next slice when the streaming service for the one slice is approaching an end.
CN200710096229A 2007-03-30 2007-03-30 Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method Active CN100579208C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN200710096229A CN100579208C (en) 2007-03-30 2007-03-30 Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method
PCT/CN2008/000466 WO2008119235A1 (en) 2007-03-30 2008-03-10 Distribution system for distributing stream media, memory buffer of stream media and distributing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200710096229A CN100579208C (en) 2007-03-30 2007-03-30 Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method

Publications (2)

Publication Number Publication Date
CN101026744A true CN101026744A (en) 2007-08-29
CN100579208C CN100579208C (en) 2010-01-06

Family

ID=38744583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200710096229A Active CN100579208C (en) 2007-03-30 2007-03-30 Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method

Country Status (2)

Country Link
CN (1) CN100579208C (en)
WO (1) WO2008119235A1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008119235A1 (en) * 2007-03-30 2008-10-09 Utstarcom Telecom Co., Ltd. Distribution system for distributing stream media, memory buffer of stream media and distributing method
WO2009097716A1 (en) * 2008-01-31 2009-08-13 Huawei Technologies Co., Ltd. A method of slicing media content, a method, device and system for providing media content
CN101841553A (en) * 2009-03-17 2010-09-22 日电(中国)有限公司 Method, user node and server for requesting location information of resources on network
CN101287002B (en) * 2008-05-21 2010-12-29 华中科技大学 Method for enhancing amount of concurrent media flow of flow media server
CN102123318A (en) * 2010-12-17 2011-07-13 曙光信息产业(北京)有限公司 IO acceleration method of IPTV application
CN102333120A (en) * 2011-09-29 2012-01-25 广东高新兴通信股份有限公司 Flow storage system for load balance processing
CN101409823B (en) * 2007-10-10 2012-04-25 华为技术有限公司 Method, apparatus and system for implementing network personal video recorder
CN102647357A (en) * 2012-04-20 2012-08-22 中兴通讯股份有限公司 Context routing processing method and context routing processing device
CN101998173B (en) * 2009-08-27 2012-11-07 华为技术有限公司 Distributed media sharing play controller as well as media play control system and method
CN103036967A (en) * 2012-12-10 2013-04-10 北京奇虎科技有限公司 Data download system and device and method for download management
CN103051701A (en) * 2012-12-17 2013-04-17 北京网康科技有限公司 Cache admission method and system
CN103281383A (en) * 2013-05-31 2013-09-04 重庆大学 Timing sequence recording method for distributed-type data source
CN103905923A (en) * 2014-03-20 2014-07-02 深圳市同洲电子股份有限公司 Content caching method and device
CN104202650A (en) * 2014-09-28 2014-12-10 西安诺瓦电子科技有限公司 Streaming media broadcast system and method and LED display screen system
CN105207993A (en) * 2015-08-17 2015-12-30 深圳市云宙多媒体技术有限公司 Data access and scheduling method in CDN, and system
CN106604043A (en) * 2016-12-30 2017-04-26 Ut斯达康(深圳)技术有限公司 Internet-based live broadcast method and live broadcast server
CN106648593A (en) * 2016-09-29 2017-05-10 乐视控股(北京)有限公司 Calendar checking method and device for terminal equipment
CN106708865A (en) * 2015-11-16 2017-05-24 杭州华为数字技术有限公司 Method and device for accessing window data in stream processing system
CN107566509A (en) * 2017-09-19 2018-01-09 广州南翼信息科技有限公司 A kind of information issuing system for carrying high-volume terminal
WO2018153237A1 (en) * 2017-02-23 2018-08-30 中兴通讯股份有限公司 Caching method and system for replaying live broadcast, and playing method and system
CN108574685A (en) * 2017-03-14 2018-09-25 华为技术有限公司 A kind of Streaming Media method for pushing, apparatus and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2385683A (en) * 2002-02-22 2003-08-27 Thirdspace Living Ltd Distribution system with content replication
CN1227592C (en) * 2002-09-17 2005-11-16 华为技术有限公司 Method for managing stream media data
US20050235047A1 (en) * 2004-04-16 2005-10-20 Qiang Li Method and apparatus for a large scale distributed multimedia streaming system and its media content distribution
CN100579208C (en) * 2007-03-30 2010-01-06 Ut斯达康通讯有限公司 Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008119235A1 (en) * 2007-03-30 2008-10-09 Utstarcom Telecom Co., Ltd. Distribution system for distributing stream media, memory buffer of stream media and distributing method
CN101409823B (en) * 2007-10-10 2012-04-25 华为技术有限公司 Method, apparatus and system for implementing network personal video recorder
WO2009097716A1 (en) * 2008-01-31 2009-08-13 Huawei Technologies Co., Ltd. A method of slicing media content, a method, device and system for providing media content
CN101287002B (en) * 2008-05-21 2010-12-29 华中科技大学 Method for enhancing amount of concurrent media flow of flow media server
CN101841553B (en) * 2009-03-17 2014-03-12 日电(中国)有限公司 Method, user node and server for requesting location information of resources on network
CN101841553A (en) * 2009-03-17 2010-09-22 日电(中国)有限公司 Method, user node and server for requesting location information of resources on network
CN101998173B (en) * 2009-08-27 2012-11-07 华为技术有限公司 Distributed media sharing play controller as well as media play control system and method
CN102123318A (en) * 2010-12-17 2011-07-13 曙光信息产业(北京)有限公司 IO acceleration method of IPTV application
CN102123318B (en) * 2010-12-17 2014-04-23 曙光信息产业(北京)有限公司 IO acceleration method of IPTV application
CN102333120A (en) * 2011-09-29 2012-01-25 广东高新兴通信股份有限公司 Flow storage system for load balance processing
CN102333120B (en) * 2011-09-29 2014-05-21 高新兴科技集团股份有限公司 Flow storage system for load balance processing
CN102647357A (en) * 2012-04-20 2012-08-22 中兴通讯股份有限公司 Context routing processing method and context routing processing device
WO2013155979A1 (en) * 2012-04-20 2013-10-24 中兴通讯股份有限公司 Method and device for processing content routing
CN106850817A (en) * 2012-12-10 2017-06-13 北京奇虎科技有限公司 A kind of download management equipment, method and data downloading system
CN103036967A (en) * 2012-12-10 2013-04-10 北京奇虎科技有限公司 Data download system and device and method for download management
CN103051701B (en) * 2012-12-17 2016-02-17 北京网康科技有限公司 A kind of buffer memory access method and device
CN103051701A (en) * 2012-12-17 2013-04-17 北京网康科技有限公司 Cache admission method and system
CN103281383B (en) * 2013-05-31 2016-03-23 重庆大学 A kind of time sequence information recording method of Based on Distributed data source
CN103281383A (en) * 2013-05-31 2013-09-04 重庆大学 Timing sequence recording method for distributed-type data source
CN103905923A (en) * 2014-03-20 2014-07-02 深圳市同洲电子股份有限公司 Content caching method and device
CN104202650A (en) * 2014-09-28 2014-12-10 西安诺瓦电子科技有限公司 Streaming media broadcast system and method and LED display screen system
CN105207993A (en) * 2015-08-17 2015-12-30 深圳市云宙多媒体技术有限公司 Data access and scheduling method in CDN, and system
CN106708865A (en) * 2015-11-16 2017-05-24 杭州华为数字技术有限公司 Method and device for accessing window data in stream processing system
CN106708865B (en) * 2015-11-16 2020-04-03 杭州华为数字技术有限公司 Method and device for accessing window data in stream processing system
CN106648593A (en) * 2016-09-29 2017-05-10 乐视控股(北京)有限公司 Calendar checking method and device for terminal equipment
CN106604043A (en) * 2016-12-30 2017-04-26 Ut斯达康(深圳)技术有限公司 Internet-based live broadcast method and live broadcast server
WO2018153237A1 (en) * 2017-02-23 2018-08-30 中兴通讯股份有限公司 Caching method and system for replaying live broadcast, and playing method and system
CN108574685A (en) * 2017-03-14 2018-09-25 华为技术有限公司 A kind of Streaming Media method for pushing, apparatus and system
CN107566509A (en) * 2017-09-19 2018-01-09 广州南翼信息科技有限公司 A kind of information issuing system for carrying high-volume terminal
CN107566509B (en) * 2017-09-19 2020-09-11 广州南翼信息科技有限公司 Information publishing system capable of bearing large-batch terminals

Also Published As

Publication number Publication date
WO2008119235A1 (en) 2008-10-09
CN100579208C (en) 2010-01-06

Similar Documents

Publication Publication Date Title
CN100579208C (en) Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method
EP2227888B1 (en) Predictive caching content distribution network
US7058014B2 (en) Method and apparatus for generating a large payload file
Thouin et al. Video-on-demand networks: design approaches and future challenges
CN100518305C (en) Content distribution network system and its content and service scheduling method
WO2009079948A1 (en) A content buffering, querying method and point-to-point media transmitting system
CN102546711B (en) Storage adjustment method, device and system for contents in streaming media system
US10326854B2 (en) Method and apparatus for data caching in a communications network
CN101594292A (en) Content delivery method, service redirection method and system, node device
Yu et al. Integrated buffering schemes for P2P VoD services
CN102316097A (en) Streaming media scheduling and distribution method capable of reducing wait time of user
CN111050188B (en) Data stream scheduling method, system, device and medium
US20110209184A1 (en) Content distribution method, system, device and media server
CN102256163A (en) Video-on-demand system based on P2P (Peer-To-Peer)
KR20100073154A (en) Method for data processing and asymmetric clustered distributed file system using the same
CN102118315B (en) Method for fluidizing, recording and reading data and system adopting same
CN102497389A (en) Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV
US20090100188A1 (en) Method and system for cluster-wide predictive and selective caching in scalable iptv systems
CN108419097A (en) Video sharing method based on clustering tree under a kind of mobile ad hoc network
Jiang et al. A replica placement algorithm for hybrid CDN-P2P architecture
CN101540884B (en) Construction method of equivalent VoD system based on jump graph
Zhuo et al. Efficient cache placement scheme for clustered time-shifted TV servers
Silva et al. Joint content-mobility priority modeling for cached content selection in D2D networks
CN113992653B (en) CDN-P2P network content downloading, pre-storing and replacing method based on edge cache
Makhkamov et al. Energy Efficient Technique and Algorithm Based on Artificial Intelligence in Content Delivery Networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: UT SIDAKANG (CHINA) CO. LTD.

Free format text: FORMER OWNER: UT STARCOM COMMUNICATION CO., LTD.

Effective date: 20130320

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 310053 HANGZHOU, ZHEJIANG PROVINCE TO: 100027 DONGCHENG, BEIJING

TR01 Transfer of patent right

Effective date of registration: 20130320

Address after: Beihai Manhattan building 6 No. 100027 Beijing Dongcheng District, Chaoyangmen North Street 11

Patentee after: UTSTARCOM (CHINA) CO.,LTD.

Address before: 310053 No. six, No. 368, Binjiang District Road, Zhejiang, Hangzhou

Patentee before: UTSTARCOM TELECOM Co.,Ltd.

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee after: UT Starcom (China) Co.,Ltd.

Address before: 100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee before: UTSTARCOM (CHINA) CO.,LTD.

TR01 Transfer of patent right

Effective date of registration: 20190128

Address after: 518000 Lenovo Building, No. 016, Gaoxin Nantong, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, on the east side of the third floor

Patentee after: UD NETWORK CO.,LTD.

Address before: 100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee before: UT Starcom (China) Co.,Ltd.

TR01 Transfer of patent right