WO2009109092A1 - Method, system and device for providing on-demand content - Google Patents

Method, system and device for providing on-demand content

Info

Publication number
WO2009109092A1
WO2009109092A1 · PCT/CN2008/073609 · CN2008073609W
Authority
WO
WIPO (PCT)
Prior art keywords
demand
content
flash memory
demand content
providing
Prior art date
Application number
PCT/CN2008/073609
Other languages
English (en)
Chinese (zh)
Inventor
罗泽文
王子钟
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2009109092A1 publication Critical patent/WO2009109092A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/252Processing of multiple end-users' preferences to derive collaborative data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17336Handling of requests in head-ends

Definitions

  • Embodiments of the present invention relate to the field of communications technologies, and in particular, to a method, system, and apparatus for providing on-demand content.

Background
  • VOD (Video On Demand): the operator prepares a large number of movies in advance in the video-on-demand system, and the user can issue an on-demand request to the system and control playback of the program at will.
  • However, the server cost and network bandwidth cost of video on demand are high.
  • Some new services have also been integrated, such as nPVR (network Personal Video Recording) and TSTV (Time Shift Television, time-shifted TV).
  • Services such as TVOD (TV On Demand), VOD, and TSTV require servers to provide streaming services to users.
  • The performance of a single server is mainly limited by key factors such as CPU (Central Processing Unit) speed, memory, network bandwidth, and I/O (Input/Output) read/write speed.
  • At present, the clock speed of a single CPU exceeds 3 GHz and one server can be equipped with multiple CPUs, so CPU speed is no longer a bottleneck; memory can currently be configured to more than a dozen GB, and 4 GB or 8 GB is generally enough.
  • Streaming media servers already support binding and aggregating multiple network cards to increase bandwidth; for example, aggregating three GE (Gigabit Ethernet) ports yields nearly 3 Gbit/s, and 10GE NICs have entered production, so network bandwidth is not the main factor limiting streaming service performance.
  • However, streaming media servers mostly use disk arrays as storage, which is severely limited by the inherent mechanical characteristics of hard disks; even if data is dispersed across multiple disks to improve the overall I/O read/write speed, the improvement is limited. The I/O read/write speed of traditional SAN (Storage Area Network), DAS (Direct Access Storage), and NAS (Network Attached Storage) is generally around 1 Gbit/s. Therefore, the I/O read/write speed of the disk has become a key factor restricting the performance of the streaming media server, which is a technical problem facing the industry.
  • One existing approach is to buffer the most frequently requested videos into memory in advance; when a user requests a hot title, it is served directly from memory, which reads and writes much faster than the hard disk, thereby avoiding frequent disk access and relieving the pressure of slow disk I/O.
  • The streaming media system sorts content by cache priority, based on the number and frequency of requests, to decide which files are placed in the buffer and which are removed from it.
  • However, some streaming media applications still use 32-bit addressing, which limits such applications to at most 4 GB of usable memory.
  • In an IPTV (Internet Protocol Television) system, a single movie typically has a capacity of about 1 GB.
  • New hardware and operating systems have begun to support 64-bit; a 32-bit streaming media application can be upgraded to 64-bit by modifying its code and can then theoretically address 17,179,869,184 GB of memory. However, limited by the number of motherboard memory slots and the capacity of individual memory modules, a server can generally be equipped with at most a few tens of GB of memory and can only cache a dozen or so movies, so only a small number of videos can be held in memory.
  • Moreover, memory is relatively expensive: a 1 GB memory module costs several hundred yuan or more, which increases the investment burden to some extent.
  • In addition, the hot content must be read from the hard disk into the memory buffer every time the system starts, which greatly affects startup speed and reduces operating efficiency.
  • Embodiments of the present invention provide a method, system, and apparatus for providing on-demand content to improve I/O read/write speed and performance of a single streaming media server.
  • To achieve this, an embodiment of the present invention provides a method for providing on-demand content, including: counting the hotness of on-demand content; storing the on-demand content in a flash memory according to the counted hotness; and preferentially providing the on-demand content stored in the flash memory when the streaming media server receives an on-demand request.
  • An embodiment of the present invention further provides a method for providing on-demand content, including: receiving an on-demand request of a user; and, according to the user's on-demand request, preferentially reading the on-demand content stored in the flash memory of the streaming media server.
  • An embodiment of the present invention further provides a system for providing on-demand content, including a streaming media server and a content manager. The content manager is configured to count the hotness of the on-demand content and store the on-demand content in a flash memory of the streaming media server according to the counted hotness; the streaming media server is configured to save the on-demand content in its flash memory and, when receiving an on-demand request, preferentially read the on-demand content stored in the flash memory.
  • An embodiment of the present invention further provides a content manager, including: a heat statistics module, configured to count the hotness of the on-demand content; and a storage module, configured to store the on-demand content in the flash memory of the streaming media server according to the hotness counted by the statistics module.
  • An embodiment of the present invention further provides a streaming media server, including: at least one flash memory for storing on-demand content; a media request receiving module, configured to receive a user's on-demand request; and a media providing module, configured to preferentially read the on-demand content in the flash memory of the streaming media server when the media request receiving module receives the user's on-demand request.
  • Compared with the prior art, the embodiments of the present invention have the following advantage: storing hot content in the flash memory of the streaming media server improves the I/O read/write speed, thereby improving the performance of a single streaming media server.

DRAWINGS
  • FIG. 1 is a flowchart of a method for providing on-demand content according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a rear plug-in type streaming media server hardware system according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a motherboard-slot type streaming media server hardware system according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a system network structure according to an embodiment of the present invention.
  • FIG. 5 is another schematic diagram of a system network structure according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a user on-demand process according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of a heat statistics algorithm according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of timing distribution according to heat according to an embodiment of the present invention.
  • FIG. 9 is a structural diagram of a system for providing on-demand content according to an embodiment of the present invention.
  • FIG. 10 is a structural diagram of a content manager according to an embodiment of the present invention.
  • FIG. 11 is a structural diagram of a streaming media server according to an embodiment of the present invention.

Detailed Description
  • The embodiments of the invention provide a method, a system, and a device for providing on-demand content. By using flash memory to store hot content, the I/O read/write speed is greatly improved, thereby improving the performance of a single streaming media server.
  • The flowchart of the method for providing on-demand content (FIG. 1) includes the following steps:
  • Step S101: Count the hotness of the on-demand content.
  • The hotness of the on-demand content is determined according to the number of user requests and the viewing durations, in order to determine which on-demand content is a hot title.
  • Step S102: Store the on-demand content in the flash memory according to the counted hotness; the on-demand content stored in the flash memory is preferentially provided when the streaming media server receives an on-demand request.
  • Depending on the size of the streaming media server's flash memory, either the entire hot content is stored in the flash memory, or only the slice header (the opening portion) of the hot content is stored in the flash memory while the remaining content is saved on the disk array.
  • When the streaming media server receives a user's on-demand request, it preferentially reads the on-demand content stored in the flash memory to serve the user; if the flash memory holds no content matching the request, the server reads from the disk array or the local disk. If only the slice header of the on-demand content is stored in the flash memory, then after playing the slice header the streaming media server locates the corresponding content on the disk array or local disk and continues playing from the end of the slice header, as sketched below.
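  • As an illustration of the flash-first read path just described, the following Python sketch shows one possible decision flow; the class, store structures, and field names (the is_header_only flag, header_len, etc.) are hypothetical and not taken from the patent.

```python
# Minimal sketch of the flash-first read path, under assumed data structures.
class OnDemandReader:
    def __init__(self, flash_store, disk_store):
        # flash_store: dict content_id -> {"data": bytes, "is_header_only": bool, "header_len": int}
        # disk_store:  dict content_id -> bytes (full copy on the disk array / local disk)
        self.flash = flash_store
        self.disk = disk_store

    def serve(self, content_id):
        """Yield the requested content, preferring the copy held in flash memory."""
        entry = self.flash.get(content_id)
        if entry is None:
            # No flash copy: fall back to the disk array or local disk.
            yield self.disk[content_id]
            return
        # Flash hit: serve whatever is cached in flash first.
        yield entry["data"]
        if entry["is_header_only"]:
            # Only the slice header is in flash: continue from the end of the
            # header using the full copy on the disk array.
            yield self.disk[content_id][entry["header_len"]:]
```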
  • One or more flash memories are configured on each streaming media server.
  • FIG. 2 is a schematic structural diagram of a rear plug-in type streaming media server hardware system. Only one flash memory is shown in FIG. 2, but practical applications are not limited to one. The flash memory can also be inserted into a motherboard slot: FIG. 3 is a schematic structural diagram of a motherboard-slot type streaming media server hardware system, in which only one flash memory is shown, although practical applications are again not limited to one.
  • FIG. 4 is a schematic diagram of a system network structure according to an embodiment of the present invention.
  • the client in the embodiment of the present invention may be a mobile phone, a set top box, a PC, etc., but the embodiment of the present invention is described by taking a mobile phone in a wireless network as an example.
  • The streaming media servers are deployed as a cluster behind a load balancer.
  • The client first accesses the load balancer, which schedules an optimal streaming media server to serve the user.
  • The load balancer can perform comprehensive scheduling by content, load, and area, as sketched below. Scheduling by content means directing the user to a streaming media server that already holds the content the user wants to watch; scheduling by load means directing the user to a streaming media server whose CPU, memory, and other performance indicators show lower load; scheduling by area means directing the user to the media server geographically closest to the user.
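  • One possible way to combine the three scheduling criteria is sketched below; the server attributes and the strict priority order (flash copy, then any copy, then load, then region) are illustrative assumptions, not the patent's definitive policy.

```python
# Hypothetical sketch: pick a streaming media server by content first, then load, then area.
def pick_server(servers, content_id, user_region):
    """servers: list of dicts with keys 'flash_contents', 'disk_contents', 'load', 'region'."""
    def rank(s):
        in_flash = content_id in s["flash_contents"]   # content scheduling: flash copy preferred
        on_disk = content_id in s["disk_contents"]     # then any copy at all
        same_region = s["region"] == user_region       # area scheduling
        # Tuple sorts flash hits first, then disk hits, then lower load, then proximity.
        return (not in_flash, not on_disk, s["load"], not same_region)
    return min(servers, key=rank)
```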
  • In addition to common content management functions such as publishing, modifying, and deleting content, the content management system also performs content hotness statistics, determining which content or slice headers are stored in the flash memory of the streaming media server and which content or slice headers are removed from it.
  • FIG. 6 is a flowchart of a user on-demand process according to an embodiment of the present invention, which includes the following steps:
  • Step S601: The user's on-demand request reaches the load balancer. If there is no load balancer, the request goes directly to the streaming media server and the process jumps to step S604.
  • Step S602: Select an appropriate streaming media server according to the content requested by the user.
  • The load balancer preferentially selects a media server whose flash memory holds the slice header of the requested content; that is, when scheduling by content, the load balancer must distinguish whether the requested content is in flash memory or on a general disk, consider flash content first, and then consider other factors such as region.
  • Step S603: Forward the user's on-demand request to the streaming media server selected in step S602.
  • Step S604: Preferentially read the on-demand content from the flash memory according to the user's request.
  • When the media server receives the request, it first tries to serve the content from the flash memory; if the flash memory does not hold the requested content, it reads from the disk array or the local disk. If only the slice header is stored in the flash memory, the media server plays the slice header first, then locates the corresponding content on the disk array or local disk and continues playing from the end of the slice header.
  • Step S605: The streaming media server sends the media stream to provide the service.
  • Step S606: The streaming media server notifies the content management system of the user's on-demand request.
  • Step S607: The content management system generates an on-demand record and stores it in a database or a file. The same content may have multiple records, and each record includes, but is not limited to, the following information: content ID, user IP, on-demand time, on-demand duration, and so on. This information is stored in a data table, denoted Tab_Stat_Consume. If there are many users, the table may grow by huge amounts of data every day or even every hour, so sub-table (partitioning) techniques are used: the table can be split by time period (for example, by hour, day, week, or month), by size (for example, one sub-table per 200 MB), or by record count (for example, one sub-table per 3 million records). A record and partition-key sketch follows.
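  • A sketch of the record structure and two of the partitioning keys follows; the field names and thresholds mirror the text, while the sub-table naming format is a hypothetical choice.

```python
# Illustrative sketch of a Tab_Stat_Consume record and sub-table (partition) keys.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OnDemandRecord:
    content_id: str
    user_ip: str
    start_time: datetime   # on-demand time
    duration_min: float    # on-demand duration, filled in at step S610

def partition_by_time(rec: OnDemandRecord) -> str:
    """Partition by day, e.g. Tab_Stat_Consume_20071225 (naming is an assumption)."""
    return "Tab_Stat_Consume_" + rec.start_time.strftime("%Y%m%d")

def partition_by_count(record_index: int, per_table: int = 3_000_000) -> str:
    """Partition every 3 million records, as mentioned in the text."""
    return f"Tab_Stat_Consume_part{record_index // per_table}"
```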
  • Step S608: The user requests to end the on-demand session.
  • Step S609: After receiving the user's end request, the streaming media server reports the user's on-demand duration to the content management system.
  • Step S610: The content management system finds the record generated in step S607 and writes the on-demand duration into it.
  • Although the capacity of the flash memory is several tens of times that of the memory, it still cannot hold all of the content. Therefore, only the hot content is stored in the flash memory, which improves the utilization of the flash memory and reduces the I/O read/write pressure. In practice, many users quit after watching only the first few minutes of a movie, so the content can be divided into levels.
  • For example, level 1 stores up to the first 10 minutes of the title (the entire movie if it is shorter than 10 minutes), level 2 stores up to the first 30 minutes (the entire movie if it is shorter than 30 minutes), and level 3 stores the entire movie.
  • The embodiments of the present invention are described using this classification as an example, but are not limited to it.
  • Special cases include, but are not limited to, using only a single level, which stores either the entire movie or a slice header of a fixed duration. A level-classification sketch follows.
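  • A minimal sketch of the level classification follows, assuming the example thresholds above (10 minutes, 30 minutes, entire movie) and interpreting a viewing record as counting toward the deepest level it reached.

```python
# Sketch: map one viewing duration to the storage level it supports (thresholds per the text).
def classify_record_level(watched_min: float) -> int:
    """level 1: viewer stopped within the first 10 minutes,
    level 2: stopped within the first 30 minutes,
    level 3: watched beyond 30 minutes (whole movie worth caching)."""
    if watched_min > 30:
        return 3
    if watched_min > 10:
        return 2
    return 1
```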
  • There are two working modes for distributing movies between the flash memory and the general disk.
  • One is the backup mode: the general disk stores all movies, and the flash memory stores slice headers (or entire movies) for acceleration. Even if the flash memory is completely damaged, the streaming media server can still read content from the general disk and provide service normally.
  • This mode is highly reliable but introduces video redundancy. The other is the mutual-exclusion mode: content stored in the flash memory is not duplicated on the general disk. This mode is less reliable but has no redundancy.
  • Flash memory has a limited erase/write lifetime, while reads have no lifetime limit in theory.
  • The flash memory is divided into 128 KB blocks, and each data block can withstand about 100,000 erase cycles; after a block has been erased 100,000 times its availability can no longer be guaranteed, although other blocks are not affected.
  • To extend the erase lifetime, flash controllers and drivers generally use wear-leveling algorithms, which prefer the least-erased blocks when updating data so that all blocks of the flash memory wear evenly. Given this, the number of erase operations must be controlled, and two strategies are used. The first strategy works with the wear-leveling algorithm, which operates in units of blocks; different flash controllers and drivers may use different block sizes, the minimum of which is denoted Block. The server should buffer data of size n x Block (n = 1, 2, 3, and other natural numbers) and then write it to the flash memory in one operation, avoiding frequent erasure of any single data block and extending the life of the flash memory, as sketched below.
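  • A sketch of this buffered, block-aligned write strategy follows; the 128 KB block size matches the earlier example, while the flush policy and the flash_write callback are assumptions.

```python
# Sketch: accumulate at least n x Block bytes, then issue one large flash write,
# so no single data block is erased repeatedly for small updates.
class BufferedFlashWriter:
    def __init__(self, flash_write, block_size=128 * 1024, n=8):
        self.flash_write = flash_write     # callable(bytes) -> None, supplied by the flash driver
        self.threshold = n * block_size    # write only in multiples of the block size
        self.buf = bytearray()

    def write(self, data: bytes):
        self.buf += data
        while len(self.buf) >= self.threshold:
            chunk = bytes(self.buf[:self.threshold])
            del self.buf[:self.threshold]
            self.flash_write(chunk)        # one block-aligned write

    def flush(self):
        if self.buf:                       # final partial write when the stream ends
            self.flash_write(bytes(self.buf))
            self.buf.clear()
```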
  • The second strategy is to distribute content to the flash memory periodically according to hotness, as shown in Figure 7. Specifically, it includes the following steps:
  • Step S701: Set a timer to periodically start the task of distributing movies or slice headers to the flash memory according to hotness.
  • Step S702: Compute the hotness statistics and ranking to decide which movies or slice headers should be stored in or deleted from the flash memory of each media server.
  • The hotness statistics algorithm determines which slice headers (possibly entire movies) are to be added to the flash memory of each media server and which are to be removed from it.
  • Step S703: Notify each media server of its corresponding statistical result.
  • Step S704: According to the statistical result, each media server erases the content or slice headers to be deleted from its flash memory and adds the newly promoted content or slice headers to it.
  • Step S705: Notify the load balancer of the updated flash content or slice headers, so that content-based scheduling directs requests to the correct media server. A sketch of this periodic distribution task follows.
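  • A hypothetical orchestration of steps S701 to S705 is sketched below; the object methods (compute_heat_plan, update_flash, update_flash_map) are illustrative names, not APIs defined by the patent.

```python
# Sketch of the periodic heat-based distribution task (steps S701-S705).
import time

def distribution_task(content_manager, media_servers, load_balancer, period_s=24 * 3600):
    while True:                                          # S701: timer-driven loop
        plan = content_manager.compute_heat_plan()       # S702: per-server add/delete decisions
        for server in media_servers:
            adds, dels = plan[server.name]               # S703: tell each server its result
            server.update_flash(add=adds, delete=dels)   # S704: erase old, write newly promoted content
        load_balancer.update_flash_map(plan)             # S705: keep content scheduling accurate
        time.sleep(period_s)
```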
  • FIG. 8 is a flowchart of the hotness statistics algorithm according to an embodiment of the present invention. It takes the on-demand records of the most recent n days (n is configurable, 7 days is recommended) from the Tab_Stat_Consume table, rather than all records from release to the present, to compute hotness; records within this window are considered relevant to the current period, and records outside it are given zero weight.
  • Step S801: Perform pre-statistics processing of the raw data.
  • From the original record table Tab_Stat_Consume (as shown in Table 1), the number of on-demand requests at each level is counted for each content item (producing, for example, Table 2). Assume the current time is 2007-12-25; the records of the last 7 days are taken for statistics, and other data is filtered out according to the on-demand time. In Table 1, the viewing duration of the first and second records exceeds 30 minutes, so they satisfy the condition of level 3; that is, content A satisfies level 3 twice in the last 7 days, and so on, yielding the pre-processing result of Table 2. A sketch of this step follows Table 1 below.
  • Table 1 Content on-demand record table
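  • A sketch of the pre-statistics step (S801) follows; it assumes records are available as simple tuples and reuses the level thresholds described earlier, since the contents of Table 1 and Table 2 are not reproduced in this text.

```python
# Sketch of step S801: per-content, per-level on-demand counts over the last n days.
from collections import defaultdict
from datetime import datetime, timedelta

def pre_statistics(records, now: datetime, n_days: int = 7):
    """records: iterable of (content_id, start_time, watched_min) tuples.
    Returns {content_id: {level: count}}, i.e. the shape of Table 2."""
    cutoff = now - timedelta(days=n_days)
    counts = defaultdict(lambda: defaultdict(int))
    for content_id, start_time, watched_min in records:
        if start_time < cutoff:
            continue                                   # drop records outside the window
        level = 3 if watched_min > 30 else (2 if watched_min > 10 else 1)
        counts[content_id][level] += 1
    return counts
```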
  • Step S802: Rank the content according to a ranking rule.
  • Each content item is ranked; there are, but not limited to, the following two ranking rules.
  • The first rule compares the on-demand counts level by level: contents are first compared by the on-demand count of the highest level; if equal, the next lower level is compared, and so on down to the lowest level, as shown in Table 2. The second rule assigns a weight to each level and ranks contents by the weighted sum of their per-level on-demand counts, from largest to smallest. Both rules are sketched below.
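  • Both ranking rules can be expressed as sort keys, as sketched below; the interpretation that the highest level is compared first, and the example weights, are assumptions for illustration.

```python
# Sketch of the two ranking rules from step S802 (sort keys for descending rank).
def rank_by_level_counts(level_counts):
    """Rule 1: compare level-3 counts first, then level 2, then level 1 (all descending)."""
    return (-level_counts.get(3, 0), -level_counts.get(2, 0), -level_counts.get(1, 0))

def rank_by_weighted_sum(level_counts, weights={3: 5.0, 2: 2.0, 1: 1.0}):
    """Rule 2: weighted sum of per-level counts; the weights here are illustrative."""
    return -sum(weights[lvl] * cnt for lvl, cnt in level_counts.items())

# Usage, given counts = pre_statistics(...):
# ranked = sorted(counts.items(), key=lambda kv: rank_by_level_counts(kv[1]))
```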
  • Step S803: Store the content slice headers in the flash memory.
  • The content slice headers are stored in the flash memory according to a storage principle and a difference comparison algorithm.
  • The principle for storing content in the flash memory includes, but is not limited to, the following.
  • Proportional allocation: the proportion of storage capacity reserved for each level is specified in advance, for example 50% of the capacity for level-1 slice headers, 30% for level-2 slice headers, and 20% for level-3 content (generally entire movies). Priority is given to the top-ranked content at each level, and the flash memory is filled from the highest level to the lowest level according to the specified capacities, as sketched below.
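  • A sketch of the proportional-allocation principle follows, filling the reserved share of each level from the highest level downward; the capacity shares repeat the example above, and the data layout is assumed.

```python
# Sketch: reserve a share of flash capacity per level and fill it with the top-ranked titles.
def allocate_flash(ranked_content, flash_capacity_bytes,
                   shares={1: 0.5, 2: 0.3, 3: 0.2}):
    """ranked_content: {level: [(content_id, size_bytes), ...]} sorted hot-first.
    Returns the set of (content_id, level) slice headers to keep in flash."""
    selected = set()
    for level in sorted(shares, reverse=True):        # fill from the highest level down
        budget = flash_capacity_bytes * shares[level]
        for content_id, size in ranked_content.get(level, []):
            if size > budget:
                break                                  # this level's share of the flash is full
            budget -= size
            selected.add((content_id, level))
    return selected
```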
  • The difference comparison algorithm works as follows: the list of content slice headers currently stored in the flash memory of each streaming media server is retrieved; for each media server a delete list (DelList) and an add list (AddList) are created; the newly selected slice headers are compared with the slice headers currently in the flash memory, and slice headers that should no longer be kept are placed in the corresponding DelList while newly required slice headers are placed in the corresponding AddList; finally, each streaming media server is notified to update its flash memory accordingly. If the system has a load balancer, it is also notified of the slice headers held in the flash memory of each media server.
  • When old and new slice headers are compared, an entry is considered unchanged only if both the content ID and the slice-header level are the same; otherwise it must be updated. A sketch of this comparison follows.
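  • The difference comparison can be sketched as set operations over (content ID, level) pairs, as below; the per-server dictionaries are an assumed representation.

```python
# Sketch of the difference comparison: per-server AddList / DelList of slice headers.
def diff_flash_plan(current, desired):
    """current, desired: {server_name: set of (content_id, level)}.
    An entry is unchanged only if both the content ID and the level match."""
    plan = {}
    for server in current:
        keep = current[server] & desired.get(server, set())
        add_list = desired.get(server, set()) - keep   # AddList: slice headers to write to flash
        del_list = current[server] - keep              # DelList: slice headers to erase from flash
        plan[server] = (sorted(add_list), sorted(del_list))
    return plan
```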
  • In summary, storing the hot content in the flash memory of the streaming media server improves the I/O read/write speed and hence the performance of a single streaming media server. Moreover, because flash memory does not lose data on power-off, the hot content does not have to be reloaded from the hard disk into the flash memory on restart, which improves the startup speed.
  • As shown in FIG. 9, the system for providing on-demand content includes a streaming media server 91, a load balancer 92, a content manager 93, and a disk array 94.
  • The content manager 93 is configured to count the hotness of the on-demand content and store the on-demand content in the flash memory of the streaming media server 91 according to the counted hotness, improving the performance of the streaming media server 91.
  • In addition to common content management functions such as publishing, modifying, and deleting content, the content manager 93 also performs content hotness statistics, determining which content or slice headers are stored in the flash memory of the streaming media server 91 and which are removed from it.
  • The content manager 93 also manages the users' on-demand records and saves them in a database or a file.
  • Each record includes, but is not limited to, the following information: content ID, on-demand time, on-demand duration, and so on, stored in a data table. With many users the table may grow by huge amounts of data every day or even every hour, so sub-table techniques are used: the table can be split by time period (for example, by hour, day, week, or month; the embodiment is not limited to these periods), by size (for example, one sub-table per 200 MB), or by record count (for example, one sub-table per 3 million records).
  • Each time the hot content or hot slice headers in the flash memory are updated, the content manager 93 (or alternatively the streaming media server 91) notifies the load balancer 92 of the new distribution, so that when scheduling by content the load balancer 92 gives priority to the streaming media server 91 whose flash memory holds the content matching the user's on-demand request.
  • The streaming media server 91 is configured to save the on-demand content in its flash memory.
  • The streaming media server 91 includes one or more flash memories, each plugged into the rear board of the streaming media server 91 or into a motherboard slot of the streaming media server 91.
  • The flash controller and driver in the streaming media server 91 should adopt a wear-leveling algorithm, which operates in units of blocks; different flash controllers and drivers may support different block sizes, the minimum of which is denoted Block (in bytes). The streaming media server 91 should buffer data of size n x Block (n = 1, 2, 3, and other natural numbers) and then write it to the flash memory in a single operation, which minimizes repeated erasure of any single data block and extends the life of the flash memory.
  • Because the flash memory has a limited erase lifetime, even with wear leveling in the flash controller and driver of the streaming media server 91, the content manager 93 should not update hot content to the flash memory in real time; instead it is recommended to update the hot content to the flash memory periodically according to hotness (for example once a day, although the period is not limited to days).
  • The disk array 94 is used to save the entire on-demand content, or the remaining portion of the on-demand content other than its slice header.
  • When the streaming media server 91 receives a user request, it first tries to read the content from its flash memory to serve the user; if the flash memory holds no content matching the request, it reads from the disk array 94 or the local disk. If only the slice header is stored in the flash memory, the streaming media server 91 plays the slice header, then locates the corresponding content on the disk array or local disk and continues playing from the end of the slice header.
  • The load balancer 92 is configured to select a streaming media server 91 according to the requested content, preferentially selecting a streaming media server 91 whose flash memory stores the requested on-demand content.
  • When scheduling by content, the load balancer 92 must distinguish whether the requested content resides in flash memory or on a general disk, give flash content higher priority than general-disk content, and then consider other factors such as region.
  • FIG. 10 is a structural diagram of a content manager according to an embodiment of the present invention, including: a heat statistics module 1001, configured to count the hotness of on-demand content; and
  • a storage module 1002, configured to store the on-demand content into the flash memory of the streaming media server according to the hotness counted by the heat statistics module 1001.
  • The heat statistics module 1001 includes: a record management sub-module 10011, configured to manage on-demand records, where each on-demand record includes at least a content identifier, an on-demand time, and an on-demand duration; a level division sub-module 10012, configured to classify the on-demand content into levels according to the on-demand durations in the records managed by the record management sub-module 10011; and a statistics sub-module 10013, configured to count the number of on-demand requests at each level divided by the level division sub-module 10012.
  • The storage module 1002 includes:
  • a ranking sub-module 10021, configured to rank the on-demand content at each level divided by the level division sub-module 10012 according to the on-demand counts produced by the statistics sub-module 10013; and
  • a content storage sub-module 10022, configured to store the on-demand content of each level into the flash memory according to the ranking produced by the ranking sub-module 10021 and the flash-storage principle.
  • The storage capacity reserved for each level divided by the level division sub-module 10012 is set in advance, and the on-demand content of each level is stored in the flash memory according to these reserved capacities, the ranking of the ranking sub-module 10021, and the difference comparison algorithm.
  • The content manager further includes a notification module 1003, configured to notify the load balancer of the on-demand content update result after the storage module 1002 stores the on-demand content in the flash memory of the streaming media server.
  • As shown in FIG. 11, the streaming media server includes: at least one flash memory 111 for storing on-demand content, where the flash memory 111 is plugged into a rear board of the streaming media server or into a motherboard slot of the streaming media server;
  • a media request receiving module 112, configured to receive a user's on-demand request; and
  • a media providing module 113, configured to preferentially read the on-demand content in the flash memory 111 of the streaming media server when the media request receiving module 112 receives the user's on-demand request.
  • The media providing module 113 is further configured to read the on-demand content matching the user's request from the disk array or a local disk when the flash memory 111 of the streaming media server does not hold the requested content.
  • The streaming media server further includes a flash control and driver module 114, configured to buffer data in multiples of a predetermined block size and store the on-demand content into the flash memory 111.
  • Through the description of the foregoing embodiments, it can be understood that the present invention may be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which can be stored in a non-volatile storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a method, a system, and a device for providing on-demand content. The method for providing on-demand content consists in: counting the hotness of the on-demand content; storing the on-demand content in a flash memory according to the counted hotness; and preferentially providing the on-demand content stored in the flash memory when a media server receives an on-demand request. The embodiment of the present invention improves the input/output read/write speed by storing the hot content in a flash memory of a streaming media server, thereby improving the capacity of a single streaming media server. And since flash memory has the characteristic that no data is lost at power-off, it is not necessary to read the hot content data from the hard disk into the flash memory even on restart, so the startup speed can be improved.
PCT/CN2008/073609 2008-03-04 2008-12-19 Procédé, système et dispositif pour fournir du contenu à la demande WO2009109092A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810007659.4 2008-03-04
CN2008100076594A CN101232600B (zh) 2008-03-04 2008-03-04 一种提供点播内容的方法、系统和装置

Publications (1)

Publication Number Publication Date
WO2009109092A1 true WO2009109092A1 (fr) 2009-09-11

Family

ID=39898736

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/073609 WO2009109092A1 (fr) 2008-03-04 2008-12-19 Procédé, système et dispositif pour fournir du contenu à la demande

Country Status (2)

Country Link
CN (1) CN101232600B (fr)
WO (1) WO2009109092A1 (fr)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101232600B (zh) * 2008-03-04 2011-07-20 华为技术有限公司 一种提供点播内容的方法、系统和装置
CN101729357B (zh) * 2008-10-14 2013-06-05 华为技术有限公司 媒体文件存储处理与业务处理方法及装置、服务器集群
CN101729272B (zh) * 2008-10-27 2013-01-23 华为技术有限公司 内容分发方法、系统、设备及媒体服务器
CN102088626B (zh) * 2009-12-02 2014-08-13 Tcl集团股份有限公司 一种在线视频推荐方法及视频门户服务系统
CN101945100A (zh) * 2010-07-30 2011-01-12 中山大学 一种数字家庭流媒体服务器及服务方法
CN102006506A (zh) * 2010-11-24 2011-04-06 深圳市同洲电子股份有限公司 一种视频服务器的分级存储管理方法及装置、视频服务器
CN102065283B (zh) * 2010-12-23 2013-10-02 浙江宇视科技有限公司 一种视频监控数据存储管理方法及其装置
CN102263986A (zh) * 2011-08-22 2011-11-30 中兴通讯股份有限公司 网络电视系统中的节目处理方法及装置
CN102790915B (zh) * 2012-07-09 2016-12-21 上海聚力传媒技术有限公司 一种用于向p2p节点预推送视频资源的方法与装置
CN103595694A (zh) * 2012-08-14 2014-02-19 腾讯科技(深圳)有限公司 流媒体播放方法和系统、内存服务器
CN103856535B (zh) * 2012-12-05 2018-09-04 腾讯科技(北京)有限公司 一种获取用户数据的方法和装置
CN103095562B (zh) * 2013-01-30 2016-07-27 深圳中网信通科技有限公司 云计算智能网关
CN103916693B (zh) * 2014-04-02 2018-06-08 深圳市瑞驰信息技术有限公司 一种预留存储空间的方法及其装置
CN106162218B (zh) * 2015-04-03 2020-11-06 中兴通讯股份有限公司 一种节目录制控制方法、系统以及管理、热度统计服务器
CN107431723A (zh) * 2015-05-13 2017-12-01 谷歌公司 针对点播内容模仿广播电视频道冲浪
CN105335517A (zh) * 2015-11-06 2016-02-17 努比亚技术有限公司 选择热度多媒体的方法及终端
WO2017117808A1 (fr) * 2016-01-08 2017-07-13 王晓光 Procédé et système de gestion de stockage de réseau vidéo
CN106454396A (zh) * 2016-10-26 2017-02-22 山东浪潮商用系统有限公司 一种提高直播时移电视并发能力的实现方法
CN106506665B (zh) * 2016-11-18 2019-09-24 郑州云海信息技术有限公司 一种分布式视频监控系统的负载均衡方法及平台
CN108965909B (zh) * 2018-08-01 2021-02-02 中国联合网络通信集团有限公司 一种冷门视频评估方法和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1972311A (zh) * 2006-12-08 2007-05-30 华中科技大学 一种基于集群均衡负载的流媒体服务器系统
CN1972436A (zh) * 2006-12-13 2007-05-30 中山大学 一种数字电视节目点播控制方法
CN101025721A (zh) * 2006-02-22 2007-08-29 三星电子株式会社 根据优先级次序操作闪存的设备和方法
CN101232600A (zh) * 2008-03-04 2008-07-30 华为技术有限公司 一种提供点播内容的方法、系统和装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604642A (zh) * 2004-11-04 2005-04-06 复旦大学 一种广播视频节目系统中信息发布优先级排列的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025721A (zh) * 2006-02-22 2007-08-29 三星电子株式会社 根据优先级次序操作闪存的设备和方法
CN1972311A (zh) * 2006-12-08 2007-05-30 华中科技大学 一种基于集群均衡负载的流媒体服务器系统
CN1972436A (zh) * 2006-12-13 2007-05-30 中山大学 一种数字电视节目点播控制方法
CN101232600A (zh) * 2008-03-04 2008-07-30 华为技术有限公司 一种提供点播内容的方法、系统和装置

Also Published As

Publication number Publication date
CN101232600A (zh) 2008-07-30
CN101232600B (zh) 2011-07-20

Similar Documents

Publication Publication Date Title
WO2009109092A1 (fr) Procédé, système et dispositif pour fournir du contenu à la demande
US8612668B2 (en) Storage optimization system based on object size
CA2841216C (fr) Architecture de serveur de stockage modulaire a gestion de donnees dynamique
US7085843B2 (en) Method and system for data layout and replacement in distributed streaming caches on a network
US8745262B2 (en) Adaptive network content delivery system
EP2359536B1 (fr) Système de distribution de contenu de réseau adaptatif
US7444662B2 (en) Video file server cache management using movie ratings for reservation of memory and bandwidth resources
JP4663718B2 (ja) ブロックマップキャッシングおよびvfsスタック可能なファイルシステムモジュールに基づく分散型のストレージアーキテクチャ
WO2009062385A1 (fr) Système et procédé de stockage de fichier de flux multimédia
WO2011143946A1 (fr) Procédé et système permettant de gérer les mémoires caches à plusieurs niveaux d'un serveur de périphérie dans un cdn
CN105376218B (zh) 一种快速响应用户请求的流媒体系统和方法
US10606510B2 (en) Memory input/output management
JP2003525486A (ja) 有限要求リオーダを用いるディスク・スケジューリング・システム
WO2023226314A1 (fr) Procédé et appareil de traitement pouvant être mis à l'échelle d'une mémoire cache d'application, dispositif et support
US10782888B2 (en) Method and device for improving file system write bandwidth through hard disk track management
Liu et al. Performance of a storage system for supporting different video types and qualities
US10572464B2 (en) Predictable allocation latency in fragmented log structured file systems
US11146832B1 (en) Distributed storage of files for video content
US10078642B1 (en) Dynamic memory shrinker for metadata optimization
US20200186849A1 (en) Method and system for reducing drop-outs during video stream playback
Sarhan et al. A simulation-based analysis of scheduling policies for multimedia servers
Ling et al. Division-based Video Data Access Method for Hot/Cold Tiered Storage Systems
US7334103B2 (en) Methods and apparatus for improving the breathing of disk scheduling algorithms
Halvorsen et al. Storage systems support for multimedia applications
Srinilta Techniques for improving performance in continuous media servers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08873129

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08873129

Country of ref document: EP

Kind code of ref document: A1