CN101771666A - Terrestrial mobile multimedia broadcasting based method for managing cache of multimedia data distributing broadcasting system - Google Patents

Terrestrial mobile multimedia broadcasting based method for managing cache of multimedia data distributing broadcasting system

Info

Publication number
CN101771666A
Authority
CN
China
Prior art keywords
broadcasting
data
broadcasting system
multimedia
data distributing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200810240824A
Other languages
Chinese (zh)
Inventor
邓晖
杨贵君
刘刚
李良旺
郑志军
王江昆
朱秋果
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MCCTV MOBILE MULTIMEDIA NETWORK CO Ltd
Original Assignee
MCCTV MOBILE MULTIMEDIA NETWORK CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MCCTV MOBILE MULTIMEDIA NETWORK CO Ltd filed Critical MCCTV MOBILE MULTIMEDIA NETWORK CO Ltd
Priority to CN200810240824A priority Critical patent/CN101771666A/en
Publication of CN101771666A publication Critical patent/CN101771666A/en
Pending legal-status Critical Current

Landscapes

  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a cache management method for a multimedia data distributing broadcasting system based on terrestrial mobile multimedia broadcasting, relates to the field of broadcast communication, and is applicable to terrestrial mobile multimedia broadcasting (DAB/TMMB) systems. The method is used for broadcasting and distributing requests for high-capacity multimedia data content and for the corresponding response processing of terrestrial mobile multimedia broadcasting data, and addresses the response performance for mobile Internet content at the broadcast or communication physical-transmission data server.

Description

A cache management method for a multimedia data distributing broadcasting system based on terrestrial mobile multimedia broadcasting
1. Technical Field
The present invention relates to the field of broadcast communication and is applicable to terrestrial mobile multimedia broadcasting (DAB/TMMB) systems. It is used for broadcasting and distributing requests for high-capacity multimedia data content and for the corresponding response processing, and addresses the response performance for mobile Internet content at the broadcast or communication physical-transmission data server.
2. Background Technology
As a new-media system, terrestrial mobile multimedia broadcasting combined with mobile Internet applications (Web services) is a product of social development and of scientific and technological progress, and it constitutes a new kind of broadcast system.
In a terrestrial digital multimedia broadcasting system, services take many forms, including video, audio, stock quotes, traffic information, and audio-visual website push. These services must be synchronized to a unified data distributing broadcasting system and distributed over the network carriers of the full-service operation. The system must respond to interactive user requests, dynamically and reasonably allocate the data users request, handle interaction over massive data streams, and distribute data to the terrestrial broadcast network under an optimal network-state policy. Cache management in the data distributing broadcasting system is therefore particularly important for improving effective network bandwidth utilization and user satisfaction.
Cache management in the data distribution system implements the mobile Internet kernel Web service. The data distribution system uses an asymmetric multi-threaded pipeline (Asymmetric Multi-Threads Pipeline) to process request tasks concurrently. For this structural feature, an independent cache management mechanism and an improved replacement policy are proposed and adopted to manage the Web object cache of the data distribution system, as shown in Figure 1.
The approach is better suited to the heavily loaded service environment of mobile Internet access and distribution platforms: the heavier the load on the data distribution system, the more obvious the performance improvement and the better the quality of service.
3. Summary of the Invention
The invention provides a data distributing broadcasting system based on terrestrial mobile multimedia broadcasting. In the method, the cache structure and the replacement policy are implemented with an independent caching mechanism: a slice of memory is set aside and managed entirely by the system itself to realize the cache. On the basis of the traditional Internet-server LRU replacement policy, the size and the access frequency of each Web object are combined to form the proposed S-FLRU replacement policy, and multiple S-FLRU queues are used to implement the replacement policy of the data distributing broadcasting system.
(1) Since static requests account for a very high proportion of HTTP requests, an efficient cache is used to reduce the blocking caused by disk I/O; second, a multi-threaded mode is used to increase processing concurrency and bring the performance of the data distributing broadcasting system into full play; finally, service is provided from the kernel buffer, reducing the overhead of data copies and system calls (see the sketch after this item);
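A minimal illustration of serving from the kernel buffer (not the patent's implementation; the function name and the use of Linux sendfile(2) are assumptions for this sketch): the cached file's bytes move from the page cache to the socket inside the kernel, so no user-space copy or extra read()/write() round trip is needed.

/* Sketch only: send a cached static object to a connected socket
 * without copying its contents through user space. */
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Send the whole file at `path` to the connected socket `client_fd`;
 * returns 0 on success, -1 on error. */
static int send_static_object(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* Data moves kernel-to-kernel; no intermediate user buffer. */
        ssize_t n = sendfile(client_fd, fd, &offset, st.st_size - offset);
        if (n <= 0) {
            close(fd);
            return -1;
        }
    }
    close(fd);
    return 0;
}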
(2) The S-FLRU policy: suppose there are N Web objects in the cache of the data distributing broadcasting system, and that the size of any object i is S_i. During period k, the set of objects in the cache is denoted C(k), and the object accessed during period k is denoted i_k. If i_k is in the cache, the access is a hit; if i_k is not in the cache, some objects must be evicted to make room for it. Suppose N >= 0 denotes the space needed to hold i_k beyond the space that is currently free in the cache. A variable V_i is introduced: if object i is to be evicted, V_i is set to 1; if object i is kept, V_i is set to 0 (only objects in the cache have a corresponding V_i). Let ΔT_ik denote, at stage k, the time elapsed since object i was last accessed, so 1/ΔT_ik denotes the access frequency of object i. When evicting, the ideal approach is to evict a group of Web objects whose total size is equal to or greater than N and whose accumulated access frequency is minimal, which gives the following model: minimize the sum over i in C(k) of V_i * (1/ΔT_ik), subject to the sum over i in C(k) of V_i * S_i >= N, with V_i in {0, 1}.
This model is a minimization knapsack problem; strictly speaking, the objects placed in the knapsack are exactly the objects to be evicted from the cache. The counterpart in this model of the value-to-weight ratio of the knapsack problem is (1/ΔT_ik)/S_i, i.e. an evicted object should preferably be one with a low access frequency and a large file size. Applying the greedy method and sorting the objects in the cache in ascending order of S_i * ΔT_ik gives:
S_1 * ΔT_1k <= S_2 * ΔT_2k <= ... <= S_|C(k)| * ΔT_|C(k)|k
When evicting, objects are removed one by one, starting from the object with the largest value of S_i * ΔT_ik, until there is enough space to hold the new object. This replacement policy, which combines access frequency and file size, is called S-FLRU. A sketch of the selection step follows.
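A minimal sketch of the S-FLRU selection step under the assumptions above (the names cache_obj and select_victims are illustrative, not from the patent): objects are ordered by S_i * ΔT_ik and evicted from the largest product downward until at least the required space has been freed.

#include <stddef.h>
#include <stdlib.h>
#include <time.h>

typedef struct {
    size_t size;        /* S_i: size of the cached Web object          */
    time_t last_access; /* used to derive ΔT_ik at eviction time       */
} cache_obj;

static time_t now;      /* set before sorting                          */

/* Descending order of S_i * ΔT_ik, so victims come first. */
static int by_key_desc(const void *a, const void *b)
{
    const cache_obj *x = a, *y = b;
    double kx = (double)x->size * (double)(now - x->last_access);
    double ky = (double)y->size * (double)(now - y->last_access);
    return (kx < ky) - (kx > ky);
}

/* Returns how many leading objects in `objs` must be evicted so that
 * at least `needed` bytes become free. */
static size_t select_victims(cache_obj *objs, size_t count, size_t needed)
{
    now = time(NULL);
    qsort(objs, count, sizeof(cache_obj), by_key_desc);

    size_t freed = 0, victims = 0;
    while (victims < count && freed < needed)
        freed += objs[victims++].size;
    return victims;
}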
When implementing this replacement policy, and given that the data distributing broadcasting system uses an asymmetric multi-threaded pipeline, multiple S-FLRU queues are used to implement the replacement policy in parallel, so that victim selection does not become a system bottleneck and concurrency is further improved. The S-FLRU queues are classified by file size; when a new object enters the cache, victims are selected from the S-FLRU queue corresponding to the new object's size. This implementation overcomes the drawback of a single LRU chain, where the LRU chain is a critical resource and many other threads must queue up to operate on it; as users issue more mobile Internet requests and the Web server load increases, the advantage of multiple S-FLRU queues becomes more pronounced. A sketch of the size-classified queues follows.
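A minimal sketch of the size-classified queues, with assumed class boundaries (the limits and names below are illustrative): each class keeps its own S-FLRU queue, so admitting a new object touches only the queue matching that object's size rather than a single shared LRU chain.

#include <stddef.h>

#define NUM_CLASSES 4

/* Upper bound (bytes) of each size class; the last class is open-ended. */
static const size_t class_limit[NUM_CLASSES] = {
    4 * 1024, 64 * 1024, 1024 * 1024, (size_t)-1
};

typedef struct sflru_queue {
    /* per-class S-FLRU bookkeeping (ordered objects, its own lock, ...) */
    int placeholder;
} sflru_queue;

static sflru_queue queues[NUM_CLASSES];

/* Pick the S-FLRU queue whose size class covers the incoming object. */
static sflru_queue *queue_for(size_t object_size)
{
    for (int i = 0; i < NUM_CLASSES; i++)
        if (object_size <= class_limit[i])
            return &queues[i];
    return &queues[NUM_CLASSES - 1];
}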
(3) To determine quickly, when processing a request, whether the requested object is already in the cache, a hash table with collision chains is used for lookup; its structure is shown in Figure 2.
A hash function that keeps the collision chains as short as possible is adopted. In the implementation, arrays are used to simulate the linked lists of the collision chains; the purpose is to trade space for time and to avoid the cost of dynamically allocating and freeing memory for linked-list nodes. Each collision chain has a fixed length determined from test results; when a chain is full, one object in that chain is forcibly evicted so that the new object can enter the chain, as sketched below.
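A minimal sketch of this lookup structure, with assumed bucket count, chain length, hash function, and key format (all illustrative, not from the patent): each collision chain is a fixed-length array rather than a dynamically allocated linked list, and a full chain forces an eviction before the new object is admitted.

#include <stddef.h>
#include <string.h>

#define NUM_BUCKETS 1024
#define CHAIN_LEN   8            /* fixed chain length chosen from testing */

typedef struct {
    char  url[256];              /* key: requested object URL              */
    void *object;                /* pointer into the cache memory region   */
} hash_entry;

typedef struct {
    int        used;             /* entries occupied in this chain         */
    hash_entry slots[CHAIN_LEN]; /* array simulating the collision chain   */
} hash_chain;

static hash_chain table[NUM_BUCKETS];

static unsigned hash_url(const char *url)
{
    unsigned h = 5381;           /* djb2-style string hash                 */
    while (*url)
        h = h * 33 + (unsigned char)*url++;
    return h % NUM_BUCKETS;
}

/* Look up a URL; returns the cached object or NULL on a miss. */
static void *cache_lookup(const char *url)
{
    hash_chain *c = &table[hash_url(url)];
    for (int i = 0; i < c->used; i++)
        if (strcmp(c->slots[i].url, url) == 0)
            return c->slots[i].object;
    return NULL;
}

/* Insert a new object; if the chain is full, overwrite slot 0
 * (a real implementation would pick the S-FLRU victim instead). */
static void cache_insert(const char *url, void *object)
{
    hash_chain *c = &table[hash_url(url)];
    int i = (c->used < CHAIN_LEN) ? c->used++ : 0;
    strncpy(c->slots[i].url, url, sizeof(c->slots[i].url) - 1);
    c->slots[i].url[sizeof(c->slots[i].url) - 1] = '\0';
    c->slots[i].object = object;
}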
(4) Page allocation and release: cache management of the Web objects is implemented with an improved version of the buddy algorithm. To manage pages, a page-usage bitmap, a Free_Area array, and a page-table-entry array are maintained in the control area; their relationship is shown in Figure 3.
The system uses the Free_Area array and the page-table-entry array to maintain multi-level free-page lists. If the requested Web object is not in the cache, a suitable page block (possibly a single page, possibly several contiguous pages) is allocated to the object according to its file size. Allocation strategy: if the object cannot fit in a single page block, larger page blocks are allocated preferentially. For example, if a Web object needs 9 pages, it is first allocated as 9 = 8 + 1: one block is taken from the free list of 2^3-page blocks pointed to by the Free_Area array, and one page is taken from the 2^0 free list. If the 2^3 free list is empty, the allocation falls back to 9 = 4 + 4 + 1: two blocks are taken from the 2^2 free list and one page from the 2^0 free list, and so on. A sketch of this decomposition follows.
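A minimal sketch of the page-block decomposition described above (the free-list bookkeeping is simplified to counters and the order limit is illustrative): a request for n pages is covered by power-of-two blocks, largest first, so 9 pages becomes 8 + 1, and if no 8-page block is free the remainder falls through to smaller blocks, e.g. 4 + 4 + 1.

#include <stdio.h>

#define MAX_ORDER 10   /* free lists for blocks of 1, 2, 4, ..., 1024 pages */

/* free_count[k] stands in for the length of the Free_Area[k] free list. */
static int free_count[MAX_ORDER + 1];

/* Decompose an n-page request into available power-of-two blocks.
 * Returns 0 on success, -1 if the request cannot be satisfied. */
static int allocate_pages(int n)
{
    for (int order = MAX_ORDER; n > 0 && order >= 0; order--) {
        int block = 1 << order;
        while (n >= block && free_count[order] > 0) {
            free_count[order]--;            /* take one 2^order block */
            n -= block;
            printf("allocated a %d-page block\n", block);
        }
        /* If Free_Area[order] is exhausted, fall through and cover the
         * remainder with smaller blocks (e.g. 8 -> 4 + 4).            */
    }
    return (n == 0) ? 0 : -1;
}

With an 8-page block available, allocate_pages(9) reports one 8-page block and one single page; with the order-3 list empty, it falls back to two 4-page blocks plus a single page, matching the 9 = 4 + 4 + 1 case.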
4. Description of Drawings
Fig. 1: Overall storage structure diagram
Fig. 2: Hash table organization diagram
Fig. 3: Page management structure diagram
5. Embodiments
In the data distribution system of the invention, the mobile Internet interactive HTTP processing procedure consists of accepting the connection and receiving the request, analyzing the request, processing the request, and sending the response data. The data distribution system uses an asymmetric multi-threaded pipeline: different thread groups handle the individual steps, so that the steps of different requests can run in parallel while the steps within a single request are processed as a pipeline. The data distributing broadcasting system divides the threads into four groups:
The first group is responsible for accepting connections;
The second group is responsible for receiving requests;
The third group is responsible for analyzing requests and performing file I/O;
The fourth group is responsible for sending data. The number of threads in each group is set in proportion to the time consumed by the corresponding stage.
For this structural feature of data broadcasting distribution, and in order to reduce file I/O and improve distribution performance, an independent cache management mechanism is proposed and adopted to manage the Web object cache. The data broadcasting distribution system requests a slice of memory from system memory, which is used to cache Web objects and to maintain control information for user requests. The entire memory region is managed and maintained by the data distribution system itself, without intervention by the operating system it runs on; a sketch of this initialization is given below.
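A minimal initialization sketch under assumed numbers (the stage costs, thread total, and cache size are illustrative): the four thread groups are sized in proportion to the measured cost of their stage, and the cache is a single region requested from system memory once and then managed entirely by the data distribution system.

#include <stdlib.h>

enum stage { ACCEPT, RECEIVE, PARSE_AND_IO, SEND, NUM_STAGES };

/* Assumed relative cost of each stage; thread counts follow from it. */
static const int stage_cost[NUM_STAGES] = { 1, 2, 4, 3 };
static int       thread_count[NUM_STAGES];

#define TOTAL_THREADS 40
#define CACHE_BYTES   (256u * 1024 * 1024)   /* one self-managed slice */

static void *cache_region;

static int init_distribution_system(void)
{
    int total_cost = 0;
    for (int s = 0; s < NUM_STAGES; s++)
        total_cost += stage_cost[s];
    for (int s = 0; s < NUM_STAGES; s++)
        thread_count[s] = TOTAL_THREADS * stage_cost[s] / total_cost;

    /* The cache region is carved out once; the OS is not involved in
     * how pages inside it are later allocated or replaced.            */
    cache_region = malloc(CACHE_BYTES);
    return cache_region ? 0 : -1;
}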

Claims (1)

1. A multimedia data distributing broadcasting system (Data Content Distributing Broadcasting System) based on terrestrial mobile multimedia broadcasting, characterized in that its cache management uses an asymmetric multi-threaded pipeline (Asymmetric Multi-Threads Pipeline) to process request tasks concurrently, and in that, for responding to content distribution control, an independent cache management mechanism and an improved replacement policy are proposed and adopted to implement the mobile Internet Web object cache management of the data distributing broadcasting system.
CN200810240824A 2008-12-26 2008-12-26 Terrestrial mobile multimedia broadcasting based method for managing cache of multimedia data distributing broadcasting system Pending CN101771666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810240824A CN101771666A (en) 2008-12-26 2008-12-26 Terrestrial mobile multimedia broadcasting based method for managing cache of multimedia data distributing broadcasting system

Publications (1)

Publication Number Publication Date
CN101771666A true CN101771666A (en) 2010-07-07

Family

ID=42504266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810240824A Pending CN101771666A (en) 2008-12-26 2008-12-26 Terrestrial mobile multimedia broadcasting based method for managing cache of multimedia data distributing broadcasting system

Country Status (1)

Country Link
CN (1) CN101771666A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763188A (en) * 2014-01-22 2014-04-30 四川九洲空管科技有限责任公司 Multi-type message real-time processing method and device
CN103763188B (en) * 2014-01-22 2016-08-31 四川九洲空管科技有限责任公司 A kind of polymorphic type message real-time processing method and device
CN103780507B (en) * 2014-02-17 2017-03-15 杭州华三通信技术有限公司 The management method of cache resources and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20100707